Efficient Unsupervised Learning for Localization and Detection in Object Categories

Nicolas Loeff, Himanshu Arora
ECE Department, University of Illinois at Urbana-Champaign
{loeff,harora1}@uiuc.edu

Alexander Sorokin, David Forsyth
Computer Science Department, University of Illinois at Urbana-Champaign
{sorokin2,daf}@uiuc.edu

Abstract

We describe a novel method for learning templates for recognition and localization of objects drawn from categories. A generative model represents the configuration of multiple object parts with respect to an object coordinate system; these parts in turn generate image features. The complexity of the model in the number of features is low, meaning our model is much more efficient to train than comparable methods. Moreover, a variational approximation is introduced that allows learning to be orders of magnitude faster than previous approaches while incorporating many more features. This results in both accuracy and localization improvements. Our model has been carefully tested on standard datasets; we compare with a number of recent template models. In particular, we demonstrate state-of-the-art results for detection and localization.

1 Introduction

Building appropriate object models is central to object recognition, which is a fundamental problem in computer vision. Desirable characteristics of a model include a good representation of objects and fast, efficient learning algorithms that require as little supervision as possible. We believe an appropriate representation of an object should allow for both detection of its presence and localization (‘where is it?’). So far the quality of object recognition in the literature has been measured by detection performance only. Viola and Jones [1] present a fast object detection system that boosts Haar filter responses. Another effective discriminative approach is the bag of keypoints [2, 3].
It is based on clustering image patches using appearance only, disregarding geometric information. The detection performance of this algorithm is among the state of the art. However, since no geometry cues are used during training, features that do not belong to the object can be incorporated into the object model. This is similar to classic overfitting and typically leads to problems in object localization. Weber et al. [4] represent an object as a constellation of parts. Fergus et al. [5] extend the model to account for variability in appearance. The model encodes a template as a set of feature-generating parts. Each part generates at most one feature. As a result, the complexity is determined by the hardness of the part-feature assignment. Heuristic search is used to approximate the solution, but feasible problems are limited to 7 parts with 30 features. Agarwal and Roth [6] use SNoW to learn a classifier on a sparse representation of patches extracted around interest points in the image. In [7], Leibe and Schiele use a voting scheme to predict object configuration from the locations of individual patches. Both approaches provide localization, but require manually localizing the objects in training images. Hillel et al. [8] independently proposed an approach similar to ours. Their model, however, has higher learning complexity and inferior detection performance despite its discriminative nature. In this paper, we present a generative probabilistic model for detection and localization of objects that can be efficiently learnt with minimal supervision. The first crucial property of the model is that it represents the configuration of multiple object parts with respect to an unobserved, abstract object root (unlike [9, 10], where an “object root” is chosen as one of the visible parts of the object). This simplifies localization and allows our model to overcome occlusion and errors in feature extraction. The model also becomes symmetric with respect to the visible parts.
The second crucial assumption of the model is that a single part can generate multiple features in the image (or none). This may seem counterintuitive, but keypoint detectors generally detect several features around interesting areas. This hypothesis also makes an explicit model for part occlusion unnecessary: instead, occlusion of a part implicitly means that no feature in the image is produced by it. These assumptions allow us to model all features in the image as being emitted independently conditioned on the object center. As a result, the complexity of inference in our model is linear in the number of parts of the model and the number of features in the image, obviating the exponential complexity of combinatoric assignments in other approaches [4, 5, 11]. This means our model is much easier than constellation models to train using Expectation Maximization (EM), which enables the use of more features and more complex models, with resulting improvements in both accuracy and localization. Furthermore, we introduce a variational (mean-field) approximation during learning that makes it hundreds of times faster than previous approaches, with no substantial loss of accuracy.

2 Model

Our model of an object category is a template that generates features in the image. Each image is represented as a set $\{f_j\}$ of $F$ features extracted with the scale-saliency point detector [13]. Each feature is described by its location and appearance. Feature extraction and representation are detailed in section 3. As described in the introduction, we hypothesize that given the object center all features are generated independently: $p_{obj}(f_1, \ldots, f_F) = \sum_{o_c} P(o_c) \prod_j p(f_j \mid o_c)$. The abstract object center, which does not generate any features, is represented by a hidden random variable $o_c$. For simplicity it takes values in a discrete grid of size $N_x \times N_y$ inside the image, and $o_c$ is assumed to be a priori uniformly distributed over its domain.
Conditioned on the object center, each feature is generated by a mixture of $P$ parts plus a background part. A set of hidden variables $\{\omega_{ij}\}$ represents which part ($i$) produced feature $f_j$. These variables $\omega_{ij}$ take values $\{0, 1\}$, restricted to $\sum_{i=1}^{P+1} \omega_{ij} = 1$. In other words, $\omega_{ij} = 1$ means feature $j$ was produced by part $i$; each part can produce multiple features, but each feature is produced by only one part. The distribution of a feature conditioned on the object center is then $p(f_j \mid o_c) = \sum_i p(f_j, \omega_{ij} = 1 \mid o_c) = \sum_i p(f_j \mid \omega_{ij} = 1, o_c)\,\pi_i$, where $\pi_i$ is the prior emission probability of part $i$, subject to $\sum_{i=1}^{P+1} \pi_i = 1$. Each part has a location distribution with respect to the object center, a two-dimensional full-covariance Gaussian $p^i_L(x \mid o_c)$. The appearance (see section 3 for details) of a part does not depend on the configuration of the object; we consider two models:

Gaussian Model (G): Appearance $p^i_A$ is modeled as a $k$-dimensional diagonal-covariance Gaussian distribution.

Local Topic Model (LT): Appearance $p^i_A$ is modeled as a multinomial distribution on a previously learnt $k$-word image patch dictionary. This can be considered a local topic model.

Let $\theta$ denote the set of parameters. The complete data likelihood (joint distribution) for image $n$ in the object model is then

$$P^{obj}_\theta(\{\omega_{ij}\}, o_c, \{f_j\}) = \prod_{o'_c} \Big[ \prod_{j,i} \big( p^i_L(f_j \mid o'_c)\, p^i_A(f_j)\, \pi_i \big)^{[\omega_{ij}=1]} \, P(o'_c) \Big]^{[o_c = o'_c]} \qquad (1)$$

where $[\mathrm{expr}]$ is one if expr is true and zero otherwise. Marginalizing, the probability of the observed image under the object model is then

$$P^{obj}_\theta(\{f_j\}) = \sum_{o_c} P(o_c) \prod_{j'} \Big( \sum_i P(f_{j'}, \omega_{ij'} = 1 \mid o_c) \Big) \qquad (2)$$

The background model assumes all features are produced independently, with uniform location over the image. In the G model of appearance, the appearance is modeled with a $k$-dimensional full-covariance Gaussian distribution. In the LT model, we use a multinomial distribution on the $k$-word image patch dictionary to model the appearance.
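To make the marginalization in eq. (2) concrete, here is a minimal 1-D sketch (not the authors' code): the hidden object center is summed over a discrete grid with a uniform prior, and each feature's density is a mixture over parts. The part parameters (emission prior, offset from the center, variance) are hypothetical toy values; appearance terms and the 2-D geometry are omitted for brevity.

```python
import math

def gauss(x, mu, var):
    # 1-D Gaussian density
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def image_log_likelihood(features, grid, parts):
    # log P_obj({f_j}) = log sum_oc P(oc) prod_j sum_i pi_i p_L^i(f_j | oc),
    # with a uniform prior P(oc) over the candidate centers in `grid`.
    total = 0.0
    for oc in grid:  # marginalize the hidden object center
        per_center = 1.0 / len(grid)
        for x in features:  # features are independent given oc
            per_center *= sum(pi * gauss(x, oc + mu, var)
                              for pi, mu, var in parts)
        total += per_center
    return math.log(total)

# Hypothetical template: two narrow parts at offsets -1 and +1 from the
# center, plus one broad "background-like" part; entries are (pi, offset, var).
parts = [(0.45, -1.0, 0.1), (0.45, 1.0, 0.1), (0.10, 0.0, 25.0)]
grid = [0.1 * k for k in range(-50, 51)]
ll_good = image_log_likelihood([-1.0, 1.0], grid, parts)  # matches template
ll_bad = image_log_likelihood([-4.0, 4.0], grid, parts)   # does not
```

Note the cost is O(F · P · |grid|), mirroring the O(FP · N_x N_y) inference complexity discussed below.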
2.1 Learning

The maximum-likelihood solution for the parameters of the above model does not have a closed form. To train the model, the parameters are computed numerically using the approach of [14], minimizing a free energy $F_e$ associated with the model that is an upper bound on the negative log-likelihood. Following [14], we denote by $v = \{f_j\}$ the set of visible and by $h = \{o_c, \omega_{ij}\}$ the set of hidden variables. Let $D_{KL}$ be the K-L divergence:

$$F_e(Q, \theta) = D_{KL}\big(Q(h)\,\|\,P_\theta(h \mid v)\big) - \log P_\theta(v) = \int_h Q(h) \log \frac{Q(h)}{P_\theta(h, v)}\, dh \qquad (3)$$

In this bound, $Q(h)$ can be a simpler approximation of the posterior probability $P_\theta(h \mid v)$ that is used to compute estimates and update parameters. Minimizing eq. 3 with respect to $Q$ and $\theta$ under different restrictions produces a range of algorithms, including exact EM, variational learning and others [14]. Table 1 shows sample updates and the complexity of these algorithms, with a comparison to other relevant work. The background model is learnt before the object model is trained. As assumed earlier, for the Gaussian appearance model the background appearance is a single Gaussian, whose mean and variance are estimated as the sample mean and covariance. For the Local Topic model, the multinomial distribution is estimated as the sample histogram. The model for background feature location is uniform and has no parameters.

EM learning for the object model: In the E-step, the set of parameters $\theta$ is fixed and $F_e$ is minimized with respect to $Q(h)$ without restrictions. This is equivalent to computing the actual posteriors in EM [14, 15]. In this case the optimal solution factorizes as $Q(h) = Q(o_c)Q(\omega_{ij} \mid o_c) = P(o_c \mid v)P(\omega_{ij} \mid o_c, v)$. In the M-step, $F_e$ is minimized with respect to the parameters $\theta$ using the current estimate of $Q$. Due to the conditional independence introduced in the model, inference is tractable and thus the E-step can be computed efficiently. The overall complexity of inference is $O(FP \cdot N_x N_y)$.
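The unrestricted E-step posteriors $Q(o_c) = P(o_c \mid v)$ and $Q(\omega_{ij} \mid o_c)$ can be computed exactly in $O(FP \cdot N_x N_y)$, as noted above. A sketch in a toy 1-D setting (the part parameters are hypothetical; appearance is omitted):

```python
import math

def gauss(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def e_step(features, grid, parts):
    # Exact E-step: returns Q(oc) = P(oc | v) over the center grid and
    # Q(w_ij = 1 | oc), the per-center part responsibilities.
    q_oc, q_w = [], []
    for oc in grid:
        joint = 1.0 / len(grid)  # uniform prior P(oc)
        resp = []
        for x in features:
            emis = [pi * gauss(x, oc + mu, var) for pi, mu, var in parts]
            z = sum(emis)
            joint *= z
            resp.append([e / z for e in emis])  # Q(w_ij | oc) for feature j
        q_oc.append(joint)
        q_w.append(resp)
    z = sum(q_oc)
    return [q / z for q in q_oc], q_w

# Hypothetical toy template: (emission prior, offset, variance) per part
parts = [(0.45, -1.0, 0.1), (0.45, 1.0, 0.1), (0.10, 0.0, 25.0)]
grid = [0.1 * k for k in range(-30, 31)]
q_oc, q_w = e_step([-1.0, 1.0], grid, parts)
map_center = grid[max(range(len(grid)), key=lambda k: q_oc[k])]
```

With features at -1 and +1 and parts at offsets ±1, the posterior over the center peaks near 0, as expected.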
| Model | Update for $\mu^i_L$ | Complexity | Time | (F, P) |
|---|---|---|---|---|
| Fergus et al. | N/A | $F^P$ | 36 hrs | (30, 7) |
| Model (EM) | $\mu^i_L \leftarrow \frac{\sum_n \sum_{o_c} Q(o_c) \sum_j Q(\omega_{ji} \mid o_c)\,\{x^j_L - o_c\}}{\sum_n \sum_{o_c} Q(o_c) \sum_j Q(\omega_{ji} \mid o_c)}$ | $FP \cdot N_x N_y$ | 3 hrs | (50, 30) |
| (Variational) | $\mu^i_L \leftarrow \frac{\sum_n \big\{ \sum_j Q(\omega_{ji})\,x^j_L - \sum_{o_c} Q(o_c)\,o_c \big\}}{\sum_n \sum_{o_c} Q(o_c) \sum_j Q(\omega_{ji})}$ | $FP + N_x N_y$ | 3 mins | (100, 30) |

Table 1: An example of an update, overall complexity and convergence time for our models and [5], for different numbers of features per image (F) and parts in the object model (P). There is an increase in speed of several orders of magnitude with respect to [5] on similar hardware.

Variational learning: In this approach a mean-field approximation of $Q$ is considered; in the E-step the parameters $\theta$ are fixed and $F_e$ is minimized with respect to $Q$ under the restriction that it factorizes as $Q(h) = Q(o_c)Q(\omega_{ij})$. This corresponds to a decoupling of location ($o_c$) and part-feature assignment ($\omega_{ij}$) in the approximation $Q$ of the posterior $P_\theta(h \mid v)$. In the M-step $\theta$ is fixed and the free energy $F_e$ is minimized with respect to this (mean-field) version of $Q$. A comparison between the EM and variational updates of the location mean $\mu^i_L$ of a part is shown in Table 1. The overall complexity of inference is now $O(FP) + O(N_x N_y)$; this represents orders of magnitude of speedup with respect to the already efficient EM learning. The impact of the variational approximation on performance is discussed in section 4.

2.2 Detection and localization

For detection of object presence, a natural decision rule is the likelihood ratio test. After the models are learnt, for each test image $P^{obj}_\theta(\{f_j\}) / P^{bg}(\{f_j\})$ is compared to a threshold to make the decision. Once the presence of the object is established, the most likely location is given by the MAP estimate of $o_c$. We assign parts in the model to the object if they exhibit consistent appearance and location.
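The likelihood ratio test of section 2.2 can be sketched in a toy 1-D setting: the object likelihood marginalizes the hidden center over the grid, while the background model emits each feature uniformly over the image. The part parameters and image length below are hypothetical.

```python
import math

def gauss(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def log_obj_likelihood(features, grid, parts):
    # log P_obj({f_j}) with the hidden center marginalized over `grid`
    total = 0.0
    for oc in grid:
        p = 1.0 / len(grid)
        for x in features:
            p *= sum(pi * gauss(x, oc + mu, var) for pi, mu, var in parts)
        total += p
    return math.log(total)

def detect(features, grid, parts, image_len, threshold=0.0):
    # Likelihood ratio test: declare the object present when
    # log P_obj({f_j}) - log P_bg({f_j}) exceeds the threshold.
    # Background: features i.i.d. uniform over an image of length image_len.
    log_bg = len(features) * math.log(1.0 / image_len)
    return log_obj_likelihood(features, grid, parts) - log_bg > threshold

# Hypothetical template and image size
parts = [(0.45, -1.0, 0.1), (0.45, 1.0, 0.1), (0.10, 0.0, 25.0)]
grid = [0.1 * k for k in range(-50, 51)]
hit = detect([-1.0, 1.0], grid, parts, image_len=20.0)   # template-like
miss = detect([-4.0, 4.0], grid, parts, image_len=20.0)  # scattered
```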
To remove model parts representing background, we use a threshold on the entropy of the appearance distribution for the LT model (on the determinant of the location covariance for the G model). The MAP estimate of which features in the image are assigned to parts in the model (marginalizing over the object center) determines the support of the object. Bounding boxes include all keypoints assigned to the object, as well as the means of all model parts belonging to the object, even if no keypoint is observed to be produced by such a part. This explicitly handles occlusion (fig. 1).

3 Experimental setup

The performance of the method depends on the feature detector making consistent extractions across different instances of objects of the same type. We use the scale-saliency interest point detector proposed in [13]. This method selects regions exhibiting unpredictable characteristics over both location and scale. The $F$ regions with the highest saliency in the image provide the features for learning and recognition. After the keypoints are detected, patches are extracted around these points and scale-normalized. A SIFT descriptor [16] (without orientation) is obtained from these patches. For model G, due to the high dimensionality of the resulting space, PCA is performed, choosing $k = 15$ components to represent the appearance of a feature. For model LT, we instead cluster the appearance of features in the original SIFT space with a Gaussian mixture model with $k = 250$ components and use the most likely cluster as the feature appearance representation. For all experiments we use $P = 30$ parts. The number of features is $F = 50$ for the G model and $F = 100$ for the LT model, with $N_x \times N_y = 238$. We test our approach on the Caltech 5 dataset: faces, motorbikes, airplanes and spotted cats vs. Caltech background, and cars rear 2001 vs. cars background [5]. We initialize the appearance and location of the parts with $P$ randomly chosen features from the training set. The stopping criterion is the change in $F_e$.
Figure 1: Local Topic model for the faces, motorbikes and airplanes datasets [5]. In (a) the most likely location of the object center is plotted as a black circle. With respect to this reference, the spatial distribution (2-D Gaussian) of each part associated with the object is plotted in green. In (b) the centers of all extracted features are depicted. Blue ones are assigned by the model to the object, and red ones to the background. The bounding box is plotted in blue. Image (c) shows how many features in the image are assigned to the same part (a property of our model, not shared by [5]): six parts are chosen, their spatial distribution is plotted (green), and the features assigned to them are depicted in blue. Eyes (4, 5), mouth (3) and left ear (6) have multiple assignments each. For each of these parts, image (d) shows the best matches among features extracted from the dataset. Note that the local topic model can learn parts uniform in appearance (e.g. eyes) but also more complex parts (e.g. the mouth part includes moustaches, beards and chins). The G appearance model and [5] do not have this property. The images in (e) show the robustness of the method in cases with occlusion, missed detections and one caricature of a face. Images (f) and (g) show plots for motorbikes, and (h) and (i) for airplanes.

4 Results

Detection: Although we believe that localization is an essential performance criterion, it is useless if the approach cannot detect objects. Figure 2 depicts equal error rate detection performance for our models and [5, 3, 8]. We cannot compare our range of performance (over train/test splits), shown on the plot, because this data is not available for other approaches. Our method is robust to initialization (the variance over starting points is negligible compared to the train/test split variance). The results show higher detection performance for all our algorithms compared to the generative model presented in [5].
The local topic (LT) model performs better than the model presented in [8]. The purely discriminative approach presented in [3] shows higher detection performance with different (“optimal combination”) features, but performs worse with the features we are using. The LT model showed consistently higher detection performance than the Gaussian (G) model. For both the LT and G models, the variational approximations showed discriminative power similar to that of the respective exact models. Unlike [5, 3], our model is currently not scale invariant. Nevertheless, the probabilistic nature of the model allows for some tolerance to scale changes. In datasets of manageable size, it is inevitable that the background is correlated with the object. The result is that most modern methods that infer the template from partially supervised data can tend to model some background parts as lying on the object (see figure 4). Doing so tends to increase detection performance. It is reasonable to expect that this increase will not persist in the face of a dramatic change in background. One symptom of this phenomenon (as in classical overfitting) is that methods that detect very well may be bad at localization, because they cannot separate the object from the background. We are able to avoid this difficulty by predicting object extent, conditioned on detection, using only a subset of parts known to have relatively low variance in location or appearance given the object center. We do not yet have an estimate of the increase in detection rate resulting from overfitting; this is a topic of ongoing research. In our opinion, if a method can detect but performs poorly at localization, the reason may be overfitting.

Localization: Previous work on localization required aligned images (bounding boxes) or segmentation masks [7, 6]. A novel property of our model is that it learns to localize the object and determine its spatial extent without supervision. Figure 1 shows learned models and examples of localization.
There is no standard measure of localization performance in an unsupervised setting. In such a setting, the object center can be learnt at any position in the image, provided that this position is consistent across all images. We thus use as our performance measure the standard deviation of estimated object centers and bounding boxes (obtained as in §2.2), after normalizing the estimates in each image to a coordinate system in which the ground truth bounding box is the unit square (0, 0)-(1, 1). As a baseline we use the rectified center of the image. All objects of interest in both the airplane and motorbike datasets are centered in the image. As a result the baseline is a good predictor of the object center and is hard to beat. However, in the faces dataset there is much more variation in location; there the advantage of our approach becomes clear. Figure 3 shows the scatterplot of normalized object centers and bounding boxes. The table in figure 2 shows the localization performance results using the proposed metric.

Variational approximation comparison: Unusually for a variational approximation, it is possible to compare it to the exact model; the results are excellent, especially for the G model. This is consistent with our observation that during learning the variational approximation is good in this case (the free energy bound appears tight). On the other hand, for the LT model the variational bound is loose during learning, and localization performance is comparable but slightly lower than that of the exact LT model. This may be explained by the fact that the Gaussian appearance model is less flexible than the topic model, so the G model can better tolerate the decoupling of location and appearance.
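The localization metric itself is straightforward to sketch: each predicted object center is mapped into the frame where its ground truth bounding box is the unit square, and the standard deviation of these normalized coordinates is reported. The predictions and boxes below are hypothetical.

```python
import math

def normalized_centers(preds, boxes):
    # Map each predicted center into the frame where the ground-truth
    # box (x0, y0, x1, y1) becomes the unit square (0, 0)-(1, 1).
    out = []
    for (px, py), (x0, y0, x1, y1) in zip(preds, boxes):
        out.append(((px - x0) / (x1 - x0), (py - y0) / (y1 - y0)))
    return out

def std_dev(vals):
    m = sum(vals) / len(vals)
    return math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))

# Hypothetical predicted centers and ground-truth boxes for three images
preds = [(55.0, 42.0), (118.0, 90.0), (33.0, 21.0)]
boxes = [(30.0, 20.0, 80.0, 60.0), (90.0, 70.0, 140.0, 110.0),
         (10.0, 0.0, 60.0, 40.0)]
norm = normalized_centers(preds, boxes)
sx = std_dev([p[0] for p in norm])  # reported per direction, as in Fig. 2
sy = std_dev([p[1] for p in norm])
```

Consistent predictions relative to the ground truth boxes give a small standard deviation regardless of where in each image the object sits.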
[Figure 2, left: equal error rate detection plots for the Airplanes, Motorbikes, Faces, Cars rear and Spotted Cats datasets, comparing models G, GV, LT, LV, C, DL, B and DLc.]

| Model | Bbox vert (%) | Bbox horz (%) | Obj. center vert (%) | Obj. center horz (%) |
|---|---|---|---|---|
| Faces G | 8.88 | 21.88 | 4.58 | 16.59 |
| Faces GV | 8.64 | 16.10 | 4.47 | 16.10 |
| Faces LT | 8.17 | 13.16 | 3.92 | 6.45 |
| Faces LV | 7.86 | 18.62 | 3.76 | 11.04 |
| Faces BL | - | - | 4.50 | 24.71 |
| Airplanes LT | 19.30 | 9.09 | 10.06 | 4.42 |
| Airplanes BL | - | - | 10.37 | 4.47 |
| Motorbikes LT | 8.41 | 7.33 | 4.93 | 4.65 |
| Motorbikes BL | - | - | 5.11 | 2.01 |

Figure 2: Plots on the left show detection performance on the Caltech 5 datasets [5]. Equal error rate is reported. The original performance of the constellation model [5] is denoted by C. We denote by DLc the performance (best in the literature) reported by [3] using an optimal combination of feature types, and by DL the performance using our features. The performance of [8] is denoted by B. We show performance for our G model (G), LT model (LT) and their variational approximations (GV) and (LV), respectively. We report median performance (×) over 20 runs and the performance range excluding the 10% best and 10% worst runs. On the right we show localization performance for all models on the Faces dataset and the performance of the best model (LT) on all datasets. Standard deviation is reported in percentage units with respect to the ground truth bounding box. For bounding boxes we average the standard deviation in each direction. BL denotes baseline performance.

Figure 3: The airplane and motorbike datasets are aligned, so the image-center baseline (b), (d) performs well there. Our localization performs similarly (a), (c). There is more variation in location in the faces dataset. Scatterplot (f) shows the baseline performance and (g) shows the performance of our model. (e) shows the bounding boxes computed by our approach (LT model). Object centers and bounding boxes are rectified using the ground truth bounding boxes (blue).
No information about the location or spatial extent of the object is given to the algorithm.

Figure 4: Approaches like [3] do not use geometric constraints during learning. Therefore, correlation between background and object in the dataset is incorporated into the object model. In this case the ellipses represent the features used by the algorithm in [3] to decide the presence of a face and a motorbike (left images taken from [3]). On the other hand, our model (right images) can estimate the location and support of the object, even though no information about it is provided during learning. Blue circles represent the features assigned by the model to the face; the red points are centers of features assigned to the background (plot for the Local Topic Model).

5 Conclusions and future work

We have presented a novel model for object categories. Our model allows efficient unsupervised learning, bringing the learning time down to a few hours for full models and to minutes for variational approximations. The significant reduction in complexity allows us to handle many more parts and features than comparable algorithms. The detection performance of our approach compares favorably to the state of the art, even against purely discriminative approaches. Our model is also capable of learning the spatial extent of objects without supervision, with good results. This combination of fast learning and the ability to localize is required to tackle challenging problems in computer vision. Among the most interesting applications we see unsupervised segmentation, learning, detection and localization of multiple object categories, deformable objects and objects with varying aspect.

References

[1] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. Proc. of CVPR, pages 511-518, 2001.
[2] G. Csurka, C. Dance, L. Fan, and C. Bray. Visual Categorization with Bags of Keypoints. In Workshop on Stat. Learning in Comp. Vision, ECCV, pages 1-22, 2004.
[3] G. Dorkó and C. Schmid. Object class recognition using discriminative local features. Submitted to IEEE Trans. on PAMI, 2004.
[4] M. Weber, M. Welling, and P. Perona. Unsupervised Learning of Models for Recognition. Proc. of ECCV (1), pages 18-32, 2000.
[5] R. Fergus, P. Perona, and A. Zisserman. Object Class Recognition by Unsupervised Scale-Invariant Learning. Proc. of CVPR, pages 264-271, 2003.
[6] S. Agarwal and D. Roth. Learning a sparse representation for object detection. In Proc. of ECCV, volume 4, pages 113-130, Copenhagen, Denmark, May 2002.
[7] B. Leibe, A. Leonardis, and B. Schiele. Combined object categorization and segmentation with an implicit shape model. In Workshop on Stat. Learning in Comp. Vision, pages 17-32, May 2004.
[8] A. B. Hillel, T. Hertz, and D. Weinshall. Efficient learning of relational object class models. In Proc. of ICCV, pages 1762-1769, October 2005.
[9] R. Fergus, P. Perona, and A. Zisserman. A sparse object category model for efficient learning and exhaustive recognition. In Proc. of CVPR, pages 380-387, June 2005.
[10] D. Crandall, P. Felzenszwalb, and D. Huttenlocher. Spatial Priors for Part-Based Recognition using Statistical Models. In Proc. of CVPR, pages 10-17, 2005.
[11] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: an incremental Bayesian approach tested on 101 object categories. In Workshop on Generative-Model Based Vision, Washington, DC, June 2004.
[12] A. Opelt, M. Fussenegger, A. Pinz, and P. Auer. Generic object recognition with boosting. Technical Report TR-EMT-2004-01, EMT, TU Graz, Austria, 2004. Submitted to IEEE Trans. on PAMI.
[13] T. Kadir and M. Brady. Saliency, Scale and Image Description. IJCV, 45(2):83-105, 2001.
[14] B. Frey and N. Jojic. A Comparison of Algorithms for Inference and Learning in Probabilistic Graphical Models. IEEE Trans. on PAMI, 27(9):1392-1416, 2005.
[15] R. Neal and G. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan, editor, Learning in Graphical Models, pages 355-368. MIT Press, Cambridge, MA, USA, 1999.
[16] D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91-110, 2004.
Beyond Gaussian Processes: On the Distributions of Infinite Networks

Ricky Der
Department of Mathematics, University of Pennsylvania
Philadelphia, PA 19104
rickyder@math.upenn.edu

Daniel Lee
Department of Electrical Engineering, University of Pennsylvania
Philadelphia, PA 19104
ddlee@seas.upenn.edu

Abstract

A general analysis of the limiting distribution of neural network functions is performed, with emphasis on non-Gaussian limits. We show that with i.i.d. symmetric stable output weights, and more generally with weights distributed from the normal domain of attraction of a stable variable, the neural functions converge in distribution to stable processes. Conditions are also investigated under which Gaussian limits do occur when the weights are independent but not identically distributed. Some particularly tractable classes of stable distributions are examined, along with the possibility of learning with such processes.

1 Introduction

Consider the model

$$f_n(x) = \frac{1}{s_n} \sum_{j=1}^{n} v_j h(x; u_j) \equiv \frac{1}{s_n} \sum_{j=1}^{n} v_j h_j(x) \qquad (1)$$

which can be viewed as a multi-layer perceptron with input $x$, hidden functions $h$, weights $u_j$, output weights $v_j$, and $s_n$ a sequence of normalizing constants. The work of Radford Neal [1] showed that, under certain assumptions on the parameter priors $\{v_j, h_j\}$, the distribution over the implied network functions $f_n$ converges to that of a Gaussian process in the large-network limit $n \to \infty$. The main feature of this derivation is an invocation of the classical Central Limit Theorem (CLT). While one cavalierly speaks of “the” central limit theorem, there are in actuality many different CLTs, of varying generality and effect. All are concerned with the limits of suitably normalised sums of independent random variables (or where some condition is imposed so that no one variable dominates the sum¹), but the limits themselves differ greatly: Gaussian, stable, infinitely divisible, or, discarding the infinitesimal assumption, none of these.
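Neal's Gaussian-process limit for eq. (1) can be checked empirically: with finite-variance i.i.d. output weights and $s_n = \sqrt{n}$, draws of $f_n(x)$ behave like a zero-mean Gaussian for large $n$. A minimal simulation, assuming tanh hidden units with standard normal input weights (a hypothetical choice; the analysis holds for any bounded $h$):

```python
import math
import random

random.seed(0)

def network_sample(n, x):
    # One draw of f_n(x) = (1/s_n) * sum_j v_j h(x; u_j) with s_n = sqrt(n).
    # Hypothetical choices: h(x; u) = tanh(u * x), u_j ~ N(0, 1), and
    # finite-variance output weights v_j ~ N(0, 1).
    total = 0.0
    for _ in range(n):
        u = random.gauss(0.0, 1.0)
        v = random.gauss(0.0, 1.0)
        total += v * math.tanh(u * x)
    return total / math.sqrt(n)

# With finite-variance i.i.d. weights the classical CLT applies, so for
# large n the draws of f_n(x) should look zero-mean Gaussian with variance
# E[v^2] E[tanh(u x)^2] (roughly 0.39 at x = 1).
vals = [network_sample(500, 1.0) for _ in range(2000)]
mean = sum(vals) / len(vals)
var = sum((v - mean) ** 2 for v in vals) / len(vals)
```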
It follows that, in general, the asymptotic process for (1) may not be Gaussian. The following questions then arise: what is the relationship between choices of distributions on the model priors and the asymptotic distribution over the induced neural functions? Under what conditions does the Gaussian approximation hold? If there exist non-Gaussian limit points, is it possible to construct analogous generalizations of Gaussian process regression?

¹Typically called an infinitesimal condition; see [4].

Previous work on these problems consists mainly of Neal's publication [1], which established that when the output weights $v_j$ are of finite variance and i.i.d., the limiting distribution is a Gaussian process. Additionally, it was shown that when the weights are i.i.d. symmetric stable (SS), the first-order marginal distributions of the functions are also SS. Unfortunately, no mathematical analysis was presented to show that the higher-order distributions converge, though empirical evidence was suggestive of that hypothesis. Moreover, the exact form of the higher-dimensional distributions remained elusive. This paper conducts a further investigation of these questions, concentrating on the cases where the weight priors can be 1) of infinite variance, and 2) non-i.i.d. Such assumptions fall outside the ambit of the classical CLT, but are amenable to more general limit methods. In Section 2, we give a general classification of the possible limiting processes that may arise under an i.i.d. assumption on output weights distributed from a certain class (roughly speaking, those weights with tails asymptotic to a power law) and provide explicit formulae for all the joint distribution functions. As a byproduct, Neal's preliminary analysis is completed, a full multivariate prescription attained, and the convergence of the finite-dimensional distributions proved. The subsequent section considers non-i.i.d.
priors, specifically independent priors where the “identically distributed” assumption is discarded. An example is presented in which a finite-variance non-Gaussian process acts as a limit point for a nontrivial infinite network, followed by an investigation, via the Lindeberg-Feller theorem, of conditions under which the Gaussian approximation is valid. Finally, we raise the possibility of replacing network models with the processes themselves for learning applications: here, motivated by the foregoing limit theorems, the set of stable processes forms a natural generalization of the Gaussian case. Classes of stable stochastic processes are examined whose parameterizations are particularly simple, along with preliminary applications to the nonlinear regression problem.

2 Neural Network Limits

Referring to (1), we make the following assumptions: $h_j(x) \equiv h(x; u_j)$ are uniformly bounded in $x$ (as for instance occurs if $h$ is associated with some fixed nonlinearity), and $\{u_j\}$ is an i.i.d. sequence, so that $h_j(x)$ are i.i.d. for fixed $x$ and independent of $\{v_j\}$. With these assumptions, the choice of output priors $v_j$ will tend to dictate large-network behavior, independently of $u_j$. In the sequel, we restrict ourselves to functions $f_n(x) : \mathbb{R} \to \mathbb{R}$, as the respective proofs for the generalizations of $x$ and $f_n$ to higher-dimensional spaces are routine. Finally, all random variables are assumed to have zero mean whenever first moments exist. For brevity, we present only sketches of proofs.

2.1 Limits with i.i.d. priors

The Gaussian distribution has the feature that if $X_1$ and $X_2$ are statistically independent copies of a Gaussian variable $X$, then any linear combination of them is also Gaussian, i.e. $aX_1 + bX_2$ has the same distribution as $cX + d$ for some $c$ and $d$. More generally, the stable distributions [5], [6, Chap. 17] are defined to be the set of all distributions satisfying this “closure” property.
If one further demands symmetry of the distribution, then it must have characteristic function $\Phi(t) = e^{-\sigma^\alpha |t|^\alpha}$, for parameters $\sigma > 0$ (called the spread) and $0 < \alpha \le 2$, termed the index. Since these characteristic functions are not generally twice differentiable at $t = 0$, the variances are infinite, the Gaussian distribution being the only finite-variance stable distribution, associated with index $\alpha = 2$. The attractive feature of stable variables, by definition, is closure under the formation of linear combinations: the linear combination of any two independent stable variables is another stable variable of the same index. Moreover, the stable distributions are attraction points of distributions under a linear combiner operator, and indeed the only such distributions, in the following sense: if $\{Y_j\}$ are i.i.d., and $a_n + \frac{1}{s_n} \sum_{j=1}^{n} Y_j$ converges in distribution to $X$, then $X$ must be stable [5]. This fact already has consequences for our network model (1), and implies that, under i.i.d. priors $v_j$, and assuming (1) converges at all, convergence can occur only to stable variables, for each $x$. Multivariate analogues are defined similarly: we say a random vector $X$ is (strictly) stable if, for every $a, b \in \mathbb{R}$, there exists a constant $c$ such that $aX_1 + bX_2 = cX$, where $X_i$ are independent copies of $X$ and the equality is in distribution. A symmetric stable random vector is one which is stable and for which the distribution of $X$ is the same as that of $-X$. The following important classification theorem gives an explicit Fourier-domain description of all multivariate symmetric stable distributions:

Theorem 1 (Kuelbs [5]). $X$ is a symmetric $\alpha$-stable vector if and only if it has characteristic function

$$\Phi(t) = \exp\left\{ -\int_{S^{d-1}} |\langle t, s \rangle|^\alpha \, d\Gamma(s) \right\} \qquad (2)$$

where $\Gamma$ is a finite measure on the unit $(d-1)$-sphere $S^{d-1}$, and $0 < \alpha \le 2$.

Remark: (2) remains unchanged on replacing $\Gamma$ by the symmetrized measure $\tilde{\Gamma}(A) = \frac{1}{2}(\Gamma(A) + \Gamma(-A))$, for all Borel sets $A$.
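Symmetric α-stable variables with the characteristic function above can be simulated directly via the Chambers-Mallows-Stuck method, and the empirical characteristic function compared against $e^{-\sigma^\alpha |t|^\alpha}$. A sketch for $\sigma = 1$ (the sampler is a standard construction, not taken from the paper):

```python
import cmath
import math
import random

random.seed(1)

def sas_sample(alpha):
    # Chambers-Mallows-Stuck sampler for a standard (spread sigma = 1)
    # symmetric alpha-stable variable, 0 < alpha <= 2.
    u = random.uniform(-math.pi / 2, math.pi / 2)
    w = random.expovariate(1.0)
    if alpha == 1.0:
        return math.tan(u)  # alpha = 1 is the standard Cauchy
    return (math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
            * (math.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

def empirical_cf(samples, t):
    # Monte Carlo estimate of E[exp(i t X)]
    return sum(cmath.exp(1j * t * x) for x in samples) / len(samples)

alpha, t = 1.5, 0.7
xs = [sas_sample(alpha) for _ in range(20000)]
est = empirical_cf(xs, t)
target = math.exp(-abs(t) ** alpha)  # exp(-sigma^alpha |t|^alpha), sigma = 1
```

By symmetry, the imaginary part of the empirical characteristic function should also be near zero.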
In this case, the (unique) symmetrized measure $\tilde\Gamma$ is called the spectral measure of the stable random vector $X$. Finally, stable processes are defined as indexed sets of random variables whose finite-dimensional distributions are (multivariate) stable. First we establish the following preliminary result.

Lemma 1. Let $v$ be a symmetric stable random variable of index $0 < \alpha \le 2$ and spread $\sigma > 0$. Let $h$ be independent of $v$ with $E|h|^\alpha < \infty$. If $y = hv$, and $\{y_i\}$ are i.i.d. copies of $y$, then $S_n = \frac{1}{n^{1/\alpha}} \sum_{i=1}^n y_i$ converges in distribution to an $\alpha$-stable variable with characteristic function $\Phi(t) = \exp\{-|\sigma t|^\alpha E|h|^\alpha\}$.

Proof. This follows by computing the characteristic function $\Phi_{S_n}$, then using standard theorems in measure theory (e.g. [4]) to obtain $\lim_{n\to\infty} \log \Phi_{S_n}(t) = -|\sigma t|^\alpha E|h|^\alpha$.

Now we can state the first network convergence theorem.

Proposition 1. Let the network (1) have symmetric stable i.i.d. weights $v_j$ of index $0 < \alpha \le 2$ and spread $\sigma$. Then $f_n(x) = \frac{1}{n^{1/\alpha}} \sum_{j=1}^n v_j h_j(x)$ converges in distribution to a symmetric $\alpha$-stable process $f(x)$ as $n \to \infty$. The finite-dimensional stable distribution of $(f(x_1), \ldots, f(x_d))$, where $x_i \in \mathbb{R}$, has characteristic function
$$\Psi(t) = \exp\left( -\sigma^\alpha E_h |\langle t, h\rangle|^\alpha \right) \tag{3}$$
where $h = (h(x_1), \ldots, h(x_d))$, and $h(x)$ is a random variable with the common distribution (across $j$) of the $h_j(x)$. Moreover, if $h = (h(x_1), \ldots, h(x_d))$ has joint probability density $p(h) = p(rs)$, with $s$ on the sphere $S^{d-1}$ and $r$ the radial component of $h$, then the finite measure $\Gamma$ corresponding to the multivariate stable distribution of $(f(x_1), \ldots, f(x_d))$ is given by
$$d\Gamma(s) = \left( \int_0^\infty r^{\alpha + d - 1} p(rs) \, dr \right) ds \tag{4}$$
where $ds$ is Lebesgue measure on $S^{d-1}$.

Proof. It suffices to show that every finite-dimensional distribution of $f(x)$ converges to a symmetric multivariate stable characteristic function. We have $\sum_{i=1}^d t_i f_n(x_i) = \frac{1}{n^{1/\alpha}} \sum_{j=1}^n v_j \sum_{i=1}^d t_i h_j(x_i)$ for constants $\{x_1, \ldots, x_d\}$ and $(t_1, \ldots, t_d) \in \mathbb{R}^d$. An application of Lemma 1 proves the statement.
The relation between the expectation in (3) and the stable spectral measure (4) is derived from a change of variable to spherical coordinates in the $d$-dimensional space of $h$.

Remark: When $\alpha = 2$, the exponent in the characteristic function (3) is a quadratic form in $t$, and (3) becomes the usual multivariate Gaussian distribution.

The above proposition is the rigorous completion of Neal's analysis, and gives the explicit form of the asymptotic process under i.i.d. S$\alpha$S weights. More generally, we can consider output weights from the normal domain of attraction of index $\alpha$, which, roughly, consists of those densities whose tails are asymptotic to $|x|^{-(\alpha+1)}$, $0 < \alpha < 2$ [6, pg. 547]. With a similar proof to that of the previous theorem, one establishes

Proposition 2. Let network (1) have i.i.d. weights $v_j$ from the normal domain of attraction of an S$\alpha$S variable with index $\alpha$ and spread $\sigma$. Then $f_n(x) = \frac{1}{n^{1/\alpha}} \sum_{j=1}^n v_j h_j(x)$ converges in distribution to a symmetric $\alpha$-stable process $f(x)$, with the joint characteristic functions given as in Proposition 1.

2.1.1 Example: Distributions with step-function priors

Let $h(x) = \mathrm{sgn}(a + ux)$, where $a$ and $u$ are independent Gaussians with zero mean. From (3) it is clear that the limiting network function $f(x)$ is a constant (in law, hence almost surely) as $|x| \to \infty$, so that the interesting behavior occurs in some “central region” $|x| < k$. Neal [1] has shown that when the output weights $v_j$ are Gaussian, the choice of the signum nonlinearity for $h$ gives rise to local Brownian motion in the central regime. There is a natural generalization of the Brownian process within the context of symmetric stable processes, called symmetric $\alpha$-stable Lévy motion. It is characterised by an indexed sequence $\{w_t : t \in \mathbb{R}\}$ satisfying i) $w_0 = 0$ almost surely, ii) independent increments, and iii) $w_t - w_s$ distributed symmetric $\alpha$-stable with spread $\sigma = |t - s|^{1/\alpha}$.
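The limiting behavior claimed for this example is easy to probe numerically. The sketch below is illustrative only (network size, grid, and seed are arbitrary choices, and the sampler is ours, not the paper's): it draws finite step-function networks with Gaussian ($\alpha = 2$) and Cauchy ($\alpha = 1$) output weights.

```python
import numpy as np

def step_network(xs, n, alpha, rng):
    """Sample f_n(x) = n**(-1/alpha) * sum_j v_j * sgn(a_j + u_j * x)
    with zero-mean Gaussian hidden parameters (a_j, u_j) and symmetric
    stable output weights v_j (alpha=2: Gaussian, alpha=1: Cauchy)."""
    a = rng.standard_normal(n)
    u = rng.standard_normal(n)
    if alpha == 2:
        v = rng.standard_normal(n)
    elif alpha == 1:
        v = rng.standard_cauchy(n)
    else:
        raise ValueError("this sketch only samples alpha in {1, 2}")
    h = np.sign(a[None, :] + u[None, :] * xs[:, None])  # step hidden units
    return (h * v[None, :]).sum(axis=1) / n ** (1.0 / alpha)

rng = np.random.default_rng(0)
xs = np.linspace(-2.0, 2.0, 400)
f_brownian = step_network(xs, 5000, alpha=2, rng=rng)  # locally Brownian path
f_levy = step_network(xs, 5000, alpha=1, rng=rng)      # locally Levy Cauchy path
```

Since $|\mathrm{sgn}(\cdot)| = 1$ for every unit, the value $f_n(x)$ at a fixed $x$ is exactly standard normal in the Gaussian case and exactly standard Cauchy in the Cauchy case, for any $n$; what the limit theorem adds is the joint (process) structure of the path, whose increments behave as described next.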
As we shall now show, the choice of step-function nonlinearity for $h$ and symmetric $\alpha$-stable priors for the $v_j$ leads to locally Lévy stable motion, which provides a theoretical exposition for the empirical observations in [1]. Fix two nearby positions $x$ and $y$, and select $\sigma = 1$ for notational simplicity. From (3) the random variable $f(x) - f(y)$ is symmetric stable with spread parameter $[E_h |h(x) - h(y)|^\alpha]^{1/\alpha}$. For step inputs, $|h(x) - h(y)|$ is non-zero only when the step located at $-a/u$ falls between $x$ and $y$. For small $|x - y|$ approximate the density of this event to be uniform, so that $E_h |h(x) - h(y)|^\alpha \sim |x - y|$. Hence locally, the increment $f(x) - f(y)$ is a symmetric stable variable with spread proportional to $|x - y|^{1/\alpha}$, which is condition (iii) of Lévy motion.

Next let us demonstrate that the increments are independent. Consider the vector $(f(x_1) - f(x_2), f(x_2) - f(x_3), \ldots, f(x_{n-1}) - f(x_n))$, where $x_1 < x_2 < \cdots < x_n$. Its joint characteristic function in the variables $t_1, \ldots, t_{n-1}$ can be calculated to be
$$\Phi(t_1, \ldots, t_{n-1}) = \exp\left( -E_h \left| t_1 (h(x_1) - h(x_2)) + \cdots + t_{n-1} (h(x_{n-1}) - h(x_n)) \right|^\alpha \right) \tag{5}$$
The disjointness of the intervals $(x_{i-1}, x_i)$ implies that the only events which have nonzero probability within the range $[x_1, x_n]$ are the events $|h(x_i) - h(x_{i-1})| = 2$ for some single $i$, and zero for all other indices. Letting $p_i$ denote the probabilities of those events, (5) reads
$$\Phi(t_1, \ldots, t_{n-1}) = \exp\left( -2^\alpha (p_1 |t_1|^\alpha + \cdots + p_{n-1} |t_{n-1}|^\alpha) \right) \tag{6}$$
which describes a vector of independent $\alpha$-stable random variables, as the characteristic function splits. Thus the limiting process has independent increments.

The differences between sample functions arising from Cauchy priors as opposed to Gaussian priors are evident from Fig. 1, which displays sample paths from Gaussian and Cauchy i.i.d. processes $w_n$ and their “integrated” versions $\sum_{i=1}^n w_i$, simulating the Lévy motions.

Figure 1: Sample functions: (a) i.i.d. Gaussian, (b) i.i.d. Cauchy, (c) Brownian motion, (d) Lévy Cauchy-stable motion.

The sudden jumps in the Cauchy motion arise from the presence of strong outliers in the respective Cauchy i.i.d. process, which would correspond, in the network, to hidden units with heavy weighting factors $v_j$.

2.2 Limits with non-i.i.d. priors

We begin with an interesting example, which shows that if the “identically distributed” assumption for the output weights is dispensed with, the limiting distribution of (1) can attain a non-stable (and non-Gaussian) form. Take $v_j$ to be independent random variables with $P(v_j = 2^{-j}) = P(v_j = -2^{-j}) = 1/2$. The characteristic functions are easily computed as $E[e^{itv_j}] = \cos(t/2^j)$. Now recall the Viète formula:
$$\prod_{j=1}^n \cos(t/2^j) = \frac{\sin t}{2^n \sin(t/2^n)} \tag{7}$$
Taking $n \to \infty$ shows that the limiting characteristic function is a sinc function, which corresponds to the uniform density. Selecting the signum nonlinearity for $h$, it is not difficult to show, with estimates on the tail of the product (7), that all finite-dimensional distributions of the neural process $f_n(x) = \sum_{j=1}^n v_j h_j(x)$ converge, so that $f_n$ converges in distribution to a random process whose first-order distributions are uniform.²

What conditions are required on independent, but not necessarily identically distributed, priors $v_j$ for convergence to the Gaussian? This question is answered by the classical Lindeberg-Feller theorem.

Theorem 2 (Central Limit Theorem, Lindeberg-Feller [4]). Let $v_j$ be a sequence of independent random variables each with zero mean and finite variance, define $s_n^2 = \mathrm{var}[\sum_{j=1}^n v_j]$, and assume $s_1 \ne 0$.
Then the sequence $\frac{1}{s_n} \sum_{j=1}^n v_j$ converges in distribution to an $N(0, 1)$ variable, if
$$\lim_{n \to \infty} \frac{1}{s_n^2} \sum_{j=1}^n \int_{|v| \ge \epsilon s_n} v^2 \, dF_{v_j}(v) = 0 \tag{8}$$
for each $\epsilon > 0$, and where $F_{v_j}$ is the distribution function for $v_j$.

[Footnote 2: An intuitive proof is as follows: one thinks of $\sum_j v_j$ as a binary expansion of real numbers in $[-1, 1]$; the prescription of the probability laws for $v_j$ implies all such expansions are equiprobable, manifesting in the uniform distribution.]

Condition (8) is called the Lindeberg condition, and imposes an “infinitesimal” requirement on the sequence $\{v_j\}$, in the sense that no one variable is allowed to dominate the sum. This theorem can be used to establish the following non-i.i.d. network convergence result.

Proposition 3. Let the network (1) have independent finite-variance weights $v_j$. Defining $s_n^2 = \mathrm{var}[\sum_{j=1}^n v_j]$, if the sequence $\{v_j\}$ is Lindeberg then $f_n(x) = \frac{1}{s_n} \sum_{j=1}^n v_j h_j(x)$ converges in distribution to a Gaussian process $f(x)$ of mean zero and covariance function $C(f(x), f(y)) = E[h(x)h(y)]$ as $n \to \infty$, where $h(x)$ is a variable with the common distribution of the $h_j(x)$.

Proof. Fix a finite set of points $\{x_1, \ldots, x_k\}$ in the input space, and look at the joint distribution of $(f_n(x_1), \ldots, f_n(x_k))$. We want to show these variables are jointly Gaussian in the limit as $n \to \infty$, by showing that every linear combination of the components converges in distribution to a Gaussian. Fixing $k$ constants $\mu_i$, we have $\sum_{i=1}^k \mu_i f_n(x_i) = \frac{1}{s_n} \sum_{j=1}^n v_j \sum_{i=1}^k \mu_i h_j(x_i)$. Define $\xi_j = \sum_{i=1}^k \mu_i h_j(x_i)$, and $\tilde s_n^2 = \mathrm{var}(\sum_{j=1}^n v_j \xi_j) = (E\xi^2) s_n^2$, where $\xi$ is a random variable with the common distribution of the $\xi_j$. Then for some $c > 0$:
$$\frac{1}{\tilde s_n^2} \sum_{j=1}^n \int_{|v_j \xi_j| \ge \epsilon \tilde s_n} |v_j(\omega) \xi_j(\omega)|^2 \, dP(\omega) \;\le\; \frac{c^2}{E\xi^2} \cdot \frac{1}{s_n^2} \sum_{j=1}^n \int_{|v_j| \ge \epsilon (E\xi^2)^{1/2} s_n / c} |v_j(\omega)|^2 \, dP(\omega)$$
The right-hand side can be made arbitrarily small, from the Lindeberg assumption on $\{v_j\}$; hence $\{v_j \xi_j\}$ is Lindeberg, from which the theorem follows. The covariance function is easy to calculate.

Corollary 1.
If the output weights $\{v_j\}$ are a uniformly bounded sequence of independent random variables, and $\lim_{n\to\infty} s_n = \infty$, then $f_n(x)$ in (1) converges in distribution to a Gaussian process.

The preceding corollary, besides giving an easily verifiable condition for Gaussian limits, demonstrates that the non-Gaussian convergence in the example opening Section 2.2 was made possible precisely because the weights $v_j$ decayed sufficiently quickly with $j$, with the result that $\lim_n s_n < \infty$.

3 Learning with Stable Processes

One of the original reasons for focusing machine learning interest on Gaussian processes was that they act as limit points of suitably constructed parametric models [2], [3]. The problem of learning a regression function, which was previously tackled by Bayesian inference on a modelling neural network, could be reconsidered by directly placing a Gaussian process prior on the fitting functions themselves. Yet already in early papers introducing the technique, reservations had been expressed concerning such wholesale replacement [2]. Gaussian processes did not seem to capture the richness of finite neural networks; for one, the dependencies between multiple outputs of a network vanish in the Gaussian limit.

Consider the simplest regression problem, that of estimating a state process $u(x)$ from observations $y(x_i)$, under the model
$$y(x) = u(x) + \epsilon(x) \tag{9}$$
where $\epsilon(x)$ is noise independent of $u$. The obvious generalization of Gaussian process regression involves the placement of a stable process prior of index $\alpha$ on $u$, and setting $\epsilon$ as i.i.d. stable noise of the same index.

Figure 2: Scatter plots of bivariate symmetric $\alpha$-stable distributions with discrete spectral measures. Top row: $\alpha = 1.5$; bottom row: $\alpha = 0.5$. Left to right: (a) $H$ = identity, (b) $H$ a rotation, (c) $H$ a $2 \times 3$ matrix with columns $(-1/16, \sqrt{3}/16)^T$, $(0, 1)^T$, $(1/16, \sqrt{3}/16)^T$.
Then the observations $y$ also form a stable process of index $\alpha$. Two advantages come with such a generalization. First, the use of a heavy-tailed distribution for $\epsilon$ will tend to produce more robust regression estimates relative to the Gaussian case; this robustness can be additionally controlled by the stability parameter $\alpha$. Secondly, a glance at the classification of Theorem 1 indicates that the correlation structure of stable vectors (hence processes) is significantly richer than that of the Gaussian; the space of $n$-dimensional stable vectors is already characterised by a whole space of measures, rather than an $n \times n$ covariance matrix. The use of such priors on the data $u$ affords a significant broadening of the interesting dependency relationships that may be assumed.

An understanding of the dependency structure of multivariate stable vectors can be first broached by considering the following basic class. Let $v$ be a vector of i.i.d. symmetric stable variables of the same index, and let $H$ be a matrix of appropriate dimension so that $x = Hv$ is well-defined. Then $x$ has a symmetric stable characteristic function, where the spectral measure $\tilde\Gamma$ in Theorem 1 is discrete, i.e. concentrated on a finite number of points. Divergences in the correlation structure are readily apparent even within this class. In the Gaussian case, there is no advantage in the selection of non-square matrices $H$, since the distribution of $x$ can always be obtained by a square mixing matrix $\tilde H$ with the same number of rows as $H$. Not so when $\alpha < 2$, for then the characteristic function for $x$ in general possesses $n$ fundamental discontinuities in higher-order derivatives, where $n$ is the number of columns of $H$.
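The discrete-spectral-measure class just described is easy to sample. The sketch below is not from the paper (only the $2 \times 3$ matrix follows Fig. 2(c); sample sizes and seed are arbitrary): mixing three i.i.d. Cauchy components through $H$ gives a bivariate symmetric 1-stable vector whose spectral measure sits on the normalized $\pm$columns of $H$, and each marginal is Cauchy with spread equal to the absolute sum of the corresponding row of $H$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mixing matrix as in Fig. 2(c): columns (-1/16, sqrt(3)/16), (0, 1), (1/16, sqrt(3)/16)
H = np.array([[-1.0 / 16.0, 0.0, 1.0 / 16.0],
              [np.sqrt(3.0) / 16.0, 1.0, np.sqrt(3.0) / 16.0]])

v = rng.standard_cauchy((3, 5000))  # i.i.d. symmetric 1-stable (Cauchy) inputs
x = H @ v                           # bivariate symmetric 1-stable samples

# First marginal is Cauchy with spread |-1/16| + 0 + |1/16| = 1/8;
# the median of |x[0]| estimates that spread.
spread_est = np.median(np.abs(x[0]))
```

Scatter-plotting `x[0]` against `x[1]` reproduces the star-shaped dependency structure of Fig. 2(c): extreme samples concentrate along the directions of the columns of $H$.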
Furthermore, in the square case, replacement of $H$ with $HR$, where $R$ is any rotation matrix, leaves the distribution invariant when $\alpha = 2$; for non-Gaussian stable vectors, the mixing matrices $H$ and $H'$ give rise to the same distribution only when $|H^{-1}H'|$ is a permutation matrix, where $|\cdot|$ is applied component-wise. Figure 2 illustrates the variety of dependency structures which can be attained as $H$ is changed. A number of techniques already exist in the statistical literature for the estimation of the spectral measure (and hence the mixing matrix $H$) of multivariate stable vectors from empirical data. The infinite-dimensional generalization of the above situation gives rise to the set of stable processes produced as time-varying filtered versions of i.i.d. stable noise; similar to the Gaussian process, these are parameterized by a centering (mean) function $\mu(x)$ and a bivariate filter function $h(x, \nu)$ encoding dependency information.

Another simple family of stable processes consists of the so-called sub-Gaussian processes. These are processes defined by $u(x) = A^{1/2} G(x)$, where $A$ is a totally right-skewed $\alpha/2$-stable variable [5], and $G$ a Gaussian process of mean zero and covariance $K$. The result is a symmetric $\alpha$-stable random process with finite-dimensional characteristic functions of the form
$$\Phi(t) = \exp\left( -\tfrac{1}{2} |\langle t, Kt\rangle|^{\alpha/2} \right) \tag{10}$$
The sub-Gaussian processes are then completely parameterized by the statistics of the subordinating Gaussian process $G$. Even more, they have the following linear regression property [5]: if $Y_1, \ldots, Y_n$ are jointly sub-Gaussian, then
$$E[Y_n \mid Y_1, \ldots, Y_{n-1}] = a_1 Y_1 + \cdots + a_{n-1} Y_{n-1}. \tag{11}$$
Unfortunately, the regression is somewhat trivial, because a calculation shows that the coefficients of regression $\{a_i\}$ are the same as in the case where the $Y_i$ are assumed jointly Gaussian! Indeed, this curious property appears anytime the variables take the form $Y = BG$, for any fixed scalar random variable $B$ and Gaussian vector $G$.
It follows that the predictive mean estimates for (10) employing sub-Gaussian priors are identical to the estimates under a Gaussian hypothesis. On the other hand, the conditional distribution of $Y_n \mid Y_1, \ldots, Y_{n-1}$ differs greatly from the Gaussian, and is in general neither stable nor symmetric about its conditional mean. From Fig. 2 one even sees that the conditional distribution may be multimodal, in which case predictive mean estimates are not particularly valuable. More useful are MAP estimates, which in the Gaussian scenario coincide with the conditional mean. In any case, regression on stable processes suggests the need to compute and investigate the entire a posteriori probability law.

The main thrust of our foregoing results indicates that the class of possible limit points of network functions is significantly richer than the family of Gaussian processes, even under relatively restricted (e.g. i.i.d.) hypotheses. Gaussian processes are the appropriate models of large networks with finite-variance priors in which no one component dominates another, but when the finite-variance assumption is discarded, stable processes become the natural limit points. Non-stable processes can be obtained with the appropriate choice of non-i.i.d. priors, even in an infinite network. Our discussion of the stable process regression problem has principally been confined to an exposition of the basic theoretical issues and principles involved, rather than to algorithmic procedures. Nevertheless, since simple closed-form expressions exist for the characteristic functions, the predictive probability laws can all in principle be computed with multi-dimensional Fourier transform techniques. Stable variables form mathematically natural generalisations of the Gaussian, with some fundamental, but compelling, differences which suggest additional variety and flexibility in learning applications.

References

[1] R. Neal, Bayesian Learning for Neural Networks.
New York: Springer-Verlag, 1996.
[2] D. MacKay, Introduction to Gaussian Processes. Extended lecture notes, NIPS 1997.
[3] M. Seeger, Gaussian Processes for Machine Learning. International Journal of Neural Systems 14(2), 2004, 69–106.
[4] C. Burrill, Measure, Integration and Probability. New York: McGraw-Hill, 1972.
[5] G. Samorodnitsky & M. Taqqu, Stable Non-Gaussian Random Processes. New York: Chapman & Hall, 1994.
[6] W. Feller, An Introduction to Probability Theory and Its Applications, Vol. 2. New York: John Wiley & Sons, 1966.
Learning Depth from Single Monocular Images Ashutosh Saxena, Sung H. Chung, and Andrew Y. Ng Computer Science Department Stanford University Stanford, CA 94305 asaxena@stanford.edu, {codedeft,ang}@cs.stanford.edu Abstract We consider the task of depth estimation from a single monocular image. We take a supervised learning approach to this problem, in which we begin by collecting a training set of monocular images (of unstructured outdoor environments which include forests, trees, buildings, etc.) and their corresponding ground-truth depthmaps. Then, we apply supervised learning to predict the depthmap as a function of the image. Depth estimation is a challenging problem, since local features alone are insufficient to estimate depth at a point, and one needs to consider the global context of the image. Our model uses a discriminatively-trained Markov Random Field (MRF) that incorporates multiscale local- and global-image features, and models both depths at individual points as well as the relation between depths at different points. We show that, even on unstructured scenes, our algorithm is frequently able to recover fairly accurate depthmaps. 1 Introduction Recovering 3-D depth from images is a basic problem in computer vision, and has important applications in robotics, scene understanding and 3-D reconstruction. Most work on visual 3-D reconstruction has focused on binocular vision (stereopsis) [1] and on other algorithms that require multiple images, such as structure from motion [2] and depth from defocus [3]. Depth estimation from a single monocular image is a difficult task, and requires that we take into account the global structure of the image, as well as use prior knowledge about the scene. In this paper, we apply supervised learning to the problem of estimating depth from single monocular images of unstructured outdoor environments, ones that contain forests, trees, buildings, people, buses, bushes, etc. 
In related work, Michels, Saxena & Ng [4] used supervised learning to estimate 1-D distances to obstacles, for the application of autonomously driving a remote control car. Nagai et al. [5] performed surface reconstruction from single images for known, fixed objects such as hands and faces. Gini & Marchi [6] used single-camera vision to drive an indoor robot, but relied heavily on known ground colors and textures. Shape from shading [7] offers another method for monocular depth reconstruction, but is difficult to apply to scenes that do not have fairly uniform color and texture. In work done independently of ours, Hoiem, Efros and Hebert (personal communication) also considered monocular 3-D reconstruction, but focused on generating 3-D graphical images rather than accurate metric depthmaps. In this paper, we address the task of learning full depthmaps from single images of unconstrained environments. Markov Random Fields (MRFs) and their variants are a workhorse of machine learning, and have been successfully applied to numerous problems in which local features were insufficient and more contextual information had to be used. Examples include text segmentation [8], object classification [9], and image labeling [10]. To model spatial dependencies in images, Kumar and Hebert's Discriminative Random Fields algorithm [11] uses logistic regression to identify man-made structures in natural images. Because MRF learning is intractable in general, most of these models are trained using pseudo-likelihood. Our approach is based on capturing depths and relationships between depths using an MRF. We began by using a 3-D distance scanner to collect training data, which comprised a large set of images and their corresponding ground-truth depthmaps.
Using this training set, the MRF is discriminatively trained to predict depth; thus, rather than modeling the joint distribution of image features and depths, we model only the posterior distribution of the depths given the image features. Our basic model uses L2 (Gaussian) terms in the MRF interaction potentials, and captures depths and interactions between depths at multiple spatial scales. We also present a second model that uses L1 (Laplacian) interaction potentials. Learning in this model is approximate, but exact MAP posterior inference is tractable (similar to Gaussian MRFs) via linear programming, and it gives significantly better depthmaps than the simple Gaussian model. 2 Monocular Cues Humans appear to be extremely good at judging depth from single monocular images. [12] This is done using monocular cues such as texture variations, texture gradients, occlusion, known object sizes, haze, defocus, etc. [4, 13, 14] For example, many objects’ texture will look different at different distances from the viewer. Texture gradients, which capture the distribution of the direction of edges, also help to indicate depth.1 Haze is another depth cue, and is caused by atmospheric light scattering. Most of these monocular cues are “contextual information,” in the sense that they are global properties of an image and cannot be inferred from small image patches. For example, occlusion cannot be determined if we look at just a small portion of an occluded object. Although local information such as the texture and color of a patch can give some information about its depth, this is usually insufficient to accurately determine its absolute depth. For another example, if we take a patch of a clear blue sky, it is difficult to tell if this patch is infinitely far away (sky), or if it is part of a blue object. Due to ambiguities like these, one needs to look at the overall organization of the image to determine depths. 
3 Feature Vector

In our approach, we divide the image into small patches, and estimate a single depth value for each patch. We use two types of features: absolute depth features, used to estimate the absolute depth at a particular patch, and relative features, which we use to estimate relative depths (the magnitude of the difference in depth between two patches). We chose features that capture three types of local cues: texture variations, texture gradients, and haze. Texture information is mostly contained within the image intensity channel,² so we apply Laws' masks [15, 4] to this channel to compute the texture energy (Fig. 1). Haze is reflected in the low-frequency information in the color channels, and we capture this by applying a local averaging filter (the first Laws' mask) to the color channels. Lastly, to compute an estimate of texture gradient that is robust to noise, we convolve the intensity channel with six oriented edge filters (shown in Fig. 1).

[Footnote 1: For example, a tiled floor with parallel lines will appear to have tilted lines in an image. The distant patches will have larger variations in the line orientations, and nearby patches will have smaller variations in line orientations. Similarly, a grass field viewed at different distances will have different texture gradient distributions.]

[Footnote 2: We represent each image in YCbCr color space, where Y is the intensity channel, and Cb and Cr are the color channels.]

Figure 1: The convolutional filters used for texture energies and gradients. The first nine are 3×3 Laws' masks. The last six are oriented edge detectors spaced at 30° intervals. The nine Laws' masks are used to perform local averaging, edge detection and spot detection.

Figure 2: The absolute depth feature vector for a patch includes features from its immediate neighbors and its more distant neighbors (at larger scales). The relative depth features for each patch use histograms of the filter outputs.
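The nine 3×3 Laws' masks referenced above are conventionally built as outer products of three 1-D kernels (average L3, edge E3, spot S3). A sketch follows; the helper names and the `sliding_window_view`-based correlation are our own, not the paper's code:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# 1-D kernels: local average (L3), edge (E3), spot (S3)
L3 = np.array([1.0, 2.0, 1.0])
E3 = np.array([-1.0, 0.0, 1.0])
S3 = np.array([-1.0, 2.0, -1.0])

# The nine 3x3 Laws' masks are all outer products of the kernels above.
laws_masks = [np.outer(a, b) for a in (L3, E3, S3) for b in (L3, E3, S3)]

def texture_energy(patch, mask, k=1):
    """Sum of |response|**k over a patch, k in {1, 2}, using 'valid'
    2-D correlation of the patch with the mask."""
    windows = sliding_window_view(patch, mask.shape)  # shape (h-2, w-2, 3, 3)
    resp = (windows * mask).sum(axis=(-2, -1))        # per-pixel responses
    return np.abs(resp).sum() if k == 1 else (resp ** 2).sum()
```

On a constant patch, only the pure-averaging mask L3ᵀL3 responds; the other eight masks are zero-sum (they involve E3 or S3) and give zero energy, which matches their role as edge and spot detectors.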
3.1 Features for absolute depth

Given some patch $i$ in the image $I(x, y)$, we compute summary statistics for it as follows. We use the output of each of the 17 filters (9 Laws' masks, 2 color channels and 6 texture gradients) $F_n(x, y)$, $n = 1, \ldots, 17$, as
$$E_i(n) = \sum_{(x,y) \in \mathrm{patch}(i)} |I(x, y) * F_n(x, y)|^k,$$
where $k \in \{1, 2\}$ gives the sum absolute energy and sum squared energy respectively. This gives us an initial feature vector of dimension 34.

To estimate the absolute depth at a patch, local image features centered on the patch are insufficient, and one has to use more global properties of the image. We attempt to capture this information by using image features extracted at multiple scales (image resolutions); see Fig. 2. Objects at different depths exhibit very different behaviors at different resolutions, and using multiscale features allows us to capture these variations [16].³ In addition to capturing more global information, computing features at multiple spatial scales also helps account for the different relative sizes of objects. A closer object appears larger in the image, and hence will be captured in the larger-scale features; the same object when far away will be small, and hence captured in the small-scale features. Such features may be strong indicators of depth. To capture additional global features (e.g. occlusion relationships), the features used to predict the depth of a particular patch are computed from that patch as well as the four neighboring patches. This is repeated at each of the three scales, so that the feature vector at a patch includes features of its immediate neighbors, its far neighbors (at a larger scale), and its very far neighbors (at the largest scale), as shown in Fig. 2.

[Footnote 3: For example, blue sky may appear similar at different scales, but textured grass would not.]
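The initial 34-dimensional summary for a single patch can be sketched as follows (the function name and data layout are ours; `responses` is assumed to hold the 17 filter-response maps cropped to one patch):

```python
import numpy as np

def patch_summary(responses):
    """34-dim absolute-depth statistics for one patch: for each of the
    17 filter response maps, the sum of absolute responses (k=1) and
    the sum of squared responses (k=2)."""
    responses = np.asarray(responses)            # shape (17, h, w)
    e1 = np.abs(responses).sum(axis=(1, 2))      # sum absolute energy
    e2 = (responses ** 2).sum(axis=(1, 2))       # sum squared energy
    return np.concatenate([e1, e2])              # 17 + 17 = 34 features
```

Concatenating this summary for a patch and its neighbors across the three scales (together with the column features described next) yields the full absolute-depth feature vector.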
Lastly, many structures (such as trees and buildings) found in outdoor scenes show vertical structure, in the sense that they are vertically connected to themselves (things cannot hang in empty air). Thus, we also add to the features of a patch additional summary features of the column it lies in. For each patch, after including features from itself and its 4 neighbors at 3 scales, and summary features for its 4 column patches, our vector of features for estimating depth at a particular patch is $19 \times 34 = 646$ dimensional.

3.2 Features for relative depth

We use a different feature vector to learn the dependencies between two neighboring patches. Specifically, we compute a histogram (with 10 bins) of each of the 17 filter outputs $|I(x, y) * F_n(x, y)|$, giving us a total of 170 features $y_i$ for each patch $i$. These features are used to estimate how the depths at two different locations are related. We believe that learning these estimates requires less global information than predicting absolute depth,⁴ but more detail from the individual patches. Hence, we use as our relative depth features the differences between the histograms computed from two neighboring patches, $y_{ij} = y_i - y_j$.

4 The Probabilistic Model

The depth of a particular patch depends on the features of the patch, but is also related to the depths of other parts of the image. For example, the depths of two adjacent patches lying in the same building will be highly correlated. We will use an MRF to model the relation between the depth of a patch and the depths of its neighboring patches. In addition to the interactions with the immediately neighboring patches, there are sometimes also strong interactions between the depths of patches which are not immediate neighbors. For example, consider the depths of patches that lie on a large building. All of these patches will be at similar depths, even if there are small discontinuities (such as a window on the wall of a building).
However, when viewed at the smallest scale, some adjacent patches are difficult to recognize as parts of the same object. Thus, we will also model interactions between depths at multiple spatial scales.

Our first model will be a jointly Gaussian MRF. To capture the multiscale depth relations, let us define $d_i(s)$ as follows. For each of three scales $s = 1, 2, 3$, define
$$d_i(s+1) = \frac{1}{5} \sum_{j \in N_s(i) \cup \{i\}} d_j(s).$$
Here, $N_s(i)$ are the 4 neighbors of patch $i$ at scale $s$; i.e., the depth at a higher scale is constrained to be the average of the depths at lower scales. Our model over depths is as follows:
$$P(d \mid X; \theta, \sigma) = \frac{1}{Z} \exp\left( -\sum_{i=1}^M \frac{(d_i(1) - x_i^T \theta_r)^2}{2\sigma_{1r}^2} - \sum_{s=1}^3 \sum_{i=1}^M \sum_{j \in N_s(i)} \frac{(d_i(s) - d_j(s))^2}{2\sigma_{2rs}^2} \right) \tag{1}$$
Here, $M$ is the total number of patches in the image (at the lowest scale); $x_i$ is the absolute depth feature vector for patch $i$; and $\theta$ and $\sigma$ are parameters of the model. In detail, we use different parameters $(\theta_r, \sigma_{1r}, \sigma_{2r})$ for each row $r$ in the image, because the images we consider are taken from a horizontally mounted camera, and thus different rows of the image have different statistical properties.⁵ $Z$ is the normalization constant for the model.

[Footnote 4: For example, given two adjacent patches of a distinctive, unique color and texture, we may be able to safely conclude that they are part of the same object, and thus that their depths are close, even without more global features.]

[Footnote 5: For example, a blue patch might represent sky if it is in the upper part of the image, and might be more likely to be water if in the lower part of the image.]

We estimate the parameters $\theta_r$ in Eq. 1 by maximizing the conditional likelihood $p(d \mid X; \theta_r)$ of the training data. Since the model is a multivariate Gaussian, the maximum likelihood estimate of $\theta_r$ is obtained by solving a linear least squares problem. The first term in the exponent above models depth as a function of the multiscale features of a single patch $i$.
The second term in the exponent places a soft “constraint” on the depths to be smooth. If the variance term $\sigma_{2rs}^2$ is a fixed constant, the effect of this term is that it tends to smooth depth estimates across nearby patches. However, in practice the dependencies between patches are not the same everywhere, and our expected value for $(d_i - d_j)^2$ may depend on the features of the local patches. Therefore, to improve accuracy, we extend the model to capture the “variance” term $\sigma_{2rs}^2$ in the denominator of the second term as a linear function of patches $i$ and $j$'s relative depth features $y_{ijs}$ (discussed in Section 3.2): we use $\sigma_{2rs}^2 = u_{rs}^T |y_{ijs}|$. This helps determine which neighboring patches are likely to have similar depths; e.g., the “smoothing” effect is much stronger if neighboring patches are similar. This idea is applied at multiple scales, so that we learn different $\sigma_{2rs}^2$ for the different scales $s$ (and rows $r$ of the image). The parameters $u_{rs}$ are chosen to fit $\sigma_{2rs}^2$ to the expected value of $(d_i(s) - d_j(s))^2$, with a constraint that $u_{rs} \ge 0$ (to keep the estimated $\sigma_{2rs}^2$ non-negative). Similarly to our discussion of $\sigma_{2rs}^2$, we also learn the variance parameter $\sigma_{1r}^2 = v_r^T x_i$ as a linear function of the features. The parameters $v_r$ are chosen to fit $\sigma_{1r}^2$ to the expected value of $(d_i(r) - \theta_r^T x_i)^2$, subject to $v_r \ge 0$.⁶ This $\sigma_{1r}^2$ term gives a measure of the uncertainty in the first term, and depends on the features. This is motivated by the observation that in some cases depth cannot be reliably estimated from the local features; in this case, one has to rely more on neighboring patches' depths to infer a patch's depth (as modeled by the second term in the exponent). After learning the parameters, given a new test-set image we can find the MAP estimate of the depths by maximizing Eq. 1 in terms of $d$. Since Eq.
1 is Gaussian, log P(d|X; θ, σ) is quadratic in d, and thus its maximum is easily found in closed form (taking at most 2-3 seconds per image, including feature computation time).

4.1 Laplacian model

We now present a second model that uses Laplacians instead of Gaussians to model the posterior distribution of the depths. Our motivation for doing so is three-fold. First, a histogram of the relative depths (d_i − d_j) is empirically Laplacian, which strongly suggests that it is better modeled as one. Second, the Laplacian distribution has heavier tails, and is therefore more robust to outliers in the image features and to errors in the training-set depthmaps (collected with a laser scanner; see Section 5.1). Third, the Gaussian model was generally unable to give depthmaps with sharp edges; in contrast, Laplacians tend to model sharp transitions/outliers better. Our model is as follows:

P(d \mid X; \theta, \lambda) = \frac{1}{Z} \exp\left( -\sum_{i=1}^{M} \frac{|d_i(1) - x_i^T \theta_r|}{\lambda_{1r}} - \sum_{s=1}^{3} \sum_{i=1}^{M} \sum_{j \in N_s(i)} \frac{|d_i(s) - d_j(s)|}{\lambda_{2rs}} \right)   (2)

Here, the parameters are the same as in Eq. 1, except for the variance terms: λ_{1r} and λ_{2rs} are the Laplacian spread parameters. Maximum-likelihood parameter estimation for the Laplacian model is not tractable (since the partition function depends on θ_r), but by analogy to the Gaussian case, we approximate it by solving a linear system of equations X_r θ_r ≈ d_r to minimize the L1 (instead of L2) error, where X_r is the matrix of absolute-depth features. Following the Gaussian model, we also learn the Laplacian spread parameters in the denominator in the same way, except that instead of estimating the expected value of (d_i − d_j)^2, we estimate the expected value of |d_i − d_j|.

[6] The absolute depth features x_{ir} are non-negative; thus, the estimated σ_{1r}^2 is also non-negative.

Even though maximum-likelihood parameter estimation for θ_r is intractable in the Laplacian model, given a new test-set image, MAP inference for the depths d is tractable.
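The tractability comes from the standard reduction of a sum of absolute values to a linear program: each term |(Ad − b)_i| gets one slack variable. A minimal sketch of that reduction (the helper name and the use of `scipy` are ours, not the paper's):

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, b):
    # min_d sum_i |(A d - b)_i|  ==  min_{d,t} 1^T t  s.t.  -t <= A d - b <= t.
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])      # objective: sum of slacks t
    G = np.block([[A, -np.eye(m)],                     #  A d - t <=  b
                  [-A, -np.eye(m)]])                   # -A d - t <= -b
    h = np.concatenate([b, -b])
    res = linprog(c, A_ub=G, b_ub=h, bounds=[(None, None)] * (n + m))
    return res.x[:n]
```

For Eq. 2, the rows of A would stack identity rows (with b_i = x_i^T θ_r, scaled by 1/λ_{1r}) and pairwise difference operators (with b = 0, scaled by 1/λ_{2rs}).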
Specifically, P(d|X; θ, λ) is easily maximized in terms of d using linear programming.

Remark. We can also extend these models to combine Gaussian and Laplacian terms in the exponent, for example by using an L2-norm term for absolute depth and an L1-norm term for the interaction terms. MAP inference remains tractable in this setting, and can be solved via convex optimization as a QP (quadratic program).

5 Experiments

5.1 Data collection

We used a 3-D laser scanner to collect images and their corresponding depthmaps. The scanner uses a SICK 1-D laser range finder mounted on a motor to get 2D scans. We collected a total of 425 image+depthmap pairs, with an image resolution of 1704x2272 and a depthmap resolution of 86x107. In the experimental results reported here, 75% of the images/depthmaps were used for training, and the remaining 25% for hold-out testing. Due to noise in the motor system, the depthmaps were not perfectly aligned with the images, and had an alignment error of about 2 depth patches. Also, the depthmaps had a maximum range of 81m (the maximum range of the laser scanner), and had minor additional errors due to reflections and missing laser scans. Prior to running our learning algorithms, we transformed all the depths to a log scale so as to emphasize multiplicative rather than additive errors in training. In our earlier experiments (not reported here), learning using linear depth values directly gave poor results.

5.2 Results

We tested our model on real-world test-set images of forests (containing trees, bushes, etc.), campus areas (buildings, people, and trees), and indoor places (such as corridors). The algorithm was trained on a training set comprising images from all of these environments. Table 1 shows the test-set results when using different feature combinations. We see that using multiscale and column features significantly improves the algorithm's performance.
Including the interaction terms further improved its performance, and the Laplacian model performs better than the Gaussian one. Empirically, we also observed that the Laplacian model does indeed give depthmaps with significantly sharper boundaries (as in our discussion in Section 4.1; also see Fig. 3). Table 1 shows the errors obtained by our algorithm on a variety of forest, campus, and indoor images. The results on the test set show that the algorithm estimates depthmaps with an average error of 0.132 orders of magnitude. It works well even in the varied set of environments shown in Fig. 3 (last column), and appears very robust to variations caused by shadows. Informally, our algorithm appears to predict the relative depths of objects quite well (i.e., their relative distances to the camera), but seems to make more errors in absolute depths.

Some of the errors can be attributed to errors or limitations of the training set. For example, the training-set images and depthmaps are slightly misaligned, and therefore the edges in the learned depthmap are not very sharp. Further, the maximum value of the depths in the training set is 81m; therefore, far-away objects are all mapped to the one distance of 81m. Our algorithm appears to incur the largest errors on images which contain very irregular trees, in which most of the 3-D structure in the image is dominated by the shapes of the leaves and branches. However, arguably even human-level performance would be poor on these images.

Table 1: Effect of multiscale and column features on accuracy. The average absolute errors (RMS errors gave similar results) are on a log scale (base 10). H1 and H2 represent summary statistics for k = 1, 2. S1, S2 and S3 represent the 3 scales. C represents the column features. The baseline is trained with only the bias term (no features).

  FEATURE                                      ALL    FOREST  CAMPUS  INDOOR
  BASELINE                                     .295   .283    .343    .228
  GAUSSIAN (S1,S2,S3, H1,H2, no neighbors)     .162   .159    .166    .165
  GAUSSIAN (S1, H1,H2)                         .171   .164    .189    .173
  GAUSSIAN (S1,S2, H1,H2)                      .155   .151    .164    .157
  GAUSSIAN (S1,S2,S3, H1,H2)                   .144   .144    .143    .144
  GAUSSIAN (S1,S2,S3, C, H1)                   .139   .140    .141    .122
  GAUSSIAN (S1,S2,S3, C, H1,H2)                .133   .135    .132    .124
  LAPLACIAN                                    .132   .133    .142    .084

Figure 3: Results for a varied set of environments, showing the original image (column 1), ground-truth depthmap (column 2), depthmap predicted by the Gaussian model (column 3), and depthmap predicted by the Laplacian model (column 4). (Best viewed in color.)

6 Conclusions

We have presented a discriminatively trained MRF model for depth estimation from single monocular images. Our model uses monocular cues at multiple spatial scales, and also incorporates interaction terms that model relative depths, again at different scales. In addition to a Gaussian MRF model, we also presented a Laplacian MRF model in which MAP inference can be done efficiently using linear programming. We demonstrated that our algorithm gives good 3-D depth estimation performance on a variety of images.

Acknowledgments

We give warm thanks to Jamie Schulte, who designed the 3-D scanner, for help in collecting the data used in this work. We also thank Larry Jackel for helpful discussions. This work was supported by the DARPA LAGR program under contract number FA8650-04-C-7134.
Preconditioner Approximations for Probabilistic Graphical Models

Pradeep Ravikumar and John Lafferty
School of Computer Science, Carnegie Mellon University

Abstract

We present a family of approximation techniques for probabilistic graphical models, based on the use of graphical preconditioners developed in the scientific computing literature. Our framework yields rigorous upper and lower bounds on event probabilities and the log partition function of undirected graphical models, using non-iterative procedures that have low time complexity. As in mean field approaches, the approximations are built upon tractable subgraphs; however, we recast the problem of optimizing the tractable distribution parameters and approximate inference in terms of the well-studied linear systems problem of obtaining a good matrix preconditioner. Experiments are presented that compare the new approximation schemes to variational methods.

1 Introduction

Approximate inference techniques are enabling sophisticated new probabilistic models to be developed and applied to a range of practical problems. One of the primary uses of approximate inference is to estimate the partition function and event probabilities for undirected graphical models, which are natural tools in many domains, from image processing to social network modeling. A central challenge is to improve the accuracy of existing approximation methods, and to derive rigorous rather than heuristic bounds on probabilities in such graphical models. In this paper, we present a simple new approach to the approximate inference problem, based upon non-iterative procedures that have low time complexity. We follow the variational mean field intuition of focusing on tractable subgraphs; however, we recast the problem of optimizing the tractable distribution parameters as a generalized linear system problem.
In this way, the task of deriving a tractable distribution conveniently reduces to the well-studied problem of obtaining a good preconditioner for a matrix (Boman and Hendrickson, 2003). This framework has the added advantage that tighter bounds can be obtained by reducing the sparsity of the preconditioners, at the expense of increasing the time complexity of computing the approximation.

In the following section we establish some notation and background. In Section 3, we outline the basic idea of our proposed framework, and explain how to use preconditioners for deriving tractable approximate distributions. In Sections 3.1 and 4, we then describe the underlying theory, which we call the generalized support theory for graphical models. In Section 5 we present experiments that compare the new approximation schemes to some of the standard variational and optimization based methods.

2 Notation and Background

Consider a graph G = (V, E), where V denotes the set of nodes and E denotes the set of edges. Let X_i be a random variable associated with node i, for i ∈ V, yielding a random vector X = {X_1, ..., X_n}. Let φ = {φ_α, α ∈ I} denote the set of potential functions or sufficient statistics, for a set I of cliques in G. Associated with φ is a vector of parameters θ = {θ_α, α ∈ I}. With this notation, the exponential family of distributions of X, associated with φ and G, is given by

p(x; \theta) = \exp\Big( \sum_{\alpha} \theta_\alpha \phi_\alpha(x) - \Psi(\theta) \Big).   (1)

For traditional reasons, through connections with statistical physics, Z = exp Ψ(θ) is called the partition function. As discussed in (Yedidia et al., 2001), at the expense of increasing the state space, one can assume without loss of generality that the graphical model is a pairwise Markov random field, i.e., the set of cliques I is the set of edges {(s, t) ∈ E}. We shall assume a pairwise random field, and thus can express the potential function and parameter vectors in more compact form as matrices:

\Theta := \begin{pmatrix} \theta_{11} & \cdots & \theta_{1n} \\ \vdots & \ddots & \vdots \\ \theta_{n1} & \cdots & \theta_{nn} \end{pmatrix}, \qquad
\Phi(x) := \begin{pmatrix} \phi_{11}(x_1, x_1) & \cdots & \phi_{1n}(x_1, x_n) \\ \vdots & \ddots & \vdots \\ \phi_{n1}(x_n, x_1) & \cdots & \phi_{nn}(x_n, x_n) \end{pmatrix}.   (2)

In the following we will denote the trace of the product of two matrices A and B by the inner product ⟨⟨A, B⟩⟩. Assuming that each X_i is finite-valued, the partition function Z(Θ) is then given by

Z(\Theta) = \sum_{x \in \chi} \exp \langle\langle \Theta, \Phi(x) \rangle\rangle.

The computation of Z(Θ) has a complexity exponential in the tree-width of the graph G and hence is intractable for large graphs. Our goal is to obtain rigorous upper and lower bounds for this partition function, which can then be used to obtain rigorous upper and lower bounds for general event probabilities; this is discussed further in (Ravikumar and Lafferty, 2004).

2.1 Preconditioners in Linear Systems

Consider a linear system Ax = c, where the variable x is n-dimensional and A is an n × n matrix with m non-zero entries. Solving for x via direct methods such as Gaussian elimination has a computational complexity of O(n^3), which is impractical for large values of n. Multiplying both sides of the linear system by the inverse of an invertible matrix B, we get an equivalent "preconditioned" system, B^{-1}Ax = B^{-1}c. If B is similar to A, then B^{-1}A is in turn similar to I, the identity matrix, making the preconditioned system easier to solve. Such an approximating matrix B is called a preconditioner. The computational complexity of preconditioned conjugate gradient is given by

T(A) = \sqrt{\kappa(A, B)}\, \big(m + T(B)\big) \log \frac{1}{\epsilon},   (3)

where T(A) is the time required for an ε-approximate solution; κ(A, B) is the condition number of A and B, which intuitively corresponds to the quality of the approximation B; and T(B) is the time required to solve By = c. Recent developments in the theory of preconditioners are in part based on support graph theory, where the linear system matrix is viewed as the Laplacian of a graph, and graph-based techniques can be used to obtain good approximations.
While these methods require diagonally dominant matrices (A_ii ≥ Σ_{j≠i} |A_ij|), they yield "ultra-sparse" (tree plus a constant number of edges) preconditioners with a low condition number. In our experiments, we use two elementary tree-based preconditioners in this family: Vaidya's spanning tree preconditioner (Vaidya, 1990), and Gremban and Miller's support tree preconditioner (Gremban, 1996).

3 Graphical Model Preconditioners

Our proposed framework follows the generalized mean field intuition of looking at sparse graph approximations of the original graph, but solves a different optimization problem. We begin by outlining the basic idea, and then develop the underlying theory. Consider the graphical model with graph G, potential-function matrix Φ(x), and parameter matrix Θ. For purposes of intuition, think of the graphical model "energy" ⟨⟨Θ, Φ(x)⟩⟩ as the matrix norm x^⊤Θx. We would like to obtain a sparse approximation B for Θ. If B approximates Θ well, then the condition number κ is small:

\kappa(\Theta, B) = \frac{\max_x \; x^\top \Theta x / x^\top B x}{\min_x \; x^\top \Theta x / x^\top B x} = \lambda_{\max}(\Theta, B) / \lambda_{\min}(\Theta, B).   (4)

This suggests the following procedure for approximate inference. First, choose a matrix B that minimizes the condition number with Θ (rather than KL divergence as in mean field). Then, scale B appropriately, as detailed in the following sections. Finally, use the scaled matrix B as the parameter matrix for approximate inference. Note that if B corresponds to a tree, approximate inference has linear time complexity.

3.1 Generalized Eigenvalue Bounds

Given a graphical model with graph G, potential-function matrix Φ(x), and parameter matrix Θ, our goal is to obtain parameter matrices Θ_U and Θ_L, corresponding to sparse graph approximations of G, such that

Z(\Theta_L) \le Z(\Theta) \le Z(\Theta_U).   (5)

That is, the partition functions of the sparse graph parameter matrices Θ_U and Θ_L are upper and lower bounds, respectively, of the partition function of the original graph.
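Such a sandwich on Z can be checked by brute-force enumeration on tiny models. A sketch with a toy choice of sufficient statistics, φ_ij(x_i, x_j) = x_i x_j (ours; the paper leaves φ generic), under which Φ(x) = x x^⊤ is symmetric and ⟨⟨A, Φ(x)⟩⟩ = Σ_ij A_ij x_i x_j:

```python
import itertools
import numpy as np

def log_Z(Theta, n_states=2):
    # Z(Theta) = sum_x exp <<Theta, Phi(x)>>, enumerated over all states.
    # For symmetric Phi, <<A, Phi>> = trace(A Phi) = sum(A * Phi) elementwise.
    n = Theta.shape[0]
    energies = []
    for x in itertools.product(range(n_states), repeat=n):
        Phi = np.outer(x, x).astype(float)      # toy phi_ij = x_i * x_j
        energies.append(float(np.sum(Theta * Phi)))
    m = max(energies)                           # log-sum-exp for stability
    return m + np.log(sum(np.exp(e - m) for e in energies))

Theta = np.array([[0.2, -0.5], [-0.5, 0.1]])
# Under this phi, Theta + 0.3*I adds 0.3*sum_i x_i^2 >= 0 to every energy,
# so it plays the role of an upper-bound matrix Theta_U for Theta.
Theta_U = Theta + 0.3 * np.eye(2)
```

Dominance of the energies for every x, as in the stronger condition discussed next, immediately implies log_Z(Theta) ≤ log_Z(Theta_U).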
However, we will instead focus on a seemingly much stronger condition; in particular, we will look for Θ_L and Θ_U that satisfy

\langle\langle \Theta_L, \Phi(x) \rangle\rangle \le \langle\langle \Theta, \Phi(x) \rangle\rangle \le \langle\langle \Theta_U, \Phi(x) \rangle\rangle   (6)

for all x. By monotonicity of exp, this stronger condition implies condition (5) on the partition function, by summing over the values of X. However, this stronger condition will give us greater flexibility, and rigorous bounds for general event probabilities, since then

\frac{\exp \langle\langle \Theta_L, \Phi(x) \rangle\rangle}{Z(\Theta_U)} \le p(x; \Theta) \le \frac{\exp \langle\langle \Theta_U, \Phi(x) \rangle\rangle}{Z(\Theta_L)}.   (7)

In contrast, while variational methods give bounds on the log partition function, the derived bounds on general event probabilities via the variational parameters are only heuristic.

Let S be a set of sparse graphs; for example, S may be the set of all trees. Focusing on the upper bound, we would for now like to obtain a graph G′ ∈ S with parameter matrix B, which approximates G, and whose partition function upper bounds the partition function of the original graph. Following (6), we require

\langle\langle \Theta, \Phi(x) \rangle\rangle \le \langle\langle B, \Phi(x) \rangle\rangle, \quad \text{such that } G(B) \in S,   (8)

where G(B) denotes the graph corresponding to the parameter matrix B. Now, we would like the distribution corresponding to B to be as close as possible to the distribution corresponding to Θ; that is, ⟨⟨B, Φ(x)⟩⟩ should not only upper bound ⟨⟨Θ, Φ(x)⟩⟩ but should be close to it. The distance measure we use for this is the minimax distance. In other words, while the upper bound requires that

\frac{\langle\langle \Theta, \Phi(x) \rangle\rangle}{\langle\langle B, \Phi(x) \rangle\rangle} \le 1,   (9)

we would like

\min_x \frac{\langle\langle \Theta, \Phi(x) \rangle\rangle}{\langle\langle B, \Phi(x) \rangle\rangle}   (10)

to be as high as possible. Expressing these desiderata in the form of an optimization problem, we have

B^\star = \arg\max_{B:\, G(B) \in S} \; \min_x \frac{\langle\langle \Theta, \Phi(x) \rangle\rangle}{\langle\langle B, \Phi(x) \rangle\rangle}, \quad \text{such that } \frac{\langle\langle \Theta, \Phi(x) \rangle\rangle}{\langle\langle B, \Phi(x) \rangle\rangle} \le 1.

Before solving this problem, we first make some definitions, which are generalized versions of standard concepts in linear systems theory.

Definition 3.1.
For a pairwise Markov random field with potential function matrix Φ(x), the generalized eigenvalues of a pair of parameter matrices (A, B) are defined as

\lambda^\Phi_{\max}(A, B) = \max_{x:\, \langle\langle B, \Phi(x) \rangle\rangle \ne 0} \frac{\langle\langle A, \Phi(x) \rangle\rangle}{\langle\langle B, \Phi(x) \rangle\rangle}   (11)

\lambda^\Phi_{\min}(A, B) = \min_{x:\, \langle\langle B, \Phi(x) \rangle\rangle \ne 0} \frac{\langle\langle A, \Phi(x) \rangle\rangle}{\langle\langle B, \Phi(x) \rangle\rangle}.   (12)

Note that

\lambda^\Phi_{\max}(A, \alpha B) = \max_{x:\, \langle\langle \alpha B, \Phi(x) \rangle\rangle \ne 0} \frac{\langle\langle A, \Phi(x) \rangle\rangle}{\langle\langle \alpha B, \Phi(x) \rangle\rangle}   (13)
= \frac{1}{\alpha} \max_{x:\, \langle\langle B, \Phi(x) \rangle\rangle \ne 0} \frac{\langle\langle A, \Phi(x) \rangle\rangle}{\langle\langle B, \Phi(x) \rangle\rangle} = \alpha^{-1} \lambda^\Phi_{\max}(A, B).   (14)

We state the basic properties of the generalized eigenvalues in the following lemma.

Lemma 3.2. The generalized eigenvalues satisfy

\lambda^\Phi_{\min}(A, B) \le \frac{\langle\langle A, \Phi(x) \rangle\rangle}{\langle\langle B, \Phi(x) \rangle\rangle} \le \lambda^\Phi_{\max}(A, B)   (15)
\lambda^\Phi_{\max}(A, \alpha B) = \alpha^{-1} \lambda^\Phi_{\max}(A, B)   (16)
\lambda^\Phi_{\min}(A, \alpha B) = \alpha^{-1} \lambda^\Phi_{\min}(A, B)   (17)
\lambda^\Phi_{\min}(A, B) = \frac{1}{\lambda^\Phi_{\max}(B, A)}.   (18)

In the following, we will use A to generically denote the parameter matrix Θ of the model. We can now rewrite the upper bound optimization problem above as

(Problem Λ1)   \max_{B:\, G(B) \in S} \lambda^\Phi_{\min}(A, B), \quad \text{such that } \lambda^\Phi_{\max}(A, B) \le 1.   (19)

We shall express the optimal solution of Problem Λ1 in terms of the optimal solution of a companion problem. Towards that end, consider the optimization problem

(Problem Λ2)   \min_{C:\, G(C) \in S} \frac{\lambda^\Phi_{\max}(A, C)}{\lambda^\Phi_{\min}(A, C)}.   (20)

The following proposition shows the sense in which these problems are equivalent.

Proposition 3.3. If \hat{C} attains the optimum in Problem Λ2, then \tilde{C} = \lambda^\Phi_{\max}(A, \hat{C})\, \hat{C} attains the optimum of Problem Λ1.

Proof. For any feasible solution B of Problem Λ1, we have

\lambda^\Phi_{\min}(A, B) \le \frac{\lambda^\Phi_{\min}(A, B)}{\lambda^\Phi_{\max}(A, B)}   (since \lambda^\Phi_{\max}(A, B) \le 1)   (21)
\le \frac{\lambda^\Phi_{\min}(A, \hat{C})}{\lambda^\Phi_{\max}(A, \hat{C})}   (since \hat{C} is the optimum of Problem Λ2)   (22)
= \lambda^\Phi_{\min}\big(A, \lambda^\Phi_{\max}(A, \hat{C})\, \hat{C}\big)   (from Lemma 3.2)   (23)
= \lambda^\Phi_{\min}(A, \tilde{C}).   (24)

Thus, \tilde{C} upper bounds all feasible solutions of Problem Λ1. However, it is itself a feasible solution, since, from Lemma 3.2,

\lambda^\Phi_{\max}(A, \tilde{C}) = \lambda^\Phi_{\max}\big(A, \lambda^\Phi_{\max}(A, \hat{C})\, \hat{C}\big) = \frac{1}{\lambda^\Phi_{\max}(A, \hat{C})}\, \lambda^\Phi_{\max}(A, \hat{C}) = 1.   (25)

Thus, \tilde{C} attains the maximum in the upper bound Problem Λ1.
□

The analysis for obtaining an upper bound parameter matrix B for a given parameter matrix A carries over to the lower bound; we need only replace a maximin problem with a minimax problem. For the lower bound, we want a matrix B such that

B^\star = \min_{B:\, G(B) \in S} \; \max_{x:\, \langle\langle B, \Phi(x) \rangle\rangle \ne 0} \frac{\langle\langle A, \Phi(x) \rangle\rangle}{\langle\langle B, \Phi(x) \rangle\rangle}, \quad \text{such that } \frac{\langle\langle A, \Phi(x) \rangle\rangle}{\langle\langle B, \Phi(x) \rangle\rangle} \ge 1.   (26)

This leads to the following lower bound optimization problem:

(Problem Λ3)   \min_{B:\, G(B) \in S} \lambda^\Phi_{\max}(A, B), \quad \text{such that } \lambda^\Phi_{\min}(A, B) \ge 1.   (27)

The proof of the following statement closely parallels the proof of Proposition 3.3.

Proposition 3.4. If \hat{C} attains the optimum in Problem Λ2, then \bar{C} = \lambda^\Phi_{\min}(A, \hat{C})\, \hat{C} attains the optimum of the lower bound Problem Λ3.

Finally, we state the following basic lemma, whose proof is easily verified.

Lemma 3.5. For any pair of parameter matrices (A, B), we have

\langle\langle \lambda^\Phi_{\min}(A, B)\, B, \Phi(x) \rangle\rangle \le \langle\langle A, \Phi(x) \rangle\rangle \le \langle\langle \lambda^\Phi_{\max}(A, B)\, B, \Phi(x) \rangle\rangle.   (28)

3.2 Main Procedure

We now have in place the machinery necessary to describe the procedure for solving the main problem in equation (6): obtaining upper and lower bound matrices for a graphical model. Lemma 3.5 shows how to obtain upper and lower bound parameter matrices with respect to any matrix B, given a parameter matrix A, by solving a generalized eigenvalue problem. Propositions 3.3 and 3.4 tell us, in principle, how to obtain the optimal such upper and lower bound matrices. We thus have the following procedure. First, obtain a parameter matrix C such that G(C) ∈ S which minimizes λ^Φ_max(Θ, C)/λ^Φ_min(Θ, C). Then λ^Φ_max(Θ, C) C gives the optimal upper bound parameter matrix, and λ^Φ_min(Θ, C) C gives the optimal lower bound parameter matrix. However, as things stand, this recipe appears to be even more challenging to work with than the generalized mean field procedures. The difficulty lies in obtaining the matrix C. In the following section we offer a series of relaxations that help to simplify this task.
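On models small enough to enumerate, the generalized eigenvalues of Definition 3.1 and the identities of Lemma 3.2 can be verified directly. A brute-force sketch with a toy φ_ij(x_i, x_j) = x_i x_j (our choice; the paper leaves φ generic):

```python
import itertools
import numpy as np

def gen_eigs(A, B, n_states=2):
    # lambda^Phi_max/min(A, B): extreme values of <<A,Phi(x)>> / <<B,Phi(x)>>
    # over all x with <<B,Phi(x)>> != 0, for the toy phi_ij = x_i * x_j.
    n = A.shape[0]
    ratios = []
    for x in itertools.product(range(n_states), repeat=n):
        Phi = np.outer(x, x).astype(float)
        denom = np.sum(B * Phi)                 # <<B, Phi(x)>> for symmetric Phi
        if abs(denom) > 1e-12:
            ratios.append(np.sum(A * Phi) / denom)
    return max(ratios), min(ratios)

Theta = np.array([[1.0, 0.3], [0.3, 0.5]])
lam_max, lam_min = gen_eigs(Theta, np.eye(2))   # against an identity "preconditioner"
```

The scaling rules (16)-(17) and the reciprocal identity (18) then hold by inspection of the enumerated ratios.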
4 Generalized Support Theory for Graphical Models

In what follows, we begin by assuming that the potential function matrix is positive semidefinite, Φ(x) ⪰ 0, and later extend our results to general Φ.

Definition 4.1. For a pairwise MRF with potential function matrix Φ(x) ⪰ 0, the generalized support number of a pair of parameter matrices (A, B), where B ⪰ 0, is

\sigma_\Phi(A, B) = \min \{ \tau \in \mathbb{R} \mid \langle\langle \tau B, \Phi(x) \rangle\rangle \ge \langle\langle A, \Phi(x) \rangle\rangle \text{ for all } x \}.   (29)

The generalized support number can be thought of as the "number of copies" τ of B required to "support" A, so that ⟨⟨τB − A, Φ(x)⟩⟩ ≥ 0. The usefulness of this definition is demonstrated by the following result.

Proposition 4.2. If B ⪰ 0 then λ^Φ_max(A, B) ≤ σ_Φ(A, B).

Proof. From the definition of the generalized support number for a graphical model, we have ⟨⟨σ_Φ(A, B) B − A, Φ(x)⟩⟩ ≥ 0. Now, since we assume that Φ(x) ⪰ 0, if also B ⪰ 0 then ⟨⟨B, Φ(x)⟩⟩ ≥ 0. Therefore it follows that ⟨⟨A, Φ(x)⟩⟩ / ⟨⟨B, Φ(x)⟩⟩ ≤ σ_Φ(A, B), and thus

\lambda^\Phi_{\max}(A, B) = \max_x \frac{\langle\langle A, \Phi(x) \rangle\rangle}{\langle\langle B, \Phi(x) \rangle\rangle} \le \sigma_\Phi(A, B),   (30)

giving the statement of the proposition. □

This leads to our first relaxation of the generalized eigenvalue bound for a model. From Lemma 3.2 and Proposition 4.2 we see that

\frac{\lambda^\Phi_{\max}(A, B)}{\lambda^\Phi_{\min}(A, B)} = \lambda^\Phi_{\max}(A, B)\, \lambda^\Phi_{\max}(B, A) \le \sigma_\Phi(A, B)\, \sigma_\Phi(B, A).   (31)

Thus, this result suggests that to approximate the graphical model (Θ, Φ) we can search for a parameter matrix B⋆, with corresponding simple graph G(B⋆) ∈ S, such that

B^\star = \arg\min_B \; \sigma_\Phi(\Theta, B)\, \sigma_\Phi(B, \Theta).   (32)

While this relaxation may lead to effective bounds, we will now go further, to derive an additional relaxation that relates our generalized graphical model support number to the "classical" support number.

Proposition 4.3. For a potential function matrix Φ(x) ⪰ 0, σ_Φ(A, B) ≤ σ(A, B), where σ(A, B) = min{τ | τB − A ⪰ 0}.

Proof. Since σ(A, B) B − A ⪰ 0 by definition and Φ(x) ⪰ 0 by assumption, we have that ⟨⟨σ(A, B) B − A, Φ(x)⟩⟩ ≥ 0. Therefore, σ_Φ(A, B) ≤ σ(A, B) from the definition of the generalized support number.
□

The above result reduces the problem of approximating a graphical model to the problem of minimizing classical support numbers, a problem well studied in the scientific computing literature (Boman and Hendrickson, 2003; Bern et al., 2001), where the expression σ(A, C) σ(C, A) is called the condition number, and a matrix that minimizes it within a simple family of graphs is called a preconditioner. We can thus plug in any algorithm for finding a sparse preconditioner for Θ, carrying out the optimization

B^\star = \arg\min_B \; \sigma(\Theta, B)\, \sigma(B, \Theta),   (33)

and then use that matrix B⋆ in our basic procedure. One example is Vaidya's preconditioner (Vaidya, 1990), which is essentially the maximum spanning tree of the graph. Another is the support tree of Gremban (1996), which introduces Steiner nodes, i.e., auxiliary nodes added via a recursive partitioning of the graph. We present experiments with these basic preconditioners in the following section.

Before turning to the experiments, we comment that our generalized support number analysis assumed that the potential function matrix Φ(x) was positive semidefinite. The case when it is not can be handled as follows. We first add a large positive diagonal matrix D so that Φ′(x) = Φ(x) + D ⪰ 0. Then, for a given parameter matrix A, we use the above machinery to get an upper bound parameter matrix B such that

\langle\langle A, \Phi(x) + D \rangle\rangle \le \langle\langle B, \Phi(x) + D \rangle\rangle \;\Rightarrow\; \langle\langle A, \Phi(x) \rangle\rangle \le \langle\langle B, \Phi(x) \rangle\rangle + \langle\langle B - A, D \rangle\rangle.   (34)

Exponentiating and summing both sides over x, we then get the required upper bound for the parameter matrix A; the same can be done for the lower bound.

5 Experiments

As the previous sections detailed, the preconditioner-based bounds are in principle quite easy to compute: we compute a sparse preconditioner for the parameter matrix (typically O(n) to O(n^3) time) and use the preconditioner as the parameter matrix for the bound computation (which is linear if the preconditioner matrix corresponds to a tree).
This yields a simple, non-iterative, deterministic procedure, as compared to the more complex propagation-based or iterative update procedures. In this section we evaluate these bounds on small graphical models for which exact answers can be readily computed, and compare the bounds to variational approximations. We show simulation results averaged over a randomly generated set of graphical models. The graphs used were 2D grid graphs, and the edge potentials were selected according to a uniform distribution Uniform(−2 d_coup, 0) for various coupling strengths d_coup. We report the relative error, (bound − log-partition-function)/log-partition-function. As a baseline, we use the mean field and structured mean field methods for the lower bound, and the tree-reweighted belief propagation approximation of Wainwright et al. (2003) for the upper bound. For the preconditioner-based bounds, we use two very simple preconditioners: (a) Vaidya's maximum spanning tree preconditioner (Vaidya, 1990), which assumes the input parameter matrix to be a Laplacian, and (b) Gremban's (1996) support tree preconditioner, which also gives a sparse parameter matrix corresponding to a tree, with Steiner (auxiliary) nodes. To compute bounds over these larger graphs with Steiner nodes, we average an internal node over its children; this is the technique used with such preconditioners for solving linear systems. We note that these preconditioners are quite basic, and the use of better preconditioners (yielding a better condition number) has the potential to achieve much better bounds, as shown in Propositions 3.3 and 3.4. We also reiterate that while our approach can be used to derive bounds on event probabilities, the variational methods yield bounds only for the partition function, and only apply heuristically to estimating simple event probabilities such as marginals.
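Preconditioner (a) is at heart a maximum spanning tree computation. A minimal Kruskal-style sketch (ours; the full Vaidya construction may also augment the tree with extra edges):

```python
def max_spanning_tree(n, edges):
    # Kruskal's algorithm on descending weights with union-find:
    # keep the heaviest edge that does not close a cycle.
    # edges: list of (weight, u, v); returns the chosen (u, v) pairs.
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]       # path halving
            a = parent[a]
        return a
    tree = []
    for w, u, v in sorted(edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree

# Toy 4-cycle: the lightest edge (1, 2) is the one dropped.
cycle = [(3, 0, 1), (1, 1, 2), (4, 2, 3), (2, 3, 0)]
tree = max_spanning_tree(4, cycle)
```

The resulting tree's weights would then be placed back into a sparse parameter matrix B, whose bound computation is linear.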
As the plots in Figure 1 show, even for the simple preconditioners used, the new bounds are quite close to the actual values, outperforming the mean field method and giving comparable results to the tree-reweighted belief propagation method. The spanning tree preconditioner provides a good lower bound, while the support tree preconditioner provides a good upper bound, though not as tight as the bound obtained using tree-reweighted belief propagation. Although we cannot compute the exact solution for large graphs, we can still compare bounds. The bottom plot of Figure 1 compares lower bounds for graphs with up to 900 nodes; a larger bound is necessarily tighter, and the preconditioner bounds are seen to outperform mean field.

Figure 1: Comparison of lower bounds (top left) and upper bounds (top right) for small grid graphs, plotted against coupling strength, and lower bounds for grid graphs of increasing size (bottom), plotted against the number of nodes. The methods compared include the spanning tree preconditioner, support tree preconditioner, mean field, structured mean field, and tree-reweighted BP.

Acknowledgments

We thank Gary Miller for helpful discussions. Research supported in part by NSF grants IIS-0312814 and IIS-0427206.

References

M. Bern, J. R. Gilbert, B. Hendrickson, N. Nguyen, and S. Toledo. Support-graph preconditioners. Submitted to SIAM J. Matrix Anal. Appl., 2001.
E. G. Boman and B. Hendrickson. Support theory for preconditioning. SIAM Journal on Matrix Analysis and Applications, 25, 2003.
K. Gremban. Combinatorial preconditioners for sparse, symmetric, diagonally dominant linear systems. Ph.D. thesis, Carnegie Mellon University, 1996.
P. Ravikumar and J. Lafferty.
Variational Chernoff bounds for graphical models. In Proceedings of Uncertainty in Artificial Intelligence (UAI), 2004.
P. M. Vaidya. Solving linear equations with symmetric diagonally dominant matrices by constructing good preconditioners. Unpublished manuscript, UIUC, 1990.
M. J. Wainwright, T. Jaakkola, and A. S. Willsky. Tree-reweighted belief propagation and approximate ML estimation by pseudo-moment matching. In 9th Workshop on Artificial Intelligence and Statistics, 2003.
J. S. Yedidia, W. T. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations. IJCAI 2001 Distinguished Lecture track, 2001.
Structured Prediction via the Extragradient Method

Ben Taskar (Computer Science, UC Berkeley, taskar@cs.berkeley.edu)
Simon Lacoste-Julien (Computer Science, UC Berkeley, slacoste@cs.berkeley.edu)
Michael I. Jordan (Computer Science and Statistics, UC Berkeley, jordan@cs.berkeley.edu)

Abstract

We present a simple and scalable algorithm for large-margin estimation of structured models, including an important class of Markov networks and combinatorial models. We formulate the estimation problem as a convex-concave saddle-point problem and apply the extragradient method, yielding an algorithm with linear convergence using simple gradient and projection calculations. The projection step can be solved using combinatorial algorithms for min-cost quadratic flow. This makes the approach an efficient alternative to formulations based on reductions to a quadratic program (QP). We present experiments on two very different structured prediction tasks: 3D image segmentation and word alignment, illustrating the favorable scaling properties of our algorithm.

1 Introduction

The scope of discriminative learning methods has been expanding to encompass prediction tasks with increasingly complex structure. Much of this recent development builds upon graphical models to capture sequential, spatial, recursive or relational structure, but as we will discuss in this paper, the structured prediction problem is broader still. For graphical models, two major approaches to discriminative estimation have been explored: (1) maximum conditional likelihood [13] and (2) maximum margin [6, 1, 20]. For the broader class of models that we consider here, the conditional likelihood approach is intractable, but the large margin formulation yields tractable convex problems.
We interpret the term structured output model very broadly, as a compact scoring scheme over a (possibly very large) set of combinatorial structures and a method for finding the highest scoring structure. In graphical models, the scoring scheme is embodied in a probability distribution over possible assignments of the prediction variables as a function of input variables. In models based on combinatorial problems, the scoring scheme is usually a simple sum of weights associated with vertices, edges, or other components of a structure; these weights are often represented as parametric functions of a set of features. Given training instances labeled by desired structured outputs (e.g., matchings) and a set of features that parameterize the scoring function, the learning problem is to find parameters such that the highest scoring outputs are as close as possible to the desired outputs. Examples of prediction tasks solved via combinatorial optimization problems include bipartite and non-bipartite matching in alignment of 2D shapes [5], word alignment in natural language translation [14] and disulfide connectivity prediction for proteins [3]. All of these problems can be formulated in terms of a tractable optimization problem. There are also interesting subfamilies of graphical models for which large-margin methods are tractable whereas likelihood-based methods are not; an example is the class of Markov random fields with restricted potentials used for object segmentation in vision [12, 2]. Tractability is not necessarily sufficient to obtain algorithms that work effectively in practice.
In particular, although the problem of large margin estimation can be formulated as a quadratic program (QP) in several cases of interest [2, 19], and although this formulation exploits enough of the problem structure so as to achieve a polynomial representation in terms of the number of variables and constraints, off-the-shelf QP solvers scale poorly with problem and training sample size for these models. To solve large-scale machine learning problems, researchers often turn to simple gradient-based algorithms, in which each individual step is cheap in terms of computation and memory. Examples of this approach in the structured prediction setting include the Structured Sequential Minimal Optimization algorithm [20, 18] and the Structured Exponentiated Gradient algorithm [4]. These algorithms are first-order methods for solving QPs arising from low-treewidth Markov random fields and other decomposable models. They are able to scale to significantly larger problems than off-the-shelf QP solvers. However, they are limited in scope in that they rely on dynamic programming to compute essential quantities such as gradients. They do not extend to models in which dynamic programming is not applicable, for example, to problems such as matchings and min-cuts. In this paper, we present an estimation methodology for structured prediction problems that does not require a general-purpose QP solver. We propose a saddle-point formulation which allows us to exploit simple gradient-based methods [11] with linear convergence guarantees. Moreover, we show that the key computational step in these methods—a certain projection operation—inherits the favorable computational complexity of the underlying optimization problem. This important result makes our approach viable computationally. In particular, for matchings and min-cuts, projection involves a min-cost quadratic flow computation, a problem for which efficient, highly-specialized algorithms are available. 
We illustrate the effectiveness of this approach on two very different large-scale structured prediction tasks: 3D image segmentation and word alignment in translation.

2 Structured models

We begin by discussing two special cases of the general framework that we subsequently present: (1) a class of Markov networks used for segmentation, and (2) a bipartite matching model for word alignment. Despite significant differences in the setup for these models, they share the property that in both cases the problem of finding the highest-scoring output can be formulated as a linear program (LP).

Markov networks. We consider a special class of Markov networks, common in vision applications, in which inference reduces to a tractable min-cut problem [7]. Focusing on binary variables, y = {y_1, . . . , y_N}, and pairwise potentials, we define a joint distribution over {0, 1}^N via

P(y) ∝ ∏_{j∈V} φ_j(y_j) ∏_{jk∈E} φ_{jk}(y_j, y_k),

where (V, E) is an undirected graph, {φ_j(y_j); j ∈ V} are the node potentials and {φ_{jk}(y_j, y_k); jk ∈ E} are the edge potentials. In image segmentation (see Fig. 1(a)), the node potentials capture local evidence about the label of a pixel or laser scan point. Edges usually connect nearby pixels in an image, and serve to correlate their labels. Assuming that such correlations tend to be positive (connected nodes tend to have the same label), we restrict the edge potentials to the form φ_{jk}(y_j, y_k) = exp{−s_{jk} 1I(y_j ≠ y_k)}, where s_{jk} is a non-negative penalty for assigning y_j and y_k different labels. Expressing node potentials as φ_j(y_j) = exp{s_j y_j}, we have

P(y) ∝ exp{ ∑_{j∈V} s_j y_j − ∑_{jk∈E} s_{jk} 1I(y_j ≠ y_k) }.

Figure 1: Examples of structured prediction applications: (a) articulated object segmentation and (b) word alignment in machine translation, e.g. "What is the anticipated cost of collecting fees under the new proposal ?" / "En vertu de les nouvelles propositions , quel est le coût prévu de perception de les droits ?"
Under this restriction of the potentials, it is known that the problem of computing the maximizing assignment, y* = arg max P(y | x), has a tractable formulation as a min-cut problem [7]. In particular, we obtain the following LP:

max_{0≤z≤1} ∑_{j∈V} s_j z_j − ∑_{jk∈E} s_{jk} z_{jk}   s.t.  z_j − z_k ≤ z_{jk},  z_k − z_j ≤ z_{jk},  ∀jk ∈ E.   (1)

In this LP, a continuous variable z_j is a relaxation of the binary variable y_j. Note that the constraints are equivalent to |z_j − z_k| ≤ z_{jk}. Because s_{jk} is positive, z_{jk} = |z_k − z_j| at the maximum, which is equivalent to 1I(z_j ≠ z_k) if the z_j, z_k variables are binary. An integral optimal solution always exists, as the constraint matrix is totally unimodular [17] (that is, the relaxation is not an approximation). We can parameterize the node and edge weights s_j and s_{jk} in terms of user-provided features x_j and x_{jk} associated with the nodes and edges. In particular, in 3D range data, x_j might be spin image features or spatial occupancy histograms of a point j, while x_{jk} might include the distance between points j and k, the dot-product of their normals, etc. The simplest model of dependence is a linear combination of features: s_j = w_n⊤ f_n(x_j) and s_{jk} = w_e⊤ f_e(x_{jk}), where w_n and w_e are node and edge parameters, and f_n and f_e are node and edge feature mappings, of dimension d_n and d_e, respectively. To ensure non-negativity of s_{jk}, we assume the edge features f_e to be non-negative and restrict w_e ≥ 0. This constraint is easily incorporated into the formulation we present below. We assume that the feature mappings f are provided by the user and our goal is to estimate the parameters w from labeled data. We abbreviate the score assigned to a labeling y for an input x as w⊤f(x, y) = ∑_j y_j w_n⊤ f_n(x_j) − ∑_{jk∈E} y_{jk} w_e⊤ f_e(x_{jk}), where y_{jk} = 1I(y_j ≠ y_k).

Matchings. Consider modeling the task of word alignment of parallel bilingual sentences (see Fig.
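The total-unimodularity claim above is easy to check numerically. The following sketch is ours, not the paper's code: it solves LP (1) for a tiny 3-node chain with an off-the-shelf LP solver and recovers an integral labeling. The node scores, edge penalties, and the use of SciPy's `linprog` are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny chain graph: nodes 0-1-2, edges (0,1) and (1,2).
# Variables: x = [z0, z1, z2, z01, z12].
s_node = np.array([2.0, -0.5, 1.5])   # node scores s_j (illustrative)
s_edge = np.array([1.0, 1.0])         # non-negative edge penalties s_jk

# linprog minimizes, so negate the objective of LP (1).
c = np.concatenate([-s_node, s_edge])

# Constraints z_j - z_k <= z_jk and z_k - z_j <= z_jk for each edge.
A_ub = np.array([
    [ 1, -1,  0, -1,  0],
    [-1,  1,  0, -1,  0],
    [ 0,  1, -1,  0, -1],
    [ 0, -1,  1,  0, -1],
])
b_ub = np.zeros(4)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 5, method="highs")
z = res.x  # comes back integral: all three nodes labeled 1, no cut edges
```

With these scores the optimum labels all three nodes 1 (objective value 3) and pays no edge penalty; the LP relaxation returns a 0/1 vertex, as total unimodularity guarantees.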
1(b)) as a maximum weight bipartite matching problem, where the nodes V = V^s ∪ V^t correspond to the words in the "source" sentence (V^s) and the "target" sentence (V^t), and the edges E = {jk : j ∈ V^s, k ∈ V^t} correspond to possible alignments between them. For simplicity, assume that each word aligns to one or zero words in the other sentence. The edge weight s_{jk} represents the degree to which word j in one sentence can translate into word k in the other sentence. Our objective is to find an alignment that maximizes the sum of edge scores. We represent a matching using a set of binary variables y_{jk} that are set to 1 if word j is assigned to word k in the other sentence, and 0 otherwise. The score of an assignment is the sum of edge scores: s(y) = ∑_{jk∈E} s_{jk} y_{jk}. The maximum weight bipartite matching problem, arg max_{y∈Y} s(y), can be found by solving the following LP:

max_{0≤z≤1} ∑_{jk∈E} s_{jk} z_{jk}   s.t.  ∑_{j∈V^s} z_{jk} ≤ 1, ∀k ∈ V^t;  ∑_{k∈V^t} z_{jk} ≤ 1, ∀j ∈ V^s,   (2)

where again the continuous variables z_{jk} correspond to the relaxation of the binary variables y_{jk}. As in the min-cut problem, this LP is guaranteed to have integral solutions for any scoring function s(y) [17]. For word alignment, the scores s_{jk} can be defined in terms of the word pair jk and input features associated with x_{jk}. We can include the identity of the two words, relative position in the respective sentences, part-of-speech tags, string similarity (for detecting cognates), etc. We let s_{jk} = w⊤f(x_{jk}) for some user-provided feature mapping f and abbreviate w⊤f(x, y) = ∑_{jk} y_{jk} w⊤f(x_{jk}).

General structure. More generally, we consider prediction problems in which the input x ∈ X is an arbitrary structured object and the output is a vector of values y = (y_1, . . . , y_{L_x}), for example, a matching or a cut in the graph. We assume that the length L_x and the structure of y depend deterministically on the input x. In our word alignment example, the output space is defined by the length of the two sentences.
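LP (2) can be exercised the same way. This sketch (ours, with an invented 3x3 score matrix) solves the matching LP with SciPy's `linprog`; the solution is integral, and a word whose best score is negative stays unaligned, matching the "one or zero words" convention.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative 3x3 edge scores s_jk; a negative score lets a word go unaligned.
S = np.array([[3.0, 1.0, 0.0],
              [2.0, 4.0, 0.0],
              [0.0, 0.0, -1.0]])
n, m = S.shape
c = -S.ravel()  # linprog minimizes

# Row constraints: sum_k z_jk <= 1; column constraints: sum_j z_jk <= 1.
A_ub = np.zeros((n + m, n * m))
for j in range(n):
    A_ub[j, j * m:(j + 1) * m] = 1.0
for k in range(m):
    A_ub[n + k, k::m] = 1.0
b_ub = np.ones(n + m)

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 1)] * (n * m), method="highs")
Z = res.x.reshape(n, m)
# Integral matching: word 0 -> 0, word 1 -> 1, word 2 stays unaligned.
```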
Denote the output space for a given input x as Y(x) and the entire output space as Y = ∪_{x∈X} Y(x). Consider the class of structured prediction models H defined by the linear family: h_w(x) = arg max_{y∈Y(x)} w⊤f(x, y), where f(x, y) is a vector of functions f : X × Y → IR^n. This formulation is very general. Indeed, it is too general for our purposes: for many f, Y pairs, finding the optimal y is intractable. Below, we specialize to the class of models in which the arg max problem can be solved in polynomial time using linear programming (and more generally, convex optimization); this is still a very large class of models.

3 Max-margin estimation

We assume a set of training instances S = {(x_i, y_i)}_{i=1}^m, where each instance consists of a structured object x_i (such as a graph) and a target solution y_i (such as a matching). Consider learning the parameters w in the conditional likelihood setting. We can define P_w(y | x) = (1/Z_w(x)) exp{w⊤f(x, y)}, where Z_w(x) = ∑_{y′∈Y(x)} exp{w⊤f(x, y′)}, and maximize the conditional log-likelihood ∑_i log P_w(y_i | x_i), perhaps with additional regularization of the parameters w. However, computing the partition function Z_w(x) is #P-complete [23, 10] for the two structured prediction problems we presented above, matchings and min-cuts. Instead, we adopt the max-margin formulation of [20], which directly seeks to find parameters w such that: y_i = arg max_{y′_i∈Y_i} w⊤f(x_i, y′_i), ∀i, where Y_i = Y(x_i) and y_i denotes the appropriate vector of variables for example i. The solution space Y_i depends on the structured object x_i; for example, the space of possible matchings depends on the precise set of nodes and edges in the graph. As in univariate prediction, we measure the error of prediction using a loss function ℓ(y_i, y′_i). To obtain a convex formulation, we upper bound the loss ℓ(y_i, h_w(x_i)) using the hinge function: max_{y′_i∈Y_i} [w⊤f_i(y′_i) + ℓ_i(y′_i)] − w⊤f_i(y_i), where ℓ_i(y′_i) = ℓ(y_i, y′_i), and f_i(y′_i) = f(x_i, y′_i).
Minimizing this upper bound will force the true structure y_i to be optimal with respect to w for each instance i. We add a standard L2 weight penalty ||w||²/(2C):

min_{w∈W} ||w||²/(2C) + ∑_i { max_{y′_i∈Y_i} [w⊤f_i(y′_i) + ℓ_i(y′_i)] − w⊤f_i(y_i) },   (3)

where C is a regularization parameter and W is the space of allowed weights (for example, W = IR^n or W = IR^n_+). Note that this formulation is equivalent to the standard formulation using slack variables ξ and slack penalty C presented in [20, 19]. The key to solving Eq. (3) efficiently is the loss-augmented inference problem, max_{y′_i∈Y_i} [w⊤f_i(y′_i) + ℓ_i(y′_i)]. This optimization problem has precisely the same form as the prediction problem whose parameters we are trying to learn, max_{y′_i∈Y_i} w⊤f_i(y′_i), but with an additional term corresponding to the loss function. Tractability of the loss-augmented inference thus depends not only on the tractability of max_{y′_i∈Y_i} w⊤f_i(y′_i), but also on the form of the loss term ℓ_i(y′_i). A natural choice in this regard is the Hamming distance, which simply counts the number of variables in which a candidate solution y′_i differs from the target output y_i. In general, we need only assume that the loss function decomposes over the variables in y_i. For example, in the case of bipartite matchings the Hamming loss counts the number of different edges in the matchings y_i and y′_i and can be written as: ℓ^H_i(y′_i) = ∑_{jk} y_{i,jk} + ∑_{jk} (1 − 2 y_{i,jk}) y′_{i,jk}. Thus the loss-augmented matching problem for example i can be written as an LP similar to Eq. (2) (without the constant term ∑_{jk} y_{i,jk}):

max_{0≤z≤1} ∑_{jk} z_{i,jk} [w⊤f(x_{i,jk}) + 1 − 2 y_{i,jk}]   s.t.  ∑_j z_{i,jk} ≤ 1,  ∑_k z_{i,jk} ≤ 1.

Generally, when we can express max_{y′_i∈Y_i} w⊤f_i(y′_i) as an LP, max_{z_i∈Z_i} w⊤F_i z_i, where Z_i = {z_i : A_i z_i ≤ b_i, z_i ≥ 0}, for appropriately defined constraints A_i, b_i and feature matrix F_i, we have a similar LP for the loss-augmented inference for each example i: d_i + max_{z_i∈Z_i} (F_i⊤w + c_i)⊤ z_i for appropriately defined d_i and c_i. Let z = {z_1, . .
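To make the loss-augmented step concrete, here is a brute-force sketch (practical only at toy sizes; the paper solves it as the LP above) that finds the highest-scoring matching after adding the Hamming term. The score matrix and function names are our own illustrative choices.

```python
import itertools
import numpy as np

def loss_augmented_argmax(S, y_true):
    """Brute-force max over matchings y' of [score(y') + Hamming(y_true, y')],
    where each source word aligns to at most one target word and vice versa."""
    n, m = S.shape
    best_val, best_y = -np.inf, None
    # Each source node picks a target index or None (unaligned).
    for choice in itertools.product([None] + list(range(m)), repeat=n):
        used = [c for c in choice if c is not None]
        if len(used) != len(set(used)):   # a target used twice: not a matching
            continue
        y = np.zeros((n, m))
        for j, c in enumerate(choice):
            if c is not None:
                y[j, c] = 1.0
        val = np.sum(S * y) + np.sum(y_true != y)   # score + Hamming loss
        if val > best_val:
            best_val, best_y = val, y
    return best_val, best_y

S = np.array([[1.0, 3.0], [2.0, 4.0]])
y_true = np.eye(2)
val, y_viol = loss_augmented_argmax(S, y_true)
```

With these scores the most violated structure is the swapped matching (score 5 plus Hamming loss 4, total 9), which out-scores the target matching (score 5, loss 0); this is exactly the structure that minimizing Eq. (3) must push below the target.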
. , z_m}, Z = Z_1 × . . . × Z_m. We could proceed by making use of Lagrangian duality, which yields a joint convex optimization problem; this is the approach described in [19]. Instead we take a different tack here, posing the problem in its natural saddle-point form:

min_{w∈W} max_{z∈Z} ||w||²/(2C) + ∑_i [ w⊤F_i z_i + c_i⊤z_i − w⊤f_i(y_i) ].   (4)

As we discuss in the following section, this approach allows us to exploit the structure of W and Z separately, allowing for efficient solutions for a wider range of structure spaces.

4 Extragradient method

The key operations of the method we present below are gradient calculations and Euclidean projections. We let L(w, z) = ||w||²/(2C) + ∑_i [ w⊤F_i z_i + c_i⊤z_i − w⊤f_i(y_i) ], with gradients given by: ∇_w L(w, z) = w/C + ∑_i [ F_i z_i − f_i(y_i) ] and ∇_{z_i} L(w, z) = F_i⊤w + c_i. We denote the projection of a vector z_i onto Z_i as π_{Z_i}(z_i) = arg min_{z′_i∈Z_i} ||z′_i − z_i|| and similarly, the projection onto W as π_W(w′) = arg min_{w∈W} ||w′ − w||. A well-known solution strategy for saddle-point optimization is provided by the extragradient method [11]. An iteration of the extragradient method consists of two very simple steps, prediction (w, z) → (w^p, z^p) and correction (w^p, z^p) → (w^c, z^c):

w^p = π_W(w − β ∇_w L(w, z));   z^p_i = π_{Z_i}(z_i + β ∇_{z_i} L(w, z));   (5)
w^c = π_W(w − β ∇_w L(w^p, z^p));   z^c_i = π_{Z_i}(z_i + β ∇_{z_i} L(w^p, z^p));   (6)

where β is an appropriately chosen step size. The algorithm starts with a feasible point w = 0, z_i's that correspond to the assignments y_i's, and step size β = 1. After each prediction step, it computes r = β ||∇L(w, z) − ∇L(w^p, z^p)|| / (||w − w^p|| + ||z − z^p||). If r is greater than a threshold ν, the step size is decreased using an Armijo-type rule: β = (2/3) β min(1, 1/r), and a new prediction step is computed until r ≤ ν, where ν ∈ (0, 1) is a parameter of the algorithm. Once a suitable β is found, the correction step is taken and (w^c, z^c) becomes the new (w, z). The method is guaranteed to converge linearly to a solution w*, z* [11, 9].
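The prediction-correction recipe of Eqs. (5)-(6) can be exercised on a toy saddle-point problem. This is our own minimal sketch: a 1-D convex-concave L with a box-constrained z and a fixed step size, rather than the paper's adaptive Armijo rule.

```python
import numpy as np

# Toy saddle-point: L(w, z) = 0.5*w^2 + w*z - 0.5*z^2, saddle at (0, 0),
# with w unconstrained and z restricted to the box [-1, 1].
grad_w = lambda w, z: w + z        # dL/dw
grad_z = lambda w, z: w - z        # dL/dz
proj_z = lambda z: np.clip(z, -1.0, 1.0)

w, z, beta = 1.0, 1.0, 0.2
for _ in range(300):
    # Prediction step (Eq. 5): move from (w, z), then project.
    wp = w - beta * grad_w(w, z)
    zp = proj_z(z + beta * grad_z(w, z))
    # Correction step (Eq. 6): re-evaluate the gradients at the predicted point.
    w = w - beta * grad_w(wp, zp)
    z = proj_z(z + beta * grad_z(wp, zp))
# (w, z) converges linearly to the saddle point (0, 0)
```

The projection onto W is the identity here (w is free), while the z-projection is a simple clip; in the paper's models that clip is replaced by the min-cost quadratic flow projection discussed next.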
See the longer version of this paper at http://www.cs.berkeley.edu/˜taskar/extragradient.pdf for details. By comparison, Exponentiated Gradient [4] has sublinear convergence rate guarantees, while Structured SMO [18] has none. The key step influencing the efficiency of the algorithm is the Euclidean projection onto the feasible sets W and Z_i. In case W = IR^n, the projection is the identity operation; projecting onto IR^n_+ consists of clipping negative weights to zero. Additional problem-specific constraints on the weight space can be efficiently incorporated in this step (although linear convergence guarantees only hold for polyhedral W). In the case of word alignment, Z_i is the convex hull of bipartite matchings and the problem reduces to the much-studied minimum-cost quadratic flow problem. The projection z_i = π_{Z_i}(z′_i) is given by

min_{0≤z≤1} ∑_{jk} ½ (z′_{i,jk} − z_{i,jk})²   s.t.  ∑_j z_{i,jk} ≤ 1,  ∑_k z_{i,jk} ≤ 1.

We use a standard reduction of bipartite matching to min-cost flow by introducing a source node s linked to all the nodes in V^s_i (words in the "source" sentence), and a sink node t linked from all the nodes in V^t_i (words in the "target" sentence), using edges of capacity 1 and cost 0. The original edges jk have a quadratic cost ½ (z′_{i,jk} − z_{i,jk})² and capacity 1. Minimum (quadratic) cost flow from s to t is the projection of z′_i onto Z_i. The reduction of the projection to minimum quadratic cost flow for the min-cut polytope Z_i is shown in the longer version of the paper. Algorithms for solving this problem are nearly as efficient as those for solving regular min-cost flow problems. In the case of word alignment, the running time scales with the cube of the sentence length. We use publicly-available code for solving this problem [8] (see http://www.math.washington.edu/˜tseng/netflowg_nl/).

5 Experiments

We investigate the two structured models described above: bipartite matchings for word alignments and restricted-potential Markov nets for 3D segmentation.
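Absent a min-cost quadratic flow solver, the matching-polytope projection described above can be sanity-checked with a generic constrained optimizer. This sketch is our substitution (far slower than the flow reduction the paper uses): it projects a 2x2 point onto the matching polytope with SciPy's SLSQP.

```python
import numpy as np
from scipy.optimize import minimize

def project_matching(z_prime):
    """Euclidean projection onto {0 <= z <= 1, row sums <= 1, col sums <= 1},
    solved as a small QP with a generic solver (illustrative only)."""
    n, m = z_prime.shape
    zp = z_prime.ravel()
    cons = (
        [{"type": "ineq", "fun": (lambda z, j=j: 1 - z.reshape(n, m)[j].sum())}
         for j in range(n)] +
        [{"type": "ineq", "fun": (lambda z, k=k: 1 - z.reshape(n, m)[:, k].sum())}
         for k in range(m)]
    )
    res = minimize(lambda z: 0.5 * np.sum((z - zp) ** 2),
                   np.clip(zp, 0, 1), method="SLSQP",
                   bounds=[(0, 1)] * (n * m), constraints=cons)
    return res.x.reshape(n, m)

Z = project_matching(np.array([[0.9, 0.8], [0.7, 0.6]]))
# Every row and column sum exceeded 1; by the KKT conditions the projection
# here lands at 0.5 in every entry (all four sum constraints active).
```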
A commercial QP solver, MOSEK, runs out of memory on the problems we describe below using the QP formulation [19]. We compared the extragradient method with the averaged perceptron algorithm [6]. A question which arises in practice is how to choose the regularization parameter C. The typical approach is to run the algorithm for several values of the regularization parameter and pick the best model using a validation set. For the averaged perceptron, a standard method is to run the algorithm tracking its performance on a validation set, and selecting the model with the best performance. We use the same training regime for the extragradient by running it with C = ∞.

Object segmentation. We test our algorithm on a 3D scan segmentation problem using the class of Markov networks with potentials that were described above. The dataset is a challenging collection of cluttered scenes containing articulated wooden puppets [2]. It contains eleven different single-view scans of three puppets of varying sizes and positions, with clutter and occluding objects such as rope, sticks and rings. Each scan consists of around 7,000 points. Our goal was to segment the scenes into two classes: puppet and background. We use five of the scenes for our training data, three for validation and three for testing. Sample scans from the training and test set can be seen at http://www.cs.berkeley.edu/˜taskar/3DSegment/. We computed spin images of size 10 × 5 bins at two different resolutions, then scaled the values and performed PCA to obtain 45 principal components, which comprised our node features.
We used the surface links output by the scanner as edges between points, and for each edge used only a single feature, set to a constant value of 1 for all edges. This results in all edges having the same potential. The training data contains approximately 37,000 nodes and 88,000 edges. Training took about 4 hours for 600 iterations on a 2.80GHz Pentium 4 machine.

Figure 2: Both plots show test error for the averaged perceptron and the extragradient (left y-axis) and training loss per node or edge for the extragradient (right y-axis) versus number of iterations for (a) the object segmentation task and (b) the word alignment task.

Fig. 2(a) shows that the extragradient has a consistently lower error rate (about 3% for extragradient, 4% for averaged perceptron), using only slightly more expensive computations per iteration. Also shown is the corresponding decrease in the hinge-loss upper bound on the training data as the extragradient progresses.

Word alignment. We also tested our learning algorithm on word-level alignment using a data set from the 2003 NAACL set [15], the English-French Hansards task. This corpus consists of 1.1M automatically aligned sentences, and comes with a validation set of 39 sentence pairs and a test set of 447 sentences. The validation and test sentences have been hand-aligned and are marked with both sure and possible alignments. Using these alignments, the alignment error rate (AER) is calculated as:

AER(A, S, P) = 1 − (|A ∩ S| + |A ∩ P|) / (|A| + |S|).
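The AER formula above reduces to a few lines; a sketch with illustrative toy pair sets (not the Hansards data):

```python
def aer(A, S, P):
    """Alignment Error Rate. A: predicted word-index pairs,
    S: sure gold pairs, P: possible gold pairs (with S a subset of P)."""
    A, S, P = set(A), set(S), set(P)
    return 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))

sure = {(0, 0), (1, 1)}
possible = sure | {(2, 2)}
perfect = aer({(0, 0), (1, 1)}, sure, possible)   # matches all sure pairs: 0.0
halfway = aer({(0, 0), (1, 2)}, sure, possible)   # one sure pair missed: 0.5
```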
Here, A is a set of proposed index pairs, S is the set of sure gold pairs, and P is the set of possible gold pairs (where S ⊆ P). We used the intersection of the predictions of the English-to-French and French-to-English IBM Model 4 alignments (using GIZA++ [16]) on the first 5000 sentence pairs from the 1.1M sentences. The number of edges for 5000 sentences was about 555,000. We tested on the 347 hand-aligned test examples, and used the validation set to select the stopping point. The features on the word pair (e_j, f_k) include measures of association, orthography, relative position, and predictions of generative models (see [22] for details). It took about 3 hours to perform 600 training iterations on the training data using a 2.8GHz Pentium 4 machine. Fig. 2(b) shows the extragradient performing slightly better (by about 0.5%) than the averaged perceptron.

6 Conclusion

We have presented a general solution strategy for large-scale structured prediction problems. We have shown that these problems can be formulated as saddle-point optimization problems, problems that are amenable to solution by the extragradient algorithm. Key to our approach is the recognition that the projection step in the extragradient algorithm can be solved by network flow algorithms. Network flow algorithms are among the most well-developed in the field of combinatorial optimization, and yield stable, efficient algorithmic platforms. We have exhibited the favorable scaling of this overall approach in two concrete, large-scale learning problems. It is also important to note that the general approach extends to a much broader class of problems. In [21], we show how to apply this approach efficiently to other types of models, including general Markov networks and weighted context-free grammars, using Bregman projections.

Acknowledgments. We thank Paul Tseng for kindly answering our questions about his min-cost flow code.
This work was funded by the DARPA CALO project (03-000219) and a Microsoft Research MICRO award (05-081). SLJ was also supported by an NSERC graduate scholarship.

References

[1] Y. Altun, I. Tsochantaridis, and T. Hofmann. Hidden Markov support vector machines. In Proc. ICML, 2003.
[2] D. Anguelov, B. Taskar, V. Chatalbashev, D. Koller, D. Gupta, G. Heitz, and A. Ng. Discriminative learning of Markov random fields for segmentation of 3D scan data. In CVPR, 2005.
[3] P. Baldi, J. Cheng, and A. Vullo. Large-scale prediction of disulphide bond connectivity. In Proc. NIPS, 2004.
[4] P. Bartlett, M. Collins, B. Taskar, and D. McAllester. Exponentiated gradient algorithms for large-margin structured classification. In NIPS, 2004.
[5] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal. Mach. Intell., 24, 2002.
[6] M. Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proc. EMNLP, 2002.
[7] D. M. Greig, B. T. Porteous, and A. H. Seheult. Exact maximum a posteriori estimation for binary images. J. R. Statist. Soc. B, 51, 1989.
[8] F. Guerriero and P. Tseng. Implementation and test of auction methods for solving generalized network flow problems with separable convex cost. Journal of Optimization Theory and Applications, 115(1):113–144, October 2002.
[9] B. S. He and L. Z. Liao. Improvements of some projection methods for monotone nonlinear variational inequalities. JOTA, 112:111–128, 2002.
[10] M. Jerrum and A. Sinclair. Polynomial-time approximation algorithms for the Ising model. SIAM J. Comput., 22, 1993.
[11] G. M. Korpelevich. The extragradient method for finding saddle points and other problems. Ekonomika i Matematicheskie Metody, 12:747–756, 1976.
[12] S. Kumar and M. Hebert. Discriminative fields for modeling spatial dependencies in natural images. In NIPS, 2003.
[13] J. Lafferty, A. McCallum, and F. Pereira.
Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, 2001.
[14] E. Matusov, R. Zens, and H. Ney. Symmetric word alignments for statistical machine translation. In Proc. COLING, 2004.
[15] R. Mihalcea and T. Pedersen. An evaluation exercise for word alignment. In Proceedings of the HLT-NAACL 2003 Workshop, Building and Using Parallel Texts: Data Driven Machine Translation and Beyond, pages 1–6, Edmonton, Alberta, Canada, 2003.
[16] F. Och and H. Ney. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1), 2003.
[17] A. Schrijver. Combinatorial Optimization: Polyhedra and Efficiency. Springer, 2003.
[18] B. Taskar. Learning Structured Prediction Models: A Large Margin Approach. PhD thesis, Stanford University, 2004.
[19] B. Taskar, V. Chatalbashev, D. Koller, and C. Guestrin. Learning structured prediction models: a large margin approach. In ICML, 2005.
[20] B. Taskar, C. Guestrin, and D. Koller. Max margin Markov networks. In NIPS, 2003.
[21] B. Taskar, S. Lacoste-Julien, and M. Jordan. Structured prediction, dual extragradient and Bregman projections. Technical report, UC Berkeley Statistics Department, 2005.
[22] B. Taskar, S. Lacoste-Julien, and D. Klein. A discriminative matching approach to word alignment. In EMNLP, 2005.
[23] L. G. Valiant. The complexity of computing the permanent. Theoretical Computer Science, 8:189–201, 1979.
| 2005 | 181 | 2,805 |
Noise and the two-thirds power law

Uri Maoz1,2,3, Elon Portugaly3, Tamar Flash2 and Yair Weiss3,1
1 Interdisciplinary Center for Neural Computation, The Hebrew University of Jerusalem, Edmond Safra Campus, Givat Ram, Jerusalem 91904, Israel; 2 Department of Computer Science and Applied Mathematics, The Weizmann Institute of Science, PO Box 26, Rehovot 76100, Israel; 3 School of Computer Science and Engineering, The Hebrew University of Jerusalem, Edmond Safra Campus, Givat Ram, Jerusalem 91904, Israel

Abstract. The two-thirds power law, an empirical law stating an inverse non-linear relationship between the tangential hand speed and the curvature of its trajectory during curved motion, is widely acknowledged to be an invariant of upper-limb movement. It has also been shown to exist in eye-motion and locomotion, and was even demonstrated in motion perception and prediction. This ubiquity has fostered various attempts to uncover the origins of this empirical relationship. In these it was generally attributed either to smoothness in hand- or joint-space, or to the result of mechanisms that damp noise inherent in the motor system to produce the smooth trajectories evident in healthy human motion. We show here that white Gaussian noise also obeys this power-law. Analysis of signal and noise combinations shows that trajectories that were synthetically created not to comply with the power-law are transformed to power-law compliant ones after combination with low levels of noise. Furthermore, there exist colored noise types that drive non-power-law trajectories to power-law compliance and are not affected by smoothing. These results suggest caution when running experiments aimed at verifying the power-law, or assuming its underlying existence, without proper analysis of the noise.
Our results could also suggest that the power-law might be derived not from smoothness or smoothness-inducing mechanisms operating on the noise inherent in our motor system, but rather from the correlated noise which is inherent in this motor system.

1 Introduction

A number of regularities have been empirically observed for the motion of the end-point of the human upper-limb during curved and drawing movements. One of these has been termed "the two-thirds power law" ([1]). It can be formulated as:

v(t) = const · κ(t)^β   (1)

or, in log-space,

log(v(t)) = const + β log(κ(t))   (2)

where v is the tangential end-point speed, κ is the instantaneous curvature of the path, and β is approximately −1/3. The various studies that lend support to this power-law go beyond its simple verification. There are those that suggest it as a tool to extract natural segmentation into primitives of complex movements ([2], [3]). Others show the development of the power-law with age for children ([4]). There is also research that suggests it appears for three-dimensional (3D) drawings under isometric force conditions ([5]). It was even found in neural population coding in the monkey motor brain area controlling the hand ([6]). Other studies have located the power-law elsewhere than the hand. It was found to apply in eye-motion ([7]) and even in motion perception ([8],[9]) and movement prediction based on biological motion ([10]). Recent studies have also found it in locomotion ([11]). This power-law has thus been widely accepted as an important invariant in biological movement trajectories, so much so that it has become an evaluation criterion for the quality of models (e.g. [12]). This has motivated various attempts to find some deeper explanation that supposedly underlies this regularity. The power-law was shown to possibly be a result of minimization of jerk ([13],[14]), jerk along a predefined path ([15]), or endpoint variability due to noise inherent in the motor system ([12]).
Others have claimed that it stems from forward kinematics of sinusoidal movements at the joints ([16]). Another explanation has to do with the mathematically interesting fact that motion according to the power-law maintains constant affine velocity ([17],[18]). We were thus very much surprised by the following:

Observation: Given a time series (x_i, y_i)_{i=1}^n in which x_i, y_i ∼ N(0, 1) i.i.d. (x_i, x_j, y_i, y_j independent for i ≠ j), and assuming that the series is of equal time intervals, calculate κ and v in order to obtain β from the linear regression of log(v) versus log(κ). The linear regression plot of log(v) versus log(κ) is within range, both in its regression coefficient and R² value, of what experimentalists consider compliance with the power-law (see Figure 1b). Therefore this white Gaussian noise trajectory seems to fit the two-thirds power-law model in equation (1) above.

1.1 Problem formulation

For any regular planar curve parameterized by t, we get from the Frenet-Serret formulas (see [19])¹:

κ(t) = |ẍẏ − ẋÿ| / (ẋ² + ẏ²)^{3/2} = |ẍẏ − ẋÿ| / v³(t)   (3)

where v(t) = √(ẋ² + ẏ²). Denoting α(t) = |ẍẏ − ẋÿ| we obtain:

v(t) = α(t)^{1/3} · κ(t)^{−1/3}   (4)

or, in log-space²:

log(v(t)) = (1/3) log(α(t)) − (1/3) log(κ(t))   (5)

Given a trajectory for which α is constant, the power-law in equation (1) above is obtained exactly (the term α is in fact the affine velocity of [17],[18], and thus a trajectory that yields a constant α would mean movement at constant affine velocity).

¹ Though there exists a definition of signed curvature for planar curves (i.e. without the absolute value in the numerator of equation (3)), we refer to the absolute value of the curvature, as done in the power-law. Therefore, in our case, κ(t) is the absolute value of the instantaneous curvature.
² Non-linear regression in (4) should naturally be performed instead of log-space linear regression in (5).
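The constant-α case above has a simple closed-form instance: an ellipse x = a cos t, y = b sin t traversed in its angular parameter has α = |ẍẏ − ẋÿ| = ab, a constant, so the log-log regression slope is exactly −1/3. A sketch of this check (our own, using the analytic derivatives rather than finite differences):

```python
import numpy as np

a, b = 2.0, 1.0
t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
# analytic derivatives of x = a*cos(t), y = b*sin(t)
xd, yd = -a * np.sin(t), b * np.cos(t)
xdd, ydd = -a * np.cos(t), -b * np.sin(t)
v = np.hypot(xd, yd)
kappa = np.abs(xdd * yd - xd * ydd) / v**3   # = ab / v^3, so alpha = ab is constant
beta, intercept = np.polyfit(np.log(kappa), np.log(v), 1)
# beta is -1/3 up to floating-point error, with a perfect linear fit
```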
However, this linear regression in log-space is the method of choice in the motor-control literature, despite the criticism of [16]. We therefore opted for it here as well.

Figure 1: Given a trajectory composed of normally distributed position data with constant time intervals, we calculate and plot: (a) log(α) versus log(κ) (regression slope 0.14) and (b) log(v) versus log(κ) (regression slope −0.29), with their linear regression lines. The correlation coefficient in (a) is 0.14, entailing the one in (b) to be −0.29 (see text). Moreover, the R² value in (a) is 0.04, much smaller than the 0.57 value in (b).

Denoting the linear regression coefficient of log(v) versus log(κ) by β and the linear regression coefficient of log(α) versus log(κ) by ξ, it can easily be shown that (5) entails:

β = −1/3 + ξ/3   (6)

Hence, if log(α) and log(κ) are statistically uncorrelated, the linear regression coefficient between them, which we termed ξ, would be 0, and thus from (6) the linear regression coefficient of log(v) versus log(κ), which we named β, would be exactly −1/3. Therefore, any trajectory that produces log(α) and log(κ) that are statistically uncorrelated would precisely conform to the power-law in (1)³. If log(α) and log(κ) are weakly correlated, such that ξ is small, the effect on β would be a positive offset of ξ/3 from the −1/3 value of the power-law. Below, we analyze ξ for random position data, and show that it is indeed small and that β takes values close to −1/3. Figure 1 portrays a typical log(v) versus log(κ) linear regression plot for the case of a trajectory composed of random data sampled from an i.i.d. normal distribution.

2 Power-law analysis for trajectories composed of normally distributed samples

Let us take the time series (x_i, y_i)_{i=1}^n where x_i, y_i ∼ N(0, 1), i.i.d.
Let t_i denote the time at sample i, and for all i let t_{i+1} − t_i = 1. From this time series we calculate α, κ and v by central finite differences (footnote 4). Again, we denote the linear regression coefficient of log(α) versus log(κ) by ξ, so that log(α) = const + ξ·log(κ), where ξ = Cov[log(κ), log(α)] / Var[log(κ)]. From (6) we know that a linear regression of log(v) versus log(κ) would then result in β = −1/3 + ξ/3. The fact that ξ is scaled down three-fold to give the offset of β from −1/3 is significant: for β to achieve values far from −1/3, ξ would need to be very big. For example, in order for β to be 0 (i.e. motion at constant tangential speed), ξ would need to be 1, which requires perfect correlation between log(α), which is a time-dependent variable, and log(κ), which is a geometric one. This could be taken to suggest that for a control system to maintain movement at β values that are remote from −1/3 would require some non-trivial control of the correlation between log(α) and log(κ). Running 100 Monte-Carlo simulations, each drawing a time series of 1,000,000 normally distributed points, we estimated ξ = 0.1428 ± 0.0013 (R² = 0.0357 ± 0.0006). ξ's magnitude and its corresponding R² value suggest that log(α) and log(κ) are only weakly correlated (hence the ball-like shape in Figure 1a). The same type of simulations gave β = −0.2857 ± 0.0004 (R² = 0.5715 ± 0.0011), as expected.
Footnote 3: α being constant is naturally a special case of uncorrelated log(α) and log(κ).
Footnote 4: We used the central finite differencing technique here mainly for ease of use and analysis. Other differentiation techniques, either utilizing more samples (e.g. the Lagrange 5-point method) or analytic differentiation of smoothing functions (e.g. smoothing splines), yielded similar results. In a more general sense, smoothing techniques introduce local correlations (between neighboring samples in time) into the trajectory; yet globally, for large time series, the correlation remains weak.
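The Monte-Carlo estimate above, and the exact identity β = −1/3 + ξ/3, can both be reproduced with a short numerical sketch (a minimal Python version, not the paper's Matlab code; unit time steps are assumed and numpy's central-difference `gradient` stands in for the paper's finite differencing, with fewer samples, so the estimates are slightly noisier):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x, y = rng.standard_normal(n), rng.standard_normal(n)

# Central finite differences with unit time steps.
xd, yd = np.gradient(x), np.gradient(y)
xdd, ydd = np.gradient(xd), np.gradient(yd)

alpha = np.abs(xdd * yd - xd * ydd)   # the "affine velocity" term
v = np.hypot(xd, yd)                  # tangential speed
kappa = alpha / v**3                  # unsigned curvature, eq. (3)

lk = np.log(kappa)
xi = np.polyfit(lk, np.log(alpha), 1)[0]   # slope of log(alpha) vs log(kappa)
beta = np.polyfit(lk, np.log(v), 1)[0]     # slope of log(v)     vs log(kappa)

# log(v) = (1/3)log(alpha) - (1/3)log(kappa) holds pointwise by construction,
# so the two OLS slopes satisfy beta = -1/3 + xi/3 up to floating-point error,
# and for white Gaussian position data beta comes out near -0.286.
print(round(beta, 2), round(xi, 2))
```

Note that the identity β = −1/3 + ξ/3 is deterministic (it holds for any trajectory), while the particular values of ξ and β depend on the random data.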
Both β and its R² magnitude are within what is considered by experimentalists to be the range of applicable values for the power-law. Moreover, standard outlier detection and removal techniques, as well as robust linear regression, bring β closer to −1/3 and increase the R² value. Measurements of human drawing movements in 3D also exhibit the power-law ([20],[16]). We therefore repeated the same analysis procedure for 3D data (i.e. drawing time series (x_i, y_i, z_i)_{i=1}^n i.i.d. from N(0, 1), and extracting v, α and κ according to their 3D definitions). This time we obtained ξ = −0.0417 ± 0.0009 (R² = 0.0036 ± 0.0002) and, as expected, β = −0.3472 ± 0.0003 (R² = 0.6944 ± 0.0006). This is even closer to the power-law values, as defined in (1). The phenomenon also occurs when we repeat the procedure for trajectories composed of uniformly distributed samples with constant time intervals: the linear regression of log(v) versus log(κ) for planar trajectories gives β = −0.2859 ± 0.0004 (R² = 0.5724 ± 0.0009), and 3D trajectories of uniformly distributed samples give β = −0.3475 ± 0.0003 (R² = 0.6956 ± 0.0007) under the same simulation procedure. In both cases the parameters obtained for the uniform distribution are very close to those of the normal distribution.
3 Analysis of signal and noise combinations
3.1 Original (non-filtered) signal and noise combinations
Another interesting question has to do with the combination of signal and noise. Every experimentally measured signal has some noise incorporated in it, be it measurement-device noise or noise internal to the human motor system. But how much noise must be present to transform a signal that does not conform to the power-law into one that does?
We took a planar ellipse with a major axis of 0.35m and a minor axis of 0.13m (well within the standard range of dimensions used as templates for measuring the power-law in humans, see figure 2b), and spread 120 equispaced samples over its perimeter (a typical number of samples for the sampling rate given below). The time intervals were constant at 0.01s (100 Hz is of the order of magnitude of contemporary measurement equipment). This elliptic trajectory is thus traversed at constant speed, despite not having constant curvature; it therefore does not obey the power-law (a "sanity check" of our simulations gave β = −0.0003, R² = 0.0028). At this stage, normally distributed noise of various standard deviations was added to this ellipse. We ran 100 simulations for every noise magnitude and averaged the power-law parameters β and R² obtained from the log(v) versus log(κ) linear regressions for each noise magnitude (see figure 2a). The level of noise required to drive the non-power-law-compliant trajectory to obey the power-law is rather small; a standard deviation of about 0.005m is sufficient.
Footnote 5: The Matlab code for the simple Monte-Carlo simulation of Section 2 can be found at: http://www.cs.huji.ac.il/~urim/NIPS_2005/Monte_Carlo.m
Figure 2: (a) β and R² values of the power-law fit for trajectories composed of the non-power-law planar ellipse given in (b) combined with various magnitudes of noise (portrayed as the standard deviations of the normally distributed noise that was added). (c) β and R² values for the non-power-law 3D bent ellipse given in (d). All distances are measured in meters.
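The ellipse-plus-noise experiment can be sketched numerically as follows (a hedged approximation: the text does not say whether 0.35m and 0.13m are full axes or semi-axes, so the semi-axes below are an assumption, and equispacing in arc length is done via a dense lookup table rather than the authors' method):

```python
import numpy as np

def powerlaw_beta(x, y):
    """Slope of log(v) vs log(kappa), via central finite differences."""
    xd, yd = np.gradient(x), np.gradient(y)
    xdd, ydd = np.gradient(xd), np.gradient(yd)
    v = np.hypot(xd, yd)
    kappa = np.abs(xdd * yd - xd * ydd) / v**3
    m = (v > 0) & (kappa > 0)
    return np.polyfit(np.log(kappa[m]), np.log(v[m]), 1)[0]

# Ellipse traversed at (approximately) constant speed: 120 samples placed
# equispaced in arc length using a densely tabulated perimeter.
a, b = 0.175, 0.065   # semi-axes (0.35m x 0.13m ellipse) -- an assumption
t = np.linspace(0, 2 * np.pi, 100_000)
ex, ey = a * np.cos(t), b * np.sin(t)
s = np.concatenate([[0], np.cumsum(np.hypot(np.diff(ex), np.diff(ey)))])
targets = np.linspace(0, s[-1], 120, endpoint=False)
x = np.interp(targets, s, ex)
y = np.interp(targets, s, ey)

rng = np.random.default_rng(2)
clean = powerlaw_beta(x, y)   # near 0: constant speed, no power-law
noisy = powerlaw_beta(x + rng.normal(0, 0.005, x.size),
                      y + rng.normal(0, 0.005, y.size))
print(clean, noisy)           # noise drags beta toward -1/3
```

A single run is noisier than the paper's 100-run averages, but the qualitative effect, β near 0 for the clean ellipse and clearly negative once 0.005m noise is added, is the point being illustrated.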
The same procedure was performed for a 3D bent ellipse of similar proportions and perimeter, in order to test the effects of noise on spatial trajectories (see figure 2c and d). We placed the samples on this bent ellipse in an equispaced manner, so that it would be traversed at constant speed (and indeed β = −0.0042, R² = 0.1411). This time the standard deviation of the noise required for power-law-like behavior in 3D was about 0.003m, a bit smaller than in the planar case. Naturally, had we chosen smaller ellipses, less noise would have been required to make them obey the power-law (for instance, for a 0.1 by 0.05m ellipse, the same effect is obtained with noise of about 0.002m standard deviation in the planar case and 0.0015m for a 3D bent ellipse of the same magnitude). Note that for both the planar and the spatial shapes, the noise level that drives the non-power-law signal to conform to the power-law is on the order of magnitude of the average displacement between consecutive samples.
3.2 Does filtering solve the problem?
All the analysis above was for raw data, whereas it is common practice to low-pass filter experimentally obtained trajectories before extracting κ and v. If we take a non-power-law signal, contaminate it with enough noise for it to comply with the power-law, and then filter it, would the resulting signal obey the power-law or not? We attempted to answer this question by contaminating the constant-speed bent ellipse of the previous subsection (reminder: β = −0.0003, R² = 0.0028) with Gaussian noise of standard deviation 0.005m (about half the distance between consecutive samples for this trajectory and sampling rate). This resulted in a trajectory with β = −0.3154 ± 0.0323 (R² = 0.6303 ± 0.0751) over 100 simulation runs (actually a bit closer to the power-law than the noise alone). We then low-pass filtered each trajectory with a zero-lag second-order Butterworth filter with a 10 Hz cutoff frequency.
Figure 3: A graphical outline of Procedure 1 (from the top to the bottom left) and Procedure 2 (from the top to the bottom right). (a) The original non-power-law signal. (b) Signal (a) plus white noise. (c) Signal (b) after smoothing. (d) Signal (a) with added correlated noise. (e) Signal (d) after smoothing. All signals but (a) and (c) obey the power-law.
The filtering returned a signal essentially without power-law compliance, i.e. β = −0.0472 ± 0.0209 (R² = 0.1190 ± 0.0801). Let us name this process Procedure 1. But what if the noise at hand is more resistant to smoothing? Taking 3D Gaussian noise and smoothing it (using the same type of Butterworth filtering as above) does not make the resulting signal any less compliant with the power-law: Monte-Carlo simulations of smoothed random trajectories (100 repetitions of 1,000,000 samples each) resulted in β = −0.3473 ± 0.0003 (R² = 0.6945 ± 0.0007), essentially the same as the original noise, which had β = −0.3472 ± 0.0003, R² = 0.6944 ± 0.0006 (footnote 7). We therefore ran the signal-plus-noise simulations again, this time adding smoothed noise (increasing its magnitude five-fold to compensate for the loss of energy due to the filtering) to the constant-speed bent ellipse. The combined signal yielded a power-law fit of β = −0.3175 ± 0.0414 (R² = 0.6260 ± 0.0798), leaving it power-law compliant. This time, however, the same filtering procedure as above left us with a signal that could still be considered to obey the power-law, with β = −0.2747 ± 0.0481 (R² = 0.5498 ± 0.0698). We name this process Procedure 2. Procedures 1 and 2 are portrayed graphically in figure 3. If we continue to increase the noise magnitude, the effect of the smoothing at the end of Procedure 2 becomes less apparent, with the smoothed trajectories sometimes conforming to the power-law (mainly in terms of R²) better than before the smoothing.
3.3 Levels of noise inherent to upper limb movement in human data
We conducted a preliminary experiment to explore the level of noise intrinsic to the human motor system. Subjects were instructed to repetitively and continuously trace ellipses in 3D while seated with their trunk restrained to the back of a rigid chair and their eyes closed (to avoid spatial cues in the room, see [16]). They were to suppose that there exists a spatial elliptical pattern before them, which they were to traverse with their hand. The 3D hand position was recorded at 100 Hz using NDI's Optotrak 2010. Given their goal, it is reasonable to assume that the variance between the trajectories in the different iterations is composed of measurement noise as well as noise internal to the subjects' motor systems (the inter-repetition drift was removed by PCA alignment after segmenting the continuous motion into its underlying iterations). We thus analyzed the recorded time series to estimate that variance (see figure 4a) and compared it to the average variance of the synthetic equispaced bent ellipse combined with correlated noise (see figure 4b). While more careful experiments may be needed to extract the exact SNR of human limb movements, it appears that the level of noise in human limb movement is comparable to the level of noise that can cause power-law behavior even for non-power-law signals.
Footnote 7: This result goes hand in hand with what was said before: introducing local correlations between samples does not alter the power-law-from-noise phenomenon.
Figure 4: Noise level in repetitive upper limb movement. The variance of the different iterations was measured at 10 different positions along the ellipse, each defined by a plane passing through the origin, perpendicular to the plane of the first two principal components of the ellipses. The angles between every two neighboring planes were equal.
(a) For each position, the intersection of each iteration of the trajectory with the plane was calculated. (b) The standard deviation of the different intersections within each plane was measured, and is depicted for synthetic and human data.
4 Discussion
We do not suggest that the power-law, which stems from analysis of human data, is a bogus phenomenon resulting only from measurement noise. Our results do, however, suggest caution when carrying out experiments that either aim to verify the power-law or assume its existence. When performing such experimentation, one should always verify that the signal-to-noise ratio in the system is well within the bounds where it does not drive the results toward the power-law. It may further be wise to conduct Monte-Carlo simulations with the specific parameters of the problem to ascertain this. Regarding measurement-device noise alone, note that whereas many modern devices for planar motion measurement tend to have a measurement accuracy better than the 0.002m or so that we have shown to be enough to produce the power-law from noise, the same cannot be said for contemporary 3D measurement devices, where errors on the order of 0.002m can certainly occur. In addition, one should keep in mind that even for smaller noise magnitudes some drift toward the power-law does occur; this must be taken into consideration when analyzing results. Last, muscle tremor must also be borne in mind as another source of noise, especially when dealing with pathologies. Moreover, following the results above, it is clear that when a significant amount of noise is incorporated into the system, simply applying an off-the-shelf smoothing procedure will not necessarily remove it satisfactorily, especially if the noise is correlated (i.e. not white). The smoothing procedure will also most likely distort the signal to some degree, even if the noise is white.
Therefore smoothing is not an easy "magic cure" for the power-law-from-noise phenomenon. Another interesting aspect of our results has to do with the light they shed on the origins of the power-law. Previous works showed that the power-law can be derived from smoothness criteria for human trajectories, be it the assumption that these minimize the end-point's jerk ([14]), jerk along a predefined path ([15]), or variability due to noise inherent in the motor system itself ([12]); or that the power-law is due to smoothing inherent in the human motor system (especially the muscles, [21]) or to smooth joint oscillations ([16]). The results presented here suggest the opposite might be true as well: the power-law can be derived from the noise itself, which is inherent in our motor system (and which is likely to be correlated noise), rather than from any smoothing mechanisms which damp it.
Acknowledgements
This research was supported in part by the HFSPO grant to T.F.; E.P. is supported by an Eshkol fellowship of the Israeli Ministry of Science.
References
[1] F. Lacquaniti, C. Terzuolo, and P. Viviani. The law relating kinematic and figural aspects of drawing movements. Acta Psychologica, 54:115–130, 1983.
[2] P. Viviani. Do units of motor action really exist? Experimental Brain Research, 15:201–216, 1986.
[3] P. Viviani and M. Cenzato. Segmentation and coupling in complex movements. Journal of Experimental Psychology: Human Perception and Performance, 11(6):828–845, 1985.
[4] P. Viviani and R. Schneider. A developmental study of the relationship between geometry and kinematics in drawing movements. Journal of Experimental Psychology, 17:198–218, 1991.
[5] J. T. Massey, J. T. Lurito, G. Pellizzer, and A. P. Georgopoulos. Three-dimensional drawings in isometric conditions: relation between geometry and kinematics. Experimental Brain Research, 88(3):685–690, 1992.
[6] A. B. Schwartz. Direct cortical representation of drawing. Science, 265(5171):540–542, 1994.
[7] C.
deSperati and P. Viviani. The relationship between curvature and velocity in two-dimensional smooth pursuit eye movement. The Journal of Neuroscience, 17(10):3932–3945, 1997.
[8] P. Viviani and N. Stucchi. Biological movements look uniform: evidence of motor perceptual interactions. Journal of Experimental Psychology: Human Perception and Performance, 18(3):603–626, 1992.
[9] P. Viviani, G. Baud-Bovy, and M. Redolfi. Perceiving and tracking kinesthetic stimuli: further evidence of motor perceptual interactions. Journal of Experimental Psychology: Human Perception and Performance, 23(4):1232–1252, 1997.
[10] S. Kandel, J. Orliaguet, and P. Viviani. Perceptual anticipation in handwriting: The role of implicit motor competence. Perception and Psychophysics, 62(4):706–716, 2000.
[11] S. Vieilledent, Y. Kerlirzin, S. Dalbera, and A. Berthoz. Relationship between velocity and curvature of a human locomotor trajectory. Neuroscience Letters, 305(1):65–69, 2001.
[12] C. M. Harris and D. M. Wolpert. Signal-dependent noise determines motor planning. Nature, 394(6695):780–784, 1998.
[13] P. Viviani and T. Flash. Minimum-jerk, two-thirds power law, and isochrony: converging approaches to movement planning. Journal of Experimental Psychology: Human Perception and Performance, 21(1):32–53, 1995.
[14] M. J. Richardson and T. Flash. Comparing smooth arm movements with the two-thirds power law and the related segmented-control hypothesis. Journal of Neuroscience, 22(18):8201–8211, 2002.
[15] E. Todorov and M. Jordan. Smoothness maximization along a predefined path accurately predicts the speed profiles of complex arm movements. Journal of Neurophysiology, 80(2):696–714, 1998.
[16] S. Schaal and D. Sternad. Origins and violations of the 2/3 power law in rhythmic three-dimensional arm movements. Experimental Brain Research, 136(1):60–72, 2001.
[17] A. A. Handzel and T. Flash. Geometric methods in the study of human motor control. Cognitive Studies, 6:1–13, 1999.
[18] F. E.
Pollick and G. Sapiro. Constant affine velocity predicts the 1/3 power law of planar motion perception and generation. Vision Research, 37(3):347–353, 1997. [19] J. Oprea. Differential geometry and its applications. Prentice-Hall, 1997. [20] U. Maoz and T. Flash. Power-laws of three-dimensional movement. Unpublished manuscript. [21] P. L. Gribble and D. J. Ostry. Origins of the power law relation between movement velocity and curvature: modeling the effects of muscle mechanics and limb dynamics. Journal of Neurophysiology, 76:2853–2860, 1996.
From Lasso regression to Feature vector machine
Fan Li¹, Yiming Yang¹ and Eric P. Xing¹,²
¹LTI and ²CALD, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA USA 15213
{hustlf,yiming,epxing}@cs.cmu.edu
Abstract
Lasso regression tends to assign zero weights to most irrelevant or redundant features, and hence is a promising technique for feature selection. Its limitation, however, is that it only offers solutions to linear models. Kernel machines with feature scaling techniques have been studied for feature selection with non-linear models. However, such approaches require solving hard non-convex optimization problems. This paper proposes a new approach named the Feature Vector Machine (FVM). It reformulates the standard Lasso regression into a form isomorphic to SVM, and this form can be easily extended for feature selection with non-linear models by introducing kernels defined on feature vectors. FVM generates sparse solutions in the non-linear feature space and is much more tractable than feature scaling kernel machines. Our experiments with FVM on simulated data show encouraging results in identifying the small number of dominating features that are non-linearly correlated to the response, a task the standard Lasso fails to complete.
1 Introduction
Finding a small subset of the most predictive features in a high-dimensional feature space is an interesting problem with many important applications, e.g. in bioinformatics for the study of the genome and the proteome, and in pharmacology for high-throughput drug screening. Lasso regression ([Tibshirani et al., 1996]) is often an effective technique for shrinkage and feature selection. The loss function of Lasso regression is defined as:

L = Σ_i (y_i − Σ_p β_p x_ip)² + λ Σ_p |β_p|

where x_ip denotes the pth predictor (feature) in the ith datum, y_i denotes the value of the response in this datum, and β_p denotes the regression coefficient of the pth feature.
The norm-1 regularizer Σ_p |β_p| in Lasso regression typically leads to a sparse solution in the feature space, which means that the regression coefficients for most irrelevant or redundant features are shrunk to zero. Theoretical analysis in [Ng et al., 2003] indicates that Lasso regression is particularly effective when there are many irrelevant features and only a few training examples. One of the limitations of standard Lasso regression is its assumption of linearity in the feature space. Hence it is inadequate to capture non-linear dependencies from features to responses (output variables). To address this limitation, [Roth, 2004] proposed "generalized Lasso regressions" (GLR) by introducing kernels. In GLR, the loss function is defined as

L = Σ_i (y_i − Σ_j α_j k(x_i, x_j))² + λ Σ_i |α_i|

where α_j can be regarded as the regression coefficient corresponding to the jth basis in an instance space (more precisely, a kernel space with its basis defined on all examples), and k(x_i, x_j) represents some kernel function over the "argument" instance x_i and the "basis" instance x_j. The non-linearity can be captured by a non-linear kernel. This loss function typically yields a sparse solution in the instance space, but not in the feature space where the data was originally represented; thus GLR does not lead to compression of data in the feature space. [Weston et al., 2000], [Canu et al., 2002] and [Krishnapuram et al., 2003] addressed the limitation from a different angle. They introduced feature scaling kernels of the form:

K_θ(x_i, x_j) = φ(x_i ∗ θ)^T φ(x_j ∗ θ) = K(x_i ∗ θ, x_j ∗ θ)

where x_i ∗ θ denotes the component-wise product between two vectors: x_i ∗ θ = (x_i1 θ_1, ..., x_ip θ_p). For example, [Krishnapuram et al., 2003] used a feature scaling polynomial kernel:

K_γ(x_i, x_j) = (1 + Σ_p γ_p x_ip x_jp)^k, where γ_p = θ_p².

With a norm-1 or norm-0 penalizer on γ in the loss function of a feature scaling kernel machine, a sparse solution is supposed to identify the most influential features.
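The sparsifying effect of the norm-1 penalty can be illustrated with a minimal coordinate-descent Lasso (a sketch, not the authors' implementation; the soft-threshold at λ/2 matches the loss function defined in the Introduction):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for sum_i (y_i - x_i . beta)^2 + lam * sum_p |beta_p|."""
    n, k = X.shape
    beta = np.zeros(k)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for p in range(k):
            # Residual with feature p's current contribution removed.
            r = y - X @ beta + X[:, p] * beta[p]
            rho = X[:, p] @ r
            # Soft-threshold at lam/2 (the subgradient condition of the loss).
            beta[p] = np.sign(rho) * max(abs(rho) - lam / 2, 0.0) / col_sq[p]
    return beta

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 20))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.standard_normal(100)

beta = lasso_cd(X, y, lam=20.0)
print(np.count_nonzero(np.abs(beta) > 1e-8))   # only a few features survive
```

With 18 irrelevant features, the norm-1 penalty shrinks their coefficients exactly to zero while the two truly predictive features keep (slightly shrunken) weights, which is the behavior the text describes.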
Notice that in this formalism the feature scaling vector θ is inside the kernel function, which means that the solution space of θ could be non-convex. Thus, estimating θ in feature scaling kernel machines is a much harder problem than the convex optimization problem in a conventional SVM, in which the weight parameters to be estimated lie outside the kernel functions. What we seek here is an alternative approach that guarantees a sparse solution in the feature space, that is sufficient for capturing both linear and non-linear relationships between features and the response variable, and that does not involve parameter optimization inside kernel functions. The last property is particularly desirable in that it allows us to leverage many existing results on kernel machines from the very successful body of SVM-related research. We propose a new approach whose key idea is to re-formulate and extend Lasso regression into a form that is similar to SVM, except that it generates a sparse solution in the feature space rather than in the instance space. We call this newly formulated and extended Lasso regression the "Feature Vector Machine" (FVM). We will show (in Section 2) that FVM has many interesting properties that mirror SVM: the concepts of support vectors, kernels and slack variables can be easily adapted. Most importantly, all the parameters we need to estimate in FVM are outside the kernel functions, ensuring the convexity of the solution space, just as in SVM. When a linear kernel is used with no slack variables, FVM reduces to the standard Lasso regression.
Footnote 1: Notice that we can not only use FVM to select important features from training data, but also use it to predict the values of response variables for test data (see Section 5). We have shown that we only need convex optimization in the training phase of FVM. In the test phase, FVM makes a prediction for each test example independently.
This only involves a one-dimensional optimization problem with respect to the response variable of the test example. Although the optimization in the test phase may be non-convex, it is relatively easy to solve because it is only one-dimensional. This is the price we pay for avoiding the high-dimensional non-convex optimization in the training phase, which may involve thousands of model parameters. We notice that [Hochreiter et al., 2004] has recently developed an interesting feature selection technique named "potential SVM", which has the same form as the basic version of FVM (with a linear kernel and no slack variables). However, they did not explore the relationship between "potential SVM" and Lasso regression. Furthermore, their method does not work for feature selection tasks with non-linear models, since they did not introduce the concept of kernels defined on feature vectors. In Section 2, we analyze some geometric similarities between the solution hyper-planes in standard Lasso regression and in SVM. In Section 3, we re-formulate Lasso regression in an SVM-style form; in this form, all the operations on the training data can be expressed by dot products between feature vectors. In Section 4, we introduce kernels (defined for feature vectors) to FVM so that it can be used for feature selection with non-linear models. In Section 5, we discuss FVM further. In Section 6, we present experiments, and in Section 7 we conclude.
2 Geometric parity between the solution hyper-planes of Lasso regression and SVM
Formally, let X = [x_1, . . . , x_N] denote a sample matrix, where each column x_i = (x_{i1}, . . . , x_{iK})^T represents a sample vector defined on K features. A feature vector is defined as a transposed row of the sample matrix, i.e., f_q = (x_{1q}, . . . , x_{Nq})^T (corresponding to the qth row of X). Note that we can write X^T = [f_1, . . . , f_K] = F. For convenience, let y = (y_1, . . .
, y_N)^T denote the response vector containing the responses corresponding to all the samples. Now consider an example space in which each basis is represented by an x_i from our sample matrix (note that this is different from the space "spanned" by the sample vectors). In this example space, both the features f_q and the response vector y can be regarded as points. It can be shown that the solution of Lasso regression has a very intuitive meaning in the example space: the regression coefficients can be regarded as the weights of the feature vectors, and all the non-zero weighted feature vectors lie on two parallel hyper-planes in the example space. These feature vectors, together with the response variable, determine the directions of these two hyper-planes. This geometric view can be drawn from the following recast of the Lasso regression due to [Perkins et al., 2003]:

|Σ_i (y_i − Σ_p β_p x_ip) x_iq| ≤ λ/2, ∀q   ⇔   |f_q^T (y − [f_1, . . . , f_K]β)| ≤ λ/2, ∀q.   (1)

It is apparent from this equation that y − [f_1, . . . , f_K]β defines the orientation of a separating hyper-plane. It can be shown that equality holds only for non-zero weighted features, and that all the zero-weighted feature vectors lie between the hyper-planes, with λ/2 margin (Fig. 1a). The separating hyper-planes of (hard, linear) SVM have similar properties to the regression hyper-planes described above, although the former are defined in the feature space (in which each axis represents a feature and each point represents a sample) instead of the example space. In an SVM, all the non-zero weighted samples also lie on the two λ/2-margin separating hyper-planes (as in Lasso regression), whereas all the zero-weighted samples lie outside the pair of hyper-planes (Fig. 1b). It is well known that the classification hyper-planes in SVM can be extended to hyper-surfaces by introducing kernels defined on example vectors.
In this way, SVM can model non-linear dependencies between samples and the classification boundary.
Figure 1: Lasso regression vs. SVM. (a) The solution of Lasso regression in the example space. X1 and X2 represent two examples. Only features a and d have non-zero weights, and hence are the support features. (b) The solution of SVM in the feature space. Samples X1, X3 and X5 are in one class and X2, X4, X6 and X8 are in the other. X1 and X2 are the support vectors (i.e., with non-zero weights).
Given the similarity of the geometric structures of Lasso regression and SVM, it is natural to pursue in parallel how one can apply similar "kernel tricks" to the feature vectors in Lasso regression, so that its feature selection power can be extended to non-linear models. This is the intention of this paper, and we envisage full leverage of much of the computational/optimization machinery well developed in the SVM community.
3 A re-formulation of Lasso regression akin to SVM
[Hochreiter et al., 2004] have proposed a "potential SVM" as follows:

min_β (1/2) Σ_i (Σ_p β_p x_ip)²
s.t. |Σ_i (y_i − Σ_p β_p x_ip) x_iq| ≤ λ/2, ∀q.   (2)

To clean up a little, we rewrite Eq. (2) in linear-algebra form:

min_β (1/2) ‖[f_1, . . . , f_K]β‖²
s.t. |f_q^T (y − [f_1, . . . , f_K]β)| ≤ λ/2, ∀q.   (3)

A quick eyeballing of this formulation reveals that it shares the constraint that must be satisfied in Lasso regression. Unfortunately, this connection was not further explored in [Hochreiter et al., 2004], e.g., to relate the objective function to that of Lasso regression, and to extend the objective function using kernel tricks in a way similar to SVM. Here we show that the solution to Eq. (2) is exactly the same as that of a standard Lasso regression. In other words, Lasso regression can be re-formulated as Eq. (2).
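This equivalence rests on the optimality conditions stated as Propositions 1-2 below, and it can be checked numerically in the special case of orthonormal feature vectors, where the Lasso solution has a closed form (a sketch; the orthonormal design is an illustrative assumption, chosen because it makes the solution exact without an iterative solver):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 50, 8
# Orthonormal feature vectors (columns of F) via a reduced QR factorization.
F, _ = np.linalg.qr(rng.standard_normal((n, k)))
y = rng.standard_normal(n)
lam = 0.2

# For orthonormal features, the minimizer of ||y - F beta||^2 + lam*||beta||_1
# is the soft-threshold of c_q = f_q^T y at lam/2.
c = F.T @ y
beta = np.sign(c) * np.maximum(np.abs(c) - lam / 2, 0.0)

corr = F.T @ (y - F @ beta)       # residual correlation per feature
active = np.abs(beta) > 0
# "Lasso sandwich": |corr| = lam/2 on non-zero weighted features,
# and |corr| < lam/2 (strictly inside) on zero-weighted ones.
print(np.abs(corr[active]), np.abs(corr[~active]))
```

The printed values show exactly the geometry of Fig. 1a: active features sit on the two λ/2 hyper-planes, inactive ones strictly between them.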
Then, based on this re-formulation, we show how to introduce kernels to allow feature selection under a non-linear Lasso regression. We refer to the optimization problem defined by Eq. (3), and its kernelized extensions, as the feature vector machine (FVM).
Proposition 1: For a Lasso regression problem min_β Σ_i (Σ_p x_ip β_p − y_i)² + λ Σ_p |β_p|, if we have β such that: if β_q = 0, then |Σ_i (Σ_p β_p x_ip − y_i) x_iq| < λ/2; if β_q < 0, then Σ_i (Σ_p β_p x_ip − y_i) x_iq = λ/2; and if β_q > 0, then Σ_i (Σ_p β_p x_ip − y_i) x_iq = −λ/2; then β is the solution of the Lasso regression defined above. For convenience, we refer to these three conditions on β as the Lasso sandwich. Proof: see [Perkins et al., 2003].
Proposition 2: For Problem (3), the solution β satisfies the Lasso sandwich.
Sketch of proof: Following the equivalence between the feature matrix F and the sample matrix X (see the beginning of §2), Problem (3) can be re-written as:

min_β (1/2) ‖X^T β‖²
s.t. X(X^T β − y) − (λ/2)e ≤ 0,
     X(X^T β − y) + (λ/2)e ≥ 0,   (4)

where e is a one-vector of K dimensions. Following the standard constrained-optimization procedure, we can derive the dual of this problem. The Lagrangian L is given by

L = (1/2) β^T X X^T β − α_+^T (X(X^T β − y) + (λ/2)e) + α_−^T (X(X^T β − y) − (λ/2)e)

where α_+ and α_− are K × 1 vectors with non-negative elements. The optimizer satisfies:

∇_β L = X X^T β − X X^T (α_+ − α_−) = 0

Suppose the data matrix X has been pre-processed so that the feature vectors are centered and normalized. In this case the elements of X X^T reflect the correlation coefficients of feature pairs, and X X^T is non-singular. Thus β = α_+ − α_− is the solution. For any element β_q > 0, α_+q must be larger than zero; from the KKT conditions, Σ_i (y_i − Σ_p β_p x_ip) x_iq = λ/2 then holds. For the same reason, when β_q < 0, α_−q must be larger than zero, and thus Σ_i (y_i − Σ_p β_p x_ip) x_iq = −λ/2 holds.
When β_q = 0, α_+q and α_−q must both be zero (it is easy to see from the KKT conditions that they cannot both be non-zero); thus, again from the KKT conditions, both Σ_i (y_i − Σ_p β_p x_ip) x_iq > −λ/2 and Σ_i (y_i − Σ_p β_p x_ip) x_iq < λ/2 hold, which means |Σ_i (y_i − Σ_p β_p x_ip) x_iq| < λ/2.
Theorem 3: Problem (3) ≡ Lasso regression. Proof: follows from Propositions 1 and 2.
4 Feature kernels
In many cases, the dependencies between feature vectors are non-linear. Analogous to SVM, here we introduce kernels that capture such non-linearity. Note that unlike in SVM, our kernels are defined on feature vectors instead of sample vectors (i.e., on the rows rather than the columns of the data matrix). Such kernels also allow us to easily incorporate certain domain knowledge into the classifier. Suppose that two feature vectors f_p and f_q have a non-linear dependency relationship. In the absence of linear interaction between f_p and f_q in the original space, we assume that they can be mapped to some (higher-dimensional, possibly infinite-dimensional) space via a transformation φ(·), so that φ(f_p) and φ(f_q) interact linearly, i.e., via a dot product φ(f_p)^T φ(f_q). We introduce the kernel K(f_p, f_q) = φ(f_p)^T φ(f_q) to represent the outcome of this operation. Replacing f with φ(f) in Problem (3), we have

min_β (1/2) Σ_{p,q} β_p β_q K(f_p, f_q)
s.t. ∀q, |Σ_p β_p K(f_q, f_p) − K(f_q, y)| ≤ λ/2.   (5)

In Problem (5) we no longer have φ(·), which means we do not need to work in the transformed feature space, which could be high- or infinite-dimensional, to capture the non-linearity of features. The kernel K(·, ·) can be any symmetric positive semi-definite function. When domain knowledge from experts is available, it can be incorporated into the choice of kernel (e.g., based on the distribution of feature values). When domain knowledge is not available, we can use general kernels that detect non-linear dependencies without any distribution assumptions. In the following we give one such example.
One possible kernel is the mutual information [Cover et al., 1991] between two feature vectors: $K(f_p, f_q) = MI(f_p, f_q)$. This kernel requires a pre-processing step to discretize the elements of the feature vectors, because in general they are continuous. In this paper, we discretize the continuous variables according to their ranks across examples. Suppose we have $N$ examples in total. Then for each feature, we sort its values over these $N$ examples. The first (i.e., smallest) $m$ values are assigned scale 1, the $(m+1)$-th to $2m$-th values are assigned scale 2, and this process is iterated until all values are assigned scales. It is easy to see that in this way we guarantee, for any two features $p$ and $q$, that $K(f_p, f_p) = K(f_q, f_q)$, which means the feature vectors are normalized and have the same length in the $\phi$ space (residing on a unit sphere centered at the origin). The mutual information kernel has several good properties: it is symmetric (i.e., $K(f_p, f_q) = K(f_q, f_p)$), non-negative, and can be normalized. It also has an intuitive interpretation in terms of the redundancy between features. Therefore, non-linear feature selection using generalized Lasso regression with this kernel yields human-interpretable results.

5 Some extensions and discussions about FVM

As we have shown, FVM is a straightforward feature selection algorithm for non-linear features captured in a kernel, and the selection can be done by solving a standard SVM-like problem in the feature space, which yields an optimal vector $\beta$ most of whose elements are zero. It turns out that the same procedure also seamlessly leads to a Lasso-style regularized non-linear regression capable of predicting the response given data in the original space. In the prediction phase, all we have to do is keep the trained $\beta$ fixed and turn the optimization problem (5) into an analogous one that optimizes over the response $y$.
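A minimal sketch of this rank-discretized mutual-information kernel (the bin count, helper names, and tie handling are our own choices, not from the paper):

```python
import math
from collections import Counter

def rank_discretize(values, n_bins):
    """Replace each value by the bin index of its rank (equal-count bins)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    m = max(1, len(values) // n_bins)      # examples per bin (scale)
    scales = [0] * len(values)
    for rank, i in enumerate(order):
        scales[i] = min(rank // m, n_bins - 1)
    return scales

def mi_kernel(fp, fq, n_bins=10):
    """K(fp, fq) = empirical mutual information of the rank-discretized features."""
    a, b = rank_discretize(fp, n_bins), rank_discretize(fq, n_bins)
    n = len(a)
    pa, pb = Counter(a), Counter(b)
    pab = Counter(zip(a, b))
    # MI = sum over joint bins of p(x,y) * log(p(x,y) / (p(x) p(y)))
    return sum(c / n * math.log((c / n) / (pa[x] / n * pb[y] / n))
               for (x, y), c in pab.items())
```

With equal-count bins, every feature's self-value K(fp, fp) equals the entropy of a uniform distribution over the bins, which is the normalization property the text describes.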
Specifically, given a new sample $x_t$ with unknown response, our sample matrix $X$ grows by one column, $X \to [X, x_t]$, which means all feature vectors get one more dimension. We denote the newly elongated features by $F' = \{f'_q\}_{q\in A}$ (note that $A$ is the pruned index set corresponding to features whose weight $\beta_q$ is non-zero). Let $y'$ denote the elongated response vector due to the new sample: $y' = (y_1, \ldots, y_N, y_t)^T$. It can be shown that the optimal response $y_t$ can be obtained by solving the following optimization problem:²

$$\min_{y_t} K(y', y') - 2\sum_{p\in A}\beta_p K(y', f'_p). \qquad (6)$$

When we replace the kernel function $K$ with a linear dot product, FVM reduces to Lasso regression. Indeed, in this special case it is easy to see from Eq. (6) that $y_t = \sum_{p\in A}\beta_p x_{tp}$, which is exactly how Lasso regression would predict the response: one predicts $y_t$ from $\beta$ and $x_t$ without using the training data $X$. However, when a more complex kernel is used, solving Eq. (6) is not always trivial. In general, to predict $y_t$ we need not only $x_t$ and $\beta$, but also the non-zero-weight features extracted from the training data.

²For simplicity we omit the details here; as a rough sketch, note that Eq. (5) can be reformulated as $\min_\beta \|\phi(y') - \sum_p \beta_p\phi(f'_p)\|^2 + \sum_p|\beta_p|$. Replacing the optimization argument $\beta$ with $y_t$ and dropping terms irrelevant to $y_t$, we arrive at Eq. (6).

As in the SVM, we can introduce slack variables into FVM to define a "soft" feature surface, but due to space limitations we omit the details. Essentially, most of the methodology developed for the SVM can be easily adapted to FVM for non-linear feature selection.

6 Experiments

We test FVM on a simulated dataset with 100 features and 500 examples. The response variable $y$ in the simulated data is generated by a highly non-linear rule:

$$y = \sin(10 f_1 - 5) + 4\sqrt{1 - f_2^2} - 3 f_3 + \xi.$$
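As a sanity check on this reduction, the minimization in Eq. (6) can be approximated by a grid search over candidate responses; with a linear kernel the minimizer should coincide with the Lasso prediction $\sum_{p\in A}\beta_p x_{tp}$. The sketch below is ours (the function name, the grid, and the dictionary encoding of the active set $A$ are illustrative assumptions, not from the paper):

```python
def fvm_predict(beta, feats, y_train, x_new, kernel, grid):
    """Approximate Eq. (6): pick y_t on a grid minimizing
    K(y', y') - 2 * sum_{p in A} beta_p * K(y', f'_p),
    where y' and f'_p are the training vectors elongated by the new sample.

    beta:  dict {feature index p: weight beta_p} over the active set A
    feats: dict {p: list of training values of feature p}
    """
    best_yt, best_val = None, float("inf")
    for yt in grid:
        y_ext = y_train + [yt]                        # elongated response y'
        val = kernel(y_ext, y_ext) - 2.0 * sum(
            b * kernel(y_ext, feats[p] + [x_new[p]])  # elongated feature f'_p
            for p, b in beta.items())
        if val < best_val:
            best_yt, best_val = yt, val
    return best_yt
```

With `kernel = lambda a, b: sum(u * v for u, v in zip(a, b))` (a linear dot product), the objective is quadratic in y_t and the grid minimizer is the point closest to the Lasso prediction, independent of the training values, exactly as the text claims.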
Here features $f_1$ and $f_3$ are random variables following a uniform distribution on $[0, 1]$; feature $f_2$ is uniformly distributed on $[-1, 1]$; and $\xi$ represents Gaussian noise. The other 97 features $f_4, f_5, \ldots, f_{100}$ are conditionally independent of $y$ given the three features $f_1, f_2, f_3$. In particular, $f_4, \ldots, f_{33}$ are all generated by the rule $f_j = 3 f_1 + \xi$; $f_{34}, \ldots, f_{72}$ are all generated by the rule $f_j = \sin(10 f_2) + \xi$; and the remaining features $f_{73}, \ldots, f_{100}$ simply follow a uniform distribution on $[0, 1]$. Fig. 2 shows our data projected onto the space spanned by $f_1$, $f_2$ and $y$.

We use a mutual information kernel for our FVM. For each feature, we sort its values across examples and use the ranks to discretize the values into 10 scales (thus each scale corresponds to 50 data points). An FVM can be solved by quadratic programming, but more efficient solutions exist. [Perkins et al., 2003] proposed a fast grafting algorithm to solve Lasso regression, which is the special case of FVM with a linear kernel. In our implementation, we extend the fast grafting algorithm to FVM with more general kernels; the only difference is that each time we need to calculate $\sum_i x_{pi}x_{qi}$, we calculate $K(f_p, f_q)$ instead. We found fast grafting very efficient in our case because it exploits the sparsity of the FVM solution.

We apply both standard Lasso regression and FVM with the mutual information kernel to this dataset. The value of the regularization parameter $\lambda$ can be tuned to control the number of non-zero-weight features. In our experiments we tried two choices of $\lambda$, for both FVM and standard Lasso regression: in one case we set $\lambda$ such that only 3 non-zero-weight features are selected; in the other we relaxed it and allowed 10 features. The results are very encouraging. As shown in Fig.
3, under the stringent $\lambda$, FVM successfully identified the three correct features, $f_1$, $f_2$ and $f_3$, whereas Lasso regression missed $f_1$ and $f_2$, which are non-linearly correlated with $y$. Even when $\lambda$ was relaxed, Lasso regression still missed the right features, whereas FVM remained robust.

Figure 2: The response $y$ and the two features $f_1$ and $f_2$ in our simulated data. Two graphs from different angles are plotted to show the distribution more clearly in 3D space.

Figure 3: Results of FVM and standard Lasso regression on this dataset. The x-axis represents the feature IDs and the y-axis the weights assigned to the features. The two left graphs show the case where 3 features are selected by each algorithm; the two right graphs show the case where 10 features are selected. From the lower-left graph, we can see that FVM successfully identified $f_1$, $f_2$ and $f_3$ as the three non-zero-weight features. From the upper-left graph, we can see that Lasso regression missed $f_1$ and $f_2$, which are non-linearly correlated with $y$. The two right graphs show similar patterns.

7 Conclusions

In this paper, we proposed a novel non-linear feature selection approach named FVM, which extends standard Lasso regression by introducing kernels on feature vectors. FVM has many interesting properties that mirror the well-known SVM, and can therefore leverage many computational advantages of the latter approach.
Our experiments with FVM on highly non-linear and noisy simulated data show encouraging results: it correctly identifies the small number of dominating features that are non-linearly correlated with the response variable, a task standard Lasso fails to complete.

References

[Canu et al., 2002] Canu, S. and Grandvalet, Y. Adaptive scaling for feature selection in SVMs. NIPS 15, 2002.
[Hochreiter et al., 2004] Hochreiter, S. and Obermayer, K. Gene selection for microarray data. In Kernel Methods in Computational Biology, pp. 319-355, MIT Press, 2004.
[Krishnapuram et al., 2003] Krishnapuram, B. et al. Joint classifier and feature optimization for cancer diagnosis using gene expression data. RECOMB 2003, ACM Press, April 2003.
[Ng et al., 2003] Ng, A. Feature selection, L1 vs. L2 regularization, and rotational invariance. ICML 2004.
[Perkins et al., 2003] Perkins, S., Lacker, K. and Theiler, J. Grafting: fast, incremental feature selection by gradient descent in function space. JMLR 3:1333-1356, 2003.
[Roth, 2004] Roth, V. The generalized LASSO. IEEE Transactions on Neural Networks, Vol. 15, No. 1, 2004.
[Tibshirani et al., 1996] Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Statist. Soc. B, 58(1):267-288, 1996.
[Cover et al., 1991] Cover, T. M. and Thomas, J. A. Elements of Information Theory. New York: John Wiley & Sons, 1991.
[Weston et al., 2000] Weston, J., Mukherjee, S., Chapelle, O., Pontil, M., Poggio, T. and Vapnik, V. Feature selection for SVMs. NIPS 13, 2000.
2005
Maximum Margin Semi-Supervised Learning for Structured Variables Y. Altun, D. McAllester TTI at Chicago Chicago, IL 60637 altun,mcallester@tti-c.org M. Belkin Department of Computer Science University of Chicago Chicago, IL 60637 misha@cs.uchicago.edu Abstract Many real-world classification problems involve the prediction of multiple inter-dependent variables forming some structural dependency. Recent progress in machine learning has mainly focused on supervised classification of such structured variables. In this paper, we investigate structured classification in a semi-supervised setting. We present a discriminative approach that utilizes the intrinsic geometry of input patterns revealed by unlabeled data points and we derive a maximum-margin formulation of semi-supervised learning for structured variables. Unlike transductive algorithms, our formulation naturally extends to new test points. 1 Introduction Discriminative methods, such as Boosting and Support Vector Machines have significantly advanced the state of the art for classification. However, traditionally these methods do not exploit dependencies between class labels where more than one label is predicted. Many real-world classification problems, on the other hand, involve sequential or structural dependencies between multiple labels. For example labeling the words in a sentence with their part-of-speech tags involves sequential dependency between part-of-speech tags; finding the parse tree of a sentence involves a structural dependency among the labels in the parse tree. Recently, there has been a growing interest in generalizing kernel methods to predict structured and inter-dependent variables in a supervised learning setting, such as dual perceptron [7], SVMs [2, 15, 14] and kernel logistic regression [1, 11]. These techniques combine the efficiency of dynamic programming methods with the advantages of the state-of-the-art learning methods. 
In this paper, we investigate classification of structured objects in a semi-supervised setting. The goal of semi-supervised learning is to leverage the learning process from a small sample of labeled inputs together with a large sample of unlabeled data. This idea has recently attracted a considerable amount of interest due to the ubiquity of unlabeled data: in many applications, from data mining to speech recognition, it is easy to produce large amounts of unlabeled data, while labeling is often manual and expensive. This is also the case for many structured classification problems. A variety of methods, ranging from Naive Bayes [12] and Co-training [4] to Transductive SVMs [9], Cluster Kernels [6] and graph-based approaches [3] (and references therein), have been proposed. The intuition behind many of these methods is that the classification/regression function should be smooth with respect to the geometry of the data, i.e. the labels of two inputs $x$ and $\bar x$ are likely to be the same if $x$ and $\bar x$ are similar. This idea is often represented as the cluster assumption or the manifold assumption. The unlabeled points reveal the intrinsic structure, which is then utilized by the classification algorithm. A discriminative approach to semi-supervised learning was developed by Belkin, Sindhwani and Niyogi [3, 13], where the Laplacian operator associated with the unlabeled data is used as an additional penalty (regularizer) on the space of functions in a Reproducing Kernel Hilbert Space. The additional regularization from the unlabeled data can be represented as a new kernel, a "graph-regularized" kernel. In this paper, building on [3, 13], we present a discriminative semi-supervised learning formulation for problems that involve structured and inter-dependent outputs, and give experimental results on max-margin semi-supervised structured classification using graph-regularized kernels.
The solution of the optimization problem that utilizes both labeled and unlabeled data is a linear combination of the graph-regularized kernel evaluated at the parts of the labeled inputs only, leading to a large reduction in the number of parameters. It is important to note that our classification function is defined on all input points, whereas some previous work is defined only for the input points in the (labeled and unlabeled) training sample, since it uses standard graph kernels, which are by definition restricted to in-sample data points. There is an extensive literature on semi-supervised learning and a growing number of studies on learning structured and inter-dependent variables. Delalleau et al. [8] propose a semi-supervised learning method for standard classification that extends to out-of-sample points. Brefeld et al. [5] is one of the first studies investigating the semi-supervised structured learning problem in a discriminative framework. The most relevant previous work is the transductive structured learning proposed by Lafferty et al. [11].

2 Supervised Learning for Structured Variables

In structured learning, the goal is to learn a mapping $h : X \to Y$ from structured inputs to structured response values, where the inputs and response values form a dependency structure. For each input $x$, there is a set of feasible outputs, $Y(x) \subseteq Y$. For simplicity, let us assume that $Y(x)$ is finite for all $x \in X$, which is the case in many real-world problems and in all our examples. We denote the set of feasible input-output pairs by $Z \subseteq X \times Y$. It is common to construct a discriminant function $F : Z \to \Re$ which maps a feasible input-output pair to a compatibility score. To make a prediction for $x$, this score is maximized over the set of feasible outputs,

$$h(x) = \operatorname{argmax}_{y \in Y(x)} F(x, y). \qquad (1)$$

The score of an $\langle x, y\rangle$ pair is computed from local fragments, or "parts", of $\langle x, y\rangle$.
In Markov random fields, $x$ is a graph, $y$ is a labeling of the nodes of $x$, and a local fragment (a part) of $\langle x, y\rangle$ is a clique in $x$ together with its labeling in $y$. In parsing with probabilistic context-free grammars, a part of $\langle x, y\rangle$ consists of a branch of the tree $y$, where a branch is an internal node of $y$ together with its children, plus all pairs of a leaf node in $y$ with the word in $x$ labeled by that node. Note that a given branch structure, such as NP → Det N, can occur more than once in a given parse tree. In general, we let $P$ be a set of (all possible) parts. We assume a "counting function" $c$ such that for $p \in P$ and $\langle x, y\rangle \in Z$, $c(p, \langle x, y\rangle)$ gives the number of times that the part $p$ occurs in the pair $\langle x, y\rangle$ (the count of $p$ in $\langle x, y\rangle$). For a Mercer kernel $k : P \times P \to \Re$ on $P$, there is an associated RKHS $H_k$ of functions $f : P \to \Re$, where $f$ measures the goodness of a part $p$. For any $f \in H_k$, we define a function $F_f$ on $Z$ as

$$F_f(x, y) = \sum_{p \in P} c(p, \langle x, y\rangle) f(p). \qquad (2)$$

Consider a simple chain example. Let $\Gamma$ be a set of possible observations and $\Sigma$ a set of possible hidden states. We take the input $x$ to be a sequence $x_1, \ldots, x_\ell$ with $x_i \in \Gamma$, and we take $Y(x)$ to be the set of all sequences $y_1, \ldots, y_\ell$ of the same length as $x$ with $y_i \in \Sigma$. We can take $P$ to be the set of all pairs $\langle s, \bar s\rangle$ plus all pairs $\langle s, u\rangle$ with $s, \bar s \in \Sigma$ and $u \in \Gamma$. Often $\Sigma$ is taken to be a finite set of "states" and $\Gamma = \Re^d$ a set of possible feature vectors. $k(p, p')$ is commonly defined as

$$k(\langle s, \bar s\rangle, \langle s', \bar s'\rangle) = \delta(s, s')\,\delta(\bar s, \bar s'), \qquad (3)$$
$$k(\langle s, u\rangle, \langle s', u'\rangle) = \delta(s, s')\,k_o(u, u'), \qquad (4)$$

where $\delta(w, w')$ denotes the Kronecker $\delta$. Note that in this example there are two types of parts: pairs of hidden states, and pairs of a hidden state and an observation. Here we take $k(p, p')$ to be 0 if $p$ and $p'$ are of different types. In the supervised learning scenario, we are given a sample $S$ of $\ell$ pairs $(\langle x_1, y_1\rangle, \ldots, \langle x_\ell, y_\ell\rangle)$ drawn i.i.d. from an unknown but fixed probability distribution $P$ on $Z$.
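To make the chain example concrete, here is a small sketch of the part counts $c(p, \langle x, y\rangle)$, the discriminant of Eq. (2), and the brute-force argmax of Eq. (1). The tuple encoding of parts is our own; a real implementation would use dynamic programming rather than enumerating $Y(x)$:

```python
from collections import Counter
from itertools import product

def chain_parts(x, y):
    """c(p, <x, y>) for a chain: counts of state-pair parts (s, s')
    and state-observation parts (s, u)."""
    c = Counter()
    for t in range(len(y) - 1):
        c[("trans", y[t], y[t + 1])] += 1
    for t in range(len(x)):
        c[("emit", y[t], x[t])] += 1
    return c

def discriminant(x, y, f):
    """F_f(x, y) = sum_p c(p, <x, y>) f(p); f maps parts to goodness scores."""
    return sum(n * f.get(p, 0.0) for p, n in chain_parts(x, y).items())

def predict(x, states, f):
    """h(x) = argmax over Y(x) of F_f(x, y), by exhaustive enumeration."""
    return max(product(states, repeat=len(x)),
               key=lambda y: discriminant(x, y, f))
```

For instance, with scores rewarding the emissions ("A", "a") and ("B", "b") plus the transition ("A", "B"), the chain x = ("a", "b") decodes to y = ("A", "B").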
The goal is to learn a function $f$ on the local parts $P$ with small expected loss $E_P[L(x, y, f)]$, where $L$ is a prescribed loss function. This is commonly realized by learning the $f$ that minimizes the regularized loss functional

$$f^* = \operatorname{argmin}_{f \in H_k} \sum_{i=1}^{\ell} L(x_i, y_i, f) + \lambda \|f\|_k^2, \qquad (5)$$

where $\|\cdot\|_k$ is the norm of $H_k$, measuring the complexity of $f$. A variety of loss functions $L$ have been considered in the literature. In kernel conditional random fields (CRFs) [11], the loss function is

$$L(x, y, f) = -F_f(x, y) + \log \sum_{\hat y \in Y(x)} \exp(F_f(x, \hat y)).$$

In structured Support Vector Machines (SVMs), the loss function is

$$L(x, y, f) = \max_{\hat y \in Y(x)} \Delta(x, y, \hat y) + F_f(x, \hat y) - F_f(x, y), \qquad (6)$$

where $\Delta(x, y, \hat y)$ is some measure of distance between $y$ and $\hat y$ for a given observation $x$. A natural choice is to take $\Delta(x, y, \hat y)$ to be the indicator $1_{[y \neq \hat y]}$ [2]. Another choice is to take $\Delta(x, y, \hat y)$ to be the size of the symmetric difference between the sets $P(\langle x, y\rangle)$ and $P(\langle x, \hat y\rangle)$ [14].

Let $P(x) \subseteq P$ be the set of parts having non-zero count in some pair $\langle x, y\rangle$ for $y \in Y(x)$, and let $P(S)$ be the union of the sets $P(x_i)$ for $x_i$ in the sample. Then we have the following straightforward variant of the Representer Theorem [10], which was also presented in [11].

Definition: A loss $L$ is local if $L(x, y, f)$ is determined by the value of $f$ on the set $P(x)$, i.e., for $f, g : P \to \Re$, if $f(p) = g(p)$ for all $p \in P(x)$ then $L(x, y, f) = L(x, y, g)$.

Theorem 1. For any local loss function $L$ and sample $S$ there exist weights $\alpha_p$ for $p \in P(S)$ such that $f^*$ as defined by (5) can be written as

$$f^*(p) = \sum_{p' \in P(S)} \alpha_{p'} k(p', p). \qquad (7)$$

Thus, even though the set of feasible outputs for $x$ generally scales exponentially with the size of the output, the solution can be represented in terms of the parts of the sample, which commonly scale polynomially. This is true for any loss function that partitions into parts, which is the case for the loss functions discussed above.
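The margin-rescaled loss of Eq. (6) can be evaluated by brute force on small output spaces. This is an illustrative sketch (the 0/1 choice of $\Delta$ is one of the options mentioned above; a real implementation would use loss-augmented dynamic programming):

```python
from itertools import product

def structured_hinge(x, y, states, F, delta):
    """Eq. (6): max over feasible outputs y_hat of
    delta(x, y, y_hat) + F(x, y_hat) - F(x, y)."""
    return max(delta(x, y, yh) + F(x, yh) - F(x, y)
               for yh in product(states, repeat=len(x)))
```

Since $\hat y = y$ contributes $\Delta = 0$ and a zero score difference, the loss is always non-negative; it is zero exactly when the true output beats every other output by its $\Delta$-margin.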
3 A Semi-Supervised Learning Approach to Structured Variables

In semi-supervised learning, we are given a sample $S$ consisting of $\ell$ input-output pairs $\{(x_1, y_1), \ldots, (x_\ell, y_\ell)\}$ drawn i.i.d. from the probability distribution $P$ on $Z$, together with $u$ unlabeled input patterns $\{x_{\ell+1}, \ldots, x_{\ell+u}\}$ drawn i.i.d. from the marginal distribution $P_X$, where usually $\ell < u$. Let $X(S)$ be the set $\{x_1, \ldots, x_{\ell+u}\}$ and let $Z(S)$ be the set of all pairs $\langle x, y\rangle$ with $x \in X(S)$ and $y \in Y(x)$. If the true classification function is smooth with respect to the underlying marginal distribution, one can utilize the unlabeled data points to favor functions that are smooth in this sense. Belkin et al. [3] implement this assumption by introducing a new regularizer into the standard RKHS optimization framework (as opposed to introducing a new kernel, as discussed in Section 5):

$$f^* = \operatorname{argmin}_{f \in H_k} \sum_{i=1}^{\ell} L(x_i, y_i, f) + \lambda_1 \|f\|_k^2 + \lambda_2 \|f\|_{k_S}^2, \qquad (8)$$

where $k_S$ is a kernel representing the intrinsic measure of the marginal distribution. Sindhwani et al. [13] prove that the minimizer of (8) lies in the span of a new kernel function (details below) evaluated at the labeled data only. Here, we generalize this framework to structured variables and give a simplified derivation of the new kernel. The smoothness assumption in the structured setting states that $f$ should be smooth with respect to the underlying density on the parts $P$; thus we enforce that $f$ assigns similar goodness scores to two parts $p$ and $p'$ if they are similar, for all parts of $Z(S)$. Let $P(S)$ be the union of the sets $P(z)$ for $z \in Z(S)$ and let $W$ be a symmetric matrix where $W_{p,p'}$ represents the similarity of $p$ and $p'$ for $p, p' \in P(S)$. Then

$$f^* = \operatorname{argmin}_{f \in H_k} \sum_{i=1}^{\ell} L(x_i, y_i, f) + \lambda_1 \|f\|_k^2 + \lambda_2 \sum_{p,p' \in P(S)} W_{p,p'}\,(f(p) - f(p'))^2 = \operatorname{argmin}_{f \in H_k} \sum_{i=1}^{\ell} L(x_i, y_i, f) + \lambda_1 \|f\|_k^2 + \lambda_2\, \mathbf{f}^T L\, \mathbf{f}. \qquad (9)$$

Here $W$ is a similarity matrix (e.g. a nearest-neighbor graph) and $L$ is the Laplacian of $W$, $L = D - W$, where $D$ is the diagonal matrix defined by $D_{p,p} = \sum_{p'} W_{p,p'}$, and $\mathbf{f}$ denotes the vector of values $f(p)$ for all $p \in P(S)$.
Note that the last term depends only on the values of $f$ on the parts in the set $P(S)$. Then, for any local loss $L(x, y, f)$, we immediately have the following Representer Theorem for the semi-supervised structured case, where $S$ includes both the labeled and the unlabeled data:

$$f^*_\alpha(p) = \sum_{p' \in P(S)} \alpha_{p'} k(p', p). \qquad (10)$$

Substituting (10) into (9) leads to the optimization problem

$$\alpha^* = \operatorname{argmin}_\alpha \sum_{i=1}^{\ell} L(x_i, y_i, f_\alpha) + \alpha^T Q \alpha, \qquad (11)$$

where $Q = \lambda_1 K + \lambda_2 K L K$, $K$ is the matrix of $k(p, p')$ for all $p, p' \in P(S)$, and $f_\alpha$, as a vector in the space $H_k$, is a linear function of the vector $\alpha$. Note that (11) applies to any local loss function, and if $L(x, y, f)$ is convex in $f$, as is the case for the logistic and hinge losses, then (11) is convex in $\alpha$. We now have a loss function over the labeled data regularized by an L2 norm (with respect to the inner product $Q$), for which we can re-invoke the Representer Theorem. Let $S_\ell$ be the set of labeled inputs $\{x_1, \ldots, x_\ell\}$, $Z(S_\ell)$ the set of all pairs $\langle x, y\rangle$ with $x \in X(S_\ell)$ and $y \in Y(x)$, and $P(S_\ell)$ the set of all parts having non-zero count for some pair in $Z(S_\ell)$. Let $\delta_p$ be the vector whose $p$-th component is 1 and 0 elsewhere. Using the standard orthogonality argument, decompose $\alpha^*$ into two components: one in the span of $\gamma_p = Q^{-1} K \delta_p$ for $p \in P(S_\ell)$, and one in the orthogonal complement (under the inner product $Q$),

$$\alpha = \sum_{p \in P(S_\ell)} \beta_p \gamma_p + \alpha_\perp.$$

$\alpha_\perp$ can only increase the quadratic term in the optimization problem. Notice that the first term in (11) depends only on $f_\alpha(p)$ for $p \in P(S_\ell)$, and $f_\alpha(p) = \delta_p^T K \alpha = (\delta_p^T K Q^{-1}) Q \alpha = \gamma_p^T Q \alpha$. Since $\gamma_p^T Q \alpha_\perp = 0$, we conclude that the optimal solution to (11) is

$$\alpha^* = \sum_{p \in P(S_\ell)} \beta_p \gamma_p = Q^{-1} K \beta, \qquad (12)$$

where $\beta$ is sparse, with non-zero entries only for parts from the labeled data. Plugging this into the original equations we get

$$\tilde k(p, p') = k_p^T Q^{-1} k_{p'}, \qquad (13)$$
$$f_\beta(p') = \sum_{p \in P(S_\ell)} \beta_p\, \tilde k(p, p'), \qquad (14)$$
$$\beta^* = \operatorname{argmin}_\beta L(S_\ell, f_\beta) + \beta^T \tilde K \beta, \qquad (15)$$

where $k_p$ is the vector of $k(p, p')$ for all $p' \in P(S)$ and $\tilde K$ is the matrix of $\tilde k(p, p')$ for all $p, p' \in P(S_\ell)$.
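A small numerical sketch of Eq. (13), using pure-Python linear algebra for self-containment (in practice one would use a numerical library). A simple sanity check: with $\lambda_2 = 0$, $Q = \lambda_1 K$ and so $\tilde K = K Q^{-1} K = K / \lambda_1$:

```python
def solve(A, b):
    """Solve A z = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    z = [0.0] * n
    for r in range(n - 1, -1, -1):
        z[r] = (M[r][n] - sum(M[r][c] * z[c] for c in range(r + 1, n))) / M[r][r]
    return z

def graph_regularized_kernel(K, L, lam1, lam2):
    """K~ with entries k~(p, p') = k_p^T Q^{-1} k_p', Q = lam1*K + lam2*K L K."""
    n = len(K)
    KL = [[sum(K[i][a] * L[a][b] for a in range(n)) for b in range(n)]
          for i in range(n)]
    KLK = [[sum(KL[i][a] * K[a][j] for a in range(n)) for j in range(n)]
           for i in range(n)]
    Q = [[lam1 * K[i][j] + lam2 * KLK[i][j] for j in range(n)] for i in range(n)]
    Ktilde = [[0.0] * n for _ in range(n)]
    for j in range(n):
        z = solve(Q, [K[i][j] for i in range(n)])   # z = Q^{-1} k_j
        for i in range(n):
            Ktilde[i][j] = sum(K[i][a] * z[a] for a in range(n))
    return Ktilde
```

Since $K$ and $L$ are symmetric, $Q$ is symmetric and the resulting $\tilde K$ is a symmetric Gram matrix, as a kernel must be.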
$\tilde k$ is the same as in [13]. We call $\tilde k$ the graph-regularized kernel: the unlabeled data points are used to augment the base kernel $k$ with the standard graph kernel so as to take the underlying density on parts into account. This kernel is defined over the complete part space, whereas standard graph kernels are restricted to $P(S)$ only. Given the graph-regularized kernel, the semi-supervised structured learning problem reduces to supervised structured learning. Since in semi-supervised learning problems the labeled data points are in general far fewer than the unlabeled ones, this reduction greatly lowers the dimensionality of the optimization problems.

4 Structured Max-Margin Learning

We now investigate optimizing the hinge loss defined by (6) using the graph-regularized kernel $\tilde k$. Defining $\gamma^{x,y}$ to be the vector with $\gamma^{x,y}_p = c(p, \langle x, y\rangle)$, the count of $p$ in $\langle x, y\rangle$, the linear discriminant can be written in matrix notation for $x \in S_\ell$ as $F_{f_\beta}(x, y) = \beta^T \tilde K \gamma^{x,y}$. The optimization problem for margin maximization is then

$$\beta^* = \operatorname{argmin}_\beta \min_\xi \sum_{i=1}^{\ell} \xi_i + \beta^T \tilde K \beta \quad \text{s.t.} \quad \xi_i \ge \max_{\hat y \in Y(x_i)} \Delta(\hat y, y_i) - \beta^T \tilde K\,(\gamma^{x_i,y_i} - \gamma^{x_i,\hat y}) \quad \forall i \le \ell.$$

This gives a convex quadratic program over vectors indexed by $P(S)$, a polynomial-size problem in terms of the size of the structures. Following [2], we replace the convex constraints by linear constraints for all $y \in Y(x)$ and, using Lagrangian duality, obtain the dual quadratic program

$$\theta^* = \operatorname{argmin}_\theta \theta^T dR\, \theta - \Delta^T \theta \qquad (16)$$
$$\theta_{(x_i,y)} \ge 0, \quad \sum_{y \in Y(x_i)} \theta_{(x_i,y)} = 1, \quad \forall y \in Y(x_i),\ \forall i \le \ell,$$

where $\Delta$ is the vector of $\Delta(y, \hat y)$ over all $y \in Y(x)$ for all labeled observations $x$, $d\gamma$ is the matrix whose $(x_i, y)$-th column is $\gamma^{x_i,y_i} - \gamma^{x_i,y}$, and $dR = d\gamma^T \tilde K\, d\gamma$. Due to the sparse structure of the constraint matrix, even though this is an exponential-size QP, the algorithm proposed in [2] provably solves (16) to $\eta$ proximity in time polynomial in $|P(S_\ell)|$ and $1/\eta$ [15].
5 Semi-Supervised vs. Transductive Learning

Since one major contribution of this paper is learning a classifier for structured objects that is defined over the complete part space $P$, we now examine the differences between semi-supervised and transductive learning in more detail. The most common approach to realizing the smoothness assumption is to construct a data-dependent kernel $k_S$ derived from the graph Laplacian of a nearest-neighbor graph on the labeled and unlabeled input patterns in the sample $S$. Thus $k_S$ is not defined on observations outside the sample. Given $k_S$, one can construct a function $\tilde f^*$ on $S$ as

$$\tilde f^* = \operatorname{argmin}_{f \in H_{k_S}} \sum_{i=1}^{\ell} L(x_i, y_i, f) + \lambda \|f\|_{k_S}^2. \qquad (17)$$

It is well known that kernels can be combined linearly to yield new kernels. In the transductive setting, this observation leads to the following optimization problem, when the kernel is taken to be a linear combination of a graph kernel $k_S$ and a standard kernel $k$ restricted to $P(S)$:

$$\bar f^* = \operatorname{argmin}_{f \in H_{(\mu_1 k + \mu_2 k_S)}} \sum_{i=1}^{\ell} L(x_i, y_i, f) + \lambda \|f\|_{(\mu_1 k + \mu_2 k_S)}^2. \qquad (18)$$

A structured semi-supervised algorithm based on (18) was evaluated in [11]. The kernel in (18) is the weighted mean of $k$ and $k_S$, whereas the graph-regularized kernel, resulting from a weighted mean of the two regularizers, is the harmonic mean of $k$ and $k_S$ [16]. An important distinction between $\bar f^*$ and $f^*$ in (8), the optimization performed in this paper, is that $\bar f^*$ is defined only on $P(S)$ (only on observations in the training data), while $f^*$ is defined on all of $P$ and can be used for novel (out-of-sample) inputs $x$. We note that in general $P$ is infinite. The out-of-sample extension is already a serious limitation for transductive learning, but it is even more severe in the structured case, where parts of $P$ can be composed of multiple observation tokens.

6 Experiments

Similarity graph: We build the similarity matrix $W$ over $P(S)$ using the K-nearest-neighbor relationship.
$W_{p,p'}$ is 0 if $p$ and $p'$ are not in each other's K-nearest neighborhood or if $p$ and $p'$ are of different types. Otherwise, the similarity is given by a heat kernel. In our applications the structure is a simple chain, so the cliques involve single observation-label pairs:

$$W_{p,p'} = \delta(y(u_p), y(u'_{p'}))\, e^{-\|u_p - u'_{p'}\|^2 / t}, \qquad (19)$$

where $u_p$ denotes the observation part of $p$ and $y(u)$ denotes the labeling of $u$.¹ In cases where $k(p, p') = W_{p,p'} = 0$ for $p, p'$ of different types, as in our experiments, the Gram matrix $K$ and the Laplacian $L$ can be represented as block-diagonal matrices, which significantly reduces the computational complexity, in particular the computation of $Q^{-1}$.

Applications: We performed experiments using a simple chain model for pitch accent (PA) prediction and OCR. In PA prediction, $Y(x) = \{0, 1\}^T$ with $T = |x|$ and $x_t \in \Re^{31}$ for all $t$. In OCR, $x_t \in \{0, 1\}^{128}$ and $|\Sigma| = 15$.

Table 1: Per-label accuracy for pitch accent. Left block: 4 labeled sequences; right block: 40. Cells with U > 0 show test accuracy / unlabeled-set accuracy.

          U:0     U:80            U:0     U:80            U:200
  SVM     65.92   68.83 / 69.94   70.34   71.27 / 72.00   73.68 / 73.11
  STR     65.81   70.28 / 70.72   72.15   74.92 / 75.66   76.37 / 77.45

We ran experiments comparing semi-supervised structured (referred to as STR) and unstructured (referred to as SVM) max-margin optimization. For both SVM and STR we used an RBF kernel as the base kernel $k_o$ in (4) and a 5-nearest-neighbor graph to construct the Laplacian. We chose the width of the RBF kernel by cross-validation on SVM and used the same value for STR. Following [3], we fixed the $\lambda_1 : \lambda_2$ ratio at 1 : 9. We report the average results of experiments with 5 random selections of labeled sequences in Tables 1 and 2, with 4 labeled sequences on the left side of Table 1, 40 on the right side, and 10 in Table 2. We varied the number of unlabeled sequences and report the per-label accuracy on test sequences and on the unlabeled sequences (when U > 0).
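The similarity graph of Eq. (19) and its Laplacian can be sketched as follows. We symmetrize by connecting p and p′ if either is among the other's K nearest neighbors, one reasonable reading of "in the K-nearest neighborhood of each other", and omit the label-matching factor for brevity; both are our assumptions:

```python
import math

def heat_knn_graph(points, k, t):
    """Symmetric kNN similarity matrix with heat-kernel weights, and the
    (unnormalized) graph Laplacian L = D - W."""
    n = len(points)
    d2 = [[sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
           for j in range(n)] for i in range(n)]
    knn = [set(sorted((j for j in range(n) if j != i),
                      key=lambda j: d2[i][j])[:k]) for i in range(n)]
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and (j in knn[i] or i in knn[j]):
                W[i][j] = math.exp(-d2[i][j] / t)   # heat-kernel weight
    D = [sum(row) for row in W]                     # degrees
    L = [[(D[i] if i == j else 0.0) - W[i][j] for j in range(n)]
         for i in range(n)]
    return W, L
```

By construction W is symmetric and every row of L sums to zero, the defining properties used in the regularizer of Eq. (9).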
The results on pitch accent prediction show the advantage of a sequence model over a non-structured model: STR consistently performs better than SVM. We also observe the usefulness of unlabeled data in both the structured and unstructured models: as U increases, so does the accuracy. The improvements from unlabeled data and from structured classification can be considered additive. The small difference between the accuracy on in-sample unlabeled data and on test data indicates the natural extension of our framework to new data points.

Table 2: OCR per-label accuracy (test / unlabeled).

          U:0     U:412
  SVM     43.62   49.96 / 47.56
  STR     49.25   49.91 / 49.65

In OCR, on the other hand, STR does not improve over SVM. Even though unlabeled data improves accuracy, performing sequence classification is not helpful due to the sparsity of structural information: since $|\Sigma| = 15$ and there are only 10 labeled sequences of average length 8.3, the statistics of the label-label dependencies are quite noisy.

¹For more complicated parts, different measures can apply. For example, in sequence classification, if the classifier is evaluated on the correctly classified individual labels in the sequence, $W$ can be such that $W_{p,p'} = \sum_{u \in p, u' \in p'} \delta(y(u), y(u'))\,\tilde s(u, u')$, where $\tilde s$ denotes some similarity measure such as the heat kernel. If the evaluation is over segments of the sequence, the similarity can be $W_{p,p'} = \delta(y(p), y(p')) \sum_{u \in p, u' \in p'} \tilde s(u, u')$, where $y(p)$ denotes all the label nodes in the part $p$.

7 Conclusions

We presented a discriminative approach to semi-supervised learning of structured and inter-dependent response variables. In this framework, we derived a maximum-margin formulation and presented experiments for a simple chain model. Our approach extends naturally to the classification of unobserved structured inputs; this is supported by our empirical results, which showed similar accuracy on in-sample unlabeled data and out-of-sample test data.

References

[1] Y. Altun, T. Hofmann, and A. Smola.
Gaussian process classification for segmenting and annotating sequences. In ICML, 2004.
[2] Y. Altun, I. Tsochantaridis, and T. Hofmann. Hidden Markov support vector machines. In ICML, 2003.
[3] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: a geometric framework for learning from examples. Technical Report 06, University of Chicago CS, 2004.
[4] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In COLT, 1998.
[5] U. Brefeld, C. Büscher, and T. Scheffer. Multi-view discriminative sequential learning. In ECML, 2005.
[6] O. Chapelle, J. Weston, and B. Schölkopf. Cluster kernels for semi-supervised learning. In NIPS, 2002.
[7] M. Collins and N. Duffy. Convolution kernels for natural language. In NIPS, 2001.
[8] O. Delalleau, Y. Bengio, and N. Le Roux. Efficient non-parametric function induction in semi-supervised learning. In Proceedings of AISTATS, 2005.
[9] T. Joachims. Transductive inference for text classification using support vector machines. In ICML, pages 200-209, 1999.
[10] G. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. Journal of Mathematical Analysis and Applications, 33:82-95, 1971.
[11] J. Lafferty, Y. Liu, and X. Zhu. Kernel conditional random fields: representation, clique selection, and semi-supervised learning. In ICML, 2004.
[12] K. Nigam, A. K. McCallum, S. Thrun, and T. M. Mitchell. Learning to classify text from labeled and unlabeled documents. In Proceedings of AAAI-98, pages 792-799, Madison, US, 1998.
[13] V. Sindhwani, P. Niyogi, and M. Belkin. Beyond the point cloud: from transductive to semi-supervised learning. In ICML, 2005.
[14] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2004.
[15] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. In ICML, 2004.
[16] T. Zhang. Personal communication.
|
2005
|
184
|
2,808
|
Generalization in Clustering with Unobserved Features Eyal Krupka and Naftali Tishby School of Computer Science and Engineering, Interdisciplinary Center for Neural Computation The Hebrew University Jerusalem, 91904, Israel {eyalkr,tishby}@cs.huji.ac.il Abstract We argue that when objects are characterized by many attributes, clustering them on the basis of a relatively small random subset of these attributes can capture information on the unobserved attributes as well. Moreover, we show that under mild technical conditions, clustering the objects on the basis of such a random subset performs almost as well as clustering with the full attribute set. We prove finite sample generalization theorems for this novel learning scheme that extend analogous results from the supervised learning setting. The scheme is demonstrated for collaborative filtering of users with movie ratings as attributes. 1 Introduction Data clustering is unsupervised classification of objects into groups based on their similarity [1]. Often, it is desirable to have the clusters match some labels that are unknown to the clustering algorithm. In this context, a good data clustering is expected to have homogeneous labels in each cluster, under some constraints on the number or complexity of the clusters. This can be quantified by the mutual information (see e.g. [2]) between the objects' cluster identity and their (unknown) labels, for a given complexity of clusters. Since the clustering algorithm has no access to the labels, it is unclear how the algorithm can optimize the quality of the clustering. Even worse, the clustering quality depends on the specific choice of the unobserved labels. For example, a good clustering of documents with respect to topics is very different from a clustering with respect to authors. In our setting, instead of trying to cluster by some "arbitrary" labels, we try to predict unobserved features from observed ones.
In this sense our target "labels" are yet other features that "happened" to be unobserved. For example, when clustering fruits based on their observed features, such as shape, color and size, the target of clustering is to match unobserved features, such as nutritional value and toxicity. In order to theoretically analyze and quantify this new learning scheme, we make the following assumptions. Consider an infinite set of features, and assume that we observe only a random subset of n features, called the observed features. The other features are called unobserved features. We assume that the random selection of features is done uniformly and independently.

Table 1: Analogy with supervised learning
Training set               n randomly selected features (observed features)
Test set                   Unobserved features
Learning algorithm         Cluster the instances into k clusters
Hypothesis class           All possible partitions of m instances into k clusters
Min generalization error   Max expected information on unobserved features
ERM                        Observed Information Maximization (OIM)
Good generalization        Mean observed and unobserved information are similar

The clustering algorithm has access only to the observed features of m instances. After the clustering, one of the unobserved features is randomly and uniformly selected to be a target label, i.e. clustering performance is measured with respect to this feature. Obviously, the clustering algorithm cannot be directly optimized for this specific feature. The question is whether we can optimize the expected performance on the unobserved feature, based on the observed features alone. The expectation is over the random selection of the target feature. In other words, can we find clusters that match as many unobserved features as possible? Perhaps surprisingly, for a large enough number of observed features, the answer is yes. We show that for any clustering algorithm, the average performance of the clustering with respect to the observed and unobserved features is similar.
Hence we can indirectly optimize clustering performance with respect to the unobserved features, in analogy to generalization in supervised learning. These results are universal and do not require any additional assumptions, such as an underlying model or a distribution that generated the instances. In order to quantify these results, we define two terms: the average observed information and the expected unobserved information. Let T be the variable which represents the cluster for each instance, and {X1, ..., X∞} the set of random variables denoting the features. The average observed information, denoted by Iob, is the average mutual information between T and each of the observed features. In other words, if the observed features are {X1, ..., Xn} then Iob = (1/n) ∑_{j=1}^n I(T; Xj). The expected unobserved information, denoted by Iun, is the expected value of the mutual information between T and a randomly selected unobserved feature, i.e. Ej{I(T; Xj)}. Note that whereas Iob can be measured directly, this paper deals with the question of how to infer and maximize Iun. Our main results consist of two theorems. The first is a generalization theorem. It gives an upper bound on the probability of a large difference between Iob and Iun over all possible clusterings. It also states uniform convergence in probability of |Iob − Iun| to zero as the number of observed features increases. Conceptually, the observed mean information, Iob, is analogous to the training error in standard supervised learning [3], whereas the unobserved information, Iun, is similar to the generalization error. The second theorem states that under a constraint on the number of clusters, and a large enough number of observed features, one can achieve nearly the best possible performance in terms of Iun. Analogous to the principle of Empirical Risk Minimization (ERM) in statistical learning theory [3], this is done by maximizing Iob. Table 1 summarizes the correspondence of our setting to that of supervised learning.
The key difference is that in supervised learning, the set of features is fixed and the training instances (samples) are assumed to be randomly drawn from some distribution. In our setting, the set of instances is fixed, but the set of observed features is assumed to be randomly selected. Our new theorems are evaluated empirically in Section 3, on a data set of movie ratings. This empirical test also suggests one future research direction: use the framework suggested in this paper for collaborative filtering. Our main point in this paper, however, is the new conceptual framework and not a specific algorithm or experimental performance. Related work The idea of an information tradeoff between complexity and information on target variables is similar to the idea of the information bottleneck [4]. But unlike the bottleneck method, here we are trying to maximize information on unobserved variables, using finite samples. In the framework of learning with labeled and unlabeled data [5], a fundamental issue is the link between the marginal distribution P(x) over examples x and the conditional P(y|x) for the label y [6]. From this point of view our approach assumes that y is a feature in itself. 2 Mathematical Formulation and Analysis Consider a set of discrete random variables {X1, ..., XL}, where L is very large (L → ∞). We randomly, uniformly and independently select n ≪ √L variables from this set. These variables are the observed features and their indices are denoted by {q1, ..., qn}. The remaining L − n variables are the unobserved features. A clustering algorithm has access only to the observed features over m instances {x[1], ..., x[m]}. The algorithm assigns a cluster label ti ∈ {1, ..., k} to each instance x[i], where k is the number of clusters. Shannon's mutual information between two variables is a function of their joint distribution, defined as

I(T; Xj) = ∑_{t,xj} P(t, xj) log [ P(t, xj) / (P(t) P(xj)) ].
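The empirical mutual information above can be computed directly from co-occurrence counts. The following sketch is our own illustration (function names are not from the paper); it computes I(T; Xj) in nats and the average observed information Iob:

```python
import numpy as np

def empirical_mutual_information(t, x):
    """I(T; X) in nats, from the empirical joint distribution of two label arrays."""
    t, x = np.asarray(t), np.asarray(x)
    t_vals, t_idx = np.unique(t, return_inverse=True)
    x_vals, x_idx = np.unique(x, return_inverse=True)
    joint = np.zeros((len(t_vals), len(x_vals)))
    np.add.at(joint, (t_idx, x_idx), 1.0)      # co-occurrence counts
    joint /= joint.sum()                       # empirical joint P(t, x)
    pt = joint.sum(axis=1, keepdims=True)      # marginal P(t)
    px = joint.sum(axis=0, keepdims=True)      # marginal P(x)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pt @ px)[nz])).sum())

def average_observed_information(t, observed_features):
    """Iob = (1/n) sum_j I(T; X_qj) over the n observed feature columns."""
    X = np.asarray(observed_features)
    return float(np.mean([empirical_mutual_information(t, col) for col in X.T]))
```

For identical label arrays the value equals the entropy of the labels; for independent arrays it is zero.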
Since we are dealing with a finite number of samples, m, the distribution P is taken as the empirical joint distribution of (T, Xj), for every j. For a random j, this empirical mutual information is a random variable on its own. The average observed information, Iob, is now defined as Iob = (1/n) ∑_{i=1}^n I(T; Xqi). In general, Iob is higher when clusters are more coherent, i.e. elements within each cluster have many similar attributes. The expected unobserved information, Iun, is defined as Iun = Ej{I(T; Xj)}. We can assume that a randomly selected feature is, with high probability, from the unobserved set. Equivalently, Iun can be the mean mutual information between the clusters and each of the unobserved features, Iun = (1/(L−n)) ∑_{j∉{q1,...,qn}} I(T; Xj). The goal of the clustering algorithm is to find cluster labels {t1, ..., tm} that maximize Iun, subject to a constraint on their complexity, henceforth taken for simplicity to be a bound on the number of clusters (k ≤ D), where D is an integer. Before discussing how to maximize Iun, we first consider the problem of estimating it. Similar to the generalization error in supervised learning, Iun cannot be estimated directly in the learning algorithm, but we may be able to bound the difference between the observed information Iob (our "training error") and Iun (the "generalization error"). To obtain generalization this bound should be uniform over all possible clusterings, with high probability over the randomly selected features. The following lemma argues that such uniform convergence in probability of Iob to Iun always occurs. Lemma 1 With the definitions above,

Pr{ sup_{t1,...,tm} |Iob − Iun| > ε } ≤ 2·e^{−2nε²/(log k)² + m log k}   ∀ε > 0

where the probability is over the random selection of the observed features. Proof: For fixed cluster labels {t1, ..., tm} and a random feature j, the mutual information I(T; Xj) is a function of the random variable j, and hence is a random variable in itself.
Iob is the average of n such independent random variables and Iun is its expected value. Clearly, for all j, 0 ≤ I(T; Xj) ≤ log k. Using Hoeffding's inequality [7], Pr{|Iob − Iun| > ε} ≤ 2·e^{−2nε²/(log k)²}. Since there are at most k^m possible partitions, the union bound is sufficient to prove the lemma. Note that for any ε > 0, the probability that |Iob − Iun| > ε goes to zero as n → ∞. The convergence rate of Iob to Iun is bounded by O(log n/√n). As expected, this upper bound decreases as the number of clusters, k, decreases. Unlike the standard bounds in supervised learning, this bound increases with the number of instances (m), and decreases with an increasing number of observed features (n). This is because in our scheme the training size is not the number of instances, but rather the number of observed features (see Table 1). However, in the next theorem we obtain an upper bound that is independent of m, and hence is tighter for large m. Theorem 1 (Generalization Theorem) With the definitions above,

Pr{ sup_{t1,...,tm} |Iob − Iun| > ε } ≤ 8(log k)·e^{−nε²/(8(log k)²) + (4k max_j |Xj|/ε)·log k − log ε}   ∀ε > 0

where |Xj| denotes the alphabet size of Xj (i.e. the number of different values it can obtain). Again, the probability is over the random selection of the observed features. The convergence rate here is bounded by O(log n/n^{1/3}). However, for relatively large n one can use the bound in Lemma 1, which converges faster. A detailed proof of Theorem 1 can be found in [8]. Here we provide the outline of the proof. Proof outline: From the given m instances and any given cluster labels {t1, ..., tm}, draw uniformly and independently m′ instances (repeats allowed) and denote their indices by {i1, ..., im′}. We can estimate I(T; Xj) from the empirical distribution of (T, Xj) over the m′ instances. This distribution is denoted by P̂(t, xj) and the corresponding mutual information is denoted by I_P̂(T; Xj).
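As a rough numeric illustration (ours, not from the paper), the two bounds can be evaluated side by side; the Lemma 1 bound grows with m, while the Theorem 1 bound does not depend on m at all:

```python
import math

def lemma1_bound(n, m, k, eps):
    """Lemma 1: Pr{sup |Iob - Iun| > eps} <= 2 exp(-2 n eps^2/(log k)^2 + m log k)."""
    return 2.0 * math.exp(-2.0 * n * eps ** 2 / math.log(k) ** 2 + m * math.log(k))

def theorem1_bound(n, k, alphabet, eps):
    """Theorem 1 (m-independent): 8 log k * exp of
    -n eps^2 / (8 (log k)^2) + (4 k A / eps) log k - log eps, with A = max_j |Xj|."""
    exponent = (-n * eps ** 2 / (8.0 * math.log(k) ** 2)
                + (4.0 * k * alphabet / eps) * math.log(k)
                - math.log(eps))
    return 8.0 * math.log(k) * math.exp(exponent)
```

Both bounds are trivially large for small n and shrink exponentially once n exceeds the scale set by k, the alphabet size, and the chosen ε.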
Theorem 1 is built up from the following upper bounds, which are independent of m but depend on the choice of m′. The first bound is on the bias |E[I_P̂(T; Xj)] − I(T; Xj)|, where the expectation is over the random selection of the m′ instances. From this bound we derive upper bounds on |Iob − E(Îob)| and |Iun − E(Îun)|, where Îob, Îun are the estimated values of Iob, Iun based on the subset of m′ instances. The last required bound is on the probability that sup_{t1,...,tm} |E(Îob) − E(Îun)| > ε1, for any ε1 > 0. This bound is obtained from Lemma 1. The choice of m′ is independent of m. Its value should be large enough for the estimates Îob, Îun to be accurate, but not too large, so as to limit the number of possible clusterings over the m′ instances. We now describe the above mentioned upper bounds in more detail. Using Paninski [9] (Proposition 1), it is easy to show that the bias between I(T; Xj) and its maximum likelihood estimate, based on P̂(t, xj), is bounded as follows:

E_{i1,...,im′} |I(T; Xj) − I_P̂(T; Xj)| ≤ log(1 + (k|Xj| − 1)/m′) ≤ k|Xj|/m′   (1)

From this equation we obtain

|Iob − E_{i1,...,im′}(Îob)|, |Iun − E_{i1,...,im′}(Îun)| ≤ k max_j |Xj| / m′   (2)

Using Lemma 1 we have an upper bound on the probability that sup_{t1,...,tm} |Îob − Îun| > ε over the random selection of features, as a function of m′. However, the upper bound we need is on the probability that sup_{t1,...,tm} |E(Îob) − E(Îun)| > ε1. Note that the expectations E(Îob), E(Îun) are taken over the random selection of the subset of m′ instances, for a set of features that were randomly selected once. In order to link these two probabilities, we need the following lemma. Lemma 2 Consider a function f of two independent random variables (Y, Z). We assume that f(y, z) ≤ c for all y, z, where c is some constant. If Pr{f(Y, Z) > ε̃} ≤ δ, then

Pr_Z{ E_y(f(y, Z)) ≥ ε } ≤ ((c − ε̃)/(ε − ε̃))·δ   ∀ε > ε̃

The proof of this lemma is rather standard and is given in [8].
From Lemmas 1 and 2 it is easy to show that

Pr{ E_{i1,...,im′}( sup_{t1,...,tm} |Îob − Îun| ) > ε1 } ≤ (4 log k / ε1)·e^{−nε1²/(2(log k)²) + m′ log k}   (3)

Lemma 2 is used with Z representing the random selection of features, Y representing the random selection of m′ instances, f(y, z) = sup_{t1,...,tm} |Îob − Îun|, c = log k, and ε̃ = ε1/2. From eq. 2 and 3 it can be shown that

Pr{ sup_{t1,...,tm} |Iob − Iun| > ε1 + 2k max_j |Xj| / m′ } ≤ (4 log k / ε1)·e^{−nε1²/(2(log k)²) + m′ log k}

By selecting ε1 = ε/2 and m′ = 4k max_j |Xj| / ε, we obtain Theorem 1. Note that the selection of m′ depends on k max_j |Xj|. This reflects the fact that in order to accurately estimate I(T; Xj), we need a number of instances, m′, which is much larger than the product of the alphabet sizes of T and Xj. We can now return to the problem of specifying a clustering that maximizes Iun using only the observed features. For reference, we first define Iun of the best possible clustering. Definition 1 Maximally achievable unobserved information: Let I*un,D be the maximum value of Iun that can be achieved by any clustering {t1, ..., tm}, subject to the constraint k ≤ D, for some constant D:

I*un,D = sup_{{t1,...,tm}: k ≤ D} Iun

The clustering that achieves this value is called the best clustering. The average observed information of this clustering is denoted by I*ob,D. Definition 2 Observed information maximization algorithm: Let IobMax be any clustering algorithm that, based on the values of the observed features alone, selects the cluster labels {t1, ..., tm} having the maximum possible value of Iob, subject to the constraint k ≤ D. Let Ĩob,D be the average observed information achieved by the IobMax algorithm, and Ĩun,D the expected unobserved information it achieves. The next theorem states that IobMax not only maximizes Iob, but also Iun.
Theorem 2 With the definitions above,

Pr{ Ĩun,D ≤ I*un,D − ε } ≤ 8(log k)·e^{−nε²/(32(log k)²) + (8k max_j |Xj|/ε)·log k − log(ε/2)}   ∀ε > 0   (4)

where the probability is over the random selection of the observed features. Proof: Define a bad clustering as a clustering whose expected unobserved information satisfies Iun ≤ I*un,D − ε. Using Theorem 1, the probability that |Iob − Iun| > ε/2 for any of the clusterings is upper bounded by the right-hand side of equation 4. If |Iob − Iun| ≤ ε/2 for all clusterings, then surely I*ob,D ≥ I*un,D − ε/2 (see Definition 1) and Iob of every bad clustering satisfies Iob ≤ I*un,D − ε/2. Hence the probability that a bad clustering has a higher average observed information than the best clustering is upper bounded as in Theorem 2. As a result of this theorem, when n is large enough, even an algorithm that knows the values of all the features (observed and unobserved) cannot find a clustering of the same complexity (k) that is significantly better than the clustering found by the IobMax algorithm. 3 Empirical Evaluation In this section we describe an experimental evaluation of the generalization properties of the IobMax algorithm for a large but finite number of features. We examine the difference between Iob and Iun as a function of the number of observed features and the number of clusters used. We also compare the value of Iun achieved by the IobMax algorithm to the maximum achievable I*un,D (see Definition 1). Our evaluation uses a data set typically used for collaborative filtering. Collaborative filtering refers to methods of making predictions about a user's preferences by collecting the preferences of many users. For example, collaborative filtering for movie ratings could make predictions about ratings of movies by a user, given a partial list of ratings from this user and many other users. Clustering methods are used for collaborative filtering by clustering users based on the similarity of their ratings (see e.g. [10]).
In our setting, each user is described as a vector of movie ratings. The rating of each movie is regarded as a feature. We cluster users based on the set of observed features, i.e. rated movies. In our context, the goal of the clustering is to maximize the information between the clusters and the unobserved features, i.e. movies that have not yet been rated by any of the users. By Theorem 2, given a large enough number of rated movies, we can achieve the best possible clustering of users with respect to unseen movies. In this region, no additional information (such as user age, taste, or ratings of more movies) beyond the observed features can improve Iun by more than some small ε. The purpose of this section is not to suggest a new algorithm for collaborative filtering or to compare it to other methods, but simply to illustrate our new theorems on empirical data. Dataset. We used MovieLens (www.movielens.umn.edu), which is a movie rating data set. It was collected and distributed by GroupLens Research at the University of Minnesota. It contains approximately 1 million ratings for 3900 movies by 6040 users. Ratings are on a scale of 1 to 5. We used only a subset consisting of 2400 movies by 4000 users. In our setting, each instance is a vector of ratings (x1, ..., x2400) by a specific user. Each movie is viewed as a feature, where the rating is the value of the feature. Experimental Setup. We randomly split the 2400 movies into two groups, denoted by "A" and "B", of 1200 movies (features) each. We used a subset of the movies from group "A" as observed features and all movies from group "B" as the unobserved features. The experiment was repeated with 10 random splits and the results averaged. We estimated Iun by the mean information between the clusters and the ratings of movies from group "B".
[Figure 1: Iob, Iun and I*un per number of training movies and clusters. In (a) 2 clusters and (b) 6 clusters, the number of observed features (movies) n is variable and the number of clusters is fixed; in (c) the number of observed movies is fixed (1200) and the number of clusters k is variable. The overall mean information is low, since the rating matrix is sparse.]

Handling Missing Values. In this data set, most of the values are missing (not rated). We handle this by defining the feature variable as 1, 2, ..., 5 for the ratings and 0 for a missing value. We maximize the mutual information based on the empirical distribution of the values that are present, and weight it by the probability of presence for the feature. Hence, Iob = (1/n) ∑_{j=1}^n P(Xj ≠ 0) I(T; Xj | Xj ≠ 0) and Iun = Ej{P(Xj ≠ 0) I(T; Xj | Xj ≠ 0)}. The weighting prevents 'overfitting' to movies with few ratings. Since the observed features were selected at random, the statistics of missing values of the observed and unobserved features are the same. Hence, all theorems are applicable to these definitions of Iob and Iun as well. Greedy IobMax Algorithm. We cluster the users using a simple greedy clustering algorithm. The input to the algorithm is all users, represented solely by the observed features. Since this algorithm can only find a local maximum of Iob, we ran the algorithm 10 times (each with a different random initialization) and selected the result that had the maximum value of Iob. More details about this algorithm can be found in [8]. In order to estimate I*un,D (see Definition 1), we also ran the same algorithm with all the features available to it (i.e. also features from group "B").
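The paper defers the details of the greedy algorithm to [8]. The following is a minimal sketch of one plausible greedy local search for Iob with the missing-value weighting described above; it is our own illustration, not the authors' implementation:

```python
import numpy as np

def _mi(t, x):
    """Empirical mutual information (nats) between two discrete label arrays."""
    t_vals, ti = np.unique(t, return_inverse=True)
    x_vals, xi = np.unique(x, return_inverse=True)
    joint = np.zeros((len(t_vals), len(x_vals)))
    np.add.at(joint, (ti, xi), 1.0)
    joint /= joint.sum()
    pt = joint.sum(axis=1, keepdims=True)
    px = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pt @ px)[nz])).sum())

def weighted_iob(t, X):
    """Missing-value-weighted Iob: mean over features of P(Xj != 0) * I(T; Xj | Xj != 0)."""
    t = np.asarray(t)
    vals = []
    for col in np.asarray(X).T:
        present = col != 0
        vals.append(present.mean() * _mi(t[present], col[present]) if present.any() else 0.0)
    return float(np.mean(vals))

def greedy_iobmax(X, k, n_sweeps=20, seed=0):
    """Greedy local search: sweep the instances, moving each to the cluster
    assignment that maximizes the weighted Iob, until no move helps."""
    X = np.asarray(X)
    rng = np.random.default_rng(seed)
    t = rng.integers(0, k, size=X.shape[0])
    for _ in range(n_sweeps):
        changed = False
        for i in range(X.shape[0]):
            old = t[i]
            scores = []
            for c in range(k):
                t[i] = c
                scores.append(weighted_iob(t, X))
            t[i] = int(np.argmax(scores))
            changed |= (t[i] != old)
        if not changed:
            break
    return t, weighted_iob(t, X)
```

As in the text, such a search only finds a local maximum, so it would be restarted from several random initializations and the run with the highest Iob kept.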
The algorithm finds clusters that maximize the mean mutual information on features from group "B". Results The results are shown in Figure 1. As n increases, Iob decreases and Iun increases, until they converge to each other. For small n, the clustering 'overfits' to the observed features. This is similar to training and test errors in supervised learning. For large n, Iun approaches I*un,D, which means the IobMax algorithm found nearly the best possible clustering, as expected from Theorem 2. As the number of clusters increases, both Iob and Iun increase, but the difference between them also increases. 4 Discussion and Summary We introduced a new learning paradigm: clustering based on observed features that generalizes to unobserved features. Our results are summarized by two theorems that tell us how, without knowing the values of the unobserved features, one can estimate and maximize the information between the clusters and the unobserved features. The key assumption that enables us to prove the theorems is the random independent selection of the observed features. Another interpretation of the generalization theorem, without using this assumption, might be combinatorial. The difference between the observed and unobserved information is large only for a small portion of all possible partitions into observed and unobserved features. This means that almost any arbitrary partition generalizes well. The importance of clustering which preserves information on unobserved features is that it enables us to learn new, previously unobserved, attributes from a small number of examples. Suppose that after clustering fruits based on their observed features, we eat a chinaberry1 and thus "observe" (by getting sick) the previously unobserved attribute of toxicity. Assuming that in each cluster all fruits have similar unobserved attributes, we can conclude that all fruits in the same cluster, i.e. all chinaberries, are likely to be poisonous.
We can even relate the IobMax principle to cognitive clustering in sensory information processing. In general, a symbolic representation (e.g. assigning object names in language) may be based on a similar principle: find a representation (clusters) that contains significant information on as many observed features as possible, while still remaining simple. Such representations are expected to contain information on other rarely viewed salient features. Acknowledgments We thank Amir Globerson, Ran Bachrach, Amir Navot, Oren Shriki, Avner Dor and Ilan Sutskover for helpful discussions. We also thank the GroupLens Research Group at the University of Minnesota for use of the MovieLens data set. Our work is partly supported by a grant from the Israeli Academy of Science. References [1] A. K. Jain, M. N. Murty, and P. J. Flynn. Data clustering: a review. ACM Computing Surveys, 31(3):264–323, September 1999. [2] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley-Interscience, 1991. [3] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998. [4] N. Tishby, F. Pereira, and W. Bialek. The information bottleneck method. Proc. 37th Allerton Conf. on Communication, Control and Computing, 1999. [5] M. Seeger. Learning with labeled and unlabeled data. Technical report, University of Edinburgh, 2002. [6] M. Szummer and T. Jaakkola. Information regularization with partially labeled data. In NIPS, 2003. [7] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58:13–30, 1963. [8] E. Krupka and N. Tishby. Generalization in clustering with unobserved features. Technical report, Hebrew University, 2005. http://www.cs.huji.ac.il/~tishby/nips2005tr.pdf. [9] L. Paninski. Estimation of entropy and mutual information. Neural Computation, 15:1101–1253, 2003. [10] B. Marlin. Collaborative filtering: A machine learning perspective. Master's thesis, University of Toronto, 2004.
1Chinaberries are the fruits of the Melia azedarach tree, and are poisonous.
|
2005
|
185
|
2,809
|
Variable KD-Tree Algorithms for Spatial Pattern Search and Discovery Jeremy Kubica Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213 jkubica@ri.cmu.edu Joseph Masiero Institute for Astronomy University of Hawaii Honolulu, HI 96822 masiero@ifa.hawaii.edu Andrew Moore Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213 awm@cs.cmu.edu Robert Jedicke Institute for Astronomy University of Hawaii Honolulu, HI 96822 jedicke@ifa.hawaii.edu Andrew Connolly Physics & Astronomy Department University of Pittsburgh Pittsburgh, PA 15213 ajc@phyast.pitt.edu Abstract In this paper we consider the problem of finding sets of points that conform to a given underlying model from within a dense, noisy set of observations. This problem is motivated by the task of efficiently linking faint asteroid detections, but is applicable to a range of spatial queries. We survey current tree-based approaches, showing that a trade-off exists between single tree and multiple tree algorithms. To this end, we present a new type of multiple tree algorithm that uses a variable number of trees to exploit the advantages of both approaches. We empirically show that this algorithm performs well using both simulated and astronomical data. 1 Introduction Consider the problem of detecting faint asteroids from a series of images collected on a single night. Inherently, the problem is simply one of connect-the-dots. Over a single night we can treat the asteroid's motion as linear, so we want to find detections that, up to observational errors, lie along a line. However, as we consider very faint objects, several difficulties arise. First, objects near our brightness threshold may oscillate around this threshold, blinking into and out of our images and providing only a small number of actual detections. Second, as we lower our detection threshold we will begin to pick up more spurious noise points.
As we look for really dim objects, the number of noise points increases greatly and swamps the number of detections of real objects. The above problem is one example of a model based spatial search. The goal is to identify sets of points that fit some given underlying model. This general task encompasses a wide range of real-world problems and spatial models. For example, we may want to detect a specific configuration of corner points in an image or search for multi-way structure in scientific data. We focus our discussion on problems that have a high density of both true and noise points, but which may have only a few points actually from the model of interest. Returning to the asteroid linking example, this corresponds to finding a handful of points that lie along a line within a data set of millions of detections. Below we survey several tree-based approaches for efficiently solving this problem. We show that both single tree and conventional multiple tree algorithms can be inefficient and that a trade-off exists between these approaches. To this end, we propose a new type of multiple tree algorithm that uses a variable number of tree nodes. We empirically show that this new algorithm performs well using both simulated and real-world data. 2 Problem Definition Our problem consists of finding sets of points that fit a given underlying spatial model. In doing so, we are effectively looking for known types of structure buried within the data. In general, we are interested in finding sets with k or more points, thus providing a sufficient amount of support to confirm the discovery. Finding this structure within the data may either be our end goal, such as in asteroid linkage, or may just be a preprocessor for a more sophisticated statistical test, such as renewal strings [1]. We are particularly interested in high-density, low-support domains where there may be many hundreds of thousands of points, but only a handful actually support our model. 
Formally, the data consists of N unique D-dimensional points. We assume that the underlying model can be estimated from c unique points. Since k ≥ c, the model may be overconstrained. In these cases we divide the points into two sets: Model Points and Support Points. Model points are the c points used to fully define the underlying model. Support points are the remaining points used to confirm the model. For example, if we are searching for sets of k linear points, we could use a set's endpoints as model points and treat the middle k − 2 as support points. Or we could allow any two points to serve as model points, providing an exhaustive variant of the RANSAC algorithm [2]. The prototypical example used in this paper is the (linear) asteroid linkage problem: For each pair of points find the k − 2 best support points for the line that they define (such that we use at most one point at each time step). In addition, we place restrictions on the validity of the initial pairs by providing velocity bounds. It is important to note that although we use this problem as a running example, the techniques described can be applied to a range of spatial problems.
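The two-tiered brute-force baseline described below for the linear case can be sketched as follows. This is illustrative only: the O(N²) pair loop and O(N) support test mirror the text, but the function names and distance tolerance are our assumptions:

```python
import itertools
import numpy as np

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    d = b - a
    t = np.dot(p - a, d) / np.dot(d, d)   # projection parameter along the line
    return float(np.linalg.norm(p - a - t * d))

def brute_force_linear_sets(points, k, tol=1e-6):
    """Two-tiered brute force: every pair of points defines a line (model
    points); every other point within tol of that line counts as support.
    Keep pairs whose total set size (2 model + support) reaches k."""
    points = np.asarray(points, dtype=float)
    results = []
    for i, j in itertools.combinations(range(len(points)), 2):
        support = [m for m in range(len(points))
                   if m not in (i, j)
                   and point_line_distance(points[m], points[i], points[j]) <= tol]
        if 2 + len(support) >= k:
            results.append((i, j, support))
    return results
```

For N points this tests all O(N²) candidate lines and all O(N) remaining points per line, which is exactly the cost the tree-based methods below try to avoid.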
A similar approach within the domain of target tracking is sequential tracking (for a good introduction see [3]), where points at early time steps are used to estimate a track that is then projected to later time steps to find additional support points. In large-scale domains, these approaches can often be made tractable by using spatial structure in the data. Again returning to our asteroid example, we can place the points in a KD-tree [4]. We can then limit the number of initial pairs examined by using this tree to find points compatible with our velocity constraints. Further, we can use the KD-tree to only search for support points in localized regions around the line, ignoring large numbers of obviously infeasible points. Similarly, trees have been used in tracking algorithms to efficiently find points near predicted track positions [5]. We call these adaptations single tree algorithms, because at any given time the algorithm is searching at most one tree. 3.2 Parameter Space Methods Another approach is to search for valid sets of points by searching the model’s parameter space, such as in the Hough transform [6]. The idea behind these approaches is that we can test whether each point is compatible with a small set of model parameters, allowing us to search parameter space to find the valid sets. However, this method can be expensive in terms of both computation and memory, especially for high dimensional parameter spaces. Further, if the model’s total support is low, the true model occurrences may be effectively washed out by the noise. For these reasons we do not consider parameter space methods. 3.3 Multiple Tree Algorithms The primary benefit of tree-based algorithms is that they are able to use spatial structure within the data to limit the cost of the search. However, there is a clear potential to push further and use structure from multiple aspects of the search at the same time. 
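A single-tree support search of the kind described can be sketched with an off-the-shelf KD-tree: predict the track position at each time step and query only the detections near it, ignoring the obviously infeasible points. This is our own illustration (using SciPy's cKDTree), not the authors' code:

```python
import numpy as np
from scipy.spatial import cKDTree

def support_near_line(detections, times, x0, v, radius):
    """Single-tree style support search: at each time step, build a KD-tree on
    that step's detections and query it around the position x(t) = x0 + v * t
    predicted by the candidate line."""
    detections = np.asarray(detections, dtype=float)
    times = np.asarray(times)
    support = {}
    for t in np.unique(times):
        idx = np.flatnonzero(times == t)       # detections at this time step
        tree = cKDTree(detections[idx])
        hits = tree.query_ball_point(x0 + v * float(t), r=radius)
        if hits:
            support[int(t)] = [int(idx[h]) for h in hits]
    return support
```

In practice one tree per time step (or one tree with time as a coordinate) would be built once and reused across all candidate lines.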
In doing so we can hopefully avoid many of the dead ends and wrong turns that may result from exploring bad initial associations in the first few points in our model. For example, in the domain of asteroid linkage we may be able to limit the number of short, initial associations that we have to consider by using information from later time steps. This idea forms the basis of multiple tree search algorithms [7, 8, 9]. Multiple tree methods explicitly search for the entire set of points at once by searching over combinations of tree nodes. In standard single tree algorithms, the search tries to find individual points satisfying some criteria (e.g. the next point to add) and the search state is represented by a single node that could contain such a point. In contrast, multiple tree algorithms represent the current search state with multiple tree nodes that could contain points that together conform to the model. Initially, the algorithm begins with k root nodes from either the same or different tree data structures, representing the k different points that must be found. At each step in the search, it narrows in on a set of mutually compatible spatial regions, and thus a set of individual points that fit the model, by picking one of the model nodes and recursively exploring its children. As with a standard “single tree” search, we constantly check for opportunities to prune the search. There are several important drawbacks to multiple tree algorithms. First, additional trees introduce a higher branching factor in the search and increase the potential for taking deep “wrong turns.” Second, care must be taken in order to deal with missing or a variable number of support points. Kubica et al. discuss the use of an additional “missing” tree node to handle these cases [9]. However, this approach can effectively make repeated searches over subsets of trees, making it more expensive in both theory and practice.
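The node-combination search state can be sketched as follows. This hedged Python toy (two 1-D trees, a pairwise distance constraint, and simple branching heuristics of our own choosing, not the paper's implementation) shows the essential idea: the state is a combination of tree nodes, and the search prunes whenever the nodes' bounding intervals are mutually incompatible:

```python
class Node:
    """A 1-D tree node over a sorted list of coordinates."""
    def __init__(self, points):
        self.points = sorted(points)
        self.lo, self.hi = self.points[0], self.points[-1]
        self.left = self.right = None
        if len(self.points) > 1:
            mid = len(self.points) // 2
            self.left = Node(self.points[:mid])
            self.right = Node(self.points[mid:])

def search(a, b, lo, hi, out):
    """Multiple tree search over combinations of nodes: find all pairs
    (p from tree a, q from tree b) with lo <= q - p <= hi."""
    # Mutual compatibility pruning on the nodes' bounding intervals:
    # the largest possible q - p is b.hi - a.lo, the smallest b.lo - a.hi.
    if b.hi - a.lo < lo or b.lo - a.hi > hi:
        return
    if a.left is None and b.left is None:
        for p in a.points:
            for q in b.points:
                if lo <= q - p <= hi:
                    out.append((p, q))
        return
    # Heuristic: branch on the node that owns more points.
    if b.left is None or (a.left is not None and len(a.points) >= len(b.points)):
        search(a.left, b, lo, hi, out)
        search(a.right, b, lo, hi, out)
    else:
        search(a, b.left, lo, hi, out)
        search(a, b.right, lo, hi, out)

pairs = []
search(Node([0.0, 1.0, 5.0]), Node([0.4, 2.0, 5.3]), 0.2, 0.6, pairs)
```

Note how infeasible node combinations are discarded wholesale before any individual point is examined; this is the structural saving the multiple tree approach buys, at the price of a larger branching factor.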
4 Variable Tree Algorithms

In general we would like to exploit structural information from all aspects of our search problem, but do so while branching the search on just the parameters of interest. To this end we propose a new type of search that uses a variable number of tree nodes. Like a standard multiple tree algorithm, the variable tree algorithm searches combinations of tree nodes to find valid sets of points. However, we limit this search to just those points required to define, and thus bound, the models currently under consideration. Specifically, we use M model tree nodes,¹ which guide the recursion and thus the search. In addition, throughout the search we maintain information about other potential supporting points that can be used to confirm the final track or prune the search due to a lack of support.

Figure 1: The model nodes’ bounds (1 and 2) define a region of feasible support (shaded) for any combination of model points from those nodes (A). As shown in (B), we can classify entire support tree nodes as feasible (node b) or infeasible (nodes a and c).

For example, in the asteroid linking problem each line is defined by only 2 points, thus we can efficiently search through the models using a multiple tree search with 2 model trees. As shown in Figure 1.A, the spatial bounds of our current model nodes immediately limit the set of feasible support points for all line segments compatible with these nodes. If we track which support points are feasible, we can use this information to prune the search due to a lack of support for any model defined by the points in those nodes. The key idea behind the variable tree search is that we can use a dynamic representation of the potential support. Specifically, we can place the support points in trees and maintain a dynamic list of currently valid support nodes.
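A minimal sketch of maintaining such a dynamic support-node list might look like the following (the 1-D intervals and the class and function names are our own illustrative simplification of the paper's KD-tree machinery): nodes disjoint from the feasible region are dropped, and a node is split when one of its children could then be pruned.

```python
class SNode:
    """A 1-D support tree node over a sorted list of point coordinates."""
    def __init__(self, pts):
        self.pts = sorted(pts)
        self.lo, self.hi = self.pts[0], self.pts[-1]
        self.left = self.right = None
        if len(self.pts) > 1:
            m = len(self.pts) // 2
            self.left, self.right = SNode(self.pts[:m]), SNode(self.pts[m:])

def refine_support(nodes, feasible):
    """Maintain the dynamic support-node list against a feasible interval:
    drop nodes disjoint from the feasible region, split a node whenever one
    of its children could then be pruned, and keep the rest on the list."""
    flo, fhi = feasible
    out, stack = [], list(nodes)
    while stack:
        n = stack.pop()
        if n.hi < flo or n.lo > fhi:
            continue  # infeasible: drop the node from the list entirely
        if n.left is not None and (n.left.hi < flo or n.right.lo > fhi):
            stack.append(n.left)   # refine: replace the node by its children
            stack.append(n.right)
        else:
            out.append(n)
    return out

# Points at 0.1 and 0.2 are feasible for the region (0.0, 0.3);
# the subtree covering 0.8 and 0.9 is pruned as a whole.
nodes = refine_support([SNode([0.1, 0.2, 0.8, 0.9])], (0.0, 0.3))
```

The list never branches the search: it is updated in place as the model nodes narrow the feasible region, which is the core difference from a full multiple tree recursion.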
As shown in Figure 1.B, by only testing entire nodes (instead of individual points), we are using the spatial coherence of the support points to remove the expense of testing each support point at each step in the search. And by maintaining a list of support tree nodes, we are no longer branching the search over these trees. Thus we remove the need to make a hard “left or right” decision. Further, using a combination of a list and a tree for our representation allows us to refine our support representation on the fly. If we reach a point in the search where a support node is no longer valid, we can simply drop it off the list. And if we reach a point where a support node provides too coarse a representation of the current support space, we can simply remove it and add both of its children to the list.

This leaves the question of when to split support nodes. If we split them too soon, we may end up with many support nodes in our list and mitigate the benefits of the nodes’ spatial coherence. If we wait too long to split them, then we may have a few large support nodes that cannot efficiently be pruned. Although we are still investigating splitting strategies, the experiments in this paper use a heuristic that seeks to provide a small number of support nodes that are a reasonable fit to the feasible region. We effectively split a support node if doing so would allow one of its two children to be pruned. For KD-trees this roughly means checking whether the split value lies outside the feasible region.

The full variable tree algorithm is given in Figure 2. A simple example of finding linear tracks, using the track’s endpoints (earliest and latest in time) as model points and all other points for support, is illustrated in Figure 3. The first column shows all the tree nodes that are currently part of the search. The second and third columns show the search’s position on the two model trees and the current set of valid support nodes, respectively. Unlike the pure multiple tree search, the variable tree search does not “branch off” on the support trees, allowing us to consider multiple support nodes from the same time step at any point in the search. Again, it is important to note that by testing the support points as we search, we are both incorporating support information into the pruning decisions and “pruning” the support points for entire sets of models at once.

¹Typically M = c, although in some cases it may be beneficial to use a different number of model nodes.

Variable Tree Model Detection
Input:  A set of M current model tree nodes M
        A set of current support tree nodes S
Output: A list Z of feasible sets of points
 1. S′ ← {} and Scurr ← S
 2. IF we cannot prune based on the mutual compatibility of M:
 3.   FOR each s ∈ Scurr:
 4.     IF s is compatible with M:
 5.       IF s is “too wide”:
 6.         Add s’s left and right child to the end of Scurr.
 7.       ELSE
 8.         Add s to S′.
 9.   IF we have enough valid support points:
10.     IF all of m ∈ M are leaves:
11.       Test all combinations of points owned by the model nodes, using the support nodes’ points as potential support. Add valid sets to Z.
12.     ELSE
13.       Let m∗ be the non-leaf model tree node that owns the most points.
14.       Search using m∗’s left child in place of m∗ and S′ instead of S.
15.       Search using m∗’s right child in place of m∗ and S′ instead of S.

Figure 2: A simple variable tree algorithm for spatial structure search. The algorithm shown uses simple heuristics, such as searching the model node with the most points and splitting a support node if it is too wide. These heuristics can be replaced by more accurate, problem-specific ones.

5 Results on the Asteroid Linking Domain

The goal of the single-night asteroid linkage problem is to find sets of 2-dimensional point detections that correspond to a roughly linear motion model. In the experiments below we are interested in finding sets of at least 7 detections from a sequence of 8 images.
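As a hedged illustration of the underlying validity test (the function name and default thresholds are ours, chosen to be plausible for this setup rather than copied from the paper's code), the following checks whether a candidate set of time-stamped detections fits a linear motion model, with the earliest and latest detections serving as the model points:

```python
def valid_track(dets, min_support=7, speed_lo=0.05, speed_hi=0.5, tol=3e-4):
    """Check whether time-stamped detections (t_days, x_deg, y_deg) fit a
    linear motion model. The earliest and latest detections are the model
    points; the rest are support points tested against the predicted line.
    Default thresholds are illustrative, not the paper's exact settings."""
    dets = sorted(dets)
    (t0, x0, y0), (t1, x1, y1) = dets[0], dets[-1]
    dt = t1 - t0
    if dt <= 0:
        return False
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    if not (speed_lo <= (vx * vx + vy * vy) ** 0.5 <= speed_hi):
        return False  # velocity bound on the model pair
    support = 2  # the two model points
    for t, x, y in dets[1:-1]:
        # residual from the position predicted by the model line
        ex = x - (x0 + vx * (t - t0))
        ey = y - (y0 + vy * (t - t0))
        if (ex * ex + ey * ey) ** 0.5 <= tol:
            support += 1
    return support >= min_support

# 8 detections at half-hour (1/48 day) intervals, moving 0.2 deg/day in x.
track = [(i / 48.0, 0.2 * i / 48.0, 0.0) for i in range(8)]
```

A track with one displaced middle detection still passes (7 of 8 support), while two displaced detections push support below the threshold.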
The movements were constrained to have a speed between 0.05 and 0.5 degrees per day and were allowed an observational error threshold of 0.0003 degrees. All experiments were run on a dual 2.5 GHz Apple G5 with 4 GB of RAM. The asteroid detection data consists of detections from 8 images of the night sky separated by half-hour intervals. The images were obtained with the MegaCam instrument on the 3.6-meter Canada-France-Hawaii Telescope. The detections, along with confidence levels, were automatically extracted from the images. We can pre-filter the data to pull out only those observations above a given confidence threshold σ. This allows us to examine how the algorithms perform as we begin to look for increasingly faint asteroids. It should be noted that only limited preprocessing was done to the data, resulting in a very high level of false detections. While future data sets will contain significantly reduced noise, it is interesting to examine the performance of the algorithms on this real-world high noise, high density data. The results on the intra-night asteroid tracking domain, shown in Table 1, illustrate a clear advantage to using a variable tree approach.

Figure 3: The variable tree algorithm performs a depth-first search over the model nodes (search steps 1, 2, and 5 are shown). At each level of the search the model nodes are checked for mutual compatibility and each support node on the list is checked for compatibility with the set of model nodes. Since we are not branching on the support nodes, we can split a support node and add both children to our list. This figure shows the current model and support nodes and their spatial regions.

Table 1: Running times (in seconds) for the asteroid linkers with different detection thresholds σ and thus different numbers N and densities of observations.

σ              10.0   8.0    6.0    5.0    4.0
N              3531   5818   12911  24068  48646
Single Tree    2      7      61     488    2442
Multiple Tree  1      3      30     607    4306
Variable Tree  < 1    1      4      40     205
As the significance threshold σ decreases, the number and density of detections increases, allowing the support tree nodes to capture feasibility information for a large number of support points. In contrast, neither the full multiple tree algorithm nor the single tree algorithm performed well. For the multiple tree algorithm, this decrease in performance is likely due to a combination of the high number of time steps, the allowance of a missing observation, and the high density. In particular, the increased density can reduce opportunities for pruning, causing the algorithm to explore deeper before backtracking.

Table 2: Average running times (in seconds) for a 2-dimensional rectangle search with different numbers of points N. The brute force algorithm was only run to N = 2500.

N              500   1000  2000   2500   5000  10000  25000  50000
Brute Force    0.37  2.73  21.12  41.03  n/a   n/a    n/a    n/a
Single Tree    0.02  0.07  0.30   0.51   2.15  10.05  66.24  293.10
Multi-Tree     0.01  0.02  0.06   0.09   0.30  1.11   6.61   27.79
Variable-Tree  0.01  0.02  0.05   0.07   0.22  0.80   4.27   16.30

Table 3: Average running times (in seconds) for a rectangle search with different numbers of required corners k. For this experiment N = 10000 and D = 3.

k              8     7      6      5      4
Single Tree    4.71  4.72   4.71   4.71   4.71
Multi-Tree     3.96  19.45  45.02  67.50  78.81
Variable-Tree  0.65  0.75   0.85   0.92   1.02

6 Experiments on the Simulated Rectangle Domain

We can apply the above techniques to a range of other model-based spatial search problems. In this section we consider a toy template matching problem: finding axis-aligned hyperrectangles in D-dimensional space by finding k or more corners that fit a rectangle. We use this simple, albeit artificial, problem both to demonstrate potential pattern recognition applications and to analyze the algorithms as we vary the properties of the data. Formally, we restrict the model to use the upper and lower corners as the two model points.
Potential support points are those points that fall within some threshold of the other 2^D − 2 corners. In addition, we restrict the allowable bounds of the rectangles by providing a maximum width. To evaluate the algorithms’ relative performance, we used random data generated from a uniform distribution on a unit hyper-cube. The threshold and maximum width were fixed for all experiments at 0.0001 and 0.2 respectively. All experiments were run on a dual 2.5 GHz Apple G5 with 4 GB of RAM. The first factor that we examined was how each algorithm scales with the number of points. We generated random data with 5 known rectangles and N additional random points and computed the average wall-clock running time (over ten trials) for each algorithm. The results, shown in Table 2, show a graceful scaling of all of the multiple tree algorithms. In contrast, the brute force and single tree algorithms run into trouble as the number of points becomes moderately large. The variable tree algorithm consistently performs the best, as it is able to avoid significant amounts of redundant computation. One potential drawback of the full multiple tree algorithm is that since it branches on all points, it may become inefficient as the allowable number of missing support points grows. To test this we looked at 3-dimensional data and varied the minimum number of required support points k. As shown in Table 3, all multiple tree methods become more expensive as the number of required support points decreases. This is especially the case for the multi-tree algorithm, which has to perform several almost identical searches to account for missing points. However, the variable-tree algorithm’s performance degrades gracefully and is the best for all trials.

7 Conclusions

Tree-based spatial algorithms provide the potential for significant computational savings, with multiple tree algorithms providing further opportunities to exploit structure in the data.
However, a distinct trade-off exists between ignoring structure from all aspects of the problem and increasing the combinatorics of the search. We presented a variable tree approach that exploits the advantages of both single tree and multiple tree algorithms. A combinatorial search is carried out over just the minimum number of model points, while still tracking the feasibility of the various support points. As shown in the above experiments, this approach provides significant computational savings over both the traditional single tree and multiple tree searches. Finally, it is interesting to note that the dynamic support technique described in this paper is general and may be applied to a range of other algorithms, such as the Fast Hough Transform [10], that maintain information on which points support a given model.

Acknowledgments

Jeremy Kubica is supported by a grant from the Fannie and John Hertz Foundation. Andrew Moore and Andrew Connolly are supported by a National Science Foundation ITR grant (CCF-0121671).

References

[1] A.J. Storkey, N.C. Hambly, C.K.I. Williams, and R.G. Mann. Renewal Strings for Cleaning Astronomical Databases. In UAI 19, 559–566, 2003.
[2] M.A. Fischler and R.C. Bolles. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Comm. of the ACM, 24:381–395, 1981.
[3] S. Blackman and R. Popoli. Design and Analysis of Modern Tracking Systems. Artech House, 1999.
[4] J.L. Bentley. Multidimensional Binary Search Trees Used for Associative Searching. Comm. of the ACM, 18(9), 1975.
[5] J.K. Uhlmann. Algorithms for multiple-target tracking. American Scientist, 80(2):128–141, 1992.
[6] P.V.C. Hough. Machine analysis of bubble chamber pictures. In International Conference on High Energy Accelerators and Instrumentation. CERN, 1959.
[7] A. Gray and A. Moore. N-body problems in statistical learning. In T. K. Leen and T. G.
Dietterich, editors, Advances in Neural Information Processing Systems. MIT Press, 2001.
[8] G.R. Hjaltason and H. Samet. Incremental distance join algorithms for spatial databases. In Proc. of the 1998 ACM-SIGMOD Conference, 237–248, 1998.
[9] J. Kubica, A. Moore, A. Connolly, and R. Jedicke. A Multiple Tree Algorithm for the Efficient Association of Asteroid Observations. In KDD ’05, August 2005.
[10] H. Li, M.A. Lavin, and R.J. Le Master. Fast Hough Transform: A Hierarchical Approach. Computer Vision, Graphics, and Image Processing, 36(2-3):139–161, November 1986.
Neural mechanisms of contrast dependent receptive field size in V1

Jim Wielaard and Paul Sajda
Department of Biomedical Engineering, Columbia University, New York, NY 10027
(djw21, ps629)@columbia.edu

Abstract

Based on a large scale spiking neuron model of the input layers 4Cα and β of macaque, we identify neural mechanisms for the observed contrast dependent receptive field size of V1 cells. We observe a rich variety of mechanisms for the phenomenon and analyze them based on the relative gain of excitatory and inhibitory synaptic inputs. We observe an average growth in the spatial extent of excitation and inhibition for low contrast, as predicted from phenomenological models. However, contrary to phenomenological models, our simulation results suggest this is neither sufficient nor necessary to explain the phenomenon.

1 Introduction

Neurons in the primary visual cortex (V1) display what is often referred to as “size tuning”, i.e. the response of a cell is maximal around a cell-specific stimulus size and generally decreases substantially (30-40% on average) or vanishes altogether for larger stimulus sizes^{1-9}. The cell-specific stimulus size eliciting a maximum response, also known as the “receptive field size” of the cell^{4}, has a remarkable property in that it is not contrast invariant, unlike for instance orientation tuning in V1. Quite the contrary, the contrast-dependent change in receptive field size of V1 cells is profound. Typical is a doubling in receptive field size for stimulus contrasts decreasing by a factor of 2-3 on the linear part of the contrast response function^{4}. This behavior is seen throughout V1, including all cell types in all layers and at all eccentricities. A functional interpretation of the phenomenon is that neurons in V1 sacrifice spatial resolution in return for a gain in contrast sensitivity at low contrasts^{4}. However, its neural mechanisms are at present very poorly understood.
Understanding these mechanisms is potentially important for developing a theoretical model of early signal integration and neural encoding of visual features in V1. We have recently developed a large-scale spiking neuron model that accounts for the phenomenon and suggests neural mechanisms from which it may originate. This paper provides a technical description of these mechanisms.

2 The model

Our model consists of 8 ocular dominance columns and 64 orientation hypercolumns (i.e. pinwheels), representing a 16 mm² area of a macaque V1 input layer 4Cα or 4Cβ. The model consists of approximately 65,000 cortical cells in each of the four configurations (see below), and the corresponding appropriate number of LGN cells. Our cortical cells are modeled as conductance based integrate-and-fire point neurons; 75% are excitatory cells and 25% are inhibitory cells. Our LGN cells are rectified spatio-temporal linear filters. The model is constructed with isotropic short-range cortical connections (< 500 µm), realistic LGN receptive field sizes and densities, realistic sizes of LGN axons in V1, and cortical magnification factors and receptive field scatter that are in agreement with experimental observations.

Dynamic variables of a cortical model-cell i are its membrane potential v_i(t) and its spike train S_i(t) = \sum_k \delta(t - t_{i,k}), where t is time and t_{i,k} is its kth spike time. Membrane potential and spike train of each cell obey a set of N equations of the form

C_i \frac{dv_i}{dt} = -g_{L,i}(v_i - v_L) - g_{E,i}(t, [S]_E, \eta_E)(v_i - v_E) - g_{I,i}(t, [S]_I, \eta_I)(v_i - v_I),   i = 1, ..., N.   (1)

These equations are integrated numerically using a second order Runge-Kutta method with time step 0.1 ms. Whenever the membrane potential reaches a fixed threshold level v_T it is reset to a fixed reset level v_R and a spike is registered. The equation can be rescaled so that v_i(t) is dimensionless and C_i = 1, v_L = 0, v_E = 14/3, v_I = -2/3, v_T = 1, v_R = 0, and conductances (and currents) have dimension of inverse time.
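A toy integration of the rescaled equation for a single cell can be sketched as follows. Note the paper integrates with second-order Runge-Kutta at a 0.1 ms time step; this illustrative Python fragment (names and parameter values are ours) uses forward Euler and constant conductances, purely to show the threshold-and-reset mechanics:

```python
def simulate_if(g_E, g_I, g_L=50.0, dt=1e-4, T=0.5,
                v_L=0.0, v_E=14.0 / 3.0, v_I=-2.0 / 3.0, v_T=1.0, v_R=0.0):
    """Integrate the rescaled conductance-based integrate-and-fire equation
        dv/dt = -g_L (v - v_L) - g_E (v - v_E) - g_I (v - v_I)
    with threshold v_T and reset v_R, returning the spike times (s).
    Conductances are in units of inverse time (s^-1), as in the text."""
    v, spikes = v_R, []
    for n in range(int(T / dt)):
        dv = -g_L * (v - v_L) - g_E * (v - v_E) - g_I * (v - v_I)
        v += dt * dv  # forward Euler step (the paper uses 2nd-order RK)
        if v >= v_T:
            spikes.append(n * dt)
            v = v_R
    return spikes
```

With strong excitation the effective equilibrium potential sits above threshold and the cell fires regularly; with strong inhibition it sits below the reset level and the cell is silent.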
The quantities g_{E,i}(t, [S]_E, \eta_E) and g_{I,i}(t, [S]_I, \eta_I) are the excitatory and inhibitory conductances of neuron i. They are defined by interactions with the other cells in the network, external noise \eta_{E(I)}, and, in the case of g_{E,i}, possibly by LGN input. The notation [S]_{E(I)} stands for the spike trains of all excitatory (inhibitory) cells connected to cell i. Both the excitatory and inhibitory populations consist of two subpopulations P_k(E) and P_k(I), k = 0, 1: a population that receives LGN input (k = 1) and one that does not (k = 0). In the model presented here 30% of both the excitatory and inhibitory cell populations receive LGN input. We assume noise, cortical interactions and LGN input act additively in contributing to the total conductance of a cell,

g_{E,i}(t, [S]_E, \eta_E) = \eta_{E,i}(t) + g^{cor}_{E,i}(t, [S]_E) + \delta_i g^{LGN}_i(t)
g_{I,i}(t, [S]_I, \eta_I) = \eta_{I,i}(t) + g^{cor}_{I,i}(t, [S]_I),   (2)

where \delta_i = \ell for i \in \{P_\ell(E), P_\ell(I)\}, \ell = 0, 1. The terms g^{cor}_{\mu,i}(t, [S]_\mu) are the contributions from the cortical excitatory (\mu = E) and inhibitory (\mu = I) neurons and include only isotropic connections,

g^{cor}_{\mu,i}(t, [S]_\mu) = \int_{-\infty}^{+\infty} ds \sum_{k=0}^{1} \sum_{j \in P_k(\mu)} C^{k',k}_{\mu',\mu}(\|\vec{x}_i - \vec{x}_j\|) G_{\mu,j}(t - s) S_j(s),   where i \in P_{k'}(\mu').   (3)

Here \vec{x}_i is the spatial position (in cortex) of neuron i, the functions G_{\mu,j}(\tau) describe the synaptic dynamics of cortical synapses, and the functions C^{k',k}_{\mu',\mu}(r) describe the cortical spatial couplings (cortical connections). The length scale of excitatory and inhibitory connections is about 200 µm and 100 µm respectively. In agreement with experimental findings (see references in ^{10}), the LGN neurons are modeled as rectified center-surround linear spatiotemporal filters. The LGN temporal kernels are modeled in agreement with ^{11}, and the LGN spatial kernels are of center-surround type. An important class of parameters are those that define and relate the model’s geometry in visual space and cortical space.
Geometric properties are different for the two input layers 4Cα, β and depend also on the eccentricity. As said, contrast dependent receptive field size is observed to be insensitive to those differences^{4-6,8}. In order to verify that our explanations are consistent with this observation, we have performed numerical simulations for four different sets of parameters, corresponding to the 4Cα, β layers at para-foveal eccentricities (< 5°) and at eccentricities around 10°. These different model configurations are referred to as M0, M10, and P0, P10. Reported results are qualitatively similar for all four configurations unless otherwise noted. The above is only a very brief description of the model; the details can be found in ^{12}.

3 Visual stimuli and data collection

The stimulus used to analyze the phenomenon is a drifting grating confined to a circular aperture, surrounded by a blank (mean luminance) background. The luminance of the stimulus is given by I(\vec{y}, t) = I_0 (1 + \epsilon \cos(\omega t - \vec{k} \cdot \vec{y} + \phi)) for \|\vec{y}\| \le r_A and I(\vec{y}, t) = I_0 for \|\vec{y}\| > r_A, with average luminance I_0, contrast \epsilon, temporal frequency \omega, spatial wave vector \vec{k}, phase \phi, and aperture radius r_A. The aperture is centered on the receptive field of the cell and varied in size, while the other parameters are kept fixed and set to preferred values. All stimuli are presented monocularly. Samples consisting of approximately 200 cells were collected for each configuration, containing about an equal number of simple and complex cells. The experiments were performed at “high” contrast, \epsilon = 1, and “low” contrast, \epsilon = 0.3.
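The stimulus definition translates directly into code. In this hedged sketch the spatial and temporal frequencies are placeholder values of ours, not the preferred values used in the experiments:

```python
import math

def grating_luminance(y, t, I0=1.0, eps=1.0, omega=2.0 * math.pi * 4.0,
                      k=(2.0 * math.pi * 2.0, 0.0), phi=0.0, r_A=1.0):
    """Drifting grating confined to a circular aperture:
    I(y, t) = I0 (1 + eps cos(omega t - k.y + phi)) for |y| <= r_A,
    and the mean luminance I0 outside the aperture."""
    if math.hypot(y[0], y[1]) > r_A:
        return I0  # blank (mean luminance) background
    phase = omega * t - (k[0] * y[0] + k[1] * y[1]) + phi
    return I0 * (1.0 + eps * math.cos(phase))
```

Varying only `r_A` while holding the other parameters at preferred values reproduces the aperture-size sweep used throughout the experiments.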
4 Approximate model equations

We find that, to good approximation, the membrane potential and instantaneous firing rate of our model cells are respectively^{12,13}

\langle v_k(t, r_A) \rangle \approx V_k(r_A, t) \equiv \frac{\langle I_{D,k}(t, r_A) \rangle}{\langle g_{T,k}(t, r_A) \rangle},   (4)

\langle S_k(t, r_A) \rangle \approx f_k(t, r_A) \equiv \delta_k [\, \langle I_{D,k}(t, r_A) \rangle - \langle g_{T,k}(t, r_A) \rangle - \Delta_k \,]_+,   (5)

where [x]_+ = x if x \ge 0 and [x]_+ = 0 if x \le 0, and where the gain \delta_k and threshold \Delta_k do not depend on the aperture radius r_A for most cells. The total conductance g_{T,k}(t, r_A) and difference current I_{D,k}(t, r_A) are given by

g_{T,k}(t, r_A) = g_L + g_{E,k}(t, [S]_E, r_A) + g_{I,k}(t, [S]_I, r_A)   (6)

I_{D,k}(r_A, t) = g_{E,k}(t, [S]_E, r_A) V_E - g_{I,k}(t, [S]_I, r_A) |V_I|.   (7)

5 Mechanisms of contrast dependent receptive field size

From Eq. (4) and (5) it follows that a change in receptive field size in general results from a change in behavior of the relative gain,

G(r_A) = \frac{\partial g_E / \partial r_A}{\partial g_I / \partial r_A}.   (8)

Figure 1: Two example cells, an M0 simple cell which receives LGN input (top) and an M10 complex cell which does not (bottom). (column 1) Responses as a function of aperture size. Mean responses are plotted for the complex cell, first harmonic for the simple cell. Apertures of maximum responses (i.e. receptive field sizes) are indicated with asterisks (dark = high contrast, light = low contrast). (column 2) Conductances for high contrast at apertures near the maximum responses. Conductances are displayed as nine (top) and eleven (bottom) sub-panels giving the cycle-trial averaged conductances as a function of time (relative to cycle) and aperture size. (column 3) Conductances for low contrast at apertures near the maximum responses.
Asterisks in the conductance figures (columns 2 and 3) indicate corresponding apertures of maximum response (column 1). Note that this is a rather different parameter than the “surround gain” parameter (k_s) used in the ratio-of-Gaussians (ROG) model^{8}; e.g., unlike for k_s, there is no one-to-one relationship between G(r_A) and the degree of surround suppression. Qualitatively, the conductances show a similar dependence on aperture size as the membrane potential responses and spike responses, i.e. they display surround suppression as well^{12}. Receptive field sizes based on these conductances are a measure of the spatial summation extent of excitation and inhibition. An obvious way to change the behavior of G, and consequently the receptive field size, is to change the spatial summation extent of g_E and/or g_I. However, this is not strictly necessary. For example, other possibilities are illustrated by the two cells in Fig. 1. These cells show, both in spike and membrane potential responses, a receptive field growth of a factor of 2 (top) and 3 (bottom) at low contrast. However, for both cells the spatial summation extent of excitation at low contrast is one aperture less than at high contrast. In a similar way as for spike train responses, we also obtained receptive field sizes for the conductances. As do spike responses (Fig. 2A), both excitation and inhibition (Fig. 2B&C) also show, on average, an increase in their spatial summation extent as contrast is decreased, but the increase is in general smaller than what is seen for spike responses, particularly for cells that show significant receptive field growth. For instance, we see from Figure 2B and C that for cells in the sample with receptive field growths ~2 or greater, the growth for the conductances is always considerably less than the growth based on spike responses. Expressed more rigorously, a Wilcoxon test on the ratio of growth ratios being larger than unity gives p < 0.05 (all cells, excitation, Fig.
2B), p < 0.15 (all cells, inhibition, Fig. 2C), and p < 0.001 (cells with receptive field growth ratio r−/r+ > 1.5, both excitation and inhibition). Although some increase in the spatial summation extent of excitation and inhibition is in general the rule, this increase is rather arbitrary and bears little relation to the receptive field growth based on spike responses. The same conclusions follow from membrane potential responses (not shown).

Figure 2: (A) Joint distribution of high and low contrast receptive field sizes, r+ and r−, based on spike responses. All scales are logarithmic, base 10. All distributions are normalized to a peak value of one. Receptive field growth at low contrast is clear. The average growth ratio is 1.9 and is significantly greater than unity (Wilcoxon test, p < 0.001). (B & C) Joint distributions of receptive field growth and growth of spatial summation extent of excitation (B) and inhibition (C) (computed as ratios). There is no simple relation between receptive field growth and the growth of the spatial summation extent of excitatory or inhibitory inputs. For cells in the sample with larger receptive field growths (factor of ~2 or greater) this growth is always considerably larger than the growths of their excitatory and inhibitory inputs.

Fig. 2 thus demonstrates that, contrary to what is predicted by the difference-of-Gaussians (DOG)^{4} and ROG^{8} models (see Discussion), a growth of spatial summation extent of excitation (and/or inhibition) at low contrast is neither sufficient nor necessary to explain the receptive field growth seen in spike responses. Membrane potential responses give the same conclusion. The fact that a change in receptive field size can take place without a change in the spatial summation extent of g_E or g_I can be illustrated by a simple example.
Consider a situation where both g_E and g_I have their maximum at the same aperture size r_E = r_I = r* and are monotonically increasing for r_A < r* and monotonically decreasing for r_A > r*, as depicted in Fig. 3. We can distinguish three classes with respect to the relative location of the maxima in spike responses r_S and the conductances r*, namely {X: r_S < r*}, {Y: r_S = r*} and {Z: r_S > r*}.

Figure 3: Schematic illustration of mechanisms for receptive field growth under equal and constant spatial summation extent of the conductances (r_E = r_I = r*).

Figure 4: (A) Distributions of the relative positions of the maxima (receptive field sizes) of spike responses r_S and conductances r_E and r_I, for the M0 configuration. A division is made with respect to the maxima in the conductances; this corresponds to the left (r_E = r_I), central (r_E > r_I), and right (r_E < r_I) part of the figure. Each panel is further subdivided with respect to the maximum in the spike response r_S. Upper histograms are for all cells in the sample, lower histograms are for cells that have receptive field growth r−/r+ > 1.5. Unfilled histograms are for high contrast, shaded histograms are for low contrast. (B) Prevalence of transitions between positions of maxima in spike responses and excitatory conductances (left) and in spike responses and inhibitory conductances (right) for a high → low contrast change. See text for definitions of the X, Y, Z classes. Data are evaluated for all cells (unfilled histograms) and for cells with a receptive field growth r−/r+ > 1.5 (shaded histograms).

It follows from (5) that if we define the relative gain parameter G_0(v) = (|v_I| + v)/(v_E − v), then we can characterize the difference between classes X and Z by the way that G crosses G_0(1) around r_S, as depicted in Fig. 3. For class Y the parameter G is not of any particular interest, as it can assume arbitrary behavior around r_S. It follows from (4) that similar observations hold for the maximum in the membrane potential r_v; we need simply replace G_0(1) with G_0(v(r_v)). A growth of receptive field size can occur without any change in the spatial summation extent (r*) of the conductances. Suppose we wish to remain within the same class X or Z; then receptive field growth can be induced, for instance, by an overall increase (X) or an overall decrease (Z) in relative gain G(r_A), as shown in Fig. 3 (dashed line). Receptive field growth can also be caused by more drastic changes in G, so that the transitions X → Y, X → Z or Y → Z occur for a high → low contrast change. The situation is somewhat more involved when we allow for non-suppressed responses and conductances, and for different positions of the maxima of g_E and g_I; however, the essence of our conclusions remains the same.

Analysis of our data in the light of the above example is given in Fig. 4. Cells were classified (Fig. 4A) according to the relative positions of their maxima in spike response (r_S) and excitatory (r_E) and inhibitory (r_I) conductances, using F0+F1 (i.e. mean response + first Fourier component of the response). Membrane potential responses yield similar results. Comparing this classification at high and low contrast, we observe a striking difference for cells with significant receptive field growths, i.e. with growth ratios > 1.5 (Fig. 4A, bottom), indicative of X → Y, X → Z and Y → Z transitions (as discussed in the simplified example above). In this realistic situation there are of course many more transitions (i.e.
132); however, that we indeed observe a prevalence of these transitions can be demonstrated in two ways, using slightly modified definitions of the X, Y, Z classes. First (Fig. 4B, left), if we redefine the X, Y, Z classes with respect to rS and rE while ignoring rI, i.e. {X: rS < rE}, {Y: rS = rE} and {Z: rS > rE}, then the transition distribution for cells with significant receptive field growth shows that in about 60% of these cells an X → Z or Y → Z transition occurs. Taken together with the fact that roughly 10% of the cells with significant receptive field growth (Fig. 4A, bottom) have rI ≤ rS < rE at high contrast and rE < rS ≤ rI at low contrast, we can conclude that for more than 50% of the cells with significant receptive field growth, a transition takes place from a high-contrast receptive field size less than or equal to the spatial summation extent of excitation and inhibition, to a low-contrast receptive field size which exceeds both (by at least one aperture). Note that these transitions occur in addition to any growth of rE or rI. Secondly (Fig. 4B, right), the same conclusion is reached when we redefine the X, Y, Z classes with respect to rS and rI while ignoring rE ({X: rS < rI}, {Y: rS = rI} and {Z: rS > rI}). Now an X → Z or Y → Z transition occurs in about 70% of the cells with significant receptive field growth, while about 20% of the cells with significant receptive field growth (Fig. 4A, bottom) have rE ≤ rS < rI at high contrast and rI < rS ≤ rE at low contrast. Finally, Fig. 4B also demonstrates the presence of a rich diversity of relative gain changes in our model, since all transitions (for all cells, unfilled histograms) occur with some reasonable probability.

6 Discussion

The DOG model suggests that growth in receptive field size at low contrast is due to an increase of the spatial summation extent of excitation [4] (i.e. an increase in the spatial extent parameter σE). This was partially confirmed experimentally in cat primary visual cortex [7].
Although it has been claimed [8] that the ROG model could explain receptive field growth solely from a change in the relative gain parameter ks, we believe this is incorrect. Since there is a one-to-one relationship between ks and surround suppression, this would imply that contrast-dependent receptive field size simply results from contrast-dependent surround suppression, which contradicts experimental data [4, 8]. Like the DOG model, the ROG model, based on an analysis of our data, also predicts that contrast-dependent receptive field size is due to contrast dependence of the spatial summation extent of excitation. As we have shown, our simulations confirm an average growth of the spatial summation extent of excitation (and inhibition) at low contrast. However, this growth is neither sufficient nor necessary to explain receptive field growth. For cells with significant receptive field growth (r−/r+ > 1.5), we were able to identify an additional property of the neural mechanisms. For more than 50% of such cells, a transition takes place from a high-contrast receptive field size less than or equal to the spatial summation extent of excitation and inhibition, to a low-contrast receptive field size which exceeds both. An important characteristic of our model is that it is not specifically designed to produce the phenomenon. Rather, the model parameters are set such that it produces realistic orientation tuning and a realistic distribution of response modulations in response to drifting gratings (simple & complex cells). Constructed in this way, our model then naturally produces a wide variety of realistic response properties, classical as well as extraclassical, including the phenomenon discussed here. A prominent feature of the mechanisms we suggest is that, contrary to common belief, they require neither the long-range lateral connections in V1 [14-18] nor extrastriate feedback [6, 8, 19, 20]. The average receptive field growth we see in our model is about a factor of two (r−/r+ ∼ 2).
This is a little less than what is observed in experiments [5, 8], which leaves room for contributions from the LGN input. It seems reasonable to assume that contrast-dependent receptive field size is not limited to V1 and is also a property of LGN cells. Somewhat surprisingly, this has to our knowledge not yet been verified for macaque. Contrast-dependent receptive field size of LGN cells has been observed in marmoset, and an average growth ratio at low contrast of 1.3 was reported [21]. Receptive field growth of LGN cells in some sense introduces an overall geometric scaling factor on the entire visual input to V1. This observation of course ignores a great many details, for instance the fact that the density of LGN cells (LGN receptive fields) is not known to change with contrast. On the other hand, it seems unlikely that a reasonable receptive field expansion of LGN cells would not be at least partially transferred to V1. Thus it seems reasonable to conclude from our work that the phenomenon in V1, in particular that seen in layer 4, may be attributed largely to isotropic short-range (< 0.5 mm) cortical connections and LGN input.

Acknowledgments

This work was supported by grants from ONR (MURI program, N00014-01-1-0625) and NGA (HM1582-05-C-0008).

References

[1] Dow, B, Snyder, A, Vautin, R, & Bauer, R. (1981) Exp Brain Res 44, 213–228.
[2] Schiller, P, Finlay, B, & Volman, S. (1976) J Neurophysiol 39, 1288–1319.
[3] Sillito, A, Grieve, K, Jones, H, Cudeiro, J, & Davis, J. (1995) Nature 378, 492–496.
[4] Sceniak, M, Ringach, D, Hawken, M, & Shapley, R. (1999) Nat Neurosci 2, 733–739.
[5] Kapadia, M, Westheimer, G, & Gilbert, C. (1999) Proc Nat Acad Sci USA 96, 12073–12078.
[6] Sceniak, M, Hawken, M, & Shapley, R. (2001) J Neurophysiol 85, 1873–1887.
[7] Anderson, J, Lampl, I, Gillespie, D, & Ferster, D. (2001) J Neurosci 21, 2104–2112.
[8] Cavanaugh, J, Bair, W, & Movshon, J. (2002) J Neurophysiol 88, 2530–2546.
[9] Ozeki, H, Sadakane, O, Akasaki, T, Naito, T, Shimegi, S, & Sato, H. (2004) J Neurosci 24, 1428–1438.
[10] McLaughlin, D, Shapley, R, Shelley, M, & Wielaard, J. (2000) Proc Nat Acad Sci USA 97, 8087–8092.
[11] Benardete, E & Kaplan, E. (1999) Vis Neurosci 16, 355–368.
[12] Wielaard, J & Sajda, P. (2005) Cerebral Cortex, in press.
[13] Wielaard, J, Shelley, M, McLaughlin, D, & Shapley, R. (2001) J Neurosci 21(14), 5203–5211.
[14] DeAngelis, G, Freeman, R, & Ohzawa, I. (1994) J Neurophysiol 71, 347–374.
[15] Somers, D, Todorov, E, Siapas, A, Toth, L, Kim, D, & Sur, M. (1998) Cereb Cortex 8, 204–217.
[16] Dragoi, V & Sur, M. (2000) J Neurophysiol 83, 1019–1030.
[17] Hupé, J, James, A, Girard, P, & Bullier, J. (2001) J Neurophysiol 85, 146–163.
[18] Stettler, D, Das, A, Bennett, J, & Gilbert, C. (2002) Neuron 36, 739–750.
[19] Angelucci, A, Levitt, J, Walton, E, Hupé, J, Bullier, J, & Lund, J. (2002) J Neurosci 22, 8633–8646.
[20] Bair, W, Cavanaugh, J, & Movshon, J. (2003) J Neurosci 23(20), 7690–7701.
[21] Solomon, S, White, A, & Martin, P. (2002) J Neurosci 22(1), 338–349.
|
2005
|
187
|
2,811
|
Benchmarking Non-Parametric Statistical Tests

Mikaela Keller∗, IDIAP Research Institute, 1920 Martigny, Switzerland, mkeller@idiap.ch
Samy Bengio, IDIAP Research Institute, 1920 Martigny, Switzerland, bengio@idiap.ch
Siew Yeung Wong, IDIAP Research Institute, 1920 Martigny, Switzerland, sywong@idiap.ch

Abstract

Although non-parametric tests have already been proposed for that purpose, statistical significance tests for non-standard measures (different from the classification error) are less often used in the literature. This paper is an attempt at empirically verifying how these tests compare with more classical tests under various conditions. More precisely, using a very large dataset to estimate the whole "population", we analyzed the behavior of several statistical tests, varying the class unbalance, the compared models, the performance measure, and the sample size. The main result is that, provided the evaluation sets are big enough, non-parametric tests are relatively reliable in all conditions.

1 Introduction

Statistical tests are often used in machine learning in order to assess the performance of a new learning algorithm or model over a set of benchmark datasets, with respect to state-of-the-art solutions. Several researchers (see for instance [4] and [9]) have proposed statistical tests suited to 2-class classification tasks where the performance is measured in terms of the classification error (the ratio of the number of errors to the number of examples), which enables the use of assumptions based on the fact that the error can be seen as a sum of random variables over the evaluation examples. On the other hand, various research domains prefer to measure the performance of their models using different indicators, such as the F1 measure, used in information retrieval [11] and described in Section 2.1.
Most classical statistical tests cannot cope directly with such measures, as the usual necessary assumptions are no longer correct, and non-parametric bootstrap-based methods are then used [5]. Since several papers already use these non-parametric tests [2, 1], we were interested in verifying empirically how reliable they are. For this purpose, we used a very large text categorization database (the extended Reuters dataset [10]), composed of more than 800,000 examples and concerning more than 100 categories (each document was labelled with one or more of these categories). We purposely set aside the largest part of the dataset and considered it as the whole population, while a much smaller part of it was used as a training set for the models. Using the large set-aside part of the dataset, we tested the statistical tests in the same spirit as was done in [4], by sampling evaluation sets over which we observed the performance of the models and the behavior of the significance test. Following the taxonomy of questions of interest defined by Dietterich in [4], we can differentiate between statistical tests that analyze learning algorithms and statistical tests that analyze classifiers. In the first case, one intends to be robust to possible variations of the training and evaluation sets, while in the latter, one intends to be robust only to variations of the evaluation set. While the methods discussed in this paper can be applied to both approaches, we concentrate here on the second one, as it is more tractable (for the empirical section) while still corresponding to real-life situations where the training set is fixed and one wants to compare two solutions (such as during a competition).

∗This work was supported in part by the Swiss NSF through the NCCR on IM2 and in part by the European PASCAL Network of Excellence, IST-2002-506778, through the Swiss OFES.
In order to conduct a thorough analysis, we tried to vary the evaluation set size, the class unbalance, the error measure, the statistical test itself (with its associated assumptions), and even the closeness of the compared learning algorithms. This paper, and more precisely Section 3, is a detailed account of this analysis. As will be seen empirically, the closeness of the compared learning algorithms seems to have an effect on the resulting quality of the statistical tests: comparing an MLP and an SVM yields less reliable statistical tests than comparing two SVMs with different kernels. To the best of our knowledge, this has never been considered in the literature on statistical tests for machine learning.

2 A Statistical Significance Test for the Difference of F1

Let us first recall the basic classification framework in which statistical significance tests are used in machine learning. We consider comparing two models A and B on a two-class classification task where the goal is to classify input examples xi into the corresponding class yi ∈ {−1, 1}, using already trained models fA(xi) or fB(xi). One can estimate their respective performance on some test data by counting the number of occurrences of each possible outcome: either the obtained class corresponds to the desired class, or not. Let Ne,A (resp. Ne,B) be the number of errors of model A (resp. B) and N the total number of test examples. The difference between models A and B can then be written as

    D = (Ne,A − Ne,B) / N.  (1)

The usual starting point of most statistical tests is to define the so-called null hypothesis H0, which considers that the two models are equivalent, and then to verify how probable this hypothesis is. Hence, assuming that the observed D is an instance of some random variable 𝒟 which follows some distribution, we are interested in whether

    p(|𝒟| ≥ |D|) < α,  (2)

where α represents the risk of selecting the alternate hypothesis (the two models are different) while the null hypothesis is in fact true.
This can in general be estimated easily when the distribution of 𝒟 is known. In the simplest case, known as the proportion test, one assumes (reasonably) that the decision taken by each model on each example can be modeled by a Bernoulli, and further assumes that the errors of the two models are independent. The latter assumption is in general wrong in machine learning, since the evaluation sets are the same for both models. When N is large, this leads to estimating 𝒟 as a Normal distribution with zero mean and standard deviation

    σD = √( 2 C̄ (1 − C̄) / N ),  (3)

where C̄ = (Ne,A + Ne,B) / (2N) is the average classification error. In order to get rid of the wrong independence assumption between the errors of the models, the McNemar test [6] concentrates on examples which were classified differently by the two compared models. Following the notation of [4], let N01 be the number of examples misclassified by model A but not by model B, and N10 the number of examples misclassified by model B but not by model A. It can be shown that the following statistic is approximately distributed as a χ2 with 1 degree of freedom:

    z = (|N01 − N10| − 1)² / (N01 + N10).  (4)

More recently, several other statistical tests have been proposed, such as the 5x2cv method [4] or the variance estimate proposed in [9], which both claim to better estimate the distribution of the errors (and hence the confidence in the statistical significance of the results). Note however that these solutions assume that the error of one model is the average of some random variable (the error) estimated on each example. Intuitively, it will thus tend to be Normally distributed as N grows, following the central limit theorem.

2.1 The F1 Measure

Text categorization is the task of assigning one or several categories, among a predefined set of K categories, to textual documents. As explained in [11], text categorization is usually solved as K 2-class classification problems, in a one-against-the-others approach.
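A minimal stdlib-only sketch (not the authors' code) of the two classical tests above, the proportion test of eq. (3) and the McNemar test of eq. (4); the error counts in the example are made up for illustration.

```python
import math

def normal_sf(x):
    """Upper tail P(Z >= x) of a standard normal, via erfc."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def proportion_test(ne_a, ne_b, n, alpha=0.05):
    """Two-sided proportion test on D = (Ne_A - Ne_B)/N with sigma_D of eq. (3)."""
    d = (ne_a - ne_b) / n
    c_bar = (ne_a + ne_b) / (2.0 * n)           # average classification error
    sigma_d = math.sqrt(2.0 * c_bar * (1.0 - c_bar) / n)
    p_value = 2.0 * normal_sf(abs(d) / sigma_d)
    return p_value < alpha                       # True -> reject H0

def mcnemar_test(n01, n10, alpha=0.05):
    """McNemar test, eq. (4): z ~ chi^2 with 1 dof, whose upper tail
    equals erfc(sqrt(z/2))."""
    z = (abs(n01 - n10) - 1.0) ** 2 / (n01 + n10)
    p_value = math.erfc(math.sqrt(z / 2.0))
    return p_value < alpha

# Hypothetical counts: model A makes 120 errors, model B 80, on N = 1000.
print(proportion_test(120, 80, 1000))  # True: reject H0
print(mcnemar_test(70, 30))            # True: reject H0
```

Note how the McNemar test only needs the disagreement counts N01 and N10, which is what removes the independence assumption between the two models' errors.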
In this field two measures are considered of importance:

    Precision = Ntp / (Ntp + Nfp)  and  Recall = Ntp / (Ntp + Nfn),

where for each category Ntp is the number of true positives (documents belonging to the category that were classified as such), Nfp the number of false positives (documents out of this category but classified as being part of it) and Nfn the number of false negatives (documents from the category classified as out of it). Precision and Recall are effectiveness measures, i.e. they lie in the [0, 1] interval, the closer to 1 the better. For each category k, Precision_k measures the proportion of documents of the class among the ones considered as such by the classifier, and Recall_k the proportion of documents of the class correctly classified. To summarize these two values, it is common to consider the so-called F1 measure [12], often used in domains such as information retrieval, text categorization, or vision processing. F1 is the harmonic mean of Precision and Recall, i.e. the inverse of the arithmetic mean of their inverses:

    F1 = [ (1/2) (1/Recall + 1/Precision) ]⁻¹ = 2 · Precision · Recall / (Precision + Recall) = 2 Ntp / (2 Ntp + Nfn + Nfp).  (5)

Let us consider two models A and B, which achieve a performance measured by F1,A and F1,B respectively. The difference dF1 = F1,A − F1,B does not fit the assumptions of the tests presented earlier. Indeed, it cannot be decomposed into a sum over the documents of independent random variables, since the numerator and the denominator of dF1 are non-constant sums over documents of independent random variables. For the same reason F1, while being a proportion, cannot be considered as a random variable following a Normal distribution for which we could easily estimate the variance. An alternative solution for measuring the statistical significance of dF1 is based on the Bootstrap Percentile Test proposed in [5]. The idea of this test is to approximate the unknown distribution of dF1 by an estimate based on bootstrap replicates of the data.
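The F1 of eq. (5) and this bootstrap idea can be sketched together as follows (not the authors' code; labels are assumed to be in {−1, +1}, and the array names and fixed seed are assumptions made for illustration):

```python
import numpy as np

def f1(y_true, y_pred):
    """F1 of eq. (5), computed directly from the contingency counts."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == -1))
    fn = np.sum((y_pred == -1) & (y_true == 1))
    return 2.0 * tp / (2.0 * tp + fn + fp)

def bootstrap_percentile_test(y_true, pred_a, pred_b, alpha=0.05,
                              n_rep=5000, seed=0):
    """Approximate the distribution of dF1 = F1_A - F1_B by bootstrap
    replicates; reject H0: dF1 = 0 when 0 falls outside the central
    1 - alpha interval of that distribution."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    reps = np.empty(n_rep)
    for i in range(n_rep):
        idx = rng.integers(0, n, size=n)   # N draws with replacement
        reps[i] = f1(y_true[idx], pred_a[idx]) - f1(y_true[idx], pred_b[idx])
    lo, hi = np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return not (lo < 0.0 < hi)             # True -> reject H0
```

The percentile interval used here is one common variant of the test described in Sec. 2.2 below; with the replicate counts suggested there, n_rep should be at least 5000 for α = 0.01.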
2.2 Bootstrap Percentile Test

Given an evaluation set of size N, one draws, with replacement, N samples from it. This gives the first bootstrap replicate B1, over which one can compute the statistic of interest, dF1,B1. Similarly, one can create as many bootstrap replicates Bn as needed and, for each, compute dF1,Bn. The higher n is, the more precise the statistical test should be. The literature [3] suggests creating at least 50/α replicates, where α is the level of the test; for the smallest α we considered (0.01), this amounts to 5000 replicates. These 5000 estimates dF1,Bi represent the non-parametric distribution of the random variable dF1. From it, one can for instance consider an interval [a, b] such that p(a < dF1 < b) = 1 − α, centered around the mean of p(dF1). If 0 lies outside this interval, one can say that dF1 = 0 is not among the most probable outcomes, and thus reject the null hypothesis.

3 Analysis of Statistical Tests

We report in this section an analysis of the bootstrap percentile test, as well as of other, more classical statistical tests, based on a real large database. We first describe the database itself and the protocol we used for this analysis, and then provide results and comments.

3.1 Database, Models and Protocol

All the experiments detailed in this paper are based on the very large RCV1 Reuters dataset [10], which contains up to 806,791 documents. We divided it as follows: 798,809 documents were kept aside, and any statistic computed over this set Dtrue was considered as being the truth (i.e. a very good estimate of the actual value); the remaining 7,982 documents were used as a training set Dtr (to train models A and B). There was a total of 101 categories, and each document was labeled with one or more of these categories. We first extracted the dictionary from the training set, removed stop-words and applied stemming, as normally done in text categorization.
Each document was then represented as a bag-of-words using the usual tf-idf coding. We trained three different models: a linear Support Vector Machine (SVM), a Gaussian kernel SVM, and a multi-layer perceptron (MLP). There was one model per category for the SVMs, and a single MLP for the 101 categories. All models were properly tuned using cross-validation on the training set. Using the notation introduced earlier, we define the following competing hypotheses: H0: |dF1| = 0 and H1: |dF1| > 0. We further define the level of the test α = p(Reject H0 | H0), where α takes on the values 0.01, 0.05 and 0.1. Table 1 summarizes the possible outcomes of a statistical test. In that respect, rejecting H0 means that one is confident with (1 − α) · 100% that H0 is really false.

Table 1: Various outcomes of a statistical test, with α = p(Type I error).

    Truth \ Decision    Reject H0       Accept H0
    H0                  Type I error    OK
    H1                  OK              Type II error

In order to assess the performance of the statistical tests on their Type I error, also called the Size of the test, and on their Power = 1 − Type II error, we used the following protocol. For each category Ci, we sampled from Dtrue S (= 500) evaluation sets Dste of N documents, ran the significance test over each Dste, and computed the proportion of sets for which H0 was rejected given that H0 was true over Dtrue (resp. H0 was false over Dtrue), which we denote αtrue (resp. π). We used αtrue as an estimate of the significance test's probability of making a Type I error, and π as an estimate of the significance test's Power. When αtrue is higher than the α fixed by the statistical test, the test underestimates the Type I error, which means we should not rely on its decision regarding the superiority of one model over the other; thus, we consider that the significance test fails. On the contrary, αtrue < α yields a pessimistic statistical test that correctly decides H0 more often than predicted.
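The sampling protocol just described can be sketched as follows (hypothetical population and test function, not the authors' code): draw S evaluation sets from the set-aside "population", run a significance test on each, and report the rejection rate, which estimates αtrue when H0 holds in the population and the Power π when it does not.

```python
import numpy as np

def rejection_rate(population, test_fn, n_eval, s=500, seed=0):
    """Fraction of s sampled evaluation sets of size n_eval on which
    test_fn (returning True when it rejects H0) rejects."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(s):
        idx = rng.choice(len(population), size=n_eval, replace=False)
        if test_fn(population[idx]):
            rejections += 1
    return rejections / s
```

In the paper the population is the 798,809 set-aside documents and test_fn is one of the significance tests under study; here both are stand-ins.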
Furthermore, we would like to favor significance tests with a high π, since the Power of the test reflects its ability to reject H0 when H0 is false.

3.2 Summary of Conditions

In order to verify the sensitivity of the analyzed statistical tests to several conditions, we varied the following parameters:

• the value of α: it took on values in {0.1, 0.05, 0.01};
• the two compared models: there were three models, two of them of the same family (SVMs), hence optimizing the same criterion, while the third one was an MLP. Most of the time the two SVMs gave very similar results (probably because the optimal capacity for this problem was near linear), while the MLP gave poorer results on average. The point here was to verify whether the test was sensitive to the closeness of the tested models (although a more formal definition of closeness should certainly be devised);
• the evaluation sample size: we varied it from small sizes (100) up to larger sizes (6000) to assess the robustness of the statistical tests to it;
• the class unbalance: out of the 101 categories of the problem, most resulted in highly unbalanced tasks, often with a ratio of 10 to 100 between the two classes. In order to experiment with more balanced tasks, we artificially created meta-categories, which were random aggregations of normal categories that tended to be more balanced;
• the tested measure: our initial interest was to directly test dF1, the difference of F1, but given poor initial results, we also decided to assess dCerr, the difference of classification errors, in order to see whether the tests were sensitive to the measure itself;
• the statistical test: on top of the bootstrap percentile test, we also analyzed the more classical proportion test and the McNemar test, both of them only on dCerr (since they were not adapted to dF1).

3.3 Results

Figure 1 summarizes the results for the Size of the test estimates.
All graphs show αtrue, the proportion of times the test rejected H0 while H0 was true, for a fixed α = 0.05, with respect to the sample size, for various statistical tests and tested measures. Figure 2 shows the obtained results for the Power of the test estimates: the proportion of evaluation sets over which the significance test (with α = 0.05) rejected H0 when indeed H0 was false is plotted against the evaluation set size. Figures 1(a) and 2(a) show the results for balanced data (where the positive and negative examples were approximately equally present in the evaluation set) when comparing two different models (an SVM and an MLP). Figures 1(b) and 2(b) show the results for unbalanced data when comparing two different models. Figures 1(c) and 2(c) show the results for balanced data when comparing two similar models (a linear SVM and a Gaussian SVM), and finally Figures 1(d) and 2(d) show the results for unbalanced data and two similar models. Note that each point in the graphs was computed over a different number of samples, since e.g. over the (500 evaluation sets × 101 categories) experiments, only those for which H0 was true in Dtrue were taken into account in the computation of αtrue. When the proportion of H0 true in Dtrue equals 0 (resp. the proportion of H0 false in Dtrue equals 0), αtrue (resp. π) is set to -1. Hence, for instance, the first points ({100, . . . , 1000}) of Figures 2(c) and 2(d) were computed over only 500 evaluation sets, on which the same categorization task was performed; this makes these points unreliable. See [8] for more details. For each of the Size graphs, when the curves are above the 0.05 line we can state that the statistical test is optimistic, while when they are below the line the statistical test is pessimistic. As already explained, a pessimistic test should be favored whenever possible. Several interesting conclusions can be drawn from the analysis of these graphs.
First of all, as expected, most of the statistical tests are positively influenced by the size of the evaluation set, in the sense that their αtrue value converges to α for large sample sizes¹. In the available results, the McNemar test and the bootstrap test over dCerr have similar performance. They are always pessimistic, even for small evaluation set sizes, and tend to the expected α values when the models compared on balanced tasks are dissimilar. They also have similar performance in Power over all the different conditions, higher in general when comparing very different models. When the compared models are similar, the bootstrap test over dF1 has a pessimistic behavior even on quite small evaluation sets. However, when the models are really different, the bootstrap test over dF1 is on average always optimistic. Note nevertheless that most of the points in Figures 1(a) and 1(b) have a standard deviation std, over the categories, such that αtrue − std < α (see [8] for more details). Another interesting point is that, in the available results for the Power, dF1's bootstrap test has relatively high values with respect to the other tests. The proportion test has in general, in the available results, a more conservative behavior than the McNemar test and the dCerr bootstrap test: it has more pessimistic results and less Power. It is too often prone to "Accept H0", i.e. to conclude that the compared models have equivalent performance, whether that is true or not. These results seem to be consistent with those of [4] and [9]. However, when comparing close models on a small unbalanced evaluation set (Figure 1(d)), this conservative behavior is not present. To summarize the findings, the bootstrap-based statistical test over dCerr obtained a performance in Size comparable to that of the McNemar test in all conditions. However, both significance tests' performance in Power is low even for big evaluation sets, in particular when the compared models are close.
The bootstrap-based statistical test over dF1 has higher Power than the other compared tests; however, it must be emphasized that it is slightly over-optimistic, in particular for small evaluation sets. Finally, when applying the proportion test over unbalanced data for close models, we obtained an optimistic behavior, untypical of this usually conservative test.

4 Conclusion

In this paper, we have analyzed several parametric and non-parametric statistical tests under various conditions often present in machine learning tasks, including the class balancing, the performance measure, the size of the test sets, and the closeness of the compared models.¹

¹Note that the same is true for the variance of αtrue (→ 0), and this for any of the α values tested.

[Figure 1: Several statistical tests comparing Linear SVM vs MLP or vs RBF SVM. Panels: (a) Linear SVM vs MLP, balanced data; (b) Linear SVM vs MLP, unbalanced data; (c) Linear vs RBF SVMs, balanced data; (d) Linear vs RBF SVMs, unbalanced data. Each panel plots the proportion of Type I error against the evaluation set size (100 to 6000) for the bootstrap test on dF1, the McNemar test, the proportion test, and the bootstrap test on dCerr, with the α = 0.05 level marked. The proportion of Type I error equals -1, in Figure 1(b), when there was no data to compute the proportion (i.e. H0 was always false).]
More particularly, we were concerned with the quality of non-parametric tests, since in some cases (when using more complex performance measures such as F1) they are the only statistical tests available. Fortunately, most statistical tests performed reasonably well (in the sense that they were more often pessimistic than optimistic in their decisions), and larger test sets always improved their performance. Note however that for dF1 the only available statistical test was too optimistic, although consistent across different levels. An unexpected result was that the rather conservative proportion test, used over unbalanced data for close models, yielded an optimistic behavior. It has to be noted that recently a probabilistic interpretation of F1 was suggested in [7], and a comparison with bootstrap-based tests would be worthwhile.

References

[1] M. Bisani and H. Ney. Bootstrap estimates for confidence intervals in ASR performance evaluation. In Proceedings of ICASSP, 2004.
[2] R. M. Bolle, N. K. Ratha, and S. Pankanti. Error analysis of pattern recognition systems - the subsets bootstrap. Computer Vision and Image Understanding, 93:1–33, 2004.
[Figure 2: Power of several statistical tests comparing Linear SVM vs MLP or vs RBF SVM. Panels: (a) Linear SVM vs MLP, balanced data; (b) Linear SVM vs MLP, unbalanced data; (c) Linear vs RBF SVMs, balanced data; (d) Linear vs RBF SVMs, unbalanced data. Each panel plots the Power of the test against the evaluation set size (100 to 6000) for the bootstrap test on dF1, the McNemar test, the proportion test, and the bootstrap test on dCerr. The Power equals -1, in Figures 2(c) and 2(d), when there was no data to compute the proportion (i.e. H1 was never true).]

[3] A. C. Davison and D. V. Hinkley. Bootstrap methods and their application. Cambridge University Press, 1997.
[4] T. G. Dietterich. Approximate statistical tests for comparing supervised classification learning algorithms. Neural Computation, 10(7):1895–1924, 1998.
[5] B. Efron and R. Tibshirani. An Introduction to the Bootstrap. Chapman and Hall, 1993.
[6] B. S. Everitt. The analysis of contingency tables. Chapman and Hall, 1977.
[7] C. Goutte and E. Gaussier. A probabilistic interpretation of precision, recall and F-score, with implication for evaluation. In Proceedings of ECIR, pages 345–359, 2005.
[8] M. Keller, S. Bengio, and S. Y. Wong. Surprising Outcome While Benchmarking Statistical Tests. IDIAP-RR 38, IDIAP, 2005.
[9] C. Nadeau and Y. Bengio. Inference for the generalization error. Machine Learning, 52(3):239–281, 2003.
[10] T. G. Rose, M. Stevenson, and M. Whitehead.
The Reuters Corpus Volume 1 - from yesterday's news to tomorrow's language resources. In Proceedings of the 3rd Int. Conf. on Language Resources and Evaluation, 2002.
[11] F. Sebastiani. Machine learning in automated text categorization. ACM Computing Surveys, 34(1):1–47, 2002.
[12] C. J. van Rijsbergen. Information Retrieval. Butterworths, London, UK, 1975.
Convergence and Consistency of Regularized Boosting Algorithms with Stationary β-Mixing Observations

Aurélie C. Lozano, Department of Electrical Engineering, Princeton University, Princeton, NJ 08544, alozano@princeton.edu
Sanjeev R. Kulkarni, Department of Electrical Engineering, Princeton University, Princeton, NJ 08544, kulkarni@princeton.edu
Robert E. Schapire, Department of Computer Science, Princeton University, Princeton, NJ 08544, schapire@cs.princeton.edu

Abstract

We study the statistical convergence and consistency of regularized Boosting methods, where the samples are not independent and identically distributed (i.i.d.) but come from empirical processes of stationary β-mixing sequences. Utilizing a technique that constructs a sequence of independent blocks close in distribution to the original samples, we prove the consistency of the composite classifiers resulting from a regularization achieved by restricting the 1-norm of the base classifiers' weights. Compared to the i.i.d. case, the nature of the sampling manifests itself in the consistency result only through a generalization of the original condition on the growth of the regularization parameter.

1 Introduction

A significant development in machine learning for classification has been the emergence of boosting algorithms [1]. Simply put, a boosting algorithm is an iterative procedure that combines weak prediction rules to produce a composite classifier, the idea being that one can obtain very precise prediction rules by combining rough ones. It was shown in [2] that AdaBoost, the most popular boosting algorithm, can be seen as stage-wise fitting of additive models under the exponential loss function, and that it effectively minimizes an empirical loss function that differs from the probability of incorrect prediction. From this perspective, boosting can be seen as performing a greedy stage-wise minimization of various loss functions empirically.
The question of whether boosting achieves Bayes-consistency then arises, since minimizing an empirical loss function does not necessarily imply minimizing the generalization error. When run for a very long time, the AdaBoost algorithm, though resistant to overfitting, is not immune to it [2, 3]. There also exist cases where running AdaBoost forever leads to a prediction error larger than the Bayes error in the limit of infinite sample size. Consequently, one approach for the study of consistency is to modify the original AdaBoost algorithm by imposing some constraints on the weights of the composite classifier to avoid overfitting. In this regularized version of AdaBoost, the 1-norm of the weights of the base classifiers is restricted to a fixed value. The minimization of the loss function is performed over the restricted class [4, 5]. In this paper, we examine the convergence and consistency of regularized boosting algorithms with samples that are no longer i.i.d. but come from empirical processes of stationary weakly dependent sequences. A practical motivation for our study of non-i.i.d. sampling is that in many learning applications observations are intrinsically temporal and hence often weakly dependent. Ignoring this dependency could seriously undermine the performance of the learning process (for instance, information related to the time-dependent ordering of samples would be lost). Recognition of this issue has led to several studies of non-i.i.d. sampling [6, 7, 8, 9, 10, 11, 12]. To cope with weak dependence we apply mixing theory which, through its definition of mixing coefficients, offers a powerful approach to extend results for the traditional i.i.d. observations to the case of weakly dependent or mixing sequences. We consider the β-mixing coefficients, whose mathematical definition is deferred to Sec. 2.1. Intuitively, they provide a "measure" of how fast the dependence between the observations diminishes as the distance between them increases.
If certain conditions on the mixing coefficients are satisfied to reflect a sufficiently fast decline in the dependence between observations as their distance grows, counterparts to results for i.i.d. random processes can be established. A comprehensive review of mixing theory results is provided in [13]. Our principal finding is that consistency of regularized Boosting methods can be established in the case of non-i.i.d. samples coming from empirical processes of stationary β-mixing sequences. Among the conditions that guarantee consistency, the mixing nature of sampling appears only through a generalization of the one on the growth of the regularization parameter originally stated for the i.i.d. case [4].

2 Background and Setup

2.1 Mixing Sequences

Let $W = (W_i)_{i \ge 1}$ be a strictly stationary sequence of random variables, each having the same distribution $P$ on $D \subset \mathbb{R}^d$. Let $\sigma_1^l = \sigma(W_1, W_2, \ldots, W_l)$ be the σ-field generated by $W_1, \ldots, W_l$. Similarly, let $\sigma_{l+k}^\infty = \sigma(W_{l+k}, W_{l+k+1}, \ldots)$. The following mixing coefficients characterize how close to independent a sequence $W$ is.

Definition 1. For any sequence $W$, the β-mixing coefficient is defined by
$$\beta_W(n) = \sup_k \mathbb{E} \sup \big\{ |P(A \mid \sigma_1^k) - P(A)| : A \in \sigma_{k+n}^\infty \big\},$$
where the expectation is taken w.r.t. $\sigma_1^k$. Hence $\beta_W(n)$ quantifies the degree of dependence between 'future' observations and 'past' ones separated by a distance of at least $n$.

(Footnote: To gain insight into the notion of β-mixing, it is useful to think of the σ-field generated by a random variable $X$ as the "body of information" carried by $X$. This leads to the following interpretation of β-mixing. Suppose that the index $i$ in $W_i$ is the time index. Let $A$ be an event happening in the future within the period of time between $t = k + n$ and $t = \infty$. $|P(A \mid \sigma_1^k) - P(A)|$ is the absolute difference between the probability that event $A$ occurs, given the knowledge of the information generated by the past up to $t = k$, and the probability of event $A$ occurring without this knowledge. Then, the greater the dependence between $\sigma_1^k$ (the information generated by $(W_1, \ldots, W_k)$) and $\sigma_{k+n}^\infty$ (the information generated by $(W_{k+n}, \ldots, W_\infty)$), the larger the coefficient $\beta_W(n)$.)

In this study, we will assume that the sequences we consider are algebraically β-mixing. This property implies that the dependence between observations decreases fast enough as the distance between them increases.

Definition 2. A sequence $W$ is called β-mixing if $\lim_{n\to\infty} \beta_W(n) = 0$. Further, it is algebraically β-mixing if there is a positive constant $r_\beta$ such that $\beta_W(n) = O(n^{-r_\beta})$.

The choice of β-mixing appears appropriate given previous results that showed "uniform convergence of empirical means uniformly in probability" and "probably approximately correct" properties to be preserved for β-mixing inputs [11]. Some examples of β-mixing sequences that fit naturally in a learning scenario are certain Markov processes and Hidden Markov Models [11]. In practice, if the mixing properties are unknown, they need to be estimated. Although it is difficult to find them in general, there exist simple methods to determine the mixing rates for various classes of random processes (e.g. Gaussian, Markov, ARMA, ARCH, GARCH). Hence the assumption of a known mixing rate is reasonable and has been adopted by many studies [6, 7, 8, 9, 10, 12].

2.2 Classification with Stationary β-Mixing Training Data

In the standard binary classification problem, the training data consist of a set $S_n = \{(X_1, Y_1), \ldots, (X_n, Y_n)\}$, where $X_k$ belongs to some measurable space $\mathcal{X}$, and $Y_k$ is in $\{-1, 1\}$. Using $S_n$, a classifier $h_n : \mathcal{X} \to \{-1, 1\}$ is built to predict the label $Y$ of an unlabeled observation $X$.
Traditionally, the samples are assumed to be i.i.d., and to our knowledge, this assumption is made by all the studies on boosting consistency. In this paper, we suppose that the sampling is no longer i.i.d. but corresponds to an empirical process of stationary β-mixing sequences. More precisely, let $D = \mathcal{X} \times \mathcal{Y}$, where $\mathcal{Y} = \{-1, +1\}$. Let $W_i = (X_i, Y_i)$. We suppose that $W = (W_i)_{i\ge1}$ is a strictly stationary sequence of random variables, each having the same distribution $P$ on $D$, and that $W$ is β-mixing (see Definition 2). This setup is in line with [7]. We assume that the unlabeled observation is such that $(X, Y)$ is independent of $S_n$ but with the same marginal.

3 Statistical Convergence and Consistency of Regularized Boosting for Stationary β-Mixing Sequences

3.1 Regularized Boosting

We adopt the framework of [4], which we now recall. Let $\mathcal{H}$ denote the class of base classifiers $h : \mathcal{X} \to \{-1, 1\}$, which usually consists of simple rules (for instance decision stumps). This class is required to have finite VC-dimension. Call $\mathcal{F}$ the class of functions $f : \mathcal{X} \to [-1, 1]$ obtained as convex combinations of the classifiers in $\mathcal{H}$:
$$\mathcal{F} = \Big\{ f(X) = \sum_{j=1}^{t} \alpha_j h_j(X) : t \in \mathbb{N},\ \alpha_1, \ldots, \alpha_t \ge 0,\ \sum_{j=1}^{t} \alpha_j = 1,\ h_1, \ldots, h_t \in \mathcal{H} \Big\}. \quad (1)$$
Each $f_n \in \mathcal{F}$ defines a classifier $h_{f_n} = \mathrm{sign}(f_n)$, and for simplicity the generalization error $L(h_{f_n})$ is denoted by $L(f_n)$. The training error is then denoted by $L_n(f_n) = \frac{1}{n}\sum_{i=1}^n I_{[h_{f_n}(X_i) \ne Y_i]}$. Define $Z(f) = -f(X)Y$ and $Z_i(f) = -f(X_i)Y_i$. Instead of minimizing the indicator of misclassification ($I_{[-f(X)Y > 0]}$), boosting methods are shown to effectively minimize a smooth convex cost function of $Z(f)$. For instance, AdaBoost is based on the exponential function. Consider a positive, differentiable, strictly increasing, and strictly convex function $\varphi : \mathbb{R} \to \mathbb{R}^+$ and assume that $\varphi(0) = 1$ and that $\lim_{x\to-\infty} \varphi(x) = 0$. The corresponding cost function and empirical cost function are respectively $C(f) = \mathbb{E}\,\varphi(Z(f))$ and $C_n(f) = \frac{1}{n}\sum_{i=1}^n \varphi(Z_i(f))$.
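The empirical cost of a convex combination of base classifiers can be sketched in a few lines. The stumps, data, and all names below are hypothetical illustrations (not from the paper); the exponential function plays the role of φ, and the weights are constrained to the simplex as in the definition of F.

```python
import numpy as np

def phi(x):
    # exponential cost: positive, increasing, convex, phi(0) = 1
    return np.exp(x)

def empirical_cost(alphas, base_classifiers, X, Y, lam):
    # f(x) = sum_j alpha_j h_j(x) with alpha_j >= 0 summing to 1,
    # so f takes values in [-1, 1]; the regularized empirical cost is
    # C^lam_n(f) = (1/n) sum_i phi(-lam * f(X_i) * Y_i)
    alphas = np.asarray(alphas, dtype=float)
    assert np.all(alphas >= 0) and abs(alphas.sum() - 1.0) < 1e-9
    f = sum(a * h(X) for a, h in zip(alphas, base_classifiers))
    return np.mean(phi(-lam * f * Y))

# two hypothetical decision stumps on the real line
h1 = lambda x: np.where(x > 0.0, 1.0, -1.0)
h2 = lambda x: np.where(x > 0.5, 1.0, -1.0)

X = np.array([-1.0, -0.2, 0.3, 0.8, 1.5])
Y = np.array([-1.0, -1.0, 1.0, 1.0, 1.0])
cost = empirical_cost([0.7, 0.3], [h1, h2], X, Y, lam=2.0)
```

Raising λ stretches φ: correct classifications with large margin contribute nearly zero cost, while mistakes are penalized more heavily, which is why the growth of λ_n must be controlled for consistency.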
Note that $L(f) \le C(f)$, since $I_{[x>0]} \le \varphi(x)$. The iterative aspect of boosting methods is ignored to consider only their performing an (approximate) minimization of the empirical cost function or, as we shall see, a series of cost functions. To avoid overfitting, the following regularization procedure is developed for the choice of the cost functions. Define $\varphi_\lambda$ such that for all $\lambda > 0$, $\varphi_\lambda(x) = \varphi(\lambda x)$. The corresponding empirical and expected cost functions become $C_n^\lambda(f) = \frac{1}{n}\sum_{i=1}^n \varphi_\lambda(Z_i(f))$ and $C^\lambda(f) = \mathbb{E}\,\varphi_\lambda(Z(f))$. The minimization of a series of cost functions $C^\lambda$ over the convex hull of $\mathcal{H}$ is then analyzed.

3.2 Statistical Convergence

The nature of the sampling intervenes in the following two lemmas, which relate the empirical cost $C_n^\lambda(f)$ and the true cost $C^\lambda(f)$.

Lemma 1. Suppose that for any $n$, the training data $(X_1, Y_1), \ldots, (X_n, Y_n)$ come from a stationary algebraically β-mixing sequence with β-mixing coefficients $\beta(m)$ satisfying $\beta(m) = O(m^{-r_\beta})$, $m \in \mathbb{N}$, $r_\beta$ a positive constant. Then for any $\lambda > 0$ and $b \in [0, 1)$,
$$\mathbb{E} \sup_{f\in\mathcal{F}} |C^\lambda(f) - C_n^\lambda(f)| \le \frac{4\lambda\varphi'(\lambda)c_1}{n^{(1-b)/2}} + 2\varphi(\lambda)\Big(\frac{1}{n^{b(1+r_\beta)-1}} + \frac{2}{n^{1-b}}\Big). \quad (2)$$

Lemma 2. Let the training data be as in Lemma 1. For any $b \in [0, 1)$ and $\alpha \in (0, 1-b)$, let $\epsilon_n = 3(2c_1 + n^{\alpha/2})\lambda\varphi'(\lambda)/n^{(1-b)/2}$. Then for any $\lambda > 0$,
$$P\Big(\sup_{f\in\mathcal{F}} |C^\lambda(f) - C_n^\lambda(f)| > \epsilon_n\Big) \le \exp(-4c_2 n^\alpha) + O(n^{1-b(r_\beta+1)}). \quad (3)$$

The constants $c_1$ and $c_2$ in the above lemmas are given in the proofs of Lemma 1 (Section 4.2) and Lemma 2 (Section 4.3) respectively.

3.3 Consistency Result

The following summarizes the assumptions that are made to prove consistency.

Assumption 1.
I. Properties of the sample sequence: The samples $(X_1, Y_1), \ldots, (X_n, Y_n)$ are assumed to come from a stationary algebraically β-mixing sequence with β-mixing coefficients $\beta_{X,Y}(n) = O(n^{-r_\beta})$, $r_\beta$ being a positive constant.
II. Properties of the cost function $\varphi$: $\varphi$ is assumed to be a differentiable, strictly convex, strictly increasing cost function such that $\varphi(0) = 1$ and $\lim_{x\to-\infty}\varphi(x) = 0$.
III. Properties of the base hypothesis space: $\mathcal{H}$ has finite VC-dimension. The distribution of $(X, Y)$ and the class $\mathcal{H}$ are such that $\lim_{\lambda\to\infty} \inf_{f\in\lambda\mathcal{F}} C(f) = C^*$, where $\lambda\mathcal{F} = \{\lambda f : f \in \mathcal{F}\}$ and $C^* = \inf C(f)$ over all measurable functions $f : \mathcal{X} \to \mathbb{R}$.
IV. Properties of the smoothing parameter: We assume that $\lambda_1, \lambda_2, \ldots$ is a sequence of positive numbers satisfying $\lambda_n \to \infty$ as $n \to \infty$, and that there exists a constant $c \in \big(\frac{1}{1+r_\beta}, 1\big)$ such that $\lambda_n\varphi'(\lambda_n)/n^{(1-c)/2} \to 0$ as $n \to \infty$.

Call $\hat f_n^\lambda$ the function in $\mathcal{F}$ which approximately minimizes $C_n^\lambda(f)$, i.e. $\hat f_n^\lambda$ is such that $C_n^\lambda(\hat f_n^\lambda) \le \inf_{f\in\mathcal{F}} C_n^\lambda(f) + \epsilon_n = \inf_{f\in\mathcal{F}} \frac{1}{n}\sum_{i=1}^n \varphi_\lambda(Z_i(f)) + \epsilon_n$, with $\epsilon_n \to 0$ as $n \to \infty$. The main result is the following.

Theorem 1 (Consistency of regularized boosting methods for stationary β-mixing sequences). Let $f_n = \hat f_n^{\lambda_n} \in \mathcal{F}$, where $\hat f_n^{\lambda_n}$ (approximately) minimizes $C_n^{\lambda_n}(f)$. Under Assumption 1, $\lim_{n\to\infty} L(h_{f_n} = \mathrm{sign}(f_n)) = L^*$ almost surely, and $h_{f_n}$ is strongly Bayes-risk consistent.

Cost functions satisfying Assumption 1.II include the exponential function and the logit function $\log_2(1 + e^x)$. Regarding Assumption 1.III, the reader is referred to the remark on the denseness assumption in [4]. In Assumption 1.IV, notice that the nature of sampling leads to a generalization of the condition on the growth of $\lambda_n\varphi'(\lambda_n)$ already present in the i.i.d. setting [4]. More precisely, the nature of sampling manifests through the parameter $c$, which is limited by $r_\beta$. The assumption that $r_\beta$ is known is quite strict but cannot be avoided (for instance, this assumption is widely made in the field of time series analysis). On a positive note, if unknown, $r_\beta$ can be determined for various classes of processes, as mentioned in Section 2.1.

4 Proofs

4.1 Preparation for the Proofs: the Blocking Technique

The key issue resides in upper bounding
$$\sup_{f\in\mathcal{F}} \big|C_n^\lambda(f) - C^\lambda(f)\big| = \sup_{f\in\mathcal{F}} \Big|\frac{1}{n}\sum_{i=1}^n \varphi(-\lambda f(X_i)Y_i) - \mathbb{E}\,\varphi(-\lambda f(X_1)Y_1)\Big|, \quad (4)$$
where $\mathcal{F}$ is given by (1). Let $W = (X, Y)$, $W_i = (X_i, Y_i)$.
Define the function $g_\lambda$ by $g_\lambda(W) = g_\lambda(X, Y) = \varphi(-\lambda f(X)Y)$ and the class $\mathcal{G}_\lambda$ by $\mathcal{G}_\lambda = \{g_\lambda : g_\lambda(X, Y) = \varphi(-\lambda f(X)Y),\ f \in \mathcal{F}\}$. Then (4) can be rewritten as $\sup_{f\in\mathcal{F}} |C_n^\lambda(f) - C^\lambda(f)| = \sup_{g_\lambda\in\mathcal{G}_\lambda} |\frac{1}{n}\sum_{i=1}^n g_\lambda(W_i) - \mathbb{E}\,g_\lambda(W_1)|$. Note that the class $\mathcal{G}_\lambda$ is uniformly bounded by $\varphi(\lambda)$. Besides, if $\mathcal{H}$ is a class of measurable functions, then $\mathcal{G}_\lambda$ is also a class of measurable functions, by measurability of $\mathcal{F}$. As the $W_i$'s are not i.i.d., we propose to use the blocking technique developed in [12, 14] to construct i.i.d. blocks of observations which are close in distribution to the original sequence $W_1, \ldots, W_n$. This enables us to work on the sequence of independent blocks instead of the original sequence. We use the same notation as in [12]. The protocol is the following. Let $(b_n, \mu_n)$ be a pair of integers such that
$$(n - 2b_n) \le 2 b_n \mu_n \le n. \quad (5)$$
Divide the segment $W_1 = (X_1, Y_1), \ldots, W_n = (X_n, Y_n)$ of the mixing sequence into $2\mu_n$ blocks of size $b_n$, followed by a remaining block (of size at most $2b_n$). Consider the odd blocks only. If their size $b_n$ is large enough, the dependence between them is weak, since two odd blocks are separated by an even block of the same size $b_n$. Therefore, the odd blocks can be approximated by a sequence of independent blocks with the same within-block structure. The same holds if we consider the even blocks. Let $(\xi_1, \ldots, \xi_{b_n}), (\xi_{b_n+1}, \ldots, \xi_{2b_n}), \ldots, (\xi_{(2\mu_n-1)b_n}, \ldots, \xi_{2\mu_n b_n})$ be independent blocks such that $(\xi_{j b_n+1}, \ldots, \xi_{(j+1)b_n}) =_D (W_{j b_n+1}, \ldots, W_{(j+1)b_n})$, for $j = 0, \ldots, \mu_n - 1$. For $j = 1, \ldots, 2\mu_n$, and any $g \in \mathcal{G}_\lambda$, define
$$Z_{j,g} := \sum_{i=(j-1)b_n+1}^{j b_n} g(\xi_i) - b_n \mathbb{E}\,g(\xi_1), \qquad \tilde Z_{j,g} := \sum_{i=(j-1)b_n+1}^{j b_n} g(W_i) - b_n \mathbb{E}\,g(W_1).$$
Let $O_{\mu_n} = \{1, 3, \ldots, 2\mu_n - 1\}$ and $E_{\mu_n} = \{2, 4, \ldots, 2\mu_n\}$. Define $Z_{i,j}(f)$ as $Z_{i,j}(f) := -f(\xi_{(2j-2)b_n+i,1}) \cdot \xi_{(2j-2)b_n+i,2}$, where $\xi_{k,1}$ and $\xi_{k,2}$ are respectively the first and second coordinates of the vector $\xi_k$.
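The odd/even blocking protocol just described can be sketched in a few lines; the function and variable names are illustrative, not from the paper. Odd blocks are separated by a full even block of size b, which is what makes them nearly independent for β-mixing sequences.

```python
import numpy as np

def odd_even_blocks(n_samples, b):
    """Split indices 0..n-1 into 2*mu contiguous blocks of size b (plus a
    remainder of at most 2b indices), returning the odd and even blocks."""
    mu = n_samples // (2 * b)          # satisfies (n - 2b) <= 2*b*mu <= n
    blocks = [np.arange(j * b, (j + 1) * b) for j in range(2 * mu)]
    odd = blocks[0::2]                 # blocks 1, 3, ... (0-based positions)
    even = blocks[1::2]                # blocks 2, 4, ...
    return odd, even

odd, even = odd_even_blocks(n_samples=23, b=3)
# here mu = 3: six blocks covering indices 0..17; the 5-index remainder
# is handled separately in the proof (the term R of size at most 2b)
```

Consecutive odd blocks are a distance b apart, so their dependence is controlled by β_W(b), which is the quantity appearing in Lemma 3 and in the bound (6).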
These correspond to the $Z_k(f) = -f(X_k)Y_k$ for $k$ in the odd blocks $1, \ldots, b_n,\ 2b_n+1, \ldots, 3b_n, \ldots$

4.2 Proof Sketch of Lemma 1

A. Working with Independent Blocks. We show that
$$\mathbb{E} \sup_{g\in\mathcal{G}_\lambda} \Big|\frac{1}{n}\sum_{i=1}^n g(W_i) - \mathbb{E}\,g(W_1)\Big| \le 2\,\mathbb{E} \sup_{g\in\mathcal{G}_\lambda} \Big|\frac{1}{n}\sum_{j\in O_{\mu_n}} Z_{j,g}\Big| + \varphi(\lambda)\Big(\mu_n \beta_W(b_n) + \frac{2b_n}{n}\Big). \quad (6)$$

Proof. Without loss of generality, assume that $\mathbb{E}\,g(W_1) = \mathbb{E}\,g(\xi_1) = 0$. Then $\mathbb{E}\sup_g |\frac{1}{n}\sum_{i=1}^n g(W_i)| = \mathbb{E}\sup_g |\frac{1}{n}(\sum_{O_{\mu_n}} \tilde Z_{j,g} + \sum_{E_{\mu_n}} \tilde Z_{j,g} + R)|$, where $R$ is the remainder term consisting of a sum of at most $2b_n$ terms. Noting that for all $g \in \mathcal{G}_\lambda$, $|g| \le \varphi(\lambda)$, it follows that $\mathbb{E}\sup_g |\frac{1}{n}\sum_{i=1}^n g(W_i)| \le \mathbb{E}\sup_g |\frac{1}{n}\sum_{O_{\mu_n}} \tilde Z_{j,g}| + \mathbb{E}\sup_g |\frac{1}{n}\sum_{E_{\mu_n}} \tilde Z_{j,g}| + \frac{2 b_n \varphi(\lambda)}{n}$. We use the following intermediary lemma.

Lemma 3 (adapted from [15], Lemma 4.1). Call $Q$ the distribution of $(W_1, \ldots, W_{b_n}, W_{2b_n+1}, \ldots, W_{3b_n}, \ldots)$ and $\tilde Q$ the distribution of $(\xi_1, \ldots, \xi_{b_n}, \xi_{2b_n+1}, \ldots, \xi_{3b_n}, \ldots)$. For any measurable function $h$ on $\mathbb{R}^{b_n\mu_n}$ with bound $H$,
$$|Q h(W_1, \ldots) - \tilde Q h(\xi_1, \ldots)| \le H(\mu_n - 1)\beta_W(b_n).$$
The same result holds for $(W_{b_n+1}, \ldots, W_{2b_n}, W_{3b_n+1}, \ldots, W_{4b_n}, \ldots)$.

Using this with $h(W_1, \ldots) = \sup_g |\frac{1}{n}\sum_{O_{\mu_n}} \tilde Z_{j,g}|$ and $h(W_{b_n+1}, \ldots) = \sup_g |\frac{1}{n}\sum_{E_{\mu_n}} \tilde Z_{j,g}|$ respectively, and noting that $H = \varphi(\lambda)/2$, we have $\mathbb{E}\sup_g |\frac{1}{n}\sum_{i=1}^n g(W_i)| \le \mathbb{E}\sup_g |\frac{1}{n}\sum_{O_{\mu_n}} Z_{j,g}| + \frac{\varphi(\lambda)}{2}\mu_n\beta_W(b_n) + \mathbb{E}\sup_g |\frac{1}{n}\sum_{E_{\mu_n}} Z_{j,g}| + \frac{\varphi(\lambda)}{2}\mu_n\beta_W(b_n) + \frac{2 b_n \varphi(\lambda)}{n}$. As the $Z_{j,g}$'s from odd and even blocks have the same distribution, we obtain (6). ⊓⊔

B. Symmetrization. The odd-block $Z_{j,g}$'s being independent, we can use the standard symmetrization techniques. Let the $Z'_{j,g}$'s be i.i.d. copies of the $Z_{j,g}$'s, and let the $Z'_{i,j}(f)$'s be the corresponding copies of the $Z_{i,j}(f)$'s. Let $(\sigma_i)$ be a Rademacher sequence, i.e. a sequence of independent random variables taking the values ±1 with probability 1/2. Then by [16], Lemma 6.3 (proof omitted due to space constraints), we have
$$\mathbb{E}\sup_g \Big|\frac{1}{n}\sum_{j\in O_{\mu_n}} Z_{j,g}\Big| \le \mathbb{E}\sup_g \Big|\frac{1}{n}\sum_{j\in O_{\mu_n}} \sigma_j (Z_{j,g} - Z'_{j,g})\Big|. \quad (7)$$

C.
Contraction Principle. We now show that
$$\mathbb{E}\sup_{g\in\mathcal{G}_\lambda} \Big|\frac{1}{n}\sum_{j\in O_{\mu_n}} Z_{j,g}\Big| \le 2 b_n \lambda \varphi'(\lambda)\, \mathbb{E}\sup_{f\in\mathcal{F}} \Big|\frac{1}{n}\sum_{j=1}^{\mu_n} \sigma_j Z_{1,j}(f)\Big|. \quad (8)$$

Proof. As $Z_{j,g} = \sum_{i=1}^{b_n} \varphi_\lambda(Z_{i,j}(f))$, and the $Z_{i,j}(f)$'s and $Z'_{i,j}(f)$'s are i.i.d., with (7), $\mathbb{E}\sup_g |\frac{1}{n}\sum_{j\in O_{\mu_n}} Z_{j,g}| \le \mathbb{E}\sup_g |\frac{1}{n}\sum_{j=1}^{\mu_n} \sigma_j \sum_{i=1}^{b_n} (\varphi_\lambda(Z_{i,j}(f)) - \varphi_\lambda(Z'_{i,j}(f)))| \le 2 b_n\, \mathbb{E}\sup_g |\frac{1}{n}\sum_{j=1}^{\mu_n} \sigma_j (\varphi_\lambda(Z_{1,j}(f)) - 1)|$. By applying the "Comparison Theorem", Theorem 7 in [17], to the contraction $\psi(x) = (1/\lambda\varphi'(\lambda))(\varphi_\lambda(x) - 1)$, we obtain (8). ⊓⊔

D. Maximal Inequality. We show that there exists a constant $c_1 > 0$ such that
$$\mathbb{E}\sup_{f\in\mathcal{F}} \Big|\frac{1}{n}\sum_{j=1}^{\mu_n} \sigma_j Z_{1,j}(f)\Big| \le \frac{c_1\sqrt{\mu_n}}{n}. \quad (9)$$

Proof. Denote $(h_1, \ldots, h_N)$ by $h_1^N$. One can write $\mathbb{E}\sup_{f\in\mathcal{F}} |\frac{1}{n}\sum_{j=1}^{\mu_n} \sigma_j Z_{1,j}(f)| = \frac{1}{n}\mathbb{E}\sup_{N\ge1} \sup_{h_1^N\in\mathcal{H}^N} \sup_{\alpha_1,\ldots,\alpha_N} |\sum_{j=1}^{\mu_n}\sum_{k=1}^N \alpha_k \sigma_j \xi_{(2j-2)b_n+1,2}\, h_k(\xi_{(2j-2)b_n+1,1})|$. Since $\xi_{(2j-2)b_n+1,2}$ and $\xi_{(2j'-2)b_n+1,2}$ are i.i.d. for all $j \ne j'$ (they come from different blocks), and $(\sigma_j)$ is a Rademacher sequence, $(\sigma_j \xi_{(2j-2)b_n+1,2}\, h_k(\xi_{(2j-2)b_n+1,1}))_{j=1,\ldots,\mu_n}$ has the same distribution as $(\sigma_j h_k(\xi_{(2j-2)b_n+1,1}))_{j=1,\ldots,\mu_n}$. Hence
$$\mathbb{E}\sup_{f\in\mathcal{F}} \Big|\frac{1}{n}\sum_{j=1}^{\mu_n} \sigma_j Z_{1,j}(f)\Big| = \frac{1}{n}\mathbb{E}\sup_{N\ge1}\sup_{h_1^N\in\mathcal{H}^N}\sup_{\alpha_1,\ldots,\alpha_N} \Big|\sum_{j=1}^{\mu_n}\sum_{k=1}^N \sigma_j \alpha_k h_k(\xi_{(2j-2)b_n+1,1})\Big|.$$
By the same argument as used in [4], p. 53, on the maximum of a linear function over a convex polygon, the supremum is achieved when $\alpha_k = 1$ for some $k$. Hence we get $\mathbb{E}\sup_{f\in\mathcal{F}} |\frac{1}{n}\sum_{j=1}^{\mu_n} \sigma_j Z_{1,j}(f)| = \frac{1}{n}\mathbb{E}\sup_{h\in\mathcal{H}} |\sum_{j=1}^{\mu_n} \sigma_j h(\xi_{(2j-2)b_n+1,1})|$. Noting that for all $j \ne j'$, $h(\xi_{(2j-2)b_n+1,1})$ and $h(\xi_{(2j'-2)b_n+1,1})$ are i.i.d., and that Rademacher processes are sub-gaussian, we have by [18], Corollary 2.2.8,
$$\frac{1}{n}\mathbb{E}\sup_{h\in\mathcal{H}} \Big|\sum_{j=1}^{\mu_n} \sigma_j h(\xi_{(2j-2)b_n+1,1})\Big| \le \frac{1}{n}\mathbb{E}\sup_{h\in\mathcal{H}\cup\{0\}} \Big|\sum_{j=1}^{\mu_n} \sigma_j h(\xi_{(2j-2)b_n+1,1})\Big| \le \frac{c'\sqrt{\mu_n}}{n}\int_0^\infty \Big(\log \sup_P N(\epsilon, \rho_{2,P_n}, \mathcal{H}\cup\{0\})\Big)^{1/2} d\epsilon,$$
where $c'$ is a constant and $N(\epsilon, \rho_{2,P_n}, \mathcal{H}\cup\{0\})$ is the empirical $L_2$ covering number.
As $\mathcal{H}$ has finite VC-dimension (see Assumption 1.III), there exists a positive constant $w$ such that $\sup_P N(\epsilon, \rho_{2,P_n}, \mathcal{H}\cup\{0\}) = O_P(\epsilon^{-w})$ (see [18], Theorem 2.6.1). Hence $\int_0^\infty (\log \sup_{P_n} N(\epsilon, \rho_{2,P_n}, \mathcal{H}\cup\{0\}))^{1/2} d\epsilon < \infty$, and (9) follows. ⊓⊔

E. Establishing (2). Combining (6), (8), and (9), we have $\mathbb{E}\sup_{g\in\mathcal{G}_\lambda} |\frac{1}{n}\sum_{i=1}^n g(W_i) - \mathbb{E}\,g(W_1)| \le \frac{4 b_n \lambda\varphi'(\lambda) c_1 \sqrt{\mu_n}}{n} + \varphi(\lambda)\big(\mu_n\beta_W(b_n) + \frac{2b_n}{n}\big)$. Take $b_n = n^b$, with $0 \le b < 1$. By (5), we obtain $\mu_n \le n^{1-b}/2$. Besides, as we assumed that the sequence $W$ is algebraically β-mixing (see Definition 2), $\beta_W(n) = O(n^{-r_\beta})$. Then $\mu_n\beta_W(b_n) = O(n^{1-b(1+r_\beta)})$, and we arrive at (2).

4.3 Proof Sketch of Lemma 2

A. Working with Independent Blocks and Symmetrization. For any $b \in [0, 1)$, $\alpha \in (0, 1-b)$, let
$$\epsilon_n = 3(2c_1 + n^{\alpha/2})\lambda\varphi'(\lambda)/n^{(1-b)/2}. \quad (10)$$
We show
$$P\Big(\sup_{g\in\mathcal{G}_\lambda} \Big|\frac{1}{n}\sum_{i=1}^n g(W_i) - \mathbb{E}\,g(W_1)\Big| > \epsilon_n\Big) \le 2P\Big(\sup_{g\in\mathcal{G}_\lambda} \Big|\frac{1}{n}\sum_{j\in O_{\mu_n}} Z_{j,g}\Big| > \epsilon_n/3\Big) + O(n^{1-b(1+r_\beta)}). \quad (11)$$

Proof. By [12], Lemma 3.1, we have that for any $\epsilon_n$ such that $\varphi(\lambda)b_n = o(n\epsilon_n)$, $P(\sup_{g\in\mathcal{G}_\lambda} |\frac{1}{n}\sum_{i=1}^n g(W_i) - \mathbb{E}\,g(W_1)| > \epsilon_n) \le 2P(\sup_{g\in\mathcal{G}_\lambda} |\frac{1}{n}\sum_{j\in O_{\mu_n}} Z_{j,g}| > \epsilon_n/3) + 4\mu_n\beta_W(b_n)$. Set $b_n = n^b$, with $0 \le b < 1$. Then $\mu_n\beta_W(b_n) = O(n^{1-b(1+r_\beta)})$ (for the same reasons as in Section 4.2 E). With $\epsilon_n$ as in (10), and since Assumption 1.II implies that $\lambda\varphi'(\lambda) \ge \varphi(\lambda) - 1$, we automatically obtain $\varphi(\lambda)b_n = o(n\epsilon_n)$. ⊓⊔

B. McDiarmid's Bounded Difference Inequality. For $\epsilon_n$ as in (10), there exists a constant $c_2 > 0$ such that
$$P\Big(\sup_{g\in\mathcal{G}_\lambda} \Big|\frac{1}{n}\sum_{j\in O_{\mu_n}} Z_{j,g}\Big| > \epsilon_n/3\Big) \le \exp(-4c_2 n^\alpha). \quad (12)$$

Proof. The $Z_{j,g}$'s of the odd blocks being independent, we can apply McDiarmid's bounded difference inequality ([19], Theorem 9.2, p. 136) to the function $\sup_{g\in\mathcal{G}_\lambda} |\frac{1}{n}\sum_{j\in O_{\mu_n}} Z_{j,g}|$, which depends on $Z_{1,g}, Z_{3,g}, \ldots, Z_{2\mu_n-1,g}$. Noting that changing the value of one variable does not change the value of the function by more than $b_n\varphi(\lambda)/n$, we obtain with $b_n = n^b$ that for all $\epsilon > 0$,
$$P\Big(\sup_{g\in\mathcal{G}_\lambda} \Big|\frac{1}{n}\sum_{j\in O_{\mu_n}} Z_{j,g}\Big| > \mathbb{E}\sup_{g\in\mathcal{G}_\lambda} \Big|\frac{1}{n}\sum_{j\in O_{\mu_n}} Z_{j,g}\Big| + \epsilon\Big) \le \exp\Big(-\frac{4\epsilon^2 n^{1-b}}{\varphi(\lambda)^2}\Big).$$
Combining (8) and (9) from the proof of Lemma 1, and with $b_n = n^b$, we have $\mathbb{E}\sup_{g\in\mathcal{G}_\lambda} |\frac{1}{n}\sum_{j\in O_{\mu_n}} Z_{j,g}| \le 2\lambda\varphi'(\lambda)C/n^{(1-b)/2}$. With $\epsilon = n^{\alpha/2}\lambda\varphi'(\lambda)/n^{(1-b)/2}$, we obtain $\epsilon_n$ as in (10). Pick $\lambda_0$ such that $0 < \lambda_0 < \lambda$. Then, since $\lambda\varphi'(\lambda) \ge \varphi(\lambda) - 1$, (12) follows with $c_2 = (1 - 1/\varphi(\lambda_0))^2$. ⊓⊔

C. Establishing (3). Combining (11) and (12), we obtain (3).

4.4 Proof Sketch of Theorem 1

Let $\bar f_\lambda$ be a function in $\mathcal{F}$ minimizing $C^\lambda$. With $f_n = \hat f_n^{\lambda_n}$, we have $C(\lambda_n f_n) - C^* = (C^{\lambda_n}(\hat f_n^{\lambda_n}) - C^{\lambda_n}(\bar f_{\lambda_n})) + (\inf_{f\in\lambda_n\mathcal{F}} C(f) - C^*)$. Since $\lambda_n \to \infty$, the second term on the right-hand side converges to zero by Assumption 1.III. By [19], Lemma 8.2, we have $C^{\lambda_n}(\hat f_n^{\lambda_n}) - C^{\lambda_n}(\bar f_{\lambda_n}) \le 2\sup_{f\in\mathcal{F}} |C^{\lambda_n}(f) - C_n^{\lambda_n}(f)|$. By Lemma 2, $\sup_{f\in\mathcal{F}} |C^{\lambda_n}(f) - C_n^{\lambda_n}(f)| \to 0$ with probability 1 if, as $n \to \infty$, $\lambda_n\varphi'(\lambda_n) n^{(\alpha+b-1)/2} \to 0$ and $b > 1/(1+r_\beta)$. Hence if Assumption 1.IV holds, $C(\lambda_n f_n) \to C^*$ with probability 1. By [4], Lemma 5, the theorem follows.

References
[1] Schapire, R.E.: The boosting approach to machine learning: An overview. In Proc. of the MSRI Workshop on Nonlinear Estimation and Classification (2002)
[2] Friedman, J., Hastie, T., Tibshirani, R.: Additive logistic regression: A statistical view of boosting. Ann. Statist. 28 (2000) 337–374
[3] Jiang, W.: Does boosting overfit: Views from an exact solution. Technical Report 00-03, Department of Statistics, Northwestern University (2000)
[4] Lugosi, G., Vayatis, N.: On the Bayes-risk consistency of regularized boosting methods. Ann. Statist. 32 (2004) 30–55
[5] Zhang, T.: Statistical behavior and consistency of classification methods based on convex risk minimization. Ann. Statist. 32 (2004) 56–85
[6] Györfi, L., Härdle, W., Sarda, P., Vieu, P.: Nonparametric Curve Estimation from Time Series. Lecture Notes in Statistics. Springer-Verlag, Berlin (1989)
[7] Irle, A.: On the consistency in nonparametric estimation under mixing assumptions. J. Multivariate Anal. 60 (1997) 123–147
[8] Meir, R.: Nonparametric time series prediction through adaptive model selection.
Machine Learning 39 (2000) 5–34
[9] Modha, D., Masry, E.: Memory-universal prediction of stationary random processes. IEEE Trans. Inform. Theory 44 (1998) 117–133
[10] Roussas, G.G.: Nonparametric estimation in mixing sequences of random variables. J. Statist. Plan. Inference 18 (1988) 135–149
[11] Vidyasagar, M.: A Theory of Learning and Generalization: With Applications to Neural Networks and Control Systems. Second edition. Springer-Verlag, London (2002)
[12] Yu, B.: Density estimation in the L∞ norm for dependent data with applications. Ann. Statist. 21 (1993) 711–735
[13] Doukhan, P.: Mixing: Properties and Examples. Springer-Verlag, New York (1995)
[14] Yu, B.: Some Results on Empirical Processes and Stochastic Complexity. Ph.D. thesis, Dept. of Statistics, U.C. Berkeley (1990)
[15] Yu, B.: Rates of convergence for empirical processes of stationary mixing sequences. Ann. Probab. 22 (1994) 94–116
[16] Ledoux, M., Talagrand, M.: Probability in Banach Spaces. Springer, New York (1991)
[17] Meir, R., Zhang, T.: Generalization error bounds for Bayesian mixture algorithms. J. Machine Learning Research (2003)
[18] van der Vaart, A.W., Wellner, J.A.: Weak Convergence and Empirical Processes. Springer Series in Statistics. Springer-Verlag, New York (1996)
[19] Devroye, L., Györfi, L., Lugosi, G.: A Probabilistic Theory of Pattern Recognition. Springer, New York (1996)
| 2005 | 189 |
| 2,813 |
Non-Local Manifold Parzen Windows

Yoshua Bengio, Hugo Larochelle and Pascal Vincent
Dept. IRO, Université de Montréal, P.O. Box 6128, Downtown Branch, Montreal, H3C 3J7, Qc, Canada
{bengioy,larocheh,vincentp}@iro.umontreal.ca

Abstract

To escape from the curse of dimensionality, we claim that one can learn non-local functions, in the sense that the value and shape of the learned function at x must be inferred using examples that may be far from x. With this objective, we present a non-local non-parametric density estimator. It builds upon previously proposed Gaussian mixture models with regularized covariance matrices to take into account the local shape of the manifold. It also builds upon recent work on non-local estimators of the tangent plane of a manifold, which are able to generalize in places with little training data, unlike traditional, local, non-parametric models.

1 Introduction

A central objective of statistical machine learning is to discover structure in the joint distribution between random variables, so as to be able to make predictions about new combinations of values of these variables. A central issue in obtaining generalization is how information from the training examples can be used to make predictions about new examples and, without strong prior assumptions (i.e. in non-parametric models), this may be fundamentally difficult, as illustrated by the curse of dimensionality. (Bengio, Delalleau and Le Roux, 2005) and (Bengio and Monperrus, 2005) present several arguments illustrating some fundamental limitations of modern kernel methods due to the curse of dimensionality, when the kernel is local (like the Gaussian kernel). These arguments are all based on the locality of the estimators, i.e., that very important information about the predicted function at $x$ is derived mostly from the near neighbors of $x$ in the training set.
This analysis has been applied to supervised learning algorithms such as SVMs as well as to unsupervised manifold learning algorithms and graph-based semi-supervised learning. The analysis in (Bengio, Delalleau and Le Roux, 2005) highlights intrinsic limitations of such local learning algorithms, which can make them fail when applied to problems where one has to look beyond what happens locally in order to overcome the curse of dimensionality, or more precisely when the function to be learned has many variations while there exist more compact representations of these variations than a simple enumeration. This strongly suggests investigating non-local learning methods, which can in principle generalize at $x$ using information gathered at training points $x_i$ that are far from $x$. We present here such a non-local learning algorithm, in the realm of density estimation. The proposed non-local non-parametric density estimator builds upon the Manifold Parzen density estimator (Vincent and Bengio, 2003), which associates a regularized Gaussian with each training point, and upon recent work on non-local estimators of the tangent plane of a manifold (Bengio and Monperrus, 2005). The local covariance matrix characterizing the density in the immediate neighborhood of a data point is learned as a function of that data point, with global parameters. This makes it possible to generalize in places with little or no training data, unlike traditional, local, non-parametric models. Here, the implicit assumption is that there is some kind of regularity in the shape of the density, such that learning about its shape in one region could be informative of the shape in another region that is not adjacent.
Note that the smoothness assumption typically underlying non-parametric models relies on a simple form of such transfer, but only for neighboring regions, which is not very helpful when the intrinsic dimension of the data (the dimension of the manifold on which or near which it lives) is high or when the underlying density function has many variations (Bengio, Delalleau and Le Roux, 2005). The proposed model is also related to the Neighborhood Component Analysis algorithm (Goldberger et al., 2005), which learns a global covariance matrix for use in the Mahalanobis distance within a non-parametric classifier. Here we generalize this global matrix to one that is a function of the datum x. 2 Manifold Parzen Windows In the Parzen Windows estimator, one puts a spherical (isotropic) Gaussian around each training point xi, with a single shared variance hyper-parameter. One approach to improve on this estimator, introduced in (Vincent and Bengio, 2003), is to use not just the presence of xi and its neighbors but also their geometry, trying to infer the principal characteristics of the local shape of the manifold (where the density concentrates), which can be summarized in the covariance matrix of the Gaussian, as illustrated in Figure 1. If the data concentrates in certain directions around xi, we want that covariance matrix to be “flat” (near zero variance) in the orthogonal directions. One way to achieve this is to parametrize each of these covariance matrices in terms of “principal directions” (which correspond to the tangent vectors of the manifold, if the data concentrates on a manifold). In this way we do not need to specify individually all the entries of the covariance matrix. The only required assumption is that the “noise directions” orthogonal to the “principal directions” all have the same variance. 
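The "principal directions plus isotropic noise" covariance just described admits a cheap closed-form log-density: splitting the residual into its components along the principal directions and the orthogonal complement avoids forming or inverting the full D×D matrix. The sketch below is illustrative (names and the 3-D example are hypothetical); it assumes the principal directions are orthonormal.

```python
import numpy as np

def structured_gaussian_logpdf(y, center, v, s2, sigma2_noise):
    """log N(y; center, S) with S = sigma2_noise * I + sum_j s2[j] v[j] v[j]^T,
    where the rows of v are orthonormal principal directions."""
    D = len(y)
    r = y - center
    proj = v @ r                       # coordinates of r along each v_j
    var_j = s2 + sigma2_noise          # variance along v_j
    # Mahalanobis term: span of the v_j's plus the orthogonal complement
    quad = (r @ r - proj @ proj) / sigma2_noise + np.sum(proj**2 / var_j)
    # log-determinant: (D - d) noise directions plus d principal directions
    logdet = (D - len(s2)) * np.log(sigma2_noise) + np.sum(np.log(var_j))
    return -0.5 * (D * np.log(2 * np.pi) + logdet + quad)

# hypothetical 3-D example: one principal (tangent) direction, flat elsewhere
x = np.zeros(3)
mu = np.array([0.1, 0.0, 0.0])         # center offset, zero in Manifold Parzen
v = np.array([[1.0, 0.0, 0.0]])        # d = 1 orthonormal tangent direction
s2 = np.array([4.0])
logp = structured_gaussian_logpdf(np.array([0.5, 0.2, 0.0]), x + mu, v, s2, 0.01)
```

Both the log-determinant and the quadratic form cost O(dD) here, instead of the O(D³) a dense covariance would require.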
$$\hat p(y) = \frac{1}{n}\sum_{i=1}^n N(y;\ x_i + \mu(x_i),\ S(x_i)) \quad (1)$$
where $N(y; x_i + \mu(x_i), S(x_i))$ is a Gaussian density at $y$, with mean vector $x_i + \mu(x_i)$ and covariance matrix $S(x_i)$ represented compactly by
$$S(x_i) = \sigma^2_{noise}(x_i)\, I + \sum_{j=1}^d s_j^2(x_i)\, v_j(x_i) v_j(x_i)' \quad (2)$$
where $s_j^2(x_i)$ and $\sigma^2_{noise}(x_i)$ are scalars, and $v_j(x_i)$ denotes a "principal" direction with variance $s_j^2(x_i) + \sigma^2_{noise}(x_i)$, while $\sigma^2_{noise}(x_i)$ is the noise variance (the variance in all the other directions); $v_j(x_i)'$ denotes the transpose of $v_j(x_i)$. In (Vincent and Bengio, 2003), $\mu(x_i) = 0$, and $\sigma^2_{noise}(x_i) = \sigma_0^2$ is a global hyper-parameter, while $(\lambda_j(x_i), v_j(x_i)) = (s_j^2(x_i) + \sigma^2_{noise}(x_i), v_j(x_i))$ are the leading (eigenvalue, eigenvector) pairs from the eigen-decomposition of a locally weighted covariance matrix (e.g. the empirical covariance of the vectors $x_l - x_i$, with $x_l$ a near neighbor of $x_i$). The "noise level" hyper-parameter $\sigma_0^2$ must be chosen such that the principal eigenvalues are all greater than $\sigma_0^2$. Another hyper-parameter is the number $d$ of principal components to keep. Alternatively, one can choose $\sigma^2_{noise}(x_i)$ to be the $(d+1)$-th eigenvalue, which guarantees that $\lambda_j(x_i) > \sigma^2_{noise}(x_i)$ and gets rid of a hyper-parameter. This very simple model was found to be consistently better than the ordinary Parzen density estimator in numerical experiments in which all hyper-parameters are chosen by cross-validation.

3 Non-Local Manifold Tangent Learning

In (Bengio and Monperrus, 2005) a manifold learning algorithm was introduced in which the tangent plane of a $d$-dimensional manifold at $x$ is learned as a function of $x \in \mathbb{R}^D$, using globally estimated parameters. The output of the predictor function $F(x)$ is a $d \times D$ matrix whose $d$ rows are the $d$ (possibly non-orthogonal) vectors that span the tangent plane. The training information about the tangent plane is obtained by considering pairs of near neighbors $x_i$ and $x_j$ in the training set.
Consider the predicted tangent plane of the manifold at $x_i$, characterized by the rows of $F(x_i)$. For a good predictor we expect the vector $(x_i - x_j)$ to be close to its projection on the tangent plane, with local coordinates $w \in \mathbb{R}^d$; $w$ can be obtained analytically by solving a linear system of dimension $d$. The training criterion chosen in (Bengio and Monperrus, 2005) then minimizes, over such pairs $(x_i, x_j)$, the sum of the squared sines of the projection angles, i.e. $\|F'(x_i)w - (x_j - x_i)\|^2 / \|x_j - x_i\|^2$. It is a heuristic criterion, which will be replaced in our new algorithm by one derived from the maximum likelihood criterion, considering that $F(x_i)$ indirectly provides the principal eigenvectors of the local covariance matrix at $x_i$. Both criteria gave similar results experimentally, but the model proposed here yields a complete density estimator. In both cases $F(x_i)$ can be interpreted as specifying the directions in which one expects to see the most variations when going from $x_i$ to one of its near neighbors in a finite sample.

Figure 1: Illustration of the local parametrization of local or Non-Local Manifold Parzen. The examples around training point $x_i$ are modeled by a Gaussian. $\mu(x_i)$ specifies the center of that Gaussian, which should be non-zero when $x_i$ is off the manifold. The $v_k$'s are principal directions of the Gaussian and are tangent vectors of the manifold. $\sigma_{noise}$ represents the thickness of the manifold. (The figure labels the tangent plane, $\mu$, $x_i$, $v_1$, $\sigma_{noise}$, and $\sqrt{s_1^2 + \sigma^2_{noise}}$.)

4 Proposed Algorithm: Non-Local Manifold Parzen Windows

In equations (1) and (2) we wrote $\mu(x_i)$ and $S(x_i)$ as if they were functions of $x_i$, rather than simply using indices $\mu_i$ and $S_i$. This is because we introduce here a non-local version of Manifold Parzen Windows, inspired by the non-local manifold tangent learning algorithm, in which we can share information about the density across different regions of space.
In our experiments we use a neural network with $n_{hid}$ hidden neurons, taking $x_i$ as input, to predict $\mu(x_i)$, $\sigma^2_{noise}(x_i)$, and the $s_j^2(x_i)$ and $v_j(x_i)$. The vectors computed by the neural network do not need to be orthonormal: we only need to consider the subspace that they span. Also, the vectors' squared norms are used to infer the $s_j^2(x_i)$, instead of having a separate output for them. We will denote by $F(x_i)$ the matrix whose rows are the vectors output by the neural network. From it we obtain the $s_j^2(x_i)$ and $v_j(x_i)$ by performing a singular value decomposition, i.e. $F'F = \sum_{j=1}^d s_j^2 v_j v_j'$. Moreover, to make sure $\sigma^2_{noise}$ does not get too small, which could make the optimization unstable, we impose $\sigma^2_{noise}(x_i) = s^2_{noise}(x_i) + \sigma_0^2$, where $s_{noise}(\cdot)$ is an output of the neural network and $\sigma_0^2$ is a fixed constant. Imagine that the data were lying near a lower-dimensional manifold, and consider a training example $x_i$ near the manifold. The Gaussian centered near $x_i$ tells us how neighbors of $x_i$ are expected to differ from $x_i$. Its "principal" vectors $v_j(x_i)$ span the tangent of the manifold near $x_i$. The Gaussian center variation $\mu(x_i)$ tells us how $x_i$ is located with respect to its projection on the manifold. The noise variance $\sigma^2_{noise}(x_i)$ tells us how far from the manifold to expect neighbors, and the directional variances $s_j^2(x_i) + \sigma^2_{noise}(x_i)$ tell us how far to expect neighbors along the different local axes of the manifold, near $x_i$'s projection on the manifold. Figure 1 illustrates this in 2 dimensions. The important element of this model is that the parameters of the predictive neural network can potentially represent non-local structure in the density, i.e., they allow the model to discover shared structure among the different covariance matrices in the mixture.
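The SVD step just described (turning the possibly non-orthogonal network outputs into orthonormal principal directions with their variances) can be sketched as follows; the function name, the example matrix, and the numeric values are hypothetical.

```python
import numpy as np

def directions_from_F(F, s_noise_out, sigma2_0):
    """Recover s2_j and orthonormal v_j from the d x D matrix F output by the
    network, via SVD (so that F'F = sum_j s2_j v_j v_j'), together with the
    regularized noise variance sigma2_noise = s_noise^2 + sigma2_0."""
    # rows of F need not be orthonormal; the SVD orthogonalizes their span
    _, sing, Vt = np.linalg.svd(F, full_matrices=False)
    s2 = sing**2                    # squared singular values give the s2_j
    v = Vt                          # rows are orthonormal principal directions
    sigma2_noise = s_noise_out**2 + sigma2_0   # bounded below by sigma2_0
    return s2, v, sigma2_noise

F = np.array([[2.0, 0.0, 0.0],
              [1.0, 1.0, 0.0]])    # d = 2 non-orthogonal rows in D = 3
s2, v, sig2 = directions_from_F(F, s_noise_out=0.1, sigma2_0=0.05)
```

Only the subspace spanned by the rows of F matters, which is why the network is free to output non-orthogonal vectors and let the SVD sort out directions and scales.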
Here is the pseudo code algorithm for training Non-Local Manifold Parzen (NLMP):

Algorithm NLMP::Train(X, d, k, kµ, µ(·), S(·), σ²_0)
Input: training set X, chosen number of principal directions d, chosen numbers of neighbors k and kµ, initial functions µ(·) and S(·), and regularization hyper-parameter σ²_0.
(1) For xi ∈ X
(2) Collect the max(k, kµ) nearest neighbors of xi. Below, call yj one of the k nearest neighbors and y^µ_j one of the kµ nearest neighbors.
(3) Perform a stochastic gradient step on the parameters of S(·) and µ(·), using the negative log-likelihood error signal on the yj, with a Gaussian of mean xi + µ(xi) and covariance matrix S(xi). The approximate gradients are:

∂C(y^µ_j, xi) / ∂µ(xi) = −(1 / n_kµ(y^µ_j)) S(xi)^{-1} (y^µ_j − xi − µ(xi))
∂C(yj, xi) / ∂σ²_noise(xi) = (0.5 / n_k(yj)) ( Tr(S(xi)^{-1}) − ||(yj − xi − µ(xi))' S(xi)^{-1}||² )
∂C(yj, xi) / ∂F(xi) = (1 / n_k(yj)) F(xi) S(xi)^{-1} ( I − (yj − xi − µ(xi))(yj − xi − µ(xi))' S(xi)^{-1} )

where n_k(y) = |N_k(y)| is the number of points in the training set that have y among their k nearest neighbors.
(4) Go to (1) until a given criterion is satisfied (e.g. the average NLL of the NLMP density estimate on a validation set stops decreasing).
Result: trained µ(·) and S(·) functions, with corresponding σ²_0.

Deriving the gradient formula (the derivative of the log-likelihood with respect to the neural network outputs) is lengthy but straightforward. The main trick is to do a singular value decomposition of the basis vectors computed by the neural network, and to use known simplifying formulas for the derivative of the inverse of a matrix and of the determinant of a matrix. Details on the gradient derivation and on the optimization of the neural network are given in the technical report (Bengio and Larochelle, 2005).
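The three gradient formulas of step (3) translate directly into numpy. Below is a sketch for a single neighbor yj (the function name and argument conventions are ours; S_inv stands for S(xi)^{-1} and nk for |N_k(yj)|):

```python
import numpy as np

def nlmp_gradients(xi, yj, mu, F, S_inv, nk):
    """Approximate NLMP gradients for one neighbor yj of xi (step (3)).

    Returns the gradients with respect to mu(xi), sigma^2_noise(xi) and
    F(xi), matching the three formulas in the pseudo code.
    """
    r = yj - xi - mu                           # residual under the local Gaussian
    d_mu = -(1.0 / nk) * S_inv @ r
    d_sig2 = (0.5 / nk) * (np.trace(S_inv) - np.dot(S_inv @ r, S_inv @ r))
    d_F = (1.0 / nk) * F @ S_inv @ (np.eye(len(xi)) - np.outer(r, r) @ S_inv)
    return d_mu, d_sig2, d_F
```

A stochastic gradient step would then move the network parameters in the direction that reduces each of these, via backpropagation through the outputs µ, s_noise and F.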
5 Computationally Efficient Extension: Test-Centric NLMP

While the NLMP algorithm appears to perform very well, one of its main practical limitations for density estimation, which it shares with Manifold Parzen, is the large amount of computation required at test time: for each test point x, the complexity of the computation is O(n·d·D) (where D is the dimensionality of the input space R^D). However there may be a different and cheaper way to compute an estimate of the density at x. We build here on an idea suggested in (Vincent, 2003), which yields an estimator that does not exactly integrate to one, but this is not an issue if the estimator is to be used for applications such as classification. Note that in our presentation of NLMP we use "hard" neighborhoods (i.e. a local weighting kernel that assigns a weight of 1 to the k nearest neighbors and 0 to the rest), but it could easily be generalized to "soft" weighting, as in (Vincent, 2003). Let us decompose the true density at x as p(x) = p(x | x ∈ B_k(x)) P(B_k(x)), where B_k(x) represents the spherical ball centered on x and containing the k nearest neighbors of x (i.e., the ball with radius ||x − N_k(x)||, where N_k(x) is the k-th neighbor of x in the training set). It can be shown that the above NLMP learning procedure looks for functions µ(·) and S(·) that best characterize the distribution of the k training-set nearest neighbors of x as the normal N(·; x + µ(x), S(x)). If we trust this locally normal (unimodal) approximation of the neighborhood distribution to be appropriate, then we can approximate p(x | x ∈ B_k(x)) by N(x; x + µ(x), S(x)). The approximation should be good when B_k(x) is small and p(x) is continuous. Moreover, since B_k(x) contains k points among n, we can approximate P(B_k(x)) by k/n. This yields the estimator p̂(x) = N(x; x + µ(x), S(x)) · k/n, which requires only O(d·D) time to evaluate at a test point.
We call this estimator Test-centric NLMP, since it considers only the Gaussian predicted at the test point, rather than a mixture of all the Gaussians obtained at the training points.

6 Experimental Results

We have performed comparative experiments on both toy and real-world data, on density estimation and classification tasks. All hyper-parameters are selected by cross-validation, and the cost on a large test set is used to compare the final performance of all algorithms.

Experiments on toy 2D data. To understand and validate the non-local algorithm we tested it on toy 2D data where it is easy to understand what is being learned. The sinus data set includes examples sampled around a sine curve. In the spiral data set examples are sampled near a spiral. Respectively, 57 and 113 examples are used for training, 23 and 48 for validation (hyper-parameter selection), and 920 and 3839 for testing. The following algorithms were compared:
• Non-Local Manifold Parzen Windows. The hyper-parameters are the number of principal directions (i.e., the dimension of the manifold), the numbers of nearest neighbors k and kµ, the minimum constant noise variance σ²_0 and the number of hidden units of the neural network.
• Gaussian mixture with full but regularized covariance matrices. Regularization is done by setting a minimum constant value σ²_0 for the eigenvalues of the Gaussians. It is trained by EM and initialized using the k-means algorithm. The hyper-parameter is σ²_0, and early stopping of the EM iterations is done with the validation set.
• Parzen Windows density estimator, with a spherical Gaussian kernel. The hyper-parameter is the spread of the Gaussian kernel.
• Manifold Parzen density estimator. The hyper-parameters are the number of principal components, the number of neighbors k of the nearest neighbor kernel, and the minimum eigenvalue σ²_0.
Note that, for these experiments, the number of principal directions (or components) was fixed to 1 for both NLMP and Manifold Parzen.
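As a concrete reference point, the simplest of the baselines above, the spherical-Gaussian Parzen Windows estimator, can be sketched as (a minimal numpy sketch; the function name is ours):

```python
import numpy as np

def parzen_density(x, train, sigma):
    """Spherical-Gaussian Parzen Windows estimate at x.

    p(x) = (1/n) sum_i N(x; x_i, sigma^2 I); the spread sigma is the
    single hyper-parameter, tuned on the validation set.
    """
    D = train.shape[1]
    sq = np.sum((train - x) ** 2, axis=1)          # squared distances to all x_i
    norm = (2.0 * np.pi * sigma ** 2) ** (D / 2.0)  # Gaussian normalizer
    return np.mean(np.exp(-0.5 * sq / sigma ** 2)) / norm
```

Note the contrast with NLMP: the kernel here is isotropic and identical at every training point, so no manifold structure can be represented.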
Density estimation results are shown in Table 1. To help understand why Non-Local Manifold Parzen works well on these data, Figure 2 illustrates the learned densities for the sinus and spiral data. Basically, it works better here because it yields an estimator that is less sensitive to the specific samples around each test point, thanks to its ability to share structure across the whole training set.

Algorithm          sinus    spiral
Non-Local MP       1.144   -1.346
Manifold Parzen    1.345   -0.914
Gauss Mix Full     1.567   -0.857
Parzen Windows     1.841   -0.487

Table 1: Average out-of-sample negative log-likelihood on two toy problems, for Non-Local Manifold Parzen, a Gaussian mixture with full covariance, Manifold Parzen, and Parzen Windows. The non-local algorithm dominates all the others.

Algorithm          Valid.    Test
Non-Local MP       -73.10   -76.03
Manifold Parzen     65.21    58.33
Parzen Windows      77.87    65.94

Table 2: Average negative log-likelihood on the digit rotation experiment, when testing on a digit class (1's) not used during training, for Non-Local Manifold Parzen, Manifold Parzen, and Parzen Windows. The non-local algorithm is clearly superior.

Figure 2: Illustration of the learned densities (sinus on top, spiral on bottom) for four compared models. From left to right: Non-Local Manifold Parzen, Gaussian mixture, Parzen Windows, Manifold Parzen. Parzen Windows wastes probability mass in the spheres around each point, while leaving many holes. Gaussian mixtures tend to choose too few components to avoid overfitting. Non-Local Manifold Parzen exploits global structure to yield the best estimator.

Experiments on rotated digits. The next experiment is meant to show both qualitatively and quantitatively the power of non-local learning, by using 9 classes of rotated digit images (from the 729 first examples of the USPS training set) to learn about the rotation manifold and testing on the left-out class (digit 1), not used for training.
Each training digit was rotated by 0.1 and 0.2 radians, and all these images were used as training data. We used NLMP for training, and for testing we formed an augmented mixture with Gaussians centered not only on the training examples, but also on the original unrotated 1 digits. We tested our estimator on the rotated versions of each of the 1 digits. We compared this to Manifold Parzen trained on the training data containing both the original and rotated images of the training class digits and the unrotated 1 digits. The objective of the experiment was to see if the model was able to infer the density correctly around the original unrotated images, i.e., to predict a high probability for the rotated versions of these images. In table 2 we see quantitatively that the non-local estimator predicts the rotated images much better. As qualitative evidence, we used small steps in the principal direction predicted by Test-centric NLMP to rotate an image of the digit 1. To make this task even more illustrative of the generalization potential of non-local learning, we followed the tangent in the direction opposite to the rotations of the training set. It can be seen in figure 3 that the rotated digit obtained is quite similar to the same digit analytically rotated. For comparison, we tried to apply the same rotation technique to that digit, but by using the principal direction, computed by Manifold Parzen, of its nearest neighbor's Gaussian component in the training set. This clearly did not work, and hence shows how crucial non-local learning is for this task.

Figure 3: From left to right: original image of a digit 1; rotated analytically by −0.2 radians; rotation predicted using Non-Local MP; rotation predicted using MP. Rotations are obtained by following the tangent vector in small steps.
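The tangent-following procedure, repeated small steps along the predicted principal direction, can be sketched generically (a minimal numpy sketch; the function name and step parameters are ours, and tangent_fn stands for the trained predictor's principal-direction output):

```python
import numpy as np

def follow_tangent(x, tangent_fn, step=0.01, n_steps=20):
    """Transform a point by walking along the learned manifold tangent.

    tangent_fn(x) returns the principal direction predicted at x; repeated
    small steps approximate the transformation (here, rotation) that the
    model learned. A negative step walks the manifold in the direction
    opposite to the training-set rotations.
    """
    x = x.copy()
    for _ in range(n_steps):
        v = tangent_fn(x)
        x += step * v / np.linalg.norm(v)   # unit step along the tangent
    return x
```

As a sanity check, with the analytic tangent field of a circle the walk stays on the circle to first order, which is exactly the behavior one wants from the learned tangents on the rotation manifold.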
In this experiment, to make sure that NLMP focuses on the tangent plane of the rotation manifold, we fixed the number of principal directions d = 1 and the number of nearest neighbors k = 1, and also imposed µ(·) = 0. The same was done for Manifold Parzen.

Experiments on Classification by Density Estimation. The USPS data set was used to perform a classification experiment. The original training set (7291 examples) was split into a training set (first 6291) and a validation set (last 1000), used to tune hyper-parameters. One density estimator is trained for each of the 10 digit classes. For comparison we also show the results obtained with a Gaussian kernel Support Vector Machine (already used in (Vincent and Bengio, 2003)). Non-local MP* refers to the variation described in (Bengio and Larochelle, 2005), which attempts to train the components with larger variance faster. The t-test statistic for the null hypothesis of no difference in the average classification error on the test set of 2007 examples between Non-local MP and the strongest competitor (Manifold Parzen) is shown in parentheses. Figure 4 also shows some of the invariant transformations learned by Non-local MP for this task. Note that better SVM results (about 3% error) can be obtained using prior knowledge about image invariances, e.g. with virtual support vectors (Decoste and Scholkopf, 2002). However, as far as we know the NLMP performance is the best on the original USPS dataset among algorithms that do not use prior knowledge about images.

Algorithm         Valid.   Test              Hyper-Parameters
SVM               1.2%     4.68%             C = 100, σ = 8
Parzen Windows    1.8%     5.08%             σ = 0.8
Manifold Parzen   0.9%     4.08%             d = 11, k = 11, σ²_0 = 0.1
Non-local MP      0.6%     3.64% (−1.5218)   d = 7, k = 10, kµ = 10, σ²_0 = 0.05, n_hid = 70
Non-local MP*     0.6%     3.54% (−1.9771)   d = 7, k = 10, kµ = 4, σ²_0 = 0.05, n_hid = 30

Table 3: Classification error obtained on USPS with SVM, Parzen Windows, and Local and Non-Local Manifold Parzen Windows classifiers.
The hyper-parameters shown are those selected with the validation set.

7 Conclusion

We have proposed a non-parametric density estimator that, unlike its predecessors, is able to generalize far from the training examples by capturing global structural features of the density. It does so by learning a function with global parameters that successfully predicts the local shape of the density, i.e., the tangent plane of the manifold along which the density concentrates. Three types of experiments showed that this idea works, yielding improved density estimation and reduced classification error compared to its local predecessors.

Figure 4: Transformations learned by Non-local MP. The top row shows digits taken from the USPS training set, and the two following rows display the results of steps taken along one of the 7 principal directions learned by Non-local MP, the third row corresponding to more steps than the second one.

Acknowledgments

The authors would like to thank the following funding organizations for support: NSERC, MITACS, and the Canada Research Chairs. The authors are also grateful for the feedback and stimulating exchanges, which helped to shape this paper, with Sam Roweis and Olivier Delalleau.

References

Bengio, Y., Delalleau, O., and Le Roux, N. (2005). The curse of dimensionality for local kernel machines. Technical Report 1258, Département d'informatique et recherche opérationnelle, Université de Montréal.

Bengio, Y. and Larochelle, H. (2005). Non-local manifold parzen windows. Technical report, Département d'informatique et recherche opérationnelle, Université de Montréal.

Bengio, Y. and Monperrus, M. (2005). Non-local manifold tangent learning. In Saul, L., Weiss, Y., and Bottou, L., editors, Advances in Neural Information Processing Systems 17. MIT Press.

Decoste, D. and Scholkopf, B. (2002). Training invariant support vector machines. Machine Learning, 46:161–190.

Goldberger, J., Roweis, S., Hinton, G., and Salakhutdinov, R. (2005).
Neighbourhood component analysis. In Saul, L., Weiss, Y., and Bottou, L., editors, Advances in Neural Information Processing Systems 17. MIT Press.

Vincent, P. (2003). Modèles à Noyaux à Structure Locale. PhD thesis, Université de Montréal, Département d'informatique et recherche opérationnelle, Montreal, Qc., Canada.

Vincent, P. and Bengio, Y. (2003). Manifold parzen windows. In Becker, S., Thrun, S., and Obermayer, K., editors, Advances in Neural Information Processing Systems 15, Cambridge, MA. MIT Press.
| 2005 | 19 |
2,814 |
A Cortically-Plausible Inverse Problem Solving Method Applied to Recognizing Static and Kinematic 3D Objects

David W. Arathorn
Center for Computational Biology, Montana State University, Bozeman, MT 59717
dwa@cns.montana.edu
General Intelligence Corporation
dwa@giclab.com

Abstract

Recent neurophysiological evidence suggests the ability to interpret biological motion is facilitated by a neuronal "mirror system" which maps visual inputs to the pre-motor cortex. If the common architecture and circuitry of the cortices is taken to imply a common computation across multiple perceptual and cognitive modalities, this visual-motor interaction might be expected to have a unified computational basis. Two essential tasks underlying such visual-motor cooperation are shown here to be simply expressed and directly solved as transformation-discovery inverse problems: (a) discriminating and determining the pose of a primed 3D object in a real-world scene, and (b) interpreting the 3D configuration of an articulated kinematic object in an image. The recently developed map-seeking method provides a mathematically tractable, cortically-plausible solution to these and a variety of other inverse problems which can be posed as the discovery of a composition of transformations between two patterns. The method relies on an ordering property of superpositions and on decomposition of the transformation spaces inherent in the generating processes of the problem.

1 Introduction

A variety of "brain tasks" can be tersely posed as transformation-discovery problems. Vision is replete with such problems, as is limb control. The problem of recognizing the 2D projection of a known 3D object is an inverse problem of finding both the visual and pose transformations relating the image and the 3D model of the object.
When the object in the image may be one of many known objects, another step is added to the inverse problem, because there are multiple candidates, each of which must be mapped to the input image with possibly different transformations. When the known object is not rigid, the determination of articulations and/or morphings is added to the inverse problem. This includes the general problem of recognition of biological articulation and motion, a task recently attributed to a neuronal mirror-system linking visual and motor cortical areas [1]. Though the aggregate transformation space implicit in such problems is vast, a recently developed method for exploring vast transformation spaces has allowed some significant progress with a simple unified approach. The map-seeking method [2,4] is a general purpose mathematical procedure for finding the decomposition of the aggregate transformation between two patterns, even when that aggregate transformation space is vast and no prior information is available to restrict the search space. The problem of concurrently searching a large collection of memories can be treated as a subset of the transformation problem, and consequently the same method can be applied to find the best transformation between an input image and a collection of memories (numbering at least thousands in practice to date) during a single convergence. In the last several years the map-seeking method has been applied to a variety of practical problems, most of them related to vision, a few related to kinematics, and some which do not correspond to usual categories of "brain functions." The generality of the method is due to the fact that only the mappings are specialized to the task. The mathematics of the search, whether expressed in an algorithm or in a neuronal or electronic circuit, do not change.
From an evolutionary biological point of view this is a satisfying characteristic for a model of cortical function, because only the connectivity which implements the mappings must be varied to specialize a cortex to a task. All the rest of the organization and dynamics would remain the same across cortical areas.

Figure 1. Data flow in map-seeking circuit.

Cortical neuroanatomy offers emphatic hints about the characteristics of its solution in the vast neuronal resources allocated to creating reciprocal top-down and bottom-up pathways. More specifically, recent evidence suggests this reciprocal pathway architecture appears to be organized with reciprocal, co-centered fan-outs in the opposing directions [3], quite possibly implementing inverse mappings. The data flow of map-seeking computations, seen in Figure 1, is architecturally compatible with these features of cortical organization. Though not within the scope of this discussion, it has been demonstrated [4] that the mathematical expression of the map-seeking method, seen in equations 6-9 below, has an isomorphic implementation in neuronal circuitry with reasonably realistic dendritic architecture and dynamics (e.g. compatible with [5]) and oscillatory dynamics.

2 The basis for tractable transformation-discovery

The related problems of recognition/interpretation of 2D images of static and articulated kinematic 3D objects illustrate how cleanly significant vision problems may be posed and solved as transformation-discovery inverse problems. The visual and pose (in the sense of orientation) transformations, t^visual and t^pose, between a given 3D model m1 and the extent of an input image containing a 2D projection P(o1) of an object o1 mappable to m1 can be expressed

P(o1) = t^visual ∘ t^pose (m1),   t^visual ∈ T^visual, t^pose ∈ T^pose   (eq. 1)
If we now consider that the model m1 may be constructed by a one-to-many mapping of a base vector or feature e, and that arbitrary other models mj may be similarly constructed by different mappings, then the transformation t^formation corresponding to the correct "memory" converts the memory database search problem into another transformation-discovery problem with one more composed transformation:

P(o1) = t^visual ∘ t^pose ∘ t^formation (e),   t^formation ∈ T^formation,  t^formation(e) = ml ∈ M   (eq. 2)

Finally, if we allow a morphable object to be "constructed" by a generative model, whose various configurations or articulations may be generated by a composition of transformations t^generative of some root or seed feature e, the problem of explicitly recognizing the particular configuration of a morph becomes a transformation-discovery problem of the form

P(C(o)) = t^visual ∘ t^pose ∘ t^generative (e),   t^generative ∈ T^generative   (eq. 3)

These unifying formulations are only useful, however, if there is a tractable method of solving for the various transformations. That is what the map-seeking method provides. Abstractly the problem is the discovery of a composition of transformations between two patterns. In general the transformations express the generating process of the problem. Define the correspondence c between vectors r and w through a composition of L transformations t^1_{j1}, t^2_{j2}, ..., t^L_{jL}, where t^l_{jl} ∈ {t^l_1, t^l_2, ..., t^l_{nl}}:

c(j) = ⟨(∘_{l=1}^L t^l_{jl})(r), w⟩   (eq. 4)

where the composition operator is defined (∘_{l=1}^L t^l_{jl})(r) = t^L_{jL} ∘ t^{L-1}_{jL-1} ∘ ··· ∘ t^1_{j1}(r).

¹ This illustrates that forming a superposition of memories is equivalent to forming superpositions of transformations. The first is a more practical realization, as seen in Figure 1. Though not demonstrated in this paper, the multi-memory architecture has proved robust with 1000 or more memory patterns from real-world datasets.
Let C be an L-dimensional matrix of values of c(j), whose dimensions are n1, ..., nL. The problem, then, is to find

x = argmax_j c(j)   (eq. 5)

The indices x specify the sequence of transformations that best establish the correspondence between vectors r and w. The problem is that C is too large a space to search for x by conventional means. Instead, a continuous embedding of C permits a search with resources proportional to the sum of the sizes of the dimensions of C instead of their product. C is embedded in a superposition dot product space Q, defined in eq. 6, where G = [g^m_{xm}], m = 1···L, xm = 1···nm; nm is the number of t in layer m, g^m_{xm} ∈ [0,1], and t'^m is the adjoint of t^m. In Q space, the solution to eq. 5 lies along a single axis in the set of axes represented by each row of G. That is, g^m = ⟨0, ..., um, ..., 0⟩ with um > 0, which corresponds to the best fitting transformation t_{xm}, where xm is the m-th index in x in eq. 5. This state is reached from an initial state G = [1] by a process termed superposition culling, in which the components of grad Q are used to compute a path in steps Δg (eqs. 7, 8). The function f preserves the maximal component and reduces the others: in neuronal terms, lateral inhibition. The resulting path along the surface Q can be thought of as a "high traverse", in contrast to the gradient ascent or descent usual in optimization methods. The price for moving the problem into superposition dot product space is that collusions of components of the superpositions can result in better matches for incorrect mappings than for the mappings of the correct solution. If this occurs it is almost always a temporary state early in the convergence. This is a consequence of the ordering property of superpositions (OPS) [2,4], which, as applied here, describes the characteristics of the surface Q.
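The intractability of eq. 5 is easy to make concrete: evaluating c(j) for one index tuple is cheap, but the argmax ranges over the product of the layer sizes. A minimal numpy illustration (function names are ours; the transformations are arbitrary callables standing in for the t^l_j):

```python
import numpy as np
from itertools import product

def correspondence(r, w, transforms):
    """c(j) of eq. 4 for one index tuple j: apply the selected
    transformation from each layer in turn to r, then dot with w."""
    x = r
    for t in transforms:            # t^1_{j1}, then t^2_{j2}, ..., t^L_{jL}
        x = t(x)
    return float(np.dot(x, w))

def brute_force_best(r, w, layers):
    """Naive eq. 5: argmax of c(j) over the full product space C.

    The cost is the *product* of the layer sizes -- exactly what the
    superposition embedding reduces to their *sum*."""
    candidates = product(*[range(len(layer)) for layer in layers])
    return max(candidates,
               key=lambda j: correspondence(
                   r, w, [layers[l][jl] for l, jl in enumerate(j)]))
```

With two layers of sizes n1 and n2 this already costs n1·n2 evaluations; the map-seeking circuit instead maintains one gated superposition per layer.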
For example, let three superpositions r = Σ_{i=1}^n ui, s = Σ_{j=1}^m vj and s' = Σ_{k=1}^m v'k be formed from three sets of sparse vectors ui ∈ R, vj ∈ S and v'k ∈ S', where R ∩ S = ∅ and R ∩ S' = vq. Then the following relationship expresses the OPS: define P_correct = p(r • s' > r • s) and P_incorrect = p(r • s' ≤ r • s); then P_correct > P_incorrect, i.e. P_correct > 0.5, and as n, m → 1, P_correct → 1.0. Applied to eq. 8, this means that for superpositions composed of vectors which satisfy the distribution properties of sparse, decorrelating encodings² (a biologically plausible assumption [6]), the probability of the maximum components of grad Q moving the solution in the correct direction is always greater than 0.5 and increases toward 1.0 as G becomes sparser. In other words, the probability of the occurrence of collusion decreases with the decrease in the number of contributing components in the superposition(s), and/or the decrease in their gating coefficients.

3 The map-seeking method and application

A map-seeking circuit (MSC) is composed of several transformation or mapping layers between the input at one end and a memory layer at the other, as seen in Figure 1. The compositional structure is evident in the simplicity of the equations (eqs. 9-12 below) which define a circuit of any dimension. In a multi-layer circuit of L layers plus memory, with nl mappings in layer l, the forward path signal for layer m is computed

f^m = Σ_{j=1}^{nm} g^m_j t^m_j (f^{m-1})   for m = 1...L   (eq. 9)

The backward path signal for layer m is

b^m = Σ_{j=1}^{nm} g^m_j t'^m_j (b^{m+1})   for m = 1...L,   or   b^{L+1} = Σ_{k=1}^{nw} g^{L+1}_k wk or w   for m = L+1   (eq. 10)

The mapping coefficients g are updated by the recurrence

g^m_i := κ(g^m_i, t^m_i(f^{m-1}) • b^{m+1})   for m = 1...L, i = 1...nm   (eq. 11)
g^{L+1}_k := κ(g^{L+1}_k, f^L • wk)   for k = 1...nw (optional)

where the match operator u • v = q, with q a scalar measure of goodness-of-match between u and v, which may be non-linear. When • is a dot product, the second argument of κ is the same as ∂Q/∂g in eq. 7.
The competition function κ is a realization of the lateral inhibition function f in eq. 8. It may optionally be applied to the memory layer, as seen in eq. 11.

² A restricted case of the superposition ordering property using non-sparse representation is exploited by HRR distributed memory. See [7] for an analysis which is also applicable here.

κ(gi, qi) = max[0, gi − k (1 − qi / max_j qj)]   (eq. 12)

Thresholds are normally applied to q and g, below which they are set to zero to speed convergence. In the above, r is the input signal, t^m_i and t'^m_i are the i-th forward and backward mappings for the m-th layer, wk is the k-th memory pattern, and z(·) is a nonlinearity applied to the response of each memory. g^m is the set of mapping coefficients g^m_i for the m-th layer, each of which is associated with mapping t^m_i and is modified over time by the competition function κ(·).

Recognizing 2D projections of 3D objects under real operating conditions

Figure 2. Recognizing target among distractor vehicles. (a) M60 3D memory model; (b) source image, Fort Carson Data Set; (c) Gaussian blurred input image; (d-f) isolation of target in layer 0, iterations 1, 3, 12; (g) pose determination in final iteration, layer 4 backward - presented left-right mirrored to reflect mirroring determined in layer 3. M-60 model courtesy Colorado State University.

Real world problems of the form expressed in eq. 1 often present objects at distances or in conditions which so limit the resolution that there are no alignable features other than the shape of the object itself, which is sufficiently blurred as to prevent generating reliable edges in a feed-forward manner (e.g. Fig. 2c).
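The competition update of eq. 12 can be sketched in a few lines (a minimal numpy sketch; the inhibition constant k and the threshold value are illustrative choices, not the paper's):

```python
import numpy as np

def kappa(g, q, k=0.3, threshold=1e-6):
    """Competition function of eq. 12, applied to one layer's coefficients.

    g are the mapping coefficients, q their match scores. The mapping with
    the maximal match keeps its coefficient; weaker mappings are driven
    toward zero (lateral inhibition), and coefficients below the threshold
    are clamped to zero to speed convergence.
    """
    g_new = np.maximum(0.0, g - k * (1.0 - q / np.max(q)))
    g_new[g_new < threshold] = 0.0
    return g_new
```

Iterating this update is the superposition-culling process: after enough steps, each layer's g vector has a single surviving positive component, which reads out one index of the solution x in eq. 5.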
In the map-seeking approach, however, the top-down (in biological parlance) inverse mappings of the 3D model are used to create a set of edge hypotheses on the backward path out of layer 1 into layer 0. In layer 0 these hypotheses are used to gate the input image. As convergence proceeds, the edge hypotheses are reduced to a single edge hypothesis that best fits the grayscale input image. Figure 2 shows this process applied to one of a set of deliberately blurred images from the Fort Carson Imagery Data Set. The MSC used four layers of visual transformations: 14,400 translational, 31 rotational, 41 scaling, 481 3D projection. The MSC had no difficulty distinguishing the location and orientation of the tank, despite distractors and background clutter: in all tests in the dataset the target was correctly located. In effect, once primed with a top-down expectation, attentional behavior is an emergent property of the application of the map-seeking method to vision [8].

Adapting generative models by transformation

"The direct-matching hypothesis [of the interpretation of biological motion] holds that we understand actions when we map the visual representation of the observed action onto our motor representation of the same action." [1] This mapping, attributed to a neuronal mirror-system for which there is gathering neurobiological evidence (as reviewed in [1]), requires a mechanism for projecting between the visual space and the constrained skeletal joint parameter (kinematic) space to disambiguate the 2D projection of body structure. [4] Though this problem has been solved to various degrees by other computational methods, a review of which is beyond the scope of this discussion, to the author's knowledge none of these have biological plausibility. The present purpose is to show how simply the problem can be expressed by the generative model interpretation problem introduced in eq. 3 and solved by map-seeking circuits.
An idealized example is the problem of interpreting the shape of a featureless "snake" articulated into any configuration, as appears in Fig. 3.

Figure 3: Projection between visual and kinematic spaces with two map-seeking circuits. (a) input view, (b) top view, (c) projection of 3D occluding contours, (d,e) projections of the relationship of occluding contours to the generating spine.

The solution to this problem involves two coupled map-seeking circuits. The kinematic circuit layers model the multiple degrees of freedom (here two angles, variable length and optionally variable radius from spine to surface) of each of the connected spine segments. The other circuit determines the visual transformations, as seen in the earlier example. The surface of the articulated cylinder is mapped from an axial spine. The points where that surface is tangent to the viewpoint vectors define the occluding contours which, projected in 2D, become the object silhouette. The problem is to find the articulations, segment lengths (and optionally segment diameters) which account for the occluding contour matching the silhouette in the input image. In the MSC solution, in the initial state all possible articulations of the snake spine are superposed, and all the occluding contours from a range of viewing angles are projected into 2D. The latter superposition serves as the backward input to the visual space map-seeking circuit. Since the snake surface is determined by all of the layers of the kinematic circuit, these are projected in parallel to form the backward (biologically top-down) 2D input to the visual transformation-discovery circuit. A matching operation between the contributors to the 2D occluding contour superposition and the forward transformations of the input image modulates the gain of each mapping in the kinematic circuit via a^m_i in eqs. 13, 14 (modified from eq. 11). In eqs. 13, 14 the superscript K indicates the kinematic circuit and V the visual circuit.
g^mK_i := κ(g^mK_i, a^mK_i · t^mK_i(f^{m-1,K}) • b^{m+1,K})   for m = 1...L, i = 1...nm   (eq. 13)

a^mK_i = f^{L,V} • (t^{3D→2D} ∘ t^{surface} ∘ t^mK_i)(b^{m+1,K})   (eq. 14)

The process converges concurrently in both circuits to a solution, as seen in Figure 3. The match of the occluding contours and the input image, Figure 3a, is seen in Figures 3b,c, and its three-dimensional structure is clarified in Figure 3d. Figure 3e shows a view of the 3D structure as determined directly from the mapping parameters defining the snake "spine" after convergence.

4 Conclusion

The investigations reported here expand the envelope of vision-related problems amenable to a pure transformation-discovery approach implemented by the map-seeking method. The recognition of static 3D models, as seen in Figure 2, and other problems [9] solved by MSC have been well tested with real-world input. Numerous variants of Figure 3 have demonstrated the applicability of MSC to recognizing generative models of high dimensionality, and the principle has recently been applied successfully to real-world domains. Consequently, the research to date does suggest that a single cortical computational mechanism could span a significant range of the brain's visual and kinematic computing.

References

[1] G. Rizzolatti, L. Fogassi, V. Gallese, Neurophysiological mechanisms underlying the understanding and imitation of action, Nature Reviews Neuroscience, 2, 2001, 661-670
[2] D. Arathorn, Map-Seeking: Recognition Under Transformation Using A Superposition Ordering Property, Electronics Letters 37(3), 2001, pp 164-165
[3] A. Angelucci, B. Levitt, E. Walton, J.M. Hupe, J. Bullier, J. Lund, Circuits for Local and Global Signal Integration in Primary Visual Cortex, Journal of Neuroscience, 22(19), 2002, pp 8633-8646
[4] D. Arathorn, Map-Seeking Circuits in Visual Cognition, Palo Alto, Stanford Univ Press, 2002
[5] A. Polsky, B. Mel, J.
Schiller, Computational Subunits in Thin Dendrites of Pyramidal Cells, Nature Neuroscience, 7(6), 2004, pp. 621-627
[6] B. A. Olshausen, D. J. Field, Emergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images, Nature, 381, 1996, pp. 607-609
[7] T. Plate, Holographic Reduced Representation, CSLI Publications, Stanford, California, 2003
[8] D. Arathorn, Memory-driven visual attention: an emergent behavior of map-seeking circuits, in Neurobiology of Attention, eds. L. Itti, G. Rees, J. Tsotsos, Academic/Elsevier, 2005
[9] C. Vogel, D. Arathorn, A. Parker, and A. Roorda, Retinal motion tracking in adaptive optics scanning laser ophthalmoscopy, Proceedings of the OSA Conference on Signal Recovery and Synthesis, Charlotte, NC, June 2005
2005
Dynamic Social Network Analysis using Latent Space Models

Purnamrita Sarkar, Andrew W. Moore, Center for Automated Learning and Discovery, Carnegie Mellon University, Pittsburgh, PA 15213, (psarkar,awm)@cs.cmu.edu

Abstract

This paper explores two aspects of social network modeling. First, we generalize a successful static model of relationships into a dynamic model that accounts for friendships drifting over time. Second, we show how to make it tractable to learn such models from data, even as the number of entities n gets large. The generalized model associates each entity with a point in p-dimensional Euclidean latent space. The points can move as time progresses, but large moves in latent space are improbable. Observed links between entities are more likely if the entities are close in latent space. We show how to make such a model tractable (subquadratic in the number of entities) by the use of appropriate kernel functions for similarity in latent space; the use of low-dimensional kd-trees; a new efficient dynamic adaptation of multidimensional scaling for a first pass of approximate projection of entities into latent space; and an efficient conjugate gradient update rule for nonlinear local optimization in which amortized time per entity during an update is O(log n). We use both synthetic and real-world data on up to 11,000 entities, which indicate linear scaling in computation time and improved performance over four alternative approaches. We also illustrate the system operating on twelve years of NIPS co-publication data. We present a detailed version of this work in [1].

1 Introduction

Social network analysis is becoming increasingly important in many fields besides sociology, including intelligence analysis [2], marketing [3] and recommender systems [4]. Here we consider learning in systems in which relationships drift over time.
Consider a friendship graph in which the nodes are entities and two entities are linked if and only if they have been observed to collaborate in some way. In 2002, Raftery et al. [5] introduced a model similar to multidimensional scaling in which entities are associated with locations in p-dimensional space, and links are more likely if the entities are close in latent space. In this paper we suppose that each observed link is associated with a discrete timestep, so each timestep produces its own graph of observed links, and information is preserved between timesteps by two assumptions. First, we assume entities can move in latent space between timesteps, but large moves are improbable. Second, we make a standard Markov assumption: latent locations at time t + 1 are conditionally independent of all previous locations given the latent locations at time t, and the observed graph at time t is conditionally independent of all other positions and graphs given the locations at time t (see Figure 1). Let G_t be the graph of observed pairwise links at time t. Assuming n entities and a p-dimensional latent space, let X_t be an n × p matrix in which the ith row, called x_i, corresponds to the latent position of entity i at time t. Our conditional independence structure, familiar from HMMs and Kalman filters, is shown in Figure 1. For most of this paper we treat the problem as a tracking problem in which we estimate X_t at each timestep as a function of the current observed graph G_t and the previously estimated positions X_{t−1}. We want

X_t = argmax_X P(X | G_t, X_{t−1}) = argmax_X P(G_t | X) P(X | X_{t−1})    (1)

In Section 2 we design models of P(G_t|X_t) and P(X_t|X_{t−1}) that meet our modeling needs and have learning times that are tractable as n gets large. In Sections 3 and 4 we introduce a two-stage procedure for locally optimizing equation (1).
The first stage generalizes linear multidimensional scaling algorithms to the dynamic case while carefully maintaining the ability to computationally exploit sparsity in the graph. This gives an approximate estimate of X_t. The second stage refines this estimate using an augmented conjugate gradient approach in which gradient updates can use kd-trees over latent space to allow O(n log n) computation per step.

Figure 1: Model through time.

2 The DSNL (Dynamic Social Network in Latent space) Model

Let d_ij = |x_i − x_j| be the Euclidean distance between entities i and j in latent space at time t. For clarity we will not use a t subscript on these variables except where it is needed. We denote linkage at time t by i ∼ j, and absence of a link by i ≁ j. p(i ∼ j) denotes the probability of observing the link; we use p(i ∼ j) and p_ij interchangeably.

2.1 Observation Model

The likelihood score function P(G_t | X_t) intuitively measures how well the model explains the pairs of entities which are actually connected in the training graph, as well as those that are not. Thus it is simply

P(G_t | X_t) = ∏_{i∼j} p_ij ∏_{i≁j} (1 − p_ij)    (2)

Following [5], the link probability is a logistic function of d_ij, denoted p^L_ij:

p^L_ij = 1 / (1 + e^{d_ij − α})    (3)

where α is a constant whose significance is explained shortly. So far this model is similar to [5]. To extend it to the dynamic case, we now make two important alterations. First, we allow entities to vary their sociability: some entities participate in many links while others are in few. We give each entity a radius, which will be used as a sphere of interaction within latent space; we denote entity i's radius by r_i. We introduce the term r_ij, the maximum of the radii of i and j, to replace α in equation (3). Intuitively, an entity with higher degree will have a larger radius.
Thus we define the radius of an entity i with degree δ_i as c(δ_i + 1), so that r_ij is c · (max(δ_i, δ_j) + 1), and c will be estimated from the data. In practice, we estimate the constant c by a simple line search on the score function. The added constant 1 ensures a nonzero radius.

Figure 2: A. The actual logistic function, and our kernelized version with ρ = 0.1. B. The actual (flat, with one minimum) and the modified (steep, with two minima) constraint functions, for two dimensions, with X_t varying over a 2-d grid from (−2, −2) to (2, 2), and X_{t−1} = (1, 1).

The second alteration is to weight the link probabilities by a kernel function. We alter the simple logistic link probability p^L_ij such that two entities have a high probability of linkage only if their latent coordinates are within distance r_ij of one another. Beyond this range there is a constant noise probability ρ of linkage. Later we will need the kernelized function to be continuous and differentiable at r_ij, so we pick the biquadratic kernel

K(d_ij) = (1 − (d_ij/r_ij)²)²  when d_ij ≤ r_ij,  and 0 otherwise.    (4)

Using this function we redefine our link probability p_ij as p^L_ij K(d_ij) + ρ (1 − K(d_ij)). This is equivalent to having

p_ij = (1 / (1 + e^{d_ij − r_ij})) K(d_ij) + ρ (1 − K(d_ij))  when d_ij ≤ r_ij,  and p_ij = ρ otherwise.    (5)

We plot this function in Figure 2A.

2.2 Transition Model

The second part of the score penalizes large displacements from the previous time step. We use the most obvious Gaussian model: each coordinate of each latent position is independently subjected to a Gaussian perturbation with mean 0 and variance σ². Thus

log P(X_t | X_{t−1}) = − Σ_{i=1}^n |X_{i,t} − X_{i,t−1}|² / (2σ²) + const    (6)

3 Learning Stage One: Linear Approximation

We generalize classical multidimensional scaling (MDS) [6] to get an initial estimate of the positions in the latent space.
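Before turning to the learning stages, the observation and transition models above (eqs. 2–6) can be made concrete in a short sketch. The function names are ours, ρ = 0.1 and σ = 0.01 are illustrative defaults, and the brute-force double loop stands in for the spatial-index tricks introduced later:

```python
import numpy as np

def link_probability(d, r, rho=0.1):
    """Kernelized link probability (eqs. 3-5): logistic in the distance d,
    damped by the biquadratic kernel inside the radius r = max(r_i, r_j),
    and equal to the background noise probability rho outside it."""
    if d > r:
        return rho
    p_logistic = 1.0 / (1.0 + np.exp(d - r))       # eq. (3) with alpha -> r_ij
    K = (1.0 - (d / r) ** 2) ** 2                  # biquadratic kernel, eq. (4)
    return p_logistic * K + rho * (1.0 - K)        # eq. (5)

def log_score(X, X_prev, links, radii, rho=0.1, sigma=0.01):
    """log P(G_t|X_t) + log P(X_t|X_{t-1}) up to a constant (eqs. 2 and 6).
    links is a set of frozenset pairs observed at time t."""
    n = X.shape[0]
    ll = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(X[i] - X[j])
            p = link_probability(d, max(radii[i], radii[j]), rho)
            ll += np.log(p) if frozenset((i, j)) in links else np.log(1.0 - p)
    return ll - np.sum((X - X_prev) ** 2) / (2.0 * sigma ** 2)
```

Note that K(r_ij) = 0, so p_ij approaches ρ continuously as d_ij approaches r_ij from below, which is exactly the differentiability property the text requires.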
We begin by recapping what MDS does. It takes as input an n × n matrix of non-negative distances D, where D_ij denotes the target distance between entity i and entity j. It produces an n × p matrix X whose ith row is the position of entity i in p-dimensional latent space. MDS finds argmin_X |D̃ − X Xᵀ|_F, where |·|_F denotes the Frobenius norm [7] and D̃ is the similarity matrix obtained from D using standard linear algebra operations. Let Γ be the matrix of the eigenvectors of D̃, and Λ a diagonal matrix with the corresponding eigenvalues. Denote the diagonal matrix of the p positive eigenvalues by Λ_p and the corresponding columns of Γ by Γ_p. From this follows the expression for classical MDS: X = Γ_p Λ_p^{1/2}. Two questions remain. First, what should be our target distance matrix D? Second, how should this be extended to account for time? The first answer follows from [5] and defines D_ij as the length of the shortest path from i to j in graph G. We restrict this length to a maximum of three hops in order to avoid the full n² computation of all shortest paths; D thus has a dense, mostly constant structure. When accounting for time, we do not want the positions of entities to change drastically from one time step to another. Hence we try to minimize |X_t − X_{t−1}|_F along with the main objective of MDS. Let D̃_t denote the D̃ matrix derived from G_t. We formulate the above problem as the minimization of |D̃_t − X_t X_tᵀ|_F + λ |X_t − X_{t−1}|_F, where λ is a parameter which controls the relative importance of the two parts of the objective function. This does not have a closed-form solution. However, by constraining the objective function further, we can obtain a closed-form solution for a closely related problem. The idea is to work with the distances and not the positions themselves.
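The classical MDS recap above fits in a few lines. The double-centering step below is one standard way of obtaining D̃ from D (the text leaves the exact linear algebra unspecified), and the function name is ours:

```python
import numpy as np

def classical_mds(D, p=2):
    """Classical MDS: positions X (n x p) from a distance matrix D, so that
    X X^T approximates the double-centered similarity matrix D_tilde and
    X = Gamma_p Lambda_p^(1/2) as in the text."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    D_tilde = -0.5 * J @ (D ** 2) @ J        # similarity (Gram) matrix
    evals, evecs = np.linalg.eigh(D_tilde)
    idx = np.argsort(evals)[::-1][:p]        # p largest (positive) eigenvalues
    return evecs[:, idx] * np.sqrt(np.clip(evals[idx], 0.0, None))
```

When D is an exact Euclidean distance matrix of points in p dimensions, this recovers the configuration up to rotation, reflection and translation, which is why the Procrustes alignment discussed next is needed in the dynamic setting.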
Since we are learning the positions from distances, we change our constraint (during this linear stage of learning) to encourage the pairwise distances between all pairs of entities to change little between time steps, instead of encouraging the individual coordinates to change little. Hence we try to minimize

|D̃_t − X_t X_tᵀ|_F + λ |X_t X_tᵀ − X_{t−1} X_{t−1}ᵀ|_F    (7)

which is equivalent to minimizing the trace of (D̃_t − X_t X_tᵀ)ᵀ(D̃_t − X_t X_tᵀ) + λ (X_t X_tᵀ − X_{t−1} X_{t−1}ᵀ)ᵀ(X_t X_tᵀ − X_{t−1} X_{t−1}ᵀ). The above expression has an analytical solution: an affine combination of the current information from the graph and the coordinates at the last timestep. Namely, the new solution satisfies

X_t X_tᵀ = (1 / (1 + λ)) D̃_t + (λ / (1 + λ)) X_{t−1} X_{t−1}ᵀ    (8)

We plot the two constraint functions in Figure 2B. When λ is zero, X_t X_tᵀ equals D̃_t, and as λ → ∞ it tends to X_{t−1} X_{t−1}ᵀ. As in MDS, eigendecomposition of the right-hand side of equation (8) yields the solution X_t which minimizes the objective function in equation (7). We now have a method which finds latent coordinates for time t that are consistent with G_t and have pairwise distances similar to those of X_{t−1}. But although all pairwise distances may be similar, the coordinates may be very different. Indeed, even if λ is very large and we only care about preserving distances, the resulting X may be any reflection, rotation or translation of the original X_{t−1}. We solve this by applying the Procrustes transform to the solution X_t of equation (8). This transform finds the linear area-preserving transformation of X_t that brings it closest to the previous configuration X_{t−1}. The solution is unique if X_tᵀ X_{t−1} is nonsingular [8]; for zero-centered X_t and X_{t−1} it is given by X*_t = X_t U Vᵀ, where X_tᵀ X_{t−1} = U S Vᵀ is a singular value decomposition (SVD). Before moving on to stage two's nonlinear optimization we must address the scalability of stage one.
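A minimal sketch of one stage-one update: solve eq. (8) by dense eigendecomposition (the paper uses the power method for scalability, discussed next) and then Procrustes-align the result to the previous configuration. The function name is ours, and both configurations are assumed zero-centered:

```python
import numpy as np

def dynamic_mds_step(D_tilde_t, X_prev, lam, p=2):
    """One stage-one update: eigendecompose the affine combination of
    eq. (8), then apply the Procrustes transform X*_t = X_t U V^T,
    where X_t^T X_prev = U S V^T (SVD)."""
    B = (D_tilde_t + lam * (X_prev @ X_prev.T)) / (1.0 + lam)   # eq. (8)
    evals, evecs = np.linalg.eigh(B)
    idx = np.argsort(evals)[::-1][:p]                            # top-p eigenpairs
    X_t = evecs[:, idx] * np.sqrt(np.clip(evals[idx], 0.0, None))
    U, _, Vt = np.linalg.svd(X_t.T @ X_prev)
    return X_t @ U @ Vt
```

With λ = 0 and D̃_t set to the Gram matrix of X_prev, the step reproduces X_prev exactly: the eigendecomposition returns an arbitrary rotation of it, which the Procrustes transform then undoes.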
The naive implementation (SVD of the matrix from equation (8)) has a cost of O(n³) for n nodes, since both D̃_t and X_t X_tᵀ are dense n × n matrices. However, in [1] we show how to use the power method [9] to exploit the dense, mostly constant structure of D̃_t and the fact that X_t X_tᵀ is just an outer product of two thin n × p matrices. The power method is an iterative eigendecomposition technique which only involves multiplying a matrix by a vector. Its net cost can be shown to be O(n²f + n + pn) per iteration, where f is the fraction of non-constant entries in D̃_t.

4 Stage Two: Nonlinear Search

Stage one places entities in reasonably consistent locations which fit our intuition, but it is not tied to the probabilistic model from Section 2. Stage two uses these locations as initializations for applying nonlinear optimization directly to the model in equation (1). We use conjugate gradient (CG), which was the most effective of several alternatives attempted. The most important practical question is how to make these gradient computations tractable, especially since the model likelihood involves a double sum over all entities. We must compute the partial derivatives of log P(G_t|X_t) + log P(X_t|X_{t−1}) with respect to all values X_{i,k,t} for i ∈ 1…n and k ∈ 1…p. First consider the P(G_t|X_t) term:

∂log P(G_t|X_t)/∂X_{i,k,t} = Σ_{j: i∼j} ∂log p_ij/∂X_{i,k,t} + Σ_{j: i≁j} ∂log(1 − p_ij)/∂X_{i,k,t}
                           = Σ_{j: i∼j} (∂p_ij/∂X_{i,k,t}) / p_ij − Σ_{j: i≁j} (∂p_ij/∂X_{i,k,t}) / (1 − p_ij)    (9)

∂p_ij/∂X_{i,k,t} = ∂(p^L_ij K + ρ(1 − K))/∂X_{i,k,t} = K ∂p^L_ij/∂X_{i,k,t} + p^L_ij ∂K/∂X_{i,k,t} − ρ ∂K/∂X_{i,k,t} =: ψ_{i,j,k,t}    (10)

However K, the biquadratic kernel introduced in equation (4), evaluates to zero and has a zero derivative when d_ij > r_ij. Plugging this into (10), we have

∂p_ij/∂X_{i,k,t} = ψ_{i,j,k,t} when d_ij ≤ r_ij, and 0 otherwise.    (11)

Equation (9) now becomes

∂log P(G_t|X_t)/∂X_{i,k,t} = Σ_{j: i∼j, d_ij≤r_ij} ψ_{i,j,k,t} / p_ij − Σ_{j: i≁j, d_ij≤r_ij} ψ_{i,j,k,t} / (1 − p_ij)    (12)

with each term vanishing when d_ij > r_ij.
This simplification is very important because we can now use a spatial data structure such as a kd-tree in the low-dimensional latent space to retrieve all pairs of entities that lie within each other's radius in time O(rn + n log n), where r is the average number of in-radius neighbors of an entity [10, 11]. The computation of the gradient involves only those pairs. A slightly more sophisticated trick, omitted for space reasons, lets us compute log P(G_t|X_t) in O(rn + n log n) time as well. From equation (6), we have

∂log P(X_t|X_{t−1})/∂X_{i,k,t} = −(X_{i,k,t} − X_{i,k,t−1}) / σ²    (13)

In the early stages of conjugate gradient there is a danger of a plateau in our score function, in which the first derivative is insensitive to two entities that are connected but not within each other's radius. To aid the early steps of CG, we add an additional term to the score function which penalizes all pairs of connected entities according to the square of their separation in latent space, i.e. Σ_{i∼j} d_ij². Weighting this by a constant pConst, our final CG gradient becomes

∂Score_t/∂X_{i,k,t} = ∂log P(G_t|X_t)/∂X_{i,k,t} + ∂log P(X_t|X_{t−1})/∂X_{i,k,t} − 2 pConst Σ_{j: i∼j} (X_{i,k,t} − X_{j,k,t})

5 Results

We report experiments on synthetic data generated by a model described below and on the NIPS co-publication data.¹ We investigate three things: the ability of the algorithm to reconstruct the latent space based only on link observations, an anecdotal evaluation of what happens to the NIPS data, and scalability results on large datasets from Citeseer.

5.1 Comparing with ground truth

We generate synthetic data for six consecutive timesteps. At each timestep the next set of two-dimensional latent coordinates is generated with the former positions as mean and Gaussian noise of standard deviation σ = 0.01. Each entity is assigned a random radius. At each step, each entity is linked with a relatively higher probability to the entities falling within its radius, or containing it within their radii.
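The in-radius pair retrieval that makes the gradient of eq. (12) tractable can be sketched as follows. The paper uses a kd-tree; for a self-contained illustration we substitute a uniform grid hash with cell size equal to the largest radius, which also finds each in-radius pair by checking only neighboring cells. The function name is ours:

```python
import numpy as np
from collections import defaultdict

def in_radius_pairs(X, radii):
    """All pairs (i, j), i < j, with |x_i - x_j| <= max(r_i, r_j): the only
    pairs contributing to the gradient of log P(G_t|X_t) in eq. (12).
    A uniform grid hash with cell size max(radii) plays the role of the
    paper's kd-tree: any in-radius pair lands in the same or an adjacent
    cell, so only a constant number of cells is scanned per entity."""
    r_max = float(np.max(radii))
    cells = defaultdict(list)
    keys = np.floor(X / r_max).astype(int)
    for i, key in enumerate(map(tuple, keys)):
        cells[key].append(i)
    pairs = set()
    for i, key in enumerate(map(tuple, keys)):
        for off in np.ndindex(*(3,) * X.shape[1]):   # scan the 3^p neighbor cells
            nb = tuple(np.array(key) + np.array(off) - 1)
            for j in cells.get(nb, []):
                if j > i and np.linalg.norm(X[i] - X[j]) <= max(radii[i], radii[j]):
                    pairs.add((i, j))
    return pairs
```

For sparse graphs with bounded radii, the number of candidates per entity stays small, giving the near-linear behavior the text reports for the kd-tree implementation.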
There is a noise probability of 0.1 by which any two entities i and j outside the maximum pairwise radius r_ij are connected.

¹ See http://www.cs.toronto.edu/~roweis/data.html

We generate graphs of sizes 20 to 1280, doubling the size every time. Accuracy is measured by drawing a test set from the same model and determining the ROC curve for predicting whether a pair of entities will be linked in the test set. We experiment with six approaches:

A. The True model that was used to generate the data (an upper bound on the performance of any learning algorithm).
B. The DSNL model learned using the above algorithms.
C. A Random model, guessing link probabilities randomly (this should have an AUC of 0.5).
D. The Simple Counting model (control experiment), which ranks the likelihood of being linked in the test set according to the frequency of linkage in the training set. It can be considered the equivalent of the 1-nearest-neighbor method in classification: it does not generalize, but merely duplicates the training set.
E. Time-varying MDS: the model that results from running stage one only.
F. MDS with no time: the model that results from ignoring time information and running independent MDS on each timestep.

Figure 3 shows the ROC curves for the third timestep on a test set of size 160. Table 1 shows the AUC scores of our approach and the five alternatives for three different sizes of the dataset over the first, third, and last time steps.

Figure 3: ROC curves of the six models described above for a test set of size 160 at timestep 3, on simulated data.

Table 1: AUC score on graphs of size n for six different models: (A) True model, (B) model learned by DSNL, (C) Random model, (D) Simple Counting model (control), (E) MDS with time, and (F) MDS without time.
          Time    A      B      C      D      E      F
n=80        1    0.94   0.85   0.48   0.76   0.77   0.67
            3    0.93   0.88   0.48   0.81   0.77   0.65
            6    0.93   0.82   0.50   0.76   0.77   0.67
n=320       1    0.86   0.83   0.50   0.70   0.72   0.65
            3    0.86   0.79   0.51   0.70   0.72   0.62
            6    0.86   0.81   0.50   0.71   0.74   0.64
n=1280      1    0.81   0.79   0.50   0.68   0.61   0.70
            3    0.80   0.79   0.50   0.69   0.74   0.71
            6    0.81   0.78   0.50   0.68   0.70   0.70

In all cases the True model has the highest AUC score, followed by the model learned by DSNL. The Simple Counting model rightly guesses some of the links in the test graph from the training graph; however, it also predicts the noise as links, and ends up being beaten by the model we learn. The results show that it is not sufficient to perform Stage One only. When the number of links is small, MDS without time does poorly compared to our temporal version. However, as the number of links grows quadratically with the number of entities, regular MDS does almost as well as the temporal version: this is not a surprise, because the generalization benefit from the previous timestep becomes unnecessary with sufficient data on the current timestep. Further experiments we conducted [1] show that runs initialized with time-varying MDS converge almost twice as fast as those with random initialization, and also converge to a better log-likelihood.

5.2 Visualizing the NIPS coauthorship data over time

For clarity we present a subset of the NIPS dataset, obtained by choosing a well-connected author and including all authors and links within a few hops. We dropped authors who appeared only once, and we merged the timesteps into three groups: 1987-1990 (Figure 4A), 1991-1994 (Figure 4B), and 1995-1998 (Figure 4C). Each picture shows the links for that timestep, with a few well-connected people highlighted together with their radii. These radii are learnt from the model. Remember that the distance between two people is related to the radii: two people with very small radii are considered far apart in the model even if they are physically close.
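The AUC scores in Table 1 can be computed without building an explicit ROC curve, via the rank-sum identity: the AUC equals the probability that a randomly chosen linked test pair is scored above a randomly chosen unlinked one. A quadratic-time sketch (adequate for test sets of the sizes used here; the function name is ours):

```python
def auc_score(scores_pos, scores_neg):
    """AUC via the rank-sum identity: the fraction of (linked, unlinked)
    test-pair combinations in which the linked pair gets the higher
    predicted link probability, counting ties as 1/2."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            wins += 1.0 if sp > sn else (0.5 if sp == sn else 0.0)
    return wins / (len(scores_pos) * len(scores_neg))
```

A model that scores every pair identically (or randomly) gets 0.5 under this identity, which is why the Random model (column C) hovers near 0.5 throughout the table.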
To give some intuition of the movement of the rest of the points, we divided the area in the first timestep into four parts and colored and shaped the points in each differently. This coloring and shaping is preserved throughout all timesteps. In this paper we limit ourselves to anecdotal examination of the latent positions. For example, with BurgesC and VapnikV we see that they had very small radii in the first four years and were far apart from one another, since there was no co-publication. However, in the second timestep they move closer, though there are no direct links: they had both co-published with neighbors of one another. In the third timestep they make a connection and are assigned almost identical coordinates, since they have a strongly overlapping set of neighbors. We end the discussion with entities HintonG, GhahramaniZ, and JordanM. In the first timestep they did not coauthor with one another and were placed outside one another's radii. In the second timestep GhahramaniZ and HintonG coauthor with JordanM. However, since HintonG had a large radius and more links than GhahramaniZ, it is harder for him to meet all the constraints, and he does not move very close to JordanM. In the next timestep, however, GhahramaniZ has a link with both of the others, and they move substantially closer to one another.

5.3 Performance Issues

Figure 4D shows the performance against the number of entities. When kd-trees are used and the graphs are sparse, scaling is clearly subquadratic and nearly linear in the number of entities, meeting our expectation of O(n log n) performance. We successfully applied our algorithms to networks of sizes up to 11,000 [1]. The results show subquadratic time complexity along with satisfactory link prediction on test sets.

6 Conclusions and Future Work

This paper has described a method for modeling relationships that change over time.
We believe it is useful both for understanding relationships in a mass of historical data and as a tool for predicting future interactions, and we plan to explore both directions further. In [1] we develop a forward-backward algorithm, optimizing the global likelihood instead of treating the model as a tracking model. We also plan to extend this work to find the posterior distributions of the coordinates, following the approach used by [5].

Figure 4: NIPS coauthorship data at A. Timestep 1: green stars in the upper-left corner, magenta pluses in the top right, cyan spots in the lower right, and blue crosses in the bottom left. B. Timestep 2. C. Timestep 3. D. Time taken for the score calculation vs. the number of entities.

Acknowledgments

We are very grateful to Anna Goldenberg for her valuable insights. We also thank Paul Komarek and Sajid Siddiqi for some very helpful discussions and useful comments. This work was partially funded by DARPA EELD grant F30602-01-2-0569.

References

[1] P. Sarkar and A. Moore. Dynamic social network analysis using latent space models. SIGKDD Explorations: Special Issue on Link Mining, 2005.
[2] J. Schroeder, J. J. Xu, and H. Chen. Crimelink explorer: Using domain knowledge to facilitate automated crime association analysis. In ISI, pages 168-180, 2003.
[3] J. J. Carrasco, D. C. Fain, K. J. Lang, and L. Zhukov. Clustering of bipartite advertiser-keyword graph. In ICDM, 2003.
[4] J. Palau, M. Montaner, and B. López. Collaboration analysis in recommender systems using social networks. In Eighth Intl. Workshop on Cooperative Info. Agents (CIA'04), 2004.
[5] A. E. Raftery, M.
S. Handcock, and P. D. Hoff. Latent space approaches to social network analysis. J. Amer. Stat. Assoc., 15:460, 2002. [6] R. L. Breiger, S. A. Boorman, and P. Arabie. An algorithm for clustering relational data with applications to social network analysis and comparison with multidimensional scaling. J. of Math. Psych., 12:328–383, 1975. [7] I. Borg and P. Groenen. Modern Multidimensional Scaling. Springer-Verlag, 1997. [8] R. Sibson. Studies in the robustness of multidimensional scaling : Perturbational analysis of classical scaling. J. Royal Stat. Soc. B, Methodological, 41:217–229, 1979. [9] David S. Watkins. Fundamentals of Matrix Computations. John Wiley & Sons, 1991. [10] F. Preparata and M. Shamos. Computational Geometry: An Introduction. Springer, 1985. [11] A. G. Gray and A. W. Moore. N-body problems in statistical learning. In NIPS, 2001.
Principles of real-time computing with feedback applied to cortical microcircuit models

Wolfgang Maass, Prashant Joshi, Institute for Theoretical Computer Science, Technische Universitaet Graz, A-8010 Graz, Austria, maass,joshi@igi.tugraz.at
Eduardo D. Sontag, Department of Mathematics, Rutgers, The State University of New Jersey, Piscataway, NJ 08854-8019, USA, sontag@cs.rutgers.edu

Abstract

The network topology of neurons in the brain exhibits an abundance of feedback connections, but the computational function of these feedback connections is largely unknown. We present a computational theory that characterizes the gain in computational power achieved through feedback in dynamical systems with fading memory. It implies that many such systems acquire through feedback universal computational capabilities for analog computing with a non-fading memory. In particular, we show that feedback enables such systems to process time-varying input streams in diverse ways according to rules that are implemented through internal states of the dynamical system. In contrast to previous attractor-based computational models for neural networks, these flexible internal states are high-dimensional attractors of the circuit dynamics that still allow the circuit state to absorb new information from online input streams. In this way one arrives at novel models for working memory, integration of evidence, and reward expectation in cortical circuits. We show that they are applicable to circuits of conductance-based Hodgkin-Huxley (HH) neurons with high levels of noise that reflect experimental data on in vivo conditions.

1 Introduction

Quite demanding real-time computations with fading memory¹ can be carried out by generic cortical microcircuit models [1].
But many types of computations in the brain, for example computations that involve memory or persistent internal states, cannot be modeled by such fading memory systems. On the other hand, concrete examples of artificial neural networks [2] and cortical microcircuit models [3] suggest that their computational power can be enlarged through feedback from trained readouts. Furthermore, the brain is known to have an abundance of feedback connections on several levels: within cortical areas, where pyramidal cells typically have, in addition to their long projecting axon, a number of local axon collaterals; between cortical areas; and between cortex and subcortical structures. But the computational role of these feedback connections has remained open. We present here a computational theory which characterizes the gain in computational power that a fading memory system can acquire through feedback from trained readouts, both in the idealized case without noise and in the case with noise. This theory simultaneously characterizes the potential gain in computational power resulting from training a few neurons within a generic recurrent circuit for a specific task. Applications of this theory to cortical microcircuit models provide a new way of explaining the possibility of real-time processing of afferent input streams in the light of learning-induced internal circuit states that might represent, for example, working memory or rules for the timing of behavior. Further details on these results can be found in [4].

¹ A map (or filter) F from input to output streams is defined to have fading memory if its current output at time t depends (up to some precision ε) only on the values of the input u during some finite time interval [t − T, t]. In formulas: F has fading memory if there exists for every ε > 0 some δ > 0 and T > 0 so that |(Fu)(t) − (Fũ)(t)| < ε for any t ∈ R and any input functions u, ũ with
∥u(τ) − ũ(τ)∥ < δ for all τ ∈ [t − T, t]. This is a characteristic property of all filters that can be approximated by an integral over the input stream u, or more generally by a Volterra or Wiener series.

2 Computational Theory

Recurrent circuits of neurons are, from a mathematical perspective, special cases of dynamical systems. The subsequent mathematical results show that a large variety of dynamical systems, in particular also neural circuits, can overcome in the presence of feedback the computational limitations of a fading memory, without necessarily falling into the chaotic regime. In fact, feedback endows them with universal capabilities for analog computing, in a sense that can be made precise in the following way (see Fig. 1A-C for an illustration):

Theorem 2.1 A large class S_n of systems of differential equations of the form

x'_i(t) = f_i(x_1(t), …, x_n(t)) + g_i(x_1(t), …, x_n(t)) · v(t),  i = 1, …, n    (1)

is in the following sense universal for analog computing: such a system can respond to an external input u(t) with the dynamics of any nth-order differential equation of the form

z^(n)(t) = G(z(t), z'(t), z''(t), …, z^(n−1)(t)) + u(t)    (2)

(for arbitrary smooth functions G : R^n → R) if the input term v(t) is replaced by a suitable memoryless feedback function K(x_1(t), …, x_n(t), u(t)), and if a suitable memoryless readout function h(x_1(t), …, x_n(t)) is applied to its internal state ⟨x_1(t), …, x_n(t)⟩. Also the dynamic responses of all systems consisting of several higher-order differential equations of the form (2) can be simulated by fixed systems of the form (1) with a corresponding number of feedbacks.

The class S_n of dynamical systems that become through feedback universal for analog computing subsumes² systems of the form

x'_i(t) = −λ_i x_i(t) + σ(Σ_{j=1}^n a_ij · x_j(t)) + b_i · v(t),  i = 1, …, n    (3)
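To see the dynamics of the rate model (3) concretely, a forward-Euler sketch is given below; the choice σ = tanh, the feedback signature, and all parameter names are illustrative, not taken from the paper:

```python
import numpy as np

def simulate_rate_network(A, b, lam, feedback, u, dt=1e-3, T=1.0):
    """Forward-Euler integration of the rate model (3) with the external
    drive replaced by feedback: x_i'(t) = -lam_i x_i(t)
    + sigma(sum_j a_ij x_j(t)) + b_i v(t), where v(t) = feedback(x(t), u(t))
    and sigma = tanh.  Returns the state trajectory."""
    x = np.zeros(len(b))
    steps = int(round(T / dt))
    traj = np.empty((steps, len(b)))
    for k in range(steps):
        v = feedback(x, u(k * dt))            # memoryless feedback K(x(t), u(t))
        x = x + dt * (-lam * x + np.tanh(A @ x) + b * v)
        traj[k] = x
    return traj
```

The point of Theorem 2.1 is that with a suitably chosen feedback function the very same fixed network can be driven to reproduce the response z(t) of an arbitrary system (2); the sketch only shows the integration scaffold on which such a feedback would act.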
² For example, if the λ_i are pairwise different and a_ij = 0 for all i, j, and all b_i are nonzero; fewer restrictions are needed if more than one feedback to the system (3) can be used.

Figure 1: Universal computational capability acquired through feedback according to Theorem 2.1. (A) A fixed circuit C with dynamics (1). (B) An arbitrary given nth-order dynamical system (2) with external input u(t). (C) If the input v(t) to circuit C is replaced by a suitable feedback K(x(t), u(t)), then this fixed circuit C can simulate the dynamic response z(t) of the arbitrarily given system shown in B, for any input stream u(t).

Such systems are commonly used to model the temporal evolution of firing rates in neural circuits (σ is some standard activation function). If the activation function σ is also applied to the term v(t) in (3), the system (3) can still simulate arbitrary differential equations (2) with bounded inputs u(t) and bounded responses z(t), …, z^(n−1)(t). Note that according to [5] all Turing machines can be simulated by systems of differential equations of the form (2). Hence the systems (1) become through feedback also universal for digital computing. A proof of Theorem 2.1 is given in [4]. It has been shown that additive noise, even with an arbitrarily small bounded amplitude, reduces the non-fading memory capacity of any recurrent neural network to some finite number of bits [6, 7]. Hence such networks can no longer simulate arbitrary Turing machines. But feedback can still endow noisy fading memory systems with the maximum possible computational power within this a-priori limitation. The following result shows that in principle any finite state machine (i.e., deterministic finite automaton), in particular any Turing machine with tapes of some arbitrary but fixed finite length, can be emulated by a fading memory system with feedback, in spite of noise in the system.
Theorem 2.2 Feedback allows linear and nonlinear fading memory systems, even in the presence of additive noise with bounded amplitude, to employ the computational capability and non-fading states of any given finite state machine (in addition to their fading memory) for real-time processing of time-varying inputs.

The precise formalization and the proof of this result (see [4]) are technically rather involved and cannot be given in this abstract. A key method of the proof, which ensures that noise does not get amplified through feedback, is also applied in the subsequent computer simulations of cortical microcircuit models. There the readout functions K that provide feedback values K(x(t)) are trained to assume values which cancel the impact of errors or imprecision in the values K(x(s)) of this feedback at immediately preceding time steps s < t.

3 Application to Generic Circuits of Noisy Neurons

We tested this computational theory on circuits consisting of 600 integrate-and-fire (I&F) neurons and circuits consisting of 600 conductance-based Hodgkin-Huxley (HH) neurons, in either case with a rather high level of noise that reflects experimental data on in-vivo conditions [8]. In addition we used models for dynamic synapses whose individual mixture of paired-pulse depression and facilitation is based on experimental data [9, 10]. Sparse connectivity between neurons with a biologically realistic bias towards short connections was generated by a probabilistic rule, and synaptic parameters were chosen randomly, depending on the types of pre- and postsynaptic neurons, in accordance with these empirical data (see [1] or [4] for details). External inputs and feedback from readouts were connected to populations of neurons within the circuit, with randomly varying connection strengths.
The current circuit state x(t) was modeled by low-pass filtered spike trains from all neurons in the circuit (with a time constant of 30 ms, modeling the time constants of receptors and of the membrane of potential readout neurons). Readout functions K(x(t)) were modeled by weighted sums w · x(t) whose weights w were trained during 200 s of simulated biological time to minimize the mean squared error with regard to desired target output functions K.

Figure 2: State-dependent real-time processing of 4 independent input streams in a generic cortical microcircuit model. (A) 4 input streams, each consisting of 8 spike trains generated by Poisson processes with randomly varying rates r_i(t), i = 1, ..., 4 (rates plotted in (B); all rates are given in Hz). The 4 input streams and the feedback were injected into disjoint but densely interconnected subpopulations of neurons in the circuit. (C) Resulting firing activity of 100 out of the 600 I&F neurons in the circuit. Spikes from inhibitory neurons are marked in gray. (D) Target activation times of the high-dimensional attractor (gray shading), spike trains of 2 of the 8 I&F neurons that were trained to create the high-dimensional attractor by sending their output spike trains back into the circuit, and average firing rate of all 8 neurons (lower trace). (E and F) Performance of linear readouts that were trained to switch their real-time computation task depending on the current state of the high-dimensional attractor: output 2 · r_3(t) instead of r_3(t) if the high-dimensional attractor is on (E); output r_3(t) + r_4(t) instead of |r_3(t) − r_4(t)| if the high-dimensional attractor is on (F). (G) Performance of a linear readout that was trained to output r_3(t) · r_4(t), showing that another linear readout from the same circuit can simultaneously carry out nonlinear computations that are invariant to the current state of the high-dimensional attractor.
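The state filtering and readout training described above can be sketched as follows (a toy example with random spike trains and an arbitrary linear target; the function name, rates, and target are illustrative stand-ins, not the trained feedback functions of the actual model):

```python
import numpy as np

def low_pass_filter(spikes, dt=1e-3, tau=0.03):
    """Exponentially filter binary spike trains (time x neurons) with a
    30 ms time constant, yielding the circuit state x(t) seen by readouts."""
    decay = np.exp(-dt / tau)
    x = np.zeros(spikes.shape[1])
    states = np.empty_like(spikes, dtype=float)
    for t in range(spikes.shape[0]):
        x = decay * x + spikes[t]
        states[t] = x
    return states

rng = np.random.default_rng(1)
spikes = (rng.random((5000, 60)) < 0.02).astype(float)  # toy Poisson-like trains
X = low_pass_filter(spikes)
target = X[:, :10].sum(axis=1)                  # toy target function K(x(t))
w, *_ = np.linalg.lstsq(X, target, rcond=None)  # mean-squared-error readout
print(np.allclose(X @ w, target, atol=1e-6))    # True
```

Here the least-squares fit recovers the target exactly because the toy target is itself linear in the state; for the spiking-circuit targets in the text the fit is only approximate.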
After training, these weights w were fixed, and the performance of the otherwise generic circuit was evaluated for new input streams u (with new input rates drawn from the same distribution) that had not been used for training. It was sufficient to use just linear functions K that transformed the current circuit state x(t) into a feedback K(x(t)), confirming the predictions of [1] and [2] that the recurrent circuit automatically assumes the role of a kernel (in the sense of machine learning) that creates nonlinear combinations of recent inputs. We found that computer simulations of such generic cortical microcircuit models confirm the theoretical prediction that feedback from suitably trained readouts enables complex state-dependent real-time processing of a fairly large number of diverse input spike trains within a single circuit (all results shown are for test inputs that had not been used for training). Readout neurons could be trained to turn a high-dimensional attractor on or off in response to particular signals in 2 of the 4 independent input streams (Fig. 2D). The target value for K(x(t)) during training was the currently desired activity state of the high-dimensional attractor, where x(t) resulted from feeding tentative spike trains that already matched this target value back into the circuit. These neurons were trained to represent in their firing activity at any time the information in which of input streams 1 or 2 a burst had most recently occurred. If it occurred most recently in stream 1, they were trained to fire at 40 Hz, and not to fire otherwise. Thus these neurons were required to represent the non-fading state of a very simple finite state machine, demonstrating the validity of Theorem 2.2 in a simple example. The weights w of these readout neurons were determined by sign-constrained linear regression, so that weights from excitatory (inhibitory) presynaptic neurons were automatically positive (negative).
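Sign-constrained linear regression of this kind can be reduced to ordinary non-negative least squares by flipping the sign of the inhibitory columns. A sketch (assuming SciPy's `nnls` is available, with toy data in place of circuit states):

```python
import numpy as np
from scipy.optimize import nnls

def sign_constrained_regression(X, y, is_excitatory):
    """Least-squares readout weights constrained to be positive for
    excitatory and negative for inhibitory presynaptic neurons.
    Negating the inhibitory columns turns this into plain NNLS."""
    signs = np.where(is_excitatory, 1.0, -1.0)
    w_pos, _ = nnls(X * signs, y)  # nonnegative solution on flipped columns
    return signs * w_pos           # restore the signs

rng = np.random.default_rng(2)
X = rng.random((200, 8))
is_exc = np.array([True] * 6 + [False] * 2)
w_true = np.array([0.5, 1.0, 0.2, 0.0, 0.3, 0.7, -0.4, -0.9])
y = X @ w_true                     # consistent toy targets
w = sign_constrained_regression(X, y, is_exc)
print(np.allclose(w, w_true, atol=1e-6))  # True
```

Because the toy targets are generated by sign-respecting weights, NNLS recovers them exactly; with real spiking data the constraint simply biases the fit.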
Since these readout neurons had the same properties as neurons within the circuit, this computer simulation also provided a first indication of the gain in real-time processing capability that can be achieved by suitable training of a few spiking neurons within an otherwise randomly connected recurrent circuit. Fig. 2 shows that other readouts from the same circuit (that do not provide feedback) can be trained to amplify their response to one of the input streams (Fig. 2E), or even to switch their computational function (Fig. 2F) if the high-dimensional attractor is in the on-state, thereby providing a model for the way in which internal circuit states can change the "program" for its online processing. Continuous high-dimensional attractors that hold a time-varying analog value (instead of a discrete state) through globally distributed activity within the circuit can be created in the same way through feedback. In fact, several such high-dimensional attractors can coexist within the same circuit; see Fig. 3B,C,D. This gives rise to a model (Fig. 3) that could explain how timing of behavior and reward expectation are learnt and controlled by neural microcircuits on a behaviorally relevant large time scale. In addition, Fig. 4 shows that a continuous high-dimensional attractor created through feedback provides a new model for a neural integrator, and that the current value of this neural integrator can be combined within the same circuit and in real time with variables extracted from time-varying analog input streams. This learning-induced generation of high-dimensional attractors through feedback provides a new model for the emergence of persistent firing in cortical circuits that does not rely on especially constructed circuits, neurons, or synapses, and which is consistent with high noise (see Fig. 4G for the quite realistic trial-to-trial variability in this circuit of HH neurons with background noise according to [8]).
This learning-based model is also consistent with the surprising plasticity that has recently been observed even in quite specialized neural integrators [11]. Its robustness can be traced back to the fact that readouts can be trained to correct errors in their previous feedback. Furthermore, such error correction is not restricted to linear computational operations, since the inherent kernel property of generic recurrent circuits allows even linear readouts to carry out nonlinear computations on firing rates (Fig. 2G). Whereas previous models for discrete or continuous attractors in recurrent neural circuits required that the whole dynamics of such a circuit be entrained by the attractor, our new model predicts that persistent firing states can co-exist with other high-dimensional attractors and with responses to time-varying afferent inputs within the same circuit. Note that such attractors can equivalently be generated by training a few neurons within an otherwise generic cortical microcircuit model (instead of readouts).

Figure 3: Representation of time for behaviorally relevant time spans in a generic cortical microcircuit model. (A) Afferent circuit input, consisting of a cue in one channel (gray) and random spikes (freshly drawn for each trial) in the other channels. (B) Response of 100 neurons from the same circuit as in Fig. 2, which here has two co-existing high-dimensional attractors. The autonomously generated periodic bursts with a frequency of about 8 Hz are not related to the task, and readouts were trained to become invariant to them. (C and D) Feedback from two linear readouts that were simultaneously trained to create and control two high-dimensional attractors. One of them was trained to decay in 400 ms (C), and the other in 600 ms (D) (the scale in nA is the average current injected by feedback into a randomly chosen subset of neurons in the circuit).
(E) Response of the same neurons as in (B), for the same circuit input, but with feedback from a different linear readout that was trained to create a high-dimensional attractor that increases its activity and reaches a plateau 600 ms after the occurrence of the cue in the input stream. (F) Feedback from the linear readout that creates this continuous high-dimensional attractor.

4 Discussion

We have demonstrated that persistent memory and online switching of real-time processing can be implemented in generic cortical microcircuit models by training a few neurons (within or outside of the circuit) through very simple learning processes (linear regression, or alternatively – with some loss in performance – perceptron learning). The resulting high-dimensional attractors can be made noise-robust through training, thereby overcoming the inherent brittleness of constructed attractors. The high dimensionality of these attractors, which is caused by the small number of synaptic weights that are fixed for their creation, allows the circuit state to move in or out of other attractors, and to absorb new information from online inputs, while staying within such a high-dimensional attractor. The resulting virtually unlimited computational capability of fading memory circuits with feedback can be explained on the basis of the theoretical results that were presented in Section 2.

Figure 4: A model for analog real-time computation on external and internal variables in a generic cortical microcircuit (consisting of 600 conductance-based HH neurons). (A and B) Two input streams as in Fig. 2; their firing rates r_1(t), r_2(t) are shown in (B). (C) Resulting firing activity of 100 neurons in the circuit. (D) Performance of a neural integrator, generated by feedback from a linear readout that was trained to output at any time t an approximation CA(t) of the integral ∫_0^t (r_1(s) − r_2(s)) ds over the difference of both input rates. Feedback values were injected as input currents into a randomly chosen subset of neurons in the circuit. The scale in nA shows the average strength of feedback currents (also in panel H). (E) Performance of a linear readout that was trained to output 0 as long as CA(t) stayed below 1.35 nA, and to then output r_2(t) until the value of CA(t) dropped below 0.45 nA (i.e., in this test run during the shaded time periods). (F) Performance of a linear readout trained to output r_1(t) − CA(t), i.e., a combination of external and internal variables, at any time t (both r_1 and CA normalized into the range [0, 1]). (G) Response of a randomly chosen neuron in the circuit for 10 repetitions of the same experiment (with input spike trains generated by Poisson processes with the same time course of firing rates), showing biologically realistic trial-to-trial variability. (H) Activity traces of a continuous attractor as in (D), but in 8 different trials for 8 different fixed values of r_1 and r_2 (shown on the right). The resulting traces are very similar to the temporal evolution of firing rates of neurons in area LIP that integrate sensory evidence (see Fig. 5A in [12]).

Acknowledgments

Helpful comments from Wulfram Gerstner, Stefan Haeusler, Herbert Jaeger, Konrad Koerding, Henry Markram, Gordon Pipa, Misha Tsodyks, and Tony Zador are gratefully acknowledged. Written under partial support by the Austrian Science Fund FWF, project # S9102-N04, project # IST-2002-506778 (PASCAL) and project # FP6-015879 (FACETS) of the European Union.

References

[1] W. Maass, T. Natschläger, and H. Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11):2531–2560, 2002.
[2] H. Jäger and H. Haas. Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication. Science, 304:78–80, 2004.
[3] P. Joshi and W. Maass. Movement generation with circuits of spiking neurons. Neural Computation, 17(8):1715–1738, 2005.
[4] W. Maass, P. Joshi, and E. D. Sontag. Computational aspects of feedback in neural circuits. Submitted for publication, 2005. Available online as #168 from http://www.igi.tugraz.at/maass/.
[5] M. S. Branicky. Universal computation and other capabilities of hybrid and continuous dynamical systems. Theoretical Computer Science, 138:67–100, 1995.
[6] M. Casey. The dynamics of discrete-time computation with application to recurrent neural networks and finite state machine extraction. Neural Computation, 8:1135–1178, 1996.
[7] W. Maass and P. Orponen. On the effect of analog noise in discrete-time analog computations. Neural Computation, 10:1071–1095, 1998.
[8] A. Destexhe, M. Rudolph, and D. Paré. The high-conductance state of neocortical neurons in vivo. Nat. Rev. Neurosci., 4(9):739–751, 2003.
[9] H. Markram, Y. Wang, and M. Tsodyks. Differential signaling via the same axon of neocortical pyramidal neurons. PNAS, 95:5323–5328, 1998.
[10] A. Gupta, Y. Wang, and H. Markram. Organizing principles for a diversity of GABAergic interneurons and synapses in the neocortex. Science, 287:273–278, 2000.
[11] G. Major, R. Baker, E. Aksay, B. Mensh, H. S. Seung, and D. W. Tank. Plasticity and tuning by visual feedback of the stability of a neural integrator. Proc. Natl. Acad. Sci., 101(20):7739–7744, 2004.
[12] M. E. Mazurek, J. D. Roitman, J. Ditterich, and M. N. Shadlen. A role for neural integrators in perceptual decision making. Cerebral Cortex, 13(11):1257–1269, 2003.
Location-based Activity Recognition

Lin Liao, Dieter Fox, and Henry Kautz
Computer Science & Engineering, University of Washington, Seattle, WA 98195

Abstract

Learning patterns of human behavior from sensor data is extremely important for high-level activity inference. We show how to extract and label a person's activities and significant places from traces of GPS data. In contrast to existing techniques, our approach simultaneously detects and classifies the significant locations of a person and takes the high-level context into account. Our system uses relational Markov networks to represent the hierarchical activity model that encodes the complex relations among GPS readings, activities and significant places. We apply FFT-based message passing to perform efficient summation over large numbers of nodes in the networks. We present experiments that show significant improvements over existing techniques.

1 Introduction

The problem of learning patterns of human behavior from sensor data arises in many areas and applications of computer science, including intelligent environments, surveillance, and assistive technology for the disabled. A focus of recent interest is the use of data from wearable sensors, and in particular, GPS (global positioning system) location data. Such data is used to recognize the high-level activities in which a person is engaged and to determine the relationship between activities and locations that are important to the user [1, 6, 8, 3]. Our goal is to segment the user's day into everyday activities such as "working," "visiting," "travel," and to recognize and label significant locations that are associated with one or more activities, such as "work place," "friend's house," "user's bus stop." Such activity logs can be used, for instance, for automated diaries or long-term health monitoring.
Previous approaches to location-based activity recognition suffer from design decisions that limit their accuracy and flexibility. First, previous work decoupled the subproblem of determining whether or not a geographic location is significant and should be assigned a label from that of labeling places and activities. The first problem was handled by simply assuming that a location is significant if and only if the user spends at least N minutes there, for some fixed threshold N [1, 6, 8, 3]. Some way of restricting the enormous set of all locations recorded for the user to a meaningful subset is clearly necessary. However, in practice, any fixed threshold leads to many errors. Some significant locations, for example, the place where the user drops off his children at school, may be visited only briefly, and so would be excluded by a high threshold. A lower threshold, however, would include too many insignificant locations, for example, a place where the user briefly waited at a traffic light. The inevitable errors cannot be resolved because information cannot flow from the label assignment process back to the one that determines the domain to be labeled. Second, concerns for computational efficiency prevented previous approaches from tackling the problem of activity and place labeling in full generality. [1] does not distinguish between places and activities; although [8] does, its implementation limited places to a single activity. Neither approach models or labels the user's activities when moving between places. [6] and [3] learn transportation patterns, but not place labels. The third problem is one of the underlying causes of the other limitations: the representations and algorithms used in previous work make it difficult to learn and reason with the kinds of non-local features that are useful in disambiguating human activity.
For a simple example, if a system could learn that a person rarely went to a restaurant more than once a day, then it could correctly give a low probability to an interpretation of a day's data under which the user went to three restaurants. Our previous work [8] used clique templates in relational Markov networks for concisely expressing global features, but the MCMC inference algorithm we used made it costly to reason with aggregate features, such as statistics on the number of times a given activity occurs. The ability to efficiently leverage global features of the data stream could enhance the scope and accuracy of activity recognition. This paper presents a unified approach to automated activity and place labeling which overcomes these limitations. Contributions of this work include the following:

• We show how to simultaneously solve the tasks of identifying significant locations and labeling both places and activities from raw GPS data, all in a conditionally trained relational Markov network. Our approach is notable in that nodes representing significant places are dynamically added to the graph during inference. No arbitrary thresholds regarding the time spent at a location or the number of significant places are employed.

• Our model creates a complete interpretation of the log of a user's data, including transportation activities as well as activities performed at particular places. It allows different kinds of activities to be performed at the same location.

• We extend our work on using clique templates for global features to support efficient inference by belief propagation. We introduce, in particular, specialized Fast Fourier Transform (FFT) templates for belief propagation over aggregate (counting) features, which reduce computation time by an exponential amount. Although [9] introduced the use of the FFT to compute probability distributions over summations, our work appears to be the first to employ it for full bi-directional belief propagation.
This paper is organized as follows. We begin with a discussion of relational Markov networks and a description of an FFT belief propagation algorithm for aggregate statistical features. Then we explain how to apply RMNs to the problem of location-based activity recognition. Finally, we present experimental results on real-world data that demonstrate significant improvement in coverage and accuracy over previous work.

2 Relational Markov Networks and Aggregate Features

2.1 Preliminaries

Relational Markov Networks (RMNs) [10] are extensions of Conditional Random Fields (CRFs), which are undirected graphical models that were developed for labeling sequence data [5]. CRFs have been shown to produce excellent results in areas such as natural language processing [5] and computer vision [4]. RMNs extend CRFs by providing a relational language for describing clique structures and enforcing parameter sharing at the template level. Thereby RMNs provide a very flexible and concise framework for defining the features we use in our activity recognition context. A key concept of RMNs is the relational clique template, which specifies the structure of a CRF in a concise way. In a nutshell, a clique template C ∈ C is similar to a database query (e.g., SQL) in that it selects tuples of nodes from a CRF and connects them into cliques. Each clique template C is additionally associated with a potential function φ_C(v_C) that maps values of variables to a non-negative real number. Using a log-linear combination of feature functions, we get φ_C(v_C) = exp{w_C^T · f_C(v_C)}, where f_C(·) defines a feature vector for C and w_C^T is the transpose of the corresponding weight vector. An RMN defines a conditional distribution p(y|x) over labels y given observations x. To compute such a conditional distribution, the RMN generates a CRF with the cliques specified by the clique templates. All cliques that originate from the same template must share the same weight vector w_C.
The resulting cliques factorize the conditional distribution as

  p(y | x) = (1 / Z(x)) ∏_{C ∈ C} ∏_{v_C ∈ C} exp{w_C^T · f_C(v_C)},    (1)

where Z(x) is the normalizing partition function. The weights w of an RMN can be learned discriminatively by maximizing the log-likelihood of labeled training data [10, 8]. This requires running an inference procedure at each iteration of the optimization and can be very expensive. To overcome this problem, we instead maximize the pseudo-log-likelihood of the training data:

  L(w) ≡ ∑_{i=1}^{n} log p(y_i | MB(y_i), w) − (w^T w) / (2σ^2),    (2)

where MB(y_i) is the Markov blanket of variable y_i. The rightmost term avoids overfitting by imposing a zero-mean Gaussian shrinkage prior on each component of the weights [10]. In the context of place labeling, [8] showed how to use non-zero mean priors in order to transfer weights learned for one person to another person. In our experiments, learning the weights using pseudo-log-likelihood is very efficient and performs well in our tests. In our previous work [8] we used MCMC for inference. While this approach performed well for the models considered in [8], it does not scale to more complex activity models such as the one described here. Taskar and colleagues [10] relied on belief propagation (BP) for inference. The BP (sum-product) algorithm converts a CRF to a pairwise representation and performs message passing, where the message from node i to its neighbor j is computed as

  m_ij(y_j) = ∑_{y_i} φ(y_i) φ(y_i, y_j) ∏_{k ∈ n(i)\j} m_ki(y_i),    (3)

where φ(y_i) is a local potential, φ(y_i, y_j) is a pairwise potential, and n(i)\j denotes i's neighbors other than j. All messages are updated iteratively until they (possibly) converge. However, our model takes into account aggregate features, such as summation. Performing aggregation would require the generation of cliques that contain all nodes over which the aggregation is performed.
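The sum-product update (3) is easy to verify on a small example. The sketch below computes the marginal of the middle node of a toy 3-node chain from its incoming messages and checks it against brute-force enumeration (all potentials are random toy values):

```python
import numpy as np

def bp_message(phi_i, phi_ij, incoming):
    """m_ij(y_j) = sum_{y_i} phi(y_i) phi(y_i, y_j) prod_k m_ki(y_i),
    the sum-product update of Eq. (3)."""
    prod = phi_i.copy()
    for m in incoming:            # messages from i's neighbors other than j
        prod *= m
    return prod @ phi_ij

# Toy 3-node chain 1 - 2 - 3 with binary labels.
rng = np.random.default_rng(3)
phi = [rng.random(2) + 0.1 for _ in range(3)]
psi12 = rng.random((2, 2)) + 0.1
psi23 = rng.random((2, 2)) + 0.1

m12 = bp_message(phi[0], psi12, [])    # leaf messages: no other neighbors
m32 = bp_message(phi[2], psi23.T, [])
belief2 = phi[1] * m12 * m32
belief2 /= belief2.sum()

# Brute-force marginal of node 2 for comparison.
joint = np.einsum('a,b,c,ab,bc->abc', phi[0], phi[1], phi[2], psi12, psi23)
marg2 = joint.sum(axis=(0, 2)) / joint.sum()
print(np.allclose(belief2, marg2))  # True
```

On a tree-structured model like this chain, the beliefs are exact; on loopy graphs BP gives only approximate marginals, as the "(possibly) converge" caveat above suggests.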
Since the complexity of standard BP is exponential in the number of nodes in the largest clique, aggregation can easily make BP intractable.

2.2 Efficient summation templates

In our model, we address the inference of aggregate cliques at the template level within the framework of BP. Each type of aggregation function is associated with a computation template that specifies how to propagate messages through the clique. In this section, we discuss an efficient computation template for summation. To handle summation cliques with potentially large numbers of addends, our summation template dynamically builds a summation tree, which is a pairwise Markov network as shown in Fig. 1(a). In a summation tree, the leaves are the original addends and each internal node y_jk represents the sum of its two children y_j and y_k; this sum relation is encoded by an auxiliary node S_jk and its potential. The state space of S_jk consists of the joint (cross-product) state of its neighbors y_j, y_k, and y_jk. It is easy to see that the summation tree guarantees that the root represents y_sum = ∑_{i=1}^{n} y_i, where y_1 to y_n are the leaves of the tree.

Figure 1: (a) Summation tree that represents y_sum = ∑_{i=1}^{8} y_i, where the S_i's are auxiliary nodes that ensure the summation relation. (b) CRF for labeling activities and places. Each activity node a_i is connected to E observed local evidence nodes e_i^1 to e_i^E. Place nodes p_i are generated based on the inferred activities, and each place is connected to all activity nodes that are within a certain distance.

To define the BP protocol for summation trees, we need to specify two types of messages: an upward message from an auxiliary node to its parent (e.g., m_{S12 → y12}), and a downward message from an auxiliary node to one of its two children (e.g., m_{S12 → y1}).
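Since the distribution of the sum of two independent discrete variables is the convolution of their distributions, the upward pass of a summation tree amounts to pairwise convolutions repeated level by level. A sketch (using direct convolution; the toy addends are binary indicators, so their sum is Binomial(8, 0.5)):

```python
import numpy as np
from math import comb

def sum_distribution(messages):
    """Forward (upward) pass of a summation tree: combine the distributions
    of the addends pairwise by convolution until a single distribution for
    y_sum = y_1 + ... + y_n remains."""
    level = [np.asarray(m, dtype=float) for m in messages]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(np.convolve(level[i], level[i + 1]))
        if len(level) % 2:          # odd leftover passes up unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0]

# 8 binary indicators, each 1 with probability 0.5.
dist = sum_distribution([np.array([0.5, 0.5])] * 8)
binom = np.array([comb(8, k) for k in range(9)]) / 256
print(np.allclose(dist, binom))  # True
```

This uses plain `np.convolve` for clarity; the FFT variant derived next replaces each pairwise convolution with a product in the frequency domain.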
Upward message update: Starting from Equation (3), we can update an upward message m_{S_ij → y_ij} as follows:

  m_{S_ij → y_ij}(y_ij) = ∑_{y_i, y_j} φ_S(y_i, y_j, y_ij) m_{y_i → S_ij}(y_i) m_{y_j → S_ij}(y_j)
                        = ∑_{y_i} m_{y_i → S_ij}(y_i) m_{y_j → S_ij}(y_ij − y_i)    (4)
                        = F^{−1}[ F(m_{y_i → S_ij}) · F(m_{y_j → S_ij}) ],    (5)

where φ_S(y_i, y_j, y_ij) is the local potential of S_ij encoding the equality y_ij = y_i + y_j. (4) follows because all terms not satisfying the equality disappear. Therefore, the message m_{S_ij → y_ij} is the convolution of m_{y_i → S_ij} and m_{y_j → S_ij}. (5) follows from the convolution theorem, which states that the Fourier transform of a convolution is the point-wise product of the Fourier transforms [2], where F and F^{−1} represent the Fourier transform and its inverse, respectively. When the messages are discrete functions, the Fourier transform and its inverse can be computed efficiently using the Fast Fourier Transform (FFT) [2, 9]. The computational complexity of one summation using the FFT is O(k log k), where k is the maximum number of states in y_i and y_j.

Downward message update: We also allow messages to pass from a sum variable downward to its children. This is necessary if we want to use the belief on sum variables (e.g., knowledge of the number of homes) to change the distribution of individual variables (e.g., place labels). From Equation (3) we get the downward message m_{S_ij → y_i} as

  m_{S_ij → y_i}(y_i) = ∑_{y_j, y_ij} φ_S(y_i, y_j, y_ij) m_{y_j → S_ij}(y_j) m_{y_ij → S_ij}(y_ij)
                      = ∑_{y_j} m_{y_j → S_ij}(y_j) m_{y_ij → S_ij}(y_i + y_j)    (6)
                      = F^{−1}[ conj(F(m_{y_j → S_ij})) · F(m_{y_ij → S_ij}) ],    (7)

where (6) again follows from the sum relation. Note that the downward message m_{S_ij → y_i} turns out to be the correlation of the messages m_{y_j → S_ij} and m_{y_ij → S_ij}. (7) follows from the correlation theorem [2], which is similar to the convolution theorem except that, for correlation, we must compute the complex conjugate of the first Fourier transform (written conj(·) above). Again, for discrete messages, (7) can be evaluated efficiently using the FFT.
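Equations (5) and (7) can be checked numerically. The sketch below computes the upward message as an FFT-based convolution and the downward message as an FFT-based correlation, and compares both against the direct sums (4) and (6) (message lengths and values are toy choices):

```python
import numpy as np

def upward_fft(m_i, m_j):
    """Eq. (5): the upward message is the convolution of the two incoming
    messages, computed as an inverse FFT of the product of their FFTs."""
    n = len(m_i) + len(m_j) - 1            # zero-pad to the full output length
    return np.fft.irfft(np.fft.rfft(m_i, n) * np.fft.rfft(m_j, n), n)

def downward_fft(m_j, m_sum):
    """Eq. (7): the downward message is the correlation of m_j and m_sum,
    i.e. the product with the complex conjugate of the first FFT."""
    n = len(m_sum)
    m = np.fft.irfft(np.conj(np.fft.rfft(m_j, n)) * np.fft.rfft(m_sum, n), n)
    return m[: n - len(m_j) + 1]           # keep only the non-wrapped lags

rng = np.random.default_rng(4)
m_i, m_j = rng.random(5), rng.random(5)
up = upward_fft(m_i, m_j)                  # message to y_ij (9 states)
direct = np.array([sum(m_i[a] * m_j[s - a]
                       for a in range(5) if 0 <= s - a < 5) for s in range(9)])
print(np.allclose(up, direct))             # True

m_sum = rng.random(9)
down = downward_fft(m_j, m_sum)            # message to y_i (5 states)
direct_down = np.array([sum(m_j[b] * m_sum[a + b] for b in range(5))
                        for a in range(5)])
print(np.allclose(down, direct_down))      # True
```

Note the zero-padding: transforming at the padded length turns NumPy's circular convolution/correlation into the linear one that (4) and (6) require.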
At each level of a summation tree, the number of messages (nodes) is reduced by half and the size of each message is doubled. Suppose the tree has n upward messages at the bottom and the maximum size of a message at the bottom is k. For large summation trees where n ≫ k, the total complexity of updating the upward messages at all log n levels is

  ∑_{i=1}^{log n} (n / 2^i) · O(2^{i−1} k log(2^{i−1} k)) = O( (n/2) ∑_{i=1}^{log n} log 2^{i−1} ) = O(n log^2 n).    (8)

Similar reasoning shows that the complexity of the downward pass is O(n log^2 n) as well. Therefore, updating all messages in a summation clique takes O(n log^2 n) time instead of time exponential in n, as would be the case for a non-specialized implementation of aggregation.

3 Location-based Activity Model

3.1 Overview

To recognize activities and places, we first segment raw GPS traces by grouping consecutive GPS readings based on their spatial relationship. This segmentation can be performed by simply combining all consecutive readings that are within a certain distance of each other (10 m in our implementation). However, it might be desirable to associate GPS traces with a street map, for example, in order to relate locations to addresses in the map. To jointly estimate the GPS-to-street association and the trace segmentation, we construct an RMN that takes into account the spatial relationship and temporal consistency between the measurements and their associations (see [7] for more details). In this section, we focus on inferring activities and types of significant places after segmentation. To do so, we construct a hierarchical RMN that explicitly encodes the relations between activities and places. A CRF instantiated from the RMN is shown in Fig. 1(b). At the lower level of the hierarchy, each activity node is connected to various features, summarizing information resulting from the GPS segmentation.
These features include:
• Temporal information such as time of day, day of week, and duration of the stay;
• Average speed through a segment, for discriminating transportation modes;
• Information extracted from geographic databases, such as whether a location is close to a bus route or bus stop, and whether it is near a restaurant or store;
• Additionally, each activity node is connected to its neighbors. These features measure compatibility between types of activities at neighboring nodes in the trace.
Our model also aims at determining those places that play a significant role in the activities of a person, such as home, work place, friend's home, grocery stores, restaurants, and bus stops. Such significant places comprise the upper level of the CRF shown in Fig. 1(b). However, since these places are not known a priori, we must additionally detect a person's significant places. To incorporate place detection into our system, we use an iterative algorithm that re-estimates activities and places. Before we describe this algorithm, let us first look at the features that are used to determine the types of significant places under the assumption that the locations and number of these places are known.
• The activities that occur at a place strongly indicate the type of the place. For example, at a friend's home people either visit or pick up / drop off someone. Our features consider the frequencies of the different activities at a place. This is done by generating a clique for each place that contains all activity nodes in its vicinity. For example, the nodes p1, a1, and aN−2 in Fig. 1(b) form such a clique.
• A person usually has only a limited number of different homes or work places. We add two additional summation cliques that count the number of homes and work places. These counts provide soft constraints that bias the system to generate interpretations with reasonable numbers of homes and work places.

1. Input: GPS trace ⟨g1, g2, . . . , gT⟩ and iteration counter i := 0
2. ⟨a1, . . . , aN⟩, ⟨e^1_1, . . . , e^E_1, . . .⟩ := trace_segmentation(⟨g1, g2, . . . , gT⟩)
3. // Generate CRF containing activity and local evidence (lower two levels in Fig. 1(b))
   CRF0 := instantiate_crf(⟨⟩, ⟨a1, . . . , aN⟩, ⟨e^1_1, . . . , e^E_1, . . .⟩)
4. a∗0 := BP_inference(CRF0) // infer sequence of activities
5. do
6.   i := i + 1
7.   ⟨p1, . . . , pK⟩i := generate_places(a∗i−1) // instantiate places
8.   CRFi := instantiate_crf(⟨p1, . . . , pK⟩i, ⟨a1, . . . , aN⟩, ⟨e^1_1, . . . , e^E_1, . . .⟩)
9.   ⟨a∗i, p∗i⟩ := BP_inference(CRFi) // inference in complete CRF
10. until a∗i = a∗i−1
11. return ⟨a∗i, p∗i⟩
Table 1: Algorithm for extracting and labeling activities and significant places.

Note that the above two types of aggregation features can generate large cliques in the CRF, which could make standard inference intractable. In our inference, we use the optimized summation templates discussed in Section 2.2.

3.2 Place Detection and Labeling Algorithm

Table 1 summarizes our algorithm for efficiently constructing a CRF that jointly estimates a person's activities and the types of his significant places. The algorithm takes as input a GPS trace. In Steps 2 and 3, this trace is segmented into activities ai and their local evidence e^j_i, which are then used to generate CRF0 without significant places. BP inference is first performed in this CRF so as to determine the activity estimate a∗0, which consists of a sequence of locations and the most likely activity performed at each location (Step 4). Within each iteration of the loop starting at Step 5, such an activity estimate is used to extract a set of significant places. This is done by classifying individual activities in the sequence according to whether or not they belong to a significant place. For instance, walking, driving a car, or riding a bus are not associated with significant places, whereas working or getting on or off the bus indicates a significant place.
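The loop of Table 1 can be sketched in compact Python. The helpers below (trace_segmentation, bp_inference, generate_places) are toy stand-ins we introduce for illustration (hypothetical, one-dimensional, rule-based), not the authors' CRF implementation; only the control flow mirrors the algorithm.

```python
def trace_segmentation(trace, radius=10.0):
    """Group consecutive 1-D GPS readings that stay within `radius` of each other."""
    segments, current = [], [trace[0]]
    for g in trace[1:]:
        if abs(g - current[-1]) <= radius:
            current.append(g)
        else:
            segments.append(current)
            current = [g]
    segments.append(current)
    return segments

def bp_inference(segments, places):
    """Stand-in for BP: long stays become 'work'; a short stay near a known
    place is biased toward being significant as well."""
    labels = []
    for seg in segments:
        if len(seg) >= 5 or any(abs(seg[0] - p) <= 10.0 for p in places):
            labels.append('work')
        else:
            labels.append('on/off bus')
    return labels

def generate_places(segments, activities, radius=10.0):
    """Create one place node per significant activity, merging duplicates (Step 7)."""
    places = []
    for seg, act in zip(segments, activities):
        if act == 'work':                       # significant activity
            loc = seg[0]
            if not any(abs(loc - p) <= radius for p in places):
                places.append(loc)              # revisits merge into one node
    return places

def extract_activities_and_places(trace):
    segments = trace_segmentation(trace)
    activities = bp_inference(segments, places=[])   # CRF0: no places yet (Steps 3-4)
    while True:                                      # Steps 5-10
        places = generate_places(segments, activities)
        new_activities = bp_inference(segments, places)
        if new_activities == activities:             # Step 10: converged
            return new_activities, places
        activities = new_activities
```

With this stand-in, a short revisit to a location that was earlier labeled 'work' is re-labeled once the corresponding place node exists, which is exactly the re-estimation effect the iterative algorithm exploits.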
All instances at which a significant activity occurs generate a place node. Because a place can be visited multiple times within a sequence, we perform clustering and merge duplicate places into the same place node. This classification and clustering is performed by the algorithm generate_places() in Step 7. These places are added to the model and BP is performed in this complete CRF. Since a CRFi can have a different structure than the previous CRFi−1, it might generate a different activity sequence. If this is the case, the algorithm returns to Step 5 and re-generates the set of places using the improved activity sequence. This process is repeated until the activity sequence does not change. In our experiments we observed that this algorithm converges very quickly, typically after three or four iterations.

4 Experimental Results

In our experiments, we collected GPS data traces from four different persons, approximately seven days of data per person. The data from each person consisted of roughly 40,000 GPS measurements, resulting in about 10,000 10 m segments. We used leave-one-out cross-validation for evaluation. Learning from three persons' data took about one minute, and BP inference on the last person's data converged within one minute.

Extracting significant places
We compare our model with a widely-used approach that uses a time threshold to determine whether or not a location is significant [1, 6, 8, 3]. We use four different thresholds from 1 minute to 10 minutes, and we measure the false positive and false negative locations extracted from the GPS traces. As shown in Fig. 2(a), no fixed threshold is satisfactory: low thresholds produce many false positives, and high thresholds result in many false negatives. In contrast, our model performs much better: it generates only 4 false positives and 3 false negatives. This experiment shows that using high-level context information drastically improves the extraction of significant places.

Figure 2: (a) Accuracy of extracting places. (b) Computation times for summation cliques.

Labeling places and activities
In our system the labels of activities generate instances of places, which then help to better estimate the activities occurring in their spatial area. The confusion matrix given in Table 2 summarizes the activity estimation results achieved with our system on the cross-validation data.

Truth \ Inferred   Work      Sleep   Leisure   Visit   Pickup   On/off car   Other    FN
Work              12 / 11      0      0 / 1      0        0          0          1       0
Sleep                0        21        1        2        0          0          0       0
Leisure              2         0    20 / 17    1 / 4      0          0          3       0
Visiting             0         0     0 / 2     7 / 5      0          0          2       0
Pickup               0         0        0        0        1          0          0       2
On/Off car           0         0        0        0        1      13 / 12       0     2 / 3
Other                0         0        0        0        0          0         37       1
FP                   0         0        0        0        2          2          3

Table 2: Activity confusion matrix of cross-validation data with (left values) and without (right values) considering places for activity inference (FN and FP are false negatives and false positives).

The results are given with and without taking the detected places into account. More specifically, the results without places are those achieved by the CRF0 generated by Step 4 of the algorithm in Table 1, and the results with places are those achieved after model convergence. When the results of both approaches are identical, only one number is given; otherwise, the first number gives the result achieved with the complete model. The table shows two main results. First, the accuracy of our approach is quite high, especially when considering that the system was evaluated on only one week of data and was trained on only three weeks of data collected by different persons.
Second, performing joint inference over activities and places increases the quality of inference. The reason for this is that a place node connects all the activities occurring in its spatial area, so that these activities can be labeled in a more consistent way. A further evaluation of the detected places showed that our system achieved 90.6% accuracy in place detection and labeling (see [7] for more results).

Efficiency of inference
We compared our optimized BP algorithm using FFT summation cliques with inference based on MCMC and regular BP, using the model and data from [8]. Note that a naive implementation of BP is exponential in the number of nodes in a clique. In our experiments, the test accuracies resulting from using the different algorithms are almost identical. Therefore, we focus only on comparing the efficiency and scalability of summation aggregations. The running times for the different algorithms are shown in Fig. 2(b). As can be seen, naive BP becomes extremely slow for as few as 20 nodes, and MCMC only works for up to 500 nodes, while our algorithm can perform summation over 2,000 variables within a few minutes.

5 Conclusions

We provided a novel approach to performing location-based activity recognition. In contrast to existing techniques, our approach uses one consistent framework for both low-level inference and the extraction of a person's significant places. Thereby, our model is able to take high-level context into account in order to detect the significant locations of a person. Furthermore, once these locations are determined, they help to better detect low-level activities occurring in their vicinity. Summation cliques are extremely important for introducing long-term, soft constraints into activity recognition. We show how to incorporate such cliques into belief propagation using bi-directional FFT computations.
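The summation-clique computation of Section 2.2 can be sketched as a halving tree of pairwise convolutions over count distributions. The sketch below is our own, in pure Python; it uses a plain O(k²) convolution where the optimized implementation would use FFTs, and it computes, for example, the distribution over the number of places labeled 'home'.

```python
def convolve(p, q):
    """Distribution of X + Y for independent counts X ~ p, Y ~ q (index = value).
    The optimized implementation replaces this O(|p||q|) step with an FFT,
    giving the O(n log^2 n) total derived in Eq. (8)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def count_distribution(probs):
    """Summation tree: merge messages pairwise, so the number of messages
    halves at each level while their size doubles."""
    msgs = [[1.0 - p, p] for p in probs]   # per-node Bernoulli messages
    while len(msgs) > 1:
        nxt = [convolve(msgs[i], msgs[i + 1]) for i in range(0, len(msgs) - 1, 2)]
        if len(msgs) % 2:                  # odd leftover message carried up unchanged
            nxt.append(msgs[-1])
        msgs = nxt
    return msgs[0]
```

For ten independent place nodes that are each 'home' with probability 0.3, count_distribution([0.3] * 10) yields the Binomial(10, 0.3) distribution over the number of homes, over which a soft prior on that count can then be scored.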
The clique templates of RMNs are well suited to specify such clique-specific inference mechanisms, and we are developing additional techniques, including clique-specific MCMC and local dynamic programming. Our experiments based on traces of GPS data show that our system significantly outperforms existing approaches. We demonstrate that the model can be trained from a group of persons and then applied successfully to a different person, achieving more than 85% accuracy in determining low-level activities and above 90% accuracy in detecting and labeling significant places. In future work, we will add more sensor data, including accelerometers, audio signals, and barometric pressure. Using the additional information provided by these sensors, we will be able to perform more fine-grained activity recognition.

Acknowledgments

The authors would like to thank Jeff Bilmes for useful comments. This work has partly been supported by DARPA's ASSIST and CALO Programme (contract numbers: NBCH-C-05-0137, SRI subcontract 27-000968) and by the NSF under grant number IIS-0093406.

References

[1] D. Ashbrook and T. Starner. Using GPS to learn significant locations and predict movement across multiple users. Personal and Ubiquitous Computing, 7(5), 2003.
[2] E. Oran Brigham. Fast Fourier Transform and Its Applications. Prentice Hall, 1988.
[3] V. Gogate, R. Dechter, C. Rindt, and J. Marca. Modeling transportation routines using hybrid dynamic mixed networks. In Proc. of the Conference on Uncertainty in Artificial Intelligence, 2005.
[4] S. Kumar and M. Hebert. Discriminative random fields: A discriminative framework for contextual interaction in classification. In Proc. of the International Conference on Computer Vision, 2003.
[5] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. of the International Conference on Machine Learning, 2001.
[6] L. Liao, D. Fox, and H. Kautz. Learning and inferring transportation routines. In Proc. of the National Conference on Artificial Intelligence, 2004.
[7] L. Liao, D. Fox, and H. Kautz. Hierarchical conditional random fields for GPS-based activity recognition. In Proc. of the 12th International Symposium of Robotics Research (ISRR), 2005.
[8] L. Liao, D. Fox, and H. Kautz. Location-based activity recognition using relational Markov networks. In Proc. of the International Joint Conference on Artificial Intelligence, 2005.
[9] Yongyi Mao, Frank R. Kschischang, and Brendan J. Frey. Convolutional factor graphs as probabilistic models. In Proc. of the Conference on Uncertainty in Artificial Intelligence, 2004.
[10] B. Taskar, P. Abbeel, and D. Koller. Discriminative probabilistic models for relational data. In Proc. of the Conference on Uncertainty in Artificial Intelligence, 2002.
Modeling Memory Transfer and Savings in Cerebellar Motor Learning

Naoki Masuda, RIKEN Brain Science Institute, Wako, Saitama 351-0198, Japan, masuda@brain.riken.jp
Shun-ichi Amari, RIKEN Brain Science Institute, Wako, Saitama 351-0198, Japan, amari@brain.riken.jp

Abstract

There is a long-standing controversy on the site of cerebellar motor learning. Different theories and experimental results suggest that either the cerebellar flocculus or the brainstem learns the task and stores the memory. With a dynamical system approach, we clarify the mechanism of transferring the memory generated in the flocculus to the brainstem and that of so-called savings phenomena. The brainstem learning must comply with a sort of Hebbian rule depending on Purkinje-cell activities. In contrast to earlier numerical models, our model is simple but it accommodates explanations and predictions of experimental situations as qualitative features of trajectories in the phase space of synaptic weights, without fine parameter tuning.

1 Introduction

The cerebellum is involved in various types of motor learning. As schematically shown in Fig. 1, the cerebellum is composed of the cerebellar cortex and the cerebellar nuclei (we depict the vestibular nucleus VN in Fig. 1). There are two main pathways linking external input from the mossy fibers (mf) to motor outputs, which originate from the cerebellar nuclei. The pathway that relays the mossy fibers directly to the cerebellar nuclei is called the direct pathway. Each nucleus cell receives about 10^4 mossy fiber synapses. The pathway involving the mossy fibers, the granule cells (gr), the parallel fibers (pl), and the Purkinje cells (Pr) in the flocculo-nodular lobes of the cerebellar cortex is called the indirect pathway. Because the Purkinje cells, which are the sole source of output from the cerebellar cortex, are GABAergic, firing rates of the nuclei are suppressed when this pathway is active.
The indirect pathway also includes recurrent collaterals terminating on various types of inhibitory cells. Another anatomical feature of the indirect pathway is that climbing fibers (Cm in Fig. 1) from the inferior olive (IO) innervate the Purkinje cells. Taking into account the huge mass of intermediate computational units in the indirect pathway, namely the granule cells, Marr conjectured that the cerebellum operates as a perceptron with high computational power [8]. The climbing fibers were thought to induce long-term potentiation (LTP) of pl-Pr synapses to reinforce the signal transduction. Albus claimed that long-term depression (LTD) rather than LTP should occur, so that the Purkinje cells inhibit the nuclei [2]. The climbing fibers were thought to serve as teaching lines that convey error-correcting signals.

Figure 1: Architecture of the VOR model.

The vestibulo-ocular reflex (VOR) is a standard benchmark for exploring synaptic substrates of cerebellar motor learning. The VOR is a short-latency reflex eye movement that stabilizes images on the retina during head movement. Motion of the head drives eye movements in the opposite direction. When a subject wears a prism, adaptation of the VOR gain occurs for image stabilization. In this context, in vivo experiments confirmed that the LTD hypothesis is correct (reviewed in [6]). However, the cerebellum is not the only site of convergence of visual and vestibular signals. The learning scheme depending only on the indirect pathway is called the flocculus hypothesis. An alternative is the brainstem hypothesis, in which synaptic plasticity is assumed to occur in the direct pathway (mf → VN) [12]. This idea is supported by experimental evidence that flocculus shutdown after 3 days of VOR adaptation does not impair the motor memory [7].
Moreover, in other experiments, plasticity of the Purkinje cells in response to vestibular inputs, as required by the flocculus hypothesis, does occur, but in the direction opposite to that predicted by the flocculus hypothesis [5, 12]. Also, LTP of the mf-VN synapses, which is necessary to implement the brainstem hypothesis [3], has been suggested by experiments [14]. The relative contributions of the flocculus mechanism and the brainstem mechanism to motor learning remain elusive [3, 5, 9]. The same controversy exists regarding the mechanism of associative eyelid conditioning [9, 10, 11]. Related is the distinction between short-term and long-term plasticities. Many of the experiments in favor of the flocculus hypothesis are concerned with short-term learning, whereas plasticity involving the vestibular nuclei is suggested to be functional in the long term. Short-term motor memory in the flocculus may eventually be transferred to the brainstem. This is termed the memory transfer hypothesis [9]. Medina and Mauk proposed a numerical model and examined what types of brainstem learning rules are compatible with memory transfer [10]. They concluded that the brainstem plasticity should be driven by coincident activities of the Purkinje cells and the mossy fibers. The necessity of a Hebbian type of learning in the direct pathway is also supported by another numerical model [13]. We propose a much simpler model to understand the essential mechanism of memory transfer without fine parameter manipulations. Another goal of this work is to explain savings in learning. Savings are observed in natural learning tasks. Because animals can be trained for only a limited amount of time per day, the task period and the rest period alternate with a period of, e.g., 1 day. Performance is improved during the task period, and it degrades during the rest period (in the dark).
However, when the alternation is repeated, the performance is enhanced more rapidly and progressively in later sessions [7] (also, S. Nagao, private communication). The flocculus may be responsible for daily rapid learning and forgetting, and the brainstem may underlie gradual memory consolidation [11]. While our target phenomenon of interest is the VOR, the proposed model is fairly general.

2 Model

Looking at Fig. 1, let us denote by u ∈ R^m the external input to the mossy fibers. It is propagated to the granule cells via synaptic connectivity represented by an n-by-m matrix A, where presumably n ≫ m. The output of the granule cells, x ≡ Au ∈ R^n, is received by the Purkinje-cell layer. For simplicity, we assume just one Purkinje cell whose output is written as y ≡ wx, where w ∈ R^{1×n}. Since pl-Pr synapses are excitatory, the elements of w are positive. The direct pathway (mf → VN) is defined by a plastic connection matrix v ∈ R^{1×m}. The output to the VOR actuator is given by z = vu − y = vu − wAu, which is the output of the sole neuron of the cerebellar nuclei. This form of z takes into account that the contribution of the indirect pathway is inhibitory and that of the direct pathway is excitatory. The animal learns to bring z as close as possible to the desirable motor output ru. For a large (resp. small) desirable gain r, the correct direction of synaptic changes is a decrease (resp. increase) in w and an increase (resp. decrease) in v [5]. The learning error e ≡ ru − z is carried by the climbing fibers and projects onto the Purkinje cell, which enables supervised learning [6]. LTD of w occurs when the parallel-fiber input and the climbing-fiber input are simultaneously large [6, 9]. Since we can write

ẇ = −η1 e x = −(1/2) η1 ∂e²/∂w,   (1)

where η1 is the learning rate, w evolves to minimize e². Equation (1) is a type of Widrow-Hoff rule [4, p. 320]. With spontaneous inputs only, that is, in the presence of x and the absence of e, w experiences LTP [6, 9].
We model this effect by adding η2 x to Eq. (1). This term provides subtractive normalization that counteracts the use-dependent LTD [4, p. 290]. However, subtractive normalization cannot prevent w from running away when the error signal is turned off. Therefore, we additionally assume a multiplicative normalization term η3 w to limit the magnitude of w [4, p. 290, 314]. In the end, Eq. (1) is modified to

ẇ = −η1 (ru − vu + wAu) Au + η2 Au − η3 w,   (2)

where η2 and η3 are rates of memory decay satisfying η2, η3 ≪ η1. In the dark, the VOR gain, which might have changed via adaptation, tends back to a value close to unity [5]. Let us represent this reference gain by r = r0. With the synaptic strengths in this null condition denoted by (w, v) = (w0, v0), we obtain r0 u = v0 u − w0 Au. By setting ẇ = 0 in Eq. (2), we derive

η2 Au = η1 (r0 u − v0 u + w0 Au) Au + η3 w0 = η3 w0.   (3)

Substituting Eq. (3) into Eq. (2) results in

ẇ = −η1 (ru − vu + wAu) Au − η3 (w − w0).   (4)

Experiments show that v can be potentiated [14]. Enhancement of the excitability of the nucleus output (z) in response to tetanic stimulation, or sustained u, is also in line with LTP of v [1]. In contrast, LTD of v is biologically unknown. Numerical models suggest that LTP in the nuclei should be driven by y [10, 11]. However, the mechanism and the specificity underlying the plasticity of v are not well understood [9]. Therefore, we assume that both LTP and LTD of v occur in an associative manner, and we represent the LTP effect by a general function F. In parallel to the learning rule of w, we assume a subtractive normalization term −η5 u [10]. We also add a multiplicative normalization term η6 v to constrain v. Finally, we obtain

v̇ = η4 F(u, y, z, e) − η5 u − η6 v.   (5)

Presumably, v changes much more slowly (on a time scale of 8–12 hr) than w does (0.5 hr) [10, 13]. Therefore, we assume η1 ≫ η4 ≫ η5, η6.
3 Analysis of Memory Transfer

Let us examine a couple of learning rules in the direct pathway to identify robust learning mechanisms.

3.1 Supervised learning

Although the climbing fibers carrying e send excitatory collaterals to the cerebellar nuclei, supervised learning there has very little experimental support [5]. Here we show that supervised learning in the direct pathway is theoretically unlikely. Let us assume that modification of v decreases |e|. Accordingly, we set F = −∂e²/∂v = eu. Then, Eq. (5) becomes

v̇ = η4 (ru − vu + wAu) u − η5 u − η6 v.   (6)

In the natural situation, r = r0. Hence,

η5 u = η4 (r0 u − v0 u + w0 Au) u − η6 v0 = −η6 v0.   (7)

Inserting Eq. (7) into Eq. (6) yields

v̇ = η4 (ru − vu + wAu) u − η6 (v − v0).   (8)

For further analysis, let us assume m = n = 1 (for which we drop the bold notation) and perform a slow-fast analysis based on η1 ≫ η3 and η4 ≫ η6. Equations (4) and (8) define the nullclines ẇ = 0 and v̇ = 0, which are represented respectively by

v = v0 + (r − r0) + [(η1 A²u² + η3) / (η1 Au²)] (w − w0), and   (9)

v = v0 + [η4 u² / (η4 u² + η6)] (r − r0) + [η4 Au² / (η4 u² + η6)] (w − w0).   (10)

Since ẇ = O(η1) ≫ O(η4) = v̇ in an early stage, a trajectory in the w-v plane initially approaches the fast manifold (Eq. (9)) and moves along it toward the equilibrium given by

w∗ = w0 − η1 η6 Au² (r − r0) / (η1 η6 A²u² + η3 η4 u² + η3 η6),
v∗ = v0 + η3 η4 u² (r − r0) / (η1 η6 A²u² + η3 η4 u² + η3 η6).   (11)

LTD of w and LTP of v are expected for adaptation to a larger gain (r > r0), and LTP of w and LTD of v are expected for r < r0. The results are consistent with both the flocculus hypothesis and the brainstem hypothesis as far as the direction of learning is concerned [5]. When r > r0 (resp. r < r0), LTD (resp. LTP) of w first occurs to decrease the learning error. Then, the motor memory stored in w is gradually transferred by LTP (resp. LTD) of v replacing LTD (resp. LTP) of w. In the long run, the memory is stored mainly in v, not in w. However, memory transfer based on supervised learning has fundamental deficiencies.
First, since η1 ≫ η3 and η4 ≫ η6, both nullclines, Eqs. (9) and (10), have a slope close to A in the w-v plane. This means that the relative position of the equilibrium depends heavily on the parameter values, especially on the learning rates, the choice of which is rather arbitrary. Then, (w∗, v∗) may be located so that, for example, LTP of w or LTD of v results from r > r0. Also, the degree of transfer, or |w∗ − w0| / |v∗ − v0|, is not robust against parameter changes. This may underlie the fact that LTD of w was not followed by partial LTP in the numerical simulations in [10]. Even if the position of (w∗, v∗) happens to support LTD of w and LTP of v, memory transfer takes a long time. This is because Eqs. (9) and (10) are fairly close, which means that v̇ is small on the fast manifold (ẇ = 0). We can also imagine a type of Hebbian rule with F = ∂z²/∂v = zu. Similar calculations show that this rule also realizes memory transfer only in an unreliable manner.

Figure 2: Dynamics of the synaptic weights in the Purkinje cell-dependent learning. (A) r > r0 and (B) r < r0.

3.2 Purkinje cell-dependent learning

Results of numerical studies support that v should be subject to a type of Hebbian learning depending on the two afferents to the vestibular nuclei, namely, u and y [10, 11, 13]. Changes in the VOR gain are signaled by y. Since LTP should logically occur when y is small and u is large, we set F = (y_max − y) u, where y_max is the maximum firing rate of the Purkinje cell. Then, we obtain

v̇ = η4 (y_max − wAu) u − η5 u − η6 v.   (12)

The subtractive normalization is determined from the equilibrium condition:

η5 u = η4 (y_max − w0 Au) u − η6 v0.   (13)

Substituting Eq. (13) into Eq. (12) yields

v̇ = η4 (w0 − w) Au² + η6 (v0 − v).   (14)

When m = n = 1, the nullclines are given by Eq. (9) and

v = v0 − (η4 Au² / η6) (w − w0),   (15)

which are depicted in Fig. 2(A) and (B) for r > r0 and r < r0, respectively. As shown by arrows in Fig.
2, trajectories in the w-v space first approach the fast manifold, Eq. (9), and then move along it toward the equilibrium given by

w∗ = w0 − η1 η6 Au² (r − r0) / (η1 η4 A²u⁴ + η1 η6 A²u² + η3 η6),
v∗ = v0 + η1 η4 A²u⁴ (r − r0) / (η1 η4 A²u⁴ + η1 η6 A²u² + η3 η6).   (16)

Equation (15) has a large negative slope because η4 ≫ η6. Consequently, setting r > r0 (resp. r < r0) duly results in LTD (resp. LTP) of w and LTP (resp. LTD) of v. At the same time, LTD (resp. LTP) of w in an early stage of learning is partially compensated by subsequent LTP (resp. LTD) of w, which agrees with previously reported numerical results [10]. In contrast to the supervised and Hebbian learning rules, this learning is robust against parameter changes, since the positions and the slopes of the two nullclines are well apart from each other. Owing to this property, in the long term, the memory is transferred more rapidly along the w-nullcline than for the other two learning rules. Another benefit of the large negative slope of Eq. (15) is that |v∗ − v0| ≫ |w∗ − w0| holds, which means efficient memory transfer from w to v. The error at the equilibrium state is

e∗ = η3 η6 (r − r0) u / (η1 η4 A²u⁴ + η1 η6 A²u² + η3 η6).   (17)

Equation (17) guarantees that the e = 0 line is located as shown in Fig. 2, and the learning proceeds so as to decrease |e|. A performance overshoot, which would be unrealistic, does not occur.

4 Numerical Simulations of Savings

The learning rule proposed in Sec. 3.2 explains savings as well. To show this, we mimic a savings situation by periodically alternating the task period and the rest period. Specifically, we start with r = r0 = 1, w = w0, v = v0, and the learning condition (r = 2 or r = 0.5) is applied for 4 hours a day. During the rest of the day (20 hours), the dark condition is simulated by giving no teaching signal to the model. Changes in the VOR gain for 8 consecutive days are shown in Fig. 3(A) and (C) for r = 2 and r = 0.5, respectively.
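This protocol is easy to reproduce with a forward-Euler integration of Eqs. (4) and (14). The sketch below is our own re-implementation, using the parameter values reported in the caption of Fig. 3; the step size and the treatment of the dark period (the climbing-fiber error term switched off while the decay and transfer terms keep running) are our assumptions.

```python
def simulate_savings(r_target=2.0, days=8, dt=0.005):
    # Parameter values as reported in the caption of Fig. 3.
    A, u = 0.4, 1.0
    w0, r0 = 2.0, 1.0
    v0 = r0 + A * w0                       # so that the initial VOR gain is r0
    eta1, eta3, eta4, eta6 = 7.0, 0.3, 0.05, 0.002
    w, v = w0, v0
    pre_gains, post_gains = [], []         # gain just before / just after each task period
    for _ in range(days):
        pre_gains.append(v - A * w)        # VOR gain z/u = v - wA (u = 1)
        for hours, training in ((4.0, True), (20.0, False)):
            for _ in range(int(hours / dt)):
                dw = -eta3 * (w - w0)      # decay toward the null condition
                if training:               # error-driven term of Eq. (4); off in the dark (assumption)
                    dw -= eta1 * (r_target * u - v * u + w * A * u) * A * u
                # Purkinje cell-dependent rule in the direct pathway, Eq. (14)
                dv = eta4 * (w0 - w) * A * u * u + eta6 * (v0 - v)
                w += dw * dt
                v += dv * dt
            if training:
                post_gains.append(v - A * w)
    return w, v, pre_gains, post_gains
```

Running simulate_savings() shows the signature of savings: the gain reached during each 4-hour session, and especially the gain retained at the start of each day, increases from day to day while v accumulates the transferred memory.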
The numerical results are consistent with the savings found in other reported experiments [7] and models [11]; the animal forgets much of the acquired gain in the dark, while a small fraction is transferred each day to the cerebellar nuclei. The time-dependent synaptic weights are shown in Fig. 3(B) (r = 2) and (D) (r = 0.5) and suggest that v is really responsible for savings and that its plasticity needs guidance from the short-term learning of w. The memory transfer occurs even in the dark condition, as indicated by the increase (resp. decrease) of v in the dark shown in Fig. 3(B) (resp. (D)). This happens because the decay of the short-term memory of w drives the learning of v for some time even after the daily training has finished. For the indirect pathway, a dark condition defines an off-task period during which w gradually loses its associations. For comparison, let us consider the case in which v is fixed. Then, the learning rule Eq. (4) reduces to

ẇ = −η1 [(r − r0) u + (w − w0) Au] Au − η3 (w − w0).   (18)

The VOR adaptation with this rule is shown in Fig. 4(A) (r = 2) and (B) (r = 0.5). Long-term retention of the acquired gain is now impossible, whereas the short-term learning, or the adaptation within a day, deteriorates little. Since savings do not occur, the ultimate learning error is larger than when v is plastic. However, if w is fixed and v is plastic, the VOR gain is not adaptive, since y no longer carries teaching signals. In this case, we must implement supervised learning of v for learning to occur. Then, r adapts only gradually on the slow time scale of η4, and the short-term learning is lost.

5 Discussion

Our model explains how the flocculus and the brainstem cooperate in motor learning. Presumably, the indirect pathway involving the flocculus is computationally powerful because of its huge number of intermediate granule cells, but its memory is of a short-term nature.
The direct pathway bypassing the mossy fibers to the cerebellar nuclei is likely to have less computational power but stores motor memory for a long period. A part of the motor memory is expected to be passed from the flocculus to the nuclei. This happens in a robust manner if the direct pathway is equipped with a learning rule dependent on the correlation between the Purkinje-cell firing and the mossy-fiber firing. Exploring whether associative LTP/LTD in the cerebellar nuclei really exists will be a subject of future experimental work. Our model is also applicable to savings.

Figure 3: Numerical simulations of savings with the Purkinje cell-dependent learning rule. We set A = 0.4, u = 1, w0 = 2, r0 = 1, v0 = r0 + Aw0, η1 = 7, η3 = 0.3, η4 = 0.05, η6 = 0.002. The target gains are (A, B) r = 2 and (C, D) r = 0.5. (A) and (C) show VOR gains. (B) and (D) show trajectories in the w-v space (thin solid lines) together with the nullclines (thick solid lines) and e = 0 (thick dotted lines).

Figure 4: Numerical simulations of savings with fixed v. The parameter values are the same as those used in Fig. 3. The target gains are (A) r = 2 and (B) r = 0.5.

In the earlier models [10, 11], quantitative meanings were given to the equilibrium synaptic weights. Actually, they are solely determined by non-experimentally-determined parameters, namely, the balance between the learning rates (in our terminology, η1, η2, η4 and η5). Also, this balance seems to play a role in preventing runaway of the synaptic weights. In contrast, our model uses the ratio of learning rates (and the values of other parameters) only for qualitative purposes and is capable of explaining and predicting experimental settings without parameter tuning.
For example, the earlier arguments negating the flocculus hypothesis are based on the fact that the plasticity of the flocculus (w) in response to vestibular inputs occurs, but in the direction opposite to the expectation of the flocculus hypothesis [5, 12]. However, this experimental observation is not necessarily contradictory to either the flocculus hypothesis or the two-site hypothesis. As shown in Fig. 2(A), when adapting to a large VOR gain, w experiences LTD in the initial stage [6]. Then, partial LTP ensues as the motor memory is transferred to the nuclei. Another prediction is about adaptation to a small gain. Figure 2(B) predicts that, in this case, LTP in the indirect pathway is gradually transferred to LTD in the direct pathway. Partial LTD following LTP is anticipated in the flocculus. This implies savings in unlearning.

Acknowledgments

We thank S. Nagao for helpful discussions. This work was supported by the Special Postdoctoral Researchers Program of RIKEN.

References

[1] C. D. Aizenman, D. J. Linden. Rapid, synaptically driven increases in the intrinsic excitability of cerebellar deep nuclear neurons. Nat. Neurosci., 3, 109–111 (2000).
[2] J. S. Albus. A theory of cerebellar function. Math. Biosci., 10, 25–61 (1971).
[3] E. S. Boyden, A. Katoh, J. L. Raymond. Cerebellum-dependent learning: the role of multiple plasticity mechanisms. Annu. Rev. Neurosci., 27, 581–609 (2004).
[4] P. Dayan, L. F. Abbott. Theoretical Neuroscience — Computational and Mathematical Modeling of Neural Systems. MIT (2001).
[5] S. du Lac, J. L. Raymond, T. J. Sejnowski, S. G. Lisberger. Learning and memory in the vestibulo-ocular reflex. Annu. Rev. Neurosci., 18, 409–441 (1995).
[6] M. Ito. Long-term depression. Annu. Rev. Neurosci., 12, 85–102 (1989).
[7] A. E. Luebke, D. A. Robinson. Gain changes of the cat's vestibulo-ocular reflex after flocculus deactivation. Exp. Brain Res., 98, 379–390 (1994).
[8] D. Marr. A theory of cerebellar cortex. J. Physiol., 202, 437–470 (1969).
[9] M. D. Mauk. Roles of cerebellar cortex and nuclei in motor learning: contradictions or clues? Neuron, 18, 343–346 (1997).
[10] J. F. Medina, M. D. Mauk. Simulations of cerebellar motor learning: computational analysis of plasticity at the mossy fiber to deep nucleus synapse. J. Neurosci., 19, 7140–7151 (1999).
[11] J. F. Medina, K. S. Garcia, M. D. Mauk. A mechanism for savings in the cerebellum. J. Neurosci., 21, 4081–4089 (2001).
[12] F. A. Miles, D. J. Braitman, B. M. Dow. Long-term adaptive changes in primate vestibuloocular reflex. IV. Electrophysiological observations in flocculus of adapted monkeys. J. Neurophysiol., 43, 1477–1493 (1980).
[13] B. W. Peterson, J. F. Baker, J. C. Houk. A model of adaptive control of vestibuloocular reflex based on properties of cross-axis adaptation. Ann. New York Acad. Sci., 627, 319–337 (1991).
[14] R. J. Racine, D. A. Wilson, R. Gingell, D. Sunderland. Long-term potentiation in the interpositus and vestibular nuclei in the rat. Exp. Brain Res., 63, 158–162 (1986).
TD(0) Leads to Better Policies than Approximate Value Iteration Benjamin Van Roy Management Science and Engineering and Electrical Engineering Stanford University Stanford, CA 94305 bvr@stanford.edu Abstract We consider approximate value iteration with a parameterized approximator in which the state space is partitioned and the optimal cost-to-go function over each partition is approximated by a constant. We establish performance loss bounds for policies derived from approximations associated with fixed points. These bounds identify benefits to having projection weights equal to the invariant distribution of the resulting policy. Such projection weighting leads to the same fixed points as TD(0). Our analysis also leads to the first performance loss bound for approximate value iteration with an average cost objective. 1 Preliminaries Consider a discrete-time communicating Markov decision process (MDP) with a finite state space S = {1, . . . , |S|}. At each state x ∈ S, there is a finite set Ux of admissible actions. If the current state is x and an action u ∈ Ux is selected, a cost of gu(x) is incurred, and the system transitions to a state y ∈ S with probability pxy(u). For any x ∈ S and u ∈ Ux, Σy∈S pxy(u) = 1. Costs are discounted at a rate of α ∈ (0, 1) per period. Each instance of such an MDP is defined by a quintuple (S, U, g, p, α). A (stationary deterministic) policy is a mapping µ that assigns an action u ∈ Ux to each state x ∈ S. If actions are selected based on a policy µ, the state follows a Markov process with transition matrix Pµ, where each (x, y)th entry is equal to pxy(µ(x)). The restriction to communicating MDPs ensures that it is possible to reach any state from any other state. Each policy µ is associated with a cost-to-go function Jµ ∈ ℜ|S|, defined by Jµ = Σ∞t=0 αt Pµt gµ = (I − αPµ)−1 gµ, where, with some abuse of notation, gµ(x) = gµ(x)(x) for each x ∈ S.
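The closed-form cost-to-go Jµ = (I − αPµ)−1 gµ can be checked against the discounted sum directly; a minimal sketch with an invented two-state chain (all numbers illustrative only):

```python
import numpy as np

alpha = 0.9                      # discount rate
P_mu = np.array([[0.8, 0.2],     # transition matrix under a fixed policy mu
                 [0.3, 0.7]])    # (toy numbers, for illustration only)
g_mu = np.array([1.0, 2.0])      # per-period costs under mu

# Closed form: J_mu = (I - alpha P_mu)^{-1} g_mu
J_mu = np.linalg.solve(np.eye(2) - alpha * P_mu, g_mu)

# Truncated discounted sum: sum_t alpha^t P_mu^t g_mu
J_sum = sum((alpha ** t) * np.linalg.matrix_power(P_mu, t) @ g_mu
            for t in range(500))
assert np.allclose(J_mu, J_sum, atol=1e-6)
```

The truncation at 500 terms is safe because the tail is bounded by α^500 / (1 − α), which is negligible at α = 0.9.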
A policy µ is said to be greedy with respect to a function J if µ(x) ∈ argminu∈Ux (gu(x) + α Σy∈S pxy(u)J(y)) for all x ∈ S. The optimal cost-to-go function J∗ ∈ ℜ|S| is defined by J∗(x) = minµ Jµ(x), for all x ∈ S. A policy µ∗ is said to be optimal if Jµ∗ = J∗. It is well-known that an optimal policy exists. Further, a policy µ∗ is optimal if and only if it is greedy with respect to J∗. Hence, given the optimal cost-to-go function, optimal actions can be computed by minimizing the right-hand side of the above inclusion. Value iteration generates a sequence Jℓ converging to J∗ according to Jℓ+1 = TJℓ, where T is the dynamic programming operator, defined by (TJ)(x) = minu∈Ux (gu(x) + α Σy∈S pxy(u)J(y)), for all x ∈ S and J ∈ ℜ|S|. This sequence converges to J∗ for any initialization of J0. 2 Approximate Value Iteration The state spaces of relevant MDPs are typically so large that computation and storage of a cost-to-go function is infeasible. One approach to dealing with this obstacle involves partitioning the state space S into a manageable number K of disjoint subsets S1, . . . , SK and approximating the optimal cost-to-go function with a function that is constant over each partition. This can be thought of as a form of state aggregation – all states within a given partition are assumed to share a common optimal cost-to-go. To represent an approximation, we define a matrix Φ ∈ ℜ|S|×K such that each kth column is an indicator function for the kth partition Sk. Hence, for any r ∈ ℜK, k, and x ∈ Sk, (Φr)(x) = rk. In this paper, we study variations of value iteration, each of which computes a vector r so that Φr approximates J∗. The use of such a policy µr which is greedy with respect to Φr is justified by the following result (see [10] for a proof): Theorem 1 If µ is a greedy policy with respect to a function ˜J ∈ ℜ|S| then ∥Jµ − J∗∥∞ ≤ (2α/(1 − α)) ∥J∗ − ˜J∥∞.
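Value iteration and the greedy policy it induces can be sketched in a few lines; the two-state, two-action MDP below is invented for illustration:

```python
import numpy as np

alpha = 0.9
# p[u] is the transition matrix under action u; g[u] the cost vector (toy numbers).
p = {0: np.array([[0.9, 0.1], [0.5, 0.5]]),
     1: np.array([[0.2, 0.8], [0.1, 0.9]])}
g = {0: np.array([1.0, 3.0]), 1: np.array([2.0, 0.5])}

def T(J):
    """Dynamic programming operator: (TJ)(x) = min_u g_u(x) + alpha * sum_y p_xy(u) J(y)."""
    return np.min([g[u] + alpha * p[u] @ J for u in (0, 1)], axis=0)

J = np.zeros(2)
for _ in range(2000):          # J_{l+1} = T J_l converges for any initialization
    J = T(J)
assert np.allclose(J, T(J), atol=1e-8)   # (near) fixed point, i.e. J ~ J*

# Greedy policy with respect to the converged J
mu = np.argmin([g[u] + alpha * p[u] @ J for u in (0, 1)], axis=0)
```

Since T is an α-contraction in the maximum norm, 2000 iterations at α = 0.9 put J at the fixed point to machine precision.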
One common way of approximating a function J ∈ ℜ|S| with a function of the form Φr involves projection with respect to a weighted Euclidean norm ∥·∥2,π, defined by ∥J∥2,π = (Σx∈S π(x)J2(x))1/2. Here, π ∈ ℜ|S|+ is a vector of weights that assign relative emphasis among states. The projection ΠπJ is the function Φr that attains the minimum of ∥J − Φr∥2,π; if there are multiple functions Φr that attain the minimum, they must form an affine space, and the projection is taken to be the one with minimal norm ∥Φr∥2,π. Note that in our context, where each kth column of Φ represents an indicator function for the kth partition, for any π, J, and x ∈ Sk, (ΠπJ)(x) = Σy∈Sk π(y)J(y) / Σy∈Sk π(y). Approximate value iteration begins with a function Φr(0) and generates a sequence according to Φr(ℓ+1) = ΠπTΦr(ℓ). It is well-known that the dynamic programming operator T is a contraction mapping with respect to the maximum norm. Further, Ππ is maximum-norm nonexpansive [16, 7, 8]. (This is not true for general Φ, but is true in our context in which columns of Φ are indicator functions for partitions.) It follows that the composition ΠπT is a contraction mapping. By the contraction mapping theorem, ΠπT has a unique fixed point Φ˜r, which is the limit of the sequence Φr(ℓ). Further, the following result holds: Theorem 2 For any MDP, partition, and weights π with support intersecting every partition, if Φ˜r = ΠπTΦ˜r then ∥Φ˜r − J∗∥∞ ≤ (2/(1 − α)) minr∈ℜK ∥J∗ − Φr∥∞, and (1 − α)∥Jµ˜r − J∗∥∞ ≤ (4α/(1 − α)) minr∈ℜK ∥J∗ − Φr∥∞. The first inequality of the theorem is an approximation error bound, established in [16, 7, 8] for broader classes of approximators that include state aggregation as a special case. The second is a performance loss bound, derived by simply combining the approximation error bound and Theorem 1. Note that Jµ˜r(x) ≥ J∗(x) for all x, so the left-hand side of the performance loss bound is the maximal increase in cost-to-go, normalized by 1 − α.
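For state aggregation, the projection Ππ reduces to a π-weighted average of J over each partition; a minimal sketch (partition layout and weights invented):

```python
import numpy as np

def project(J, partition, pi):
    """(Pi_pi J)(x): pi-weighted average of J over the partition containing x.

    partition[k] lists the states of S_k; pi holds the projection weights."""
    out = np.empty_like(J, dtype=float)
    for states in partition:
        s = np.asarray(states)
        out[s] = np.dot(pi[s], J[s]) / pi[s].sum()
    return out

J = np.array([1.0, 3.0, 10.0, 12.0])
partition = [[0, 1], [2, 3]]
pi = np.array([0.25, 0.25, 0.25, 0.25])   # uniform weights (pi = 1, normalized)
PJ = project(J, partition, pi)
assert np.allclose(PJ, [2.0, 2.0, 11.0, 11.0])
```

With uniform weights each partition simply receives the mean of J over its states; non-uniform π shifts those averages toward the heavily weighted states.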
This normalization is natural, since a cost-to-go function is a linear combination of expected future costs, with coefficients 1, α, α2, . . ., which sum to 1/(1 − α). Our motivation of the normalizing constant raises the question of whether, for fixed MDP parameters (S, U, g, p) and fixed Φ, minr ∥J∗ − Φr∥∞ also grows with 1/(1 − α). It turns out that minr ∥J∗ − Φr∥∞ = O(1). To see why, note that for any µ, Jµ = (I − αPµ)−1 gµ = (1/(1 − α)) λµ + hµ, where λµ(x) is the expected average cost if the process starts in state x and is controlled by policy µ, λµ = limτ→∞ (1/τ) Στ−1t=0 Pµt gµ, and hµ is the discounted differential cost function hµ = (I − αPµ)−1(gµ − λµ). Both λµ and hµ converge to finite vectors as α approaches 1 [3]. For an optimal policy µ∗, limα↑1 λµ∗(x) does not depend on x (in our context of a communicating MDP). Since constant functions lie in the range of Φ, limα↑1 minr∈ℜK ∥J∗ − Φr∥∞ ≤ limα↑1 ∥hµ∗∥∞ < ∞. The performance loss bound still exhibits an undesirable dependence on α through the coefficient 4α/(1 − α). In most relevant contexts, α is close to 1; a representative value might be 0.99. Consequently, 4α/(1 − α) can be very large. Unfortunately, the bound is sharp, as expressed by the following theorem. We will denote by 1 the vector with every component equal to 1. Theorem 3 For any δ > 0, α ∈ (0, 1), and ∆ ≥ 0, there exist MDP parameters (S, U, g, p) and a partition such that minr∈ℜK ∥J∗ − Φr∥∞ = ∆ and, if Φ˜r = ΠπTΦ˜r with π = 1, (1 − α)∥Jµ˜r − J∗∥∞ ≥ (4α/(1 − α)) minr∈ℜK ∥J∗ − Φr∥∞ − δ. This theorem is established through an example in [22]. The choice of uniform weights (π = 1) is meant to point out that even for such a simple, perhaps natural, choice of weights, the performance loss bound is sharp. Based on Theorems 2 and 3, one might expect that there exist MDP parameters (S, U, g, p) and a partition such that, with π = 1, (1 − α)∥Jµ˜r − J∗∥∞ = Θ((1/(1 − α)) minr∈ℜK ∥J∗ − Φr∥∞).
In other words, that the performance loss is both lower and upper bounded by 1/(1 −α) times the smallest possible approximation error. It turns out that this is not true, at least if we restrict to a finite state space. However, as the following theorem establishes, the coefficient multiplying minr∈ℜK ∥J∗−Φr∥∞can grow arbitrarily large as α increases, keeping all else fixed. Theorem 4 For any L and ∆≥0, there exists MDP parameters (S, U, g, p) and a partition such that limα↑1 minr∈ℜK ∥J∗−Φr∥∞= ∆and, if Φ˜r = ΠπTΦ˜r with π = 1, lim inf α↑1 (1 −α) (Jµ˜r(x) −J∗(x)) ≥L lim α↑1 min r∈ℜK ∥J∗−Φr∥∞, for all x ∈S. This Theorem is also established through an example [22]. For any µ and x, lim α↑1 ((1 −α)Jµ(x) −λµ(x)) = lim α↑1(1 −α)hµ(x) = 0. Combined with Theorem 4, this yields the following corollary. Corollary 1 For any L and ∆≥0, there exists MDP parameters (S, U, g, p) and a partition such that limα↑1 minr∈ℜK ∥J∗−Φr∥∞= ∆and, if Φ˜r = ΠπTΦ˜r with π = 1, lim inf α↑1 (λµ˜r(x) −λµ∗(x)) ≥L lim α↑1 min r∈ℜK ∥J∗−Φr∥∞, for all x ∈S. 3 Using the Invariant Distribution In the previous section, we considered an approximation Φ˜r that solves ΠπTΦ˜r = Φ˜r for some arbitrary pre-selected weights π. We now turn to consider use of an invariant state distribution π˜r of Pµ˜r as the weight vector.1 This leads to a circular definition: the weights are used in defining ˜r and now we are defining the weights in terms of ˜r. What we are really after here is a vector ˜r that satisfies Ππ˜rTΦ˜r = Φ˜r. The following theorem captures the associated benefits. (Due to space limitations, we omit the proof, which is provided in the full length version of this paper [22].) Theorem 5 For any MDP and partition, if Φ˜r = Ππ˜rTΦ˜r and π˜r has support intersecting every partition, (1 −α)πT ˜r (Jµ˜r −J∗) ≤2α minr∈ℜK ∥J∗−Φr∥∞. When α is close to 1, which is typical, the right-hand side of our new performance loss bound is far less than that of Theorem 2. 
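The invariant state distribution π˜r used as the weight vector here satisfies πT Pµ˜r = πT, and can be computed as a normalized left eigenvector of the transition matrix at eigenvalue 1; a minimal sketch with an invented two-state matrix:

```python
import numpy as np

# Toy transition matrix of the policy's Markov chain (numbers invented).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Left eigenvector of P at eigenvalue 1: eigenvector of P^T with largest eigenvalue.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()                 # normalize to a probability distribution

assert np.allclose(pi @ P, pi)     # invariance: pi^T P = pi^T
```

For this chain the balance condition 0.1 π(1) = 0.4 π(2) gives π = (0.8, 0.2).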
The primary improvement is in the omission of a factor of 1 −α from the denominator. But for the bounds to be compared in a meaningful way, we must also relate the left-hand-side expressions. A relation can be based on the fact that for all µ, limα↑1 ∥(1 −α)Jµ −λµ∥∞= 0, as explained in Section 2. In particular, based on this, we have lim α↑1(1 −α)∥Jµ −J∗∥∞= |λµ −λ∗| = λµ −λ∗= lim α↑1 πT (Jµ −J∗), for all policies µ and probability distributions π. Hence, the left-hand-side expressions from the two performance bounds become directly comparable as α approaches 1. Another interesting comparison can be made by contrasting Corollary 1 against the following immediate consequence of Theorem 5. Corollary 2 For all MDP parameters (S, U, g, p) and partitions, if Φ˜r = Ππ˜rTΦ˜r and lim infα↑1 P x∈Sk π˜r(x) > 0 for all k, lim sup α↑1 ∥λµ˜r −λµ∗∥∞≤2 lim α↑1 min r∈ℜK ∥J∗−Φr∥∞. The comparison suggests that solving Φ˜r = Ππ˜rTΦ˜r is strongly preferable to solving Φ˜r = ΠπTΦ˜r with π = 1. 1By an invariant state distribution of a transition matrix P, we mean any probability distribution π such that πT P = πT . In the event that Pµ˜r has multiple invariant distributions, π˜r denotes an arbitrary choice. 4 Exploration If a vector ˜r solves Φ˜r = Ππ˜rTΦ˜r and the support of π˜r intersects every partition, Theorem 5 promises a desirable bound. However, there are two significant shortcomings to this solution concept, which we will address in this section. First, in some cases, the equation Ππ˜rTΦ˜r = Φ˜r does not have a solution. It is easy to produce examples of this; though no example has been documented for the particular class of approximators we are using here, [2] offers an example involving a different linearly parameterized approximator that captures the spirit of what can happen. Second, it would be nice to relax the requirement that the support of π˜r intersect every partition. To address these shortcomings, we introduce stochastic policies. 
A stochastic policy µ maps state-action pairs to probabilities. For each x ∈ S and u ∈ Ux, µ(x, u) is the probability of taking action u when in state x. Hence, µ(x, u) ≥ 0 for all x ∈ S and u ∈ Ux, and Σu∈Ux µ(x, u) = 1 for all x ∈ S. Given a scalar ϵ > 0 and a function J, the ϵ-greedy Boltzmann exploration policy with respect to J is defined by µ(x, u) = exp(−(TuJ)(x)(|Ux| − 1)/(ϵe)) / Σu∈Ux exp(−(TuJ)(x)(|Ux| − 1)/(ϵe)). For any ϵ > 0 and r, let µϵr denote the ϵ-greedy Boltzmann exploration policy with respect to Φr. Further, we define a modified dynamic programming operator that incorporates Boltzmann exploration: (T ϵJ)(x) = Σu∈Ux exp(−(TuJ)(x)(|Ux| − 1)/(ϵe))(TuJ)(x) / Σu∈Ux exp(−(TuJ)(x)(|Ux| − 1)/(ϵe)). As ϵ approaches 0, ϵ-greedy Boltzmann exploration policies become greedy and the modified dynamic programming operators become the dynamic programming operator. More precisely, for all r, x, and J, limϵ↓0 µϵr(x, µr(x)) = 1 and limϵ↓0 T ϵJ = TJ. These are immediate consequences of the following result (see [4] for a proof). Lemma 1 For any n and v ∈ ℜn, mini vi + ϵ ≥ Σi exp(−vi(n − 1)/(ϵe)) vi / Σi exp(−vi(n − 1)/(ϵe)) ≥ mini vi. Because we are only concerned with communicating MDPs, there is a unique invariant state distribution associated with each ϵ-greedy Boltzmann exploration policy µϵr, and the support of this distribution is S. Let πϵr denote this distribution. We consider a vector ˜r that solves Φ˜r = Ππϵ˜r T ϵΦ˜r. For any ϵ > 0, there exists a solution to this equation (this is an immediate extension of Theorem 5.1 from [4]). We have the following performance loss bound, which parallels Theorem 5 but with an equation for which a solution is guaranteed to exist and without any requirement on the resulting invariant distribution. (Again, we omit the proof, which is available in [22].) Theorem 6 For any MDP, partition, and ϵ > 0, if Φ˜r = Ππϵ˜r T ϵΦ˜r then (1 − α)(πϵ˜r)T (Jµϵ˜r − J∗) ≤ 2α minr∈ℜK ∥J∗ − Φr∥∞ + ϵ.
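Lemma 1 can be checked numerically. The sketch below assumes the exponent is to be read as −vi(n − 1)/(ϵ · e), with e Euler's constant; under that reading the Boltzmann-weighted soft minimum stays within ϵ of the true minimum:

```python
import numpy as np

def boltzmann_softmin(v, eps):
    """Weighted average of v with weights proportional to exp(-v_i (n-1)/(eps*e)).

    Assumes the (eps*e) grouping of the exponent; e is Euler's constant."""
    v = np.asarray(v, dtype=float)
    n = len(v)
    # Shifting by min(v) leaves the weight ratios unchanged and avoids underflow.
    w = np.exp(-(v - v.min()) * (n - 1) / (eps * np.e))
    return float(np.dot(w, v) / w.sum())

rng = np.random.default_rng(1)
for _ in range(1000):
    v = rng.uniform(-5.0, 5.0, size=int(rng.integers(2, 6)))
    eps = float(rng.uniform(0.01, 1.0))
    s = boltzmann_softmin(v, eps)
    # Lemma 1: min_i v_i <= softmin <= min_i v_i + eps
    assert v.min() - 1e-9 <= s <= v.min() + eps + 1e-9
```

The upper bound holds because each of the n − 1 non-minimal terms contributes at most 1/(c·e) to the weighted excess, where c = (n − 1)/(ϵe), i.e. at most ϵ/(n − 1) each.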
5 Computation: TD(0) Though computation is not a focus of this paper, we offer a brief discussion here. First, we describe a simple algorithm from [16], which draws on ideas from temporal-difference learning [11, 12] and Q-learning [23, 24] to solve Φ˜r = ΠπTΦ˜r. It requires an ability to sample a sequence of states x(0), x(1), x(2), . . ., each independent and identically distributed according to π. Also required is a way to efficiently compute (TΦr)(x) = minu∈Ux (gu(x) + α Σy∈S pxy(u)(Φr)(y)), for any given x and r. This is typically possible when the action set Ux and the support of px·(u) (i.e., the set of states that can follow x if action u is selected) are not too large. The algorithm generates a sequence of vectors r(ℓ) according to r(ℓ+1) = r(ℓ) + γℓ φ(x(ℓ)) [(TΦr(ℓ))(x(ℓ)) − (Φr(ℓ))(x(ℓ))], where γℓ is a step size and φ(x) denotes the column vector made up of components from the xth row of Φ. In [16], using results from [15, 9], it is shown that under appropriate assumptions on the step size sequence, r(ℓ) converges to a vector ˜r that solves Φ˜r = ΠπTΦ˜r. The equation Φ˜r = ΠπTΦ˜r may have no solution. Further, the requirement that states are sampled independently from the invariant distribution may be impractical. However, a natural extension of the above algorithm leads to an easily implementable version of TD(0) that aims at solving Φ˜r = Ππϵ˜r T ϵΦ˜r. The algorithm requires simulation of a trajectory x0, x1, x2, . . . of the MDP, with each action ut ∈ Uxt generated by the ϵ-greedy Boltzmann exploration policy with respect to Φr(t). The sequence of vectors r(t) is generated according to r(t+1) = r(t) + γt φ(xt) [(T ϵΦr(t))(xt) − (Φr(t))(xt)]. Under suitable conditions on the step size sequence, if this algorithm converges, the limit satisfies Φ˜r = Ππϵ˜r T ϵΦ˜r. Whether such an algorithm converges and whether there are other algorithms that can effectively solve Φ˜r = Ππϵ˜r T ϵΦ˜r for broad classes of relevant problems remain open issues.
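The first update rule above (states sampled i.i.d. from π, with φ(x) an indicator of x's partition) can be sketched as follows; the four-state, single-action chain is invented, and its partition is chosen so that J∗ is exactly representable:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.9
# Toy uncontrolled 4-state chain (one action per state; numbers invented).
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.5, 0.5]])
g = np.array([1.0, 1.0, 5.0, 5.0])
part = np.array([0, 0, 1, 1])     # phi(x): which partition state x belongs to
pi = np.full(4, 0.25)             # i.i.d. sampling distribution

r = np.zeros(2)
for l in range(20000):
    x = rng.choice(4, p=pi)
    # (T Phi r)(x) = g(x) + alpha * sum_y P[x, y] (Phi r)(y); single action here
    target = g[x] + alpha * P[x] @ r[part]
    gamma = 1.0 / (1.0 + 0.01 * l)            # diminishing step size
    r[part[x]] += gamma * (target - r[part[x]])

# J* is constant on each partition here, so the fixed point of Pi_pi T
# coincides with J* = (I - alpha P)^{-1} g.
J_star = np.linalg.solve(np.eye(4) - alpha * P, g)
assert np.allclose(r[part], J_star, atol=0.1)
```

Because states within a partition are here statistically identical, the stochastic updates are noiseless and r converges to J∗ restricted to the partitions; in general the limit is the ΠπT fixed point, not J∗.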
6 Extensions and Open Issues Our results demonstrate that weighting a Euclidean norm projection by the invariant distribution of a greedy (or approximately greedy) policy can lead to a dramatic performance gain. It is intriguing that temporal-difference learning implicitly carries out such a projection, and consequently, any limit of convergence obeys the stronger performance loss bound. This is not the first time that the invariant distribution has been shown to play a critical role in approximate value iteration and temporal-difference learning. In prior work involving approximation of a cost-to-go function for a fixed policy (no control) and a general linearly parameterized approximator (arbitrary matrix Φ), it was shown that weighting by the invariant distribution is key to ensuring convergence and an approximation error bound [17, 18]. Earlier empirical work anticipated this [13, 14]. The temporal-difference learning algorithm presented in Section 5 is a version of TD(0). This is a special case of TD(λ), which is parameterized by λ ∈ [0, 1]. It is not known whether the results of this paper can be extended to the general case of λ ∈ [0, 1]. Prior research has suggested that larger values of λ lead to superior results. In particular, an example of [1] and the approximation error bounds of [17, 18], both of which are restricted to the case of a fixed policy, suggest that approximation error is amplified by a factor of 1/(1 − α) as λ is changed from 1 to 0. The results of Sections 3 and 4 suggest that this factor vanishes if one considers a controlled process and performance loss rather than approximation error. Whether the results of this paper can be extended to accommodate approximate value iteration with general linearly parameterized approximators remains an open issue. In this broader context, error and performance loss bounds of the kind offered by Theorem 2 are unavailable, even when the invariant distribution is used to weight the projection.
Such error and performance bounds are available, on the other hand, for the solution to a certain linear program [5, 6]. Whether a factor of 1/(1−α) can similarly be eliminated from these bounds is an open issue. Our results can be extended to accommodate an average cost objective, assuming that the MDP is communicating. With Boltzmann exploration, the equation of interest becomes Φ˜r = Ππϵ ˜r(T ϵΦ˜r −˜λ1). The variables include an estimate ˜λ ∈ℜof the minimal average cost λ∗∈ℜand an approximation Φ˜r of the optimal differential cost function h∗. The discount factor α is set to 1 in computing an ϵ-greedy Boltzmann exploration policy as well as T ϵ. There is an average-cost version of temporal-difference learning for which any limit of convergence (˜λ, ˜r) satisfies this equation [19, 20, 21]. Generalization of Theorem 2 does not lead to a useful result because the right-hand side of the bound becomes infinite as α approaches 1. On the other hand, generalization of Theorem 6 yields the first performance loss bound for approximate value iteration with an average-cost objective: Theorem 7 For any communicating MDP with an average-cost objective, partition, and ϵ > 0, if Φ˜r = Ππϵ ˜r(T ϵΦ˜r −˜λ1) then λµϵ ˜r −λ∗≤2 min r∈ℜK ∥h∗−Φr∥∞+ ϵ. Here, λµϵ ˜r ∈ℜdenotes the average cost under policy µϵ ˜r, which is well-defined because the process is irreducible under an ϵ-greedy Boltzmann exploration policy. This theorem can be proved by taking limits on the left and right-hand sides of the bound of Theorem 6. It is easy to see that the limit of the left-hand side is λµϵ ˜r −λ∗. The limit of minr∈ℜK ∥J∗−Φr∥∞ on the right-hand side is minr∈ℜK ∥h∗−Φr∥∞. (This follows from the analysis of [3].) Acknowledgments This material is based upon work supported by the National Science Foundation under Grant ECS-9985229 and by the Office of Naval Research under Grant MURI N00014-001-0637. 
The author’s understanding of the topic benefited from collaborations with Dimitri Bertsekas, Daniela de Farias, and John Tsitsiklis. A full length version of this paper has been submitted to Mathematics of Operations Research and has benefited from a number of useful comments and suggestions made by reviewers. References [1] D. P. Bertsekas. A counterexample to temporal-difference learning. Neural Computation, 7:270–279, 1994. [2] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, MA, 1996. [3] D. Blackwell. Discrete dynamic programming. Annals of Mathematical Statistics, 33:719–726, 1962. [4] D. P. de Farias and B. Van Roy. On the existence of fixed points for approximate value iteration and temporal-difference learning. Journal of Optimization Theory and Applications, 105(3), 2000. [5] D. P. de Farias and B. Van Roy. Approximate dynamic programming via linear programming. In Advances in Neural Information Processing Systems 14. MIT Press, 2002. [6] D. P. de Farias and B. Van Roy. The linear programming approach to approximate dynamic programming. Operations Research, 51(6):850–865, 2003. [7] G. J. Gordon. Stable function approximation in dynamic programming. Technical Report CMU-CS-95-103, Carnegie Mellon University, 1995. [8] G. J. Gordon. Stable function approximation in dynamic programming. In Machine Learning: Proceedings of the Twelfth International Conference (ICML), San Francisco, CA, 1995. [9] T. Jaakkola, M. I. Jordan, and S. P. Singh. On the Convergence of Stochastic Iterative Dynamic Programming Algorithms. Neural Computation, 6:1185–1201, 1994. [10] S. P. Singh and R. C. Yee. An upper-bound on the loss from approximate optimalvalue functions. Machine Learning, 1994. [11] R. S. Sutton. Temporal Credit Assignment in Reinforcement Learning. PhD thesis, University of Massachusetts, Amherst, Amherst, MA, 1984. [12] R. S. Sutton. Learning to predict by the methods of temporal differences. 
Machine Learning, 3:9–44, 1988. [13] R. S. Sutton. On the virtues of linear learning and trajectory distributions. In Proceedings of the Workshop on Value Function Approximation, Machine Learning Conference, 1995. [14] R. S. Sutton. Generalization in reinforcement learning: Successful examples using sparse coarse coding. In Advances in Neural Information Processing Systems 8, Cambridge, MA, 1996. MIT Press. [15] J. N. Tsitsiklis. Asynchronous stochastic approximation and Q-learning. Machine Learning, 16:185–202, 1994. [16] J. N. Tsitsiklis and B. Van Roy. Feature-based methods for large scale dynamic programming. Machine Learning, 22:59–94, 1996. [17] J. N. Tsitsiklis and B. Van Roy. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 42(5):674–690, 1997. [18] J. N. Tsitsiklis and B. Van Roy. Analysis of temporal-difference learning with function approximation. In Advances in Neural Information Processing Systems 9, Cambridge, MA, 1997. MIT Press. [19] J. N. Tsitsiklis and B. Van Roy. Average cost temporal-difference learning. In Proceedings of the IEEE Conference on Decision and Control, 1997. [20] J. N. Tsitsiklis and B. Van Roy. Average cost temporal-difference learning. Automatica, 35(11):1799–1808, 1999. [21] J. N. Tsitsiklis and B. Van Roy. On average versus discounted reward temporal-difference learning. Machine Learning, 49(2-3):179–191, 2002. [22] B. Van Roy. Performance loss bounds for approximate value iteration with state aggregation. Under review with Mathematics of Operations Research, available at www.stanford.edu/ bvr/psfiles/aggregation.pdf, 2005. [23] C. J. C. H. Watkins. Learning From Delayed Rewards. PhD thesis, Cambridge University, Cambridge, UK, 1989. [24] C. J. C. H. Watkins and P. Dayan. Q-learning. Machine Learning, 8:279–292, 1992.
Gradient Flow Independent Component Analysis in Micropower VLSI Abdullah Celik, Milutin Stanacevic and Gert Cauwenberghs Johns Hopkins University, Baltimore, MD 21218 {acelik,miki,gert}@jhu.edu Abstract We present micropower mixed-signal VLSI hardware for real-time blind separation and localization of acoustic sources. Gradient flow representation of the traveling wave signals acquired over a miniature (1cm diameter) array of four microphones yields linearly mixed instantaneous observations of the time-differentiated sources, separated and localized by independent component analysis (ICA). The gradient flow and ICA processors each measure 3mm × 3mm in 0.5 µm CMOS, and consume 54 µW and 180 µW power, respectively, from a 3 V supply at 16 ks/s sampling rate. Experiments demonstrate perceptually clear (12dB) separation and precise localization of two speech sources presented through speakers positioned at 1.5m from the array on a conference room table. Analysis of the multipath residuals shows that they are spectrally diffuse, and void of the direct path. 1 Introduction Time lags in acoustic wave propagation provide cues to localize an acoustic source from observations across an array. The time lags also complicate the task of separating multiple co-existing sources using independent component analysis (ICA), which conventionally assumes instantaneous mixture observations. Inspiration from biology suggests that for very small aperture (spacing between acoustic sensors i.e., tympanal membranes), small differences (gradients) in sound pressure level are more effective in resolving source direction than actual (microsecond scale) time differences. The remarkable auditory localization capability of certain insects at a small (1%) fraction of the wavelength of the source owes to highly sensitive differential processing of sound pressure through inter-tympanal mechanical coupling [1] or inter-aural coupled neural circuits [2]. 
We present a mixed-signal VLSI system that operates on spatial and temporal differences (gradients) of the acoustic field at very small aperture to separate and localize mixtures of traveling wave sources. The real-time performance of the system is characterized through experiments with speech sources presented through speakers in a conference room setting. Figure 1: (a) Gradient flow principle. At low aperture, interaural level differences (ILD) and interaural time differences (ITD) are directly related, scaled by the temporal derivative of the signal. (b) 3-D localization (azimuth θ and elevation φ) of an acoustic source using a planar geometry of four microphones. 2 Gradient Flow Independent Component Analysis Gradient flow [3, 4] is a signal conditioning technique for source separation and localization suited for arrays of very small aperture, i.e., of dimensions significantly smaller than the shortest wavelength in the sources. The principle is illustrated in Figure 1 (a). Consider a traveling acoustic wave impinging on an array of four microphones, in the configuration of Figure 1 (b). The 3-D direction cosines of the traveling wave u are implied by propagation delays τ1 and τ2 in the source along directions p and q in the sensor plane. Direct measurement of these delays is problematic as they require sampling in excess of the bandwidth of the signal, increasing noise floor and power requirements. However, indirect estimates of the delays are obtained, to first order, by relating spatial and temporal derivatives of the acoustic field: ξ10(t) ≈ τ1 ξ̇00(t), ξ01(t) ≈ τ2 ξ̇00(t) (1) where ξ10 and ξ01 represent spatial gradients in p and q directions around the origin (p = q = 0), ξ00 the spatial common mode, and ξ̇00 its time derivative.
Estimates of ξ00, ξ10 and ξ01 for the sensor geometry of Figure 1 can be obtained as: ξ00 ≈ (1/4)(x−1,0 + x1,0 + x0,−1 + x0,1), ξ10 ≈ (1/2)(x1,0 − x−1,0), ξ01 ≈ (1/2)(x0,1 − x0,−1). (2) A single source can be localized by estimating direction cosines τ1 and τ2 from (1), a principle known for years in monopulse radar, exploited by parasite insects [1], and implemented in mixed-signal VLSI hardware [6]. As shown in Figure 1 (b), the planar geometry of four microphones makes it possible to localize a source in 3-D, with both azimuth and elevation.1 More significantly, multiple coexisting sources sℓ(t) can be jointly separated and localized using essentially the same principle [3, 4]: ξ00(t) = Σℓ sℓ(t) + ν00(t), ξ10(t) = Σℓ τ1^ℓ ṡℓ(t) + ν10(t), ξ01(t) = Σℓ τ2^ℓ ṡℓ(t) + ν01(t), (3) where ν00, ν10 and ν01 represent common mode and spatial derivative components of additive noise in the sensor observations. (1An alternative using two microphones, exploiting the shape of the pinna, is presented in [5].) Taking the time derivative of ξ00, we thus obtain from the sensors a linear instantaneous mixture of the time-differentiated source signals, [ξ̇00; ξ10; ξ01] ≈ [1 ··· 1; τ1^1 ··· τ1^L; τ2^1 ··· τ2^L] [ṡ1; . . . ; ṡL] + [ν̇00; ν10; ν01], (4) an equation in the standard form x = As + n, where x is given and the mixing matrix A and sources s are unknown. Ignoring the noise term n, the problem setting is standard in Independent Component Analysis (ICA), and three independent sources can be identified from the three gradient observations. Various formulations of ICA exist to arrive at estimates of the unknown s and A from observations x. ICA algorithms typically specify some sort of statistical independence assumption on the sources s, either in distribution over amplitude [7] or over time [8]. Most forms specify ICA to be static, in assuming that the observations contain static (instantaneous) linear mixtures of the sources.
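Equations (1)–(2) can be exercised in simulation: synthesize one traveling wave at the four-microphone geometry, form the spatial gradients, and recover τ1 by least squares against the temporal derivative. The source frequency and delays below are illustrative assumptions; the 16 ks/s rate is taken from the abstract.

```python
import numpy as np

fs = 16000.0                        # 16 ks/s sampling rate, as in the hardware
t = np.arange(0.0, 0.1, 1.0 / fs)
s = np.sin(2 * np.pi * 300.0 * t)   # low-frequency source: small-aperture regime
tau1, tau2 = 2e-5, -1e-5            # inter-sensor delays along p and q (invented)

def delayed(tau):                   # s(t + tau) by linear interpolation
    return np.interp(t + tau, t, s)

x_p1, x_m1 = delayed(tau1), delayed(-tau1)    # x_{1,0}, x_{-1,0}
x_q1, x_qm1 = delayed(tau2), delayed(-tau2)   # x_{0,1}, x_{0,-1}

xi00 = 0.25 * (x_m1 + x_p1 + x_qm1 + x_q1)    # spatial common mode, eq. (2)
xi10 = 0.5 * (x_p1 - x_m1)                    # spatial gradient along p
dxi00 = np.gradient(xi00, 1.0 / fs)           # temporal derivative of xi00

# eq. (1): xi10 ~ tau1 * dxi00, so tau1 is the least-squares slope
tau1_hat = float(np.dot(xi10, dxi00) / np.dot(dxi00, dxi00))
assert abs(tau1_hat - tau1) / tau1 < 0.05
```

Note the recovered delay (20 µs) is well below one sample period (62.5 µs): the gradient relation sidesteps direct time-delay measurement, as the text argues.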
Note that this definition of static ICA includes methods for blind source separation that make use of temporal structure in the dynamics within the sources themselves [8], as long as the observed mixture of the sources is static. In contrast, ‘convolutive’ ICA techniques explicitly assume convolutive or delayed mixtures in the source observations. Convolutive ICA techniques (e.g., [10]) are usually much more involved and require a large number of parameters and long adaptation time horizons for proper convergence. The instantaneous static formulation of gradient flow (4) is convenient,2 and avoids the need for non-static (convolutive) ICA to separate delayed mixtures of traveling wave sources (in free space) xpq(t) = Σℓ sℓ(t + pτ1 + qτ2). Reverberation in multipath wave propagation contributes delayed mixture components in the observations which limit the effectiveness of a static ICA formulation. As shown in the experiments below, static ICA still produces reasonable results (12 dB of perceptually clear separation) in typical enclosed acoustic environments (conference room). 3 Micropower VLSI Implementation Various analog VLSI implementations of ICA exist in the literature, e.g., [11, 12], and digital implementations using DSP are common practice in the field. By adopting a mixed-signal architecture in the implementation, we combine advantages of both approaches: an analog datapath directly interfaces with inputs and outputs without the need for data conversion; and digital adaptation offers the flexibility of reconfigurable ICA learning rules. 2The time-derivative in the source signals (4) is immaterial, and can be removed by time-integrating the separated signals obtained by applying ICA directly to the gradient flow signals.
Figure 2: (a) Gradient flow processor. (b) Reconfigurable ICA processor. Dimensions of both processors are 3mm × 3mm in 0.5 µm CMOS technology. Figure 3: Reconfigurable mixed-signal ICA architecture implementing general outer-product forms of ICA update rules. 3.1 Gradient Flow Processor The mixed-signal VLSI processor implementing gradient flow is presented in [6]. A micrograph of the chip is shown in Figure 2 (a). Precise analog gradients ξ̇00, ξ10 and ξ01 are acquired from the microphone signals by correlated double sampling (CDS) in fully differential switched-capacitor circuits. Least-mean-squares (LMS) cancellation of common-mode leakage in the gradient signals further increases differential sensitivity. The adaptation is performed in the digital domain using counting registers, and couples to the switched-capacitor circuits using capacitive multiplying DAC arrays. An additional stage of LMS adaptation produces digital estimates of direction cosines τ1 and τ2 for a single source. In the present setup this stage is bypassed, and the common-mode corrected gradient signals are presented as inputs to the ICA chip for localization and separation of up to three independent sources. 3.2 Reconfigurable ICA Processor A general mixed-signal parallel architecture, that can be configured for implementation of various ICA update rules in conjunction with gradient flow, is shown in Figure 3 [9].
Here we briefly illustrate the architecture with a simple configuration designed to separate two sources, and present CMOS circuits that implement the architecture. A micrograph of the reconfigurable ICA chip is shown in Figure 2 (b).

3.2.1 ICA update rule

Efficient implementation in a parallel architecture requires a simple form of the update rule that avoids excessive matrix multiplications and inversions. A variety of ICA update algorithms can be cast in a common, unifying framework of outer-product rules [9]. To obtain estimates y = ŝ of the sources s, a linear transformation with matrix W is applied to the gradient signals x, y = Wx. Diagonal terms are fixed, wii ≡ 1, and off-diagonal terms adapt according to

∆wij = −µ f(yi) g(yj),  i ≠ j   (5)

The implemented update rule can be seen as the gradient of InfoMax [7] multiplied by WT, rather than by the natural gradient multiplication factor WT W. To obtain the full natural gradient in outer-product form, it would be necessary to include a back-propagation path in the network architecture, and thus additional silicon resources, to implement the vector contribution yT. Other equivalences with standard ICA algorithms are outlined in [9].

3.2.2 Architecture

Level comparison provides discrete approximations of any scalar functions f(y) and g(y) appearing in different learning rules. Since speech signals are approximately Laplacian distributed, the nonlinear scalar function f(y) is approximated by sign(y) and implemented using single-bit quantization. Conversely, the linear function g(y) ≡ y in the learning rule is approximated by a 3-level staircase function (−1, 0, +1) using 2-bit quantization. The quantization of the f and g terms in the update rule (5) simplifies the implementation to that of discrete counting operations. The functional block diagram of a 3 × 3 outer-product incremental ICA architecture, supporting a quantized form of the general update rule (5), is shown in Figure 3 [9].
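As a software sketch, the quantized outer-product update (5) with f(y) = sign(y) (1-bit) and g(y) a 3-level staircase (2-bit) can be written as follows. The staircase dead zone and learning-rate value are assumptions for illustration; the paper does not give the comparator threshold.

```python
import numpy as np

def staircase(y, dead_zone=0.1):
    """3-level quantizer (-1, 0, +1) approximating g(y) = y."""
    return np.where(y > dead_zone, 1.0, np.where(y < -dead_zone, -1.0, 0.0))

def ica_step(W, x, mu=1e-3, dead_zone=0.1):
    """One sample update of the un-mixing matrix W (diagonal fixed at w_ii = 1)."""
    y = W @ x                                            # source estimates y = Wx
    dW = -mu * np.outer(np.sign(y), staircase(y, dead_zone))
    np.fill_diagonal(dW, 0.0)                            # only off-diagonal terms adapt
    return W + dW, y
```

On chip, each off-diagonal coefficient lives in a 14-bit counter whose 8 most significant bits drive the multiplying DAC, so the increment/decrement implied by dW maps directly to counting operations.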
Un-mixing coefficients are stored digitally in each cell of the architecture. The update is performed locally by once or repeatedly incrementing, decrementing or holding the current value of the counter, based on the learning rule served by the micro-controller. The 8 most significant bits of the 14-bit counter holding and updating each coefficient are presented to a multiplying D/A capacitor array [6] to linearly unmix the separated signal. The remaining 6 bits in the coefficient registers provide flexibility in programming the update rate to tailor convergence.

3.2.3 Circuit implementation

As in the implementation of the gradient flow processor [6], the mixed-signal ICA architecture is implemented using fully differential switched-capacitor sampled-data circuits. Correlated double sampling performs common-mode offset rejection and 1/f noise reduction. An external micro-controller provides flexibility in the implementation of different learning rules. The ICA architecture is integrated on a single 3mm × 3mm chip fabricated in 0.5 µm 3M2P CMOS technology. The block diagram of the ICA prototype in Figure 3 indicates that its main functionality is a vector (3×1) by matrix (3×3) multiplication with adaptive matrix elements. Each cell in the implemented architecture contains a 14-bit counter, a decoder and D/A capacitor arrays. Adaptation is performed in outer-product fashion by incrementing, decrementing or holding the current value of the counters. The most significant 8 bits of the counter are presented to the multiplying D/A capacitor arrays to construct the source estimate.

Figure 4: Correlated double sampling (CDS) switched-capacitor fully differential circuits implementing linearly weighted summing in the mixed-signal ICA architecture.

Figure 5: Experimental setup for separation of two acoustic sources in a conference room environment.

Figure 4 shows the circuits for one output component in the architecture, linearly summing the input contributions. The implementation of the multiplying capacitor arrays is identical to that discussed in [6]. Each output signal yi is computed by accumulating outputs from all the cells in the ith row. The accumulation is performed on C2 by a switched-capacitor amplifier, yielding the estimated signals during the Φ2 phase. While the estimated signals are valid, yi+ is sampled at Φ̂1 by the comparator circuit. The sign of the comparison of yi with the variable-level threshold Vth is computed in the evaluate phase, through capacitive coupling into the amplifier input node.

4 Experimental Results

To demonstrate source separation and localization in a real environment, the mixed-signal VLSI ASICs were interfaced with four omnidirectional miniature microphones (Knowles FG-3629), arranged in a circular array with radius 0.5 cm. At the front end, the microphone signals were passed through second-order bandpass filters with low-frequency cutoff at 130 Hz and high-frequency cutoff at 4.3 kHz. The signals were also amplified by a factor of 20. The experimental setup is shown in Figure 5. The speech signals were presented through loudspeakers positioned at 1.5 m distance from the array. The system sampling frequency of both chips was set to 16 kHz. A male and a female speaker from the TIMIT database were chosen as sound sources.
To provide ground truth data and a full characterization of the systems, speech segments were presented individually through either loudspeaker at different time instances. The data was recorded for both speakers, archived, and presented to the gradient flow chip.

Figure 6: Time waveforms and spectrograms of the presented sources s1 and s2, the observed common-mode and gradient signals ξ00, ξ10 and ξ01 at the gradient flow chip, and the sources ŝ1 and ŝ2 recovered by the ICA chip.

Table 1: Localization Performance
                                  Male speaker   Female speaker
Single-source LMS localization       -31.11          40.95
Dual-source ICA localization         -30.35          43.55

Localization results obtained by the gradient flow chip through LMS adaptation are reported in Table 1. The two recorded datasets were then added, and presented to the gradient flow ASIC. The gradient signals obtained from the chip were then presented to the ICA processor, configured to implement the outer-product update algorithm in (5). The observed convergence time was around 2 seconds. From the recorded 14-bit digital weights, the angles of incidence of the sources relative to the array were derived. These estimated angles are reported in Table 1. As seen, the angles obtained through LMS bearing estimation under individual source presentation are very close to the angles produced by ICA under joint presentation of both sources. The original sources and the recorded source signal estimates, along with the recorded common-mode signal and first-order spatial gradients, are shown in Figure 6.

5 Conclusions

We presented a mixed-signal VLSI system that operates on spatial and temporal differences (gradients) of the acoustic field at very small aperture to separate and localize mixtures of traveling wave sources. The real-time performance of the system was characterized through experiments with speech sources presented through speakers in a conference room setting. Although application of static ICA is limited by reverberation, the perceptual quality of the separated outputs owes to the elimination of the direct path in the residuals.
Miniature size of the microphone array enclosure (1 cm diameter) and micropower consumption of the VLSI hardware (250 µW) are key advantages of the approach, with applications to hearing aids, conferencing, multimedia, and surveillance.

Acknowledgments

This work was supported by grants of the Catalyst Foundation (New York), the National Science Foundation, and the Defense Intelligence Agency.

References

[1] D. Robert, R.N. Miles, and R.R. Hoy, “Tympanal Hearing in the Sarcophagid Parasitoid Fly Emblemasoma sp.: the Biomechanics of Directional Hearing,” J. Experimental Biology, vol. 202, pp. 1865-1876, 1999.
[2] R. Reeve and B. Webb, “New neural circuits for robot phonotaxis,” Philosophical Transactions of the Royal Society A, vol. 361, pp. 2245-2266, 2002.
[3] G. Cauwenberghs, M. Stanacevic, and G. Zweig, “Blind Broadband Source Localization and Separation in Miniature Sensor Arrays,” Proc. IEEE Int. Symp. Circuits and Systems (ISCAS’2001), Sydney, Australia, May 6-9, 2001.
[4] J. Barrère and G. Chabriel, “A Compact Sensor Array for Blind Separation of Sources,” IEEE Transactions on Circuits and Systems, Part I, vol. 49 (5), pp. 565-574, 2002.
[5] J.G. Harris, C.-J. Pu, and J.C. Principe, “A Neuromorphic Monaural Sound Localizer,” Proc. Neural Inf. Proc. Sys. (NIPS*1998), Cambridge, MA: MIT Press, vol. 10, pp. 692-698, 1999.
[6] G. Cauwenberghs and M. Stanacevic, “Micropower Mixed-Signal Acoustic Localizer,” Proc. IEEE Eur. Solid State Circuits Conf. (ESSCIRC’2003), Estoril, Portugal, Sept. 16-18, 2003.
[7] A.J. Bell and T.J. Sejnowski, “An Information Maximization Approach to Blind Separation and Blind Deconvolution,” Neural Computation, vol. 7 (6), pp. 1129-1159, Nov. 1995.
[8] L. Molgedey and G. Schuster, “Separation of a mixture of independent signals using time delayed correlations,” Physical Review Letters, vol. 72, no. 23, pp. 3634-3637, 1994.
[9] A. Celik, M. Stanacevic, and G. Cauwenberghs, “Mixed-Signal Real-Time Adaptive Blind Source Separation,” Proc. IEEE Int. Symp. Circuits and Systems (ISCAS’2004), Vancouver, Canada, May 23-26, 2004.
[10] R. Lambert and A. Bell, “Blind separation of multiple speakers in a multipath environment,” Proc. ICASSP’97, Munich, 1997.
[11] M.H. Cohen and A.G. Andreou, “Analog CMOS Integration and Experimentation with an Autoadaptive Independent Component Analyzer,” IEEE Trans. Circuits and Systems II, vol. 42 (2), pp. 65-77, Feb. 1995.
[12] A.B.A. Gharbi and F.M.A. Salam, “Implementation and Test Results of a Chip for the Separation of Mixed Signals,” Proc. Int. Symp. Circuits and Systems (ISCAS’95), May 1995.
[13] M. Cohen and G. Cauwenberghs, “Blind Separation of Linear Convolutive Mixtures through Parallel Stochastic Optimization,” Proc. IEEE Int. Symp. Circuits and Systems (ISCAS’98), Monterey, CA, vol. 3, pp. 17-20, 1998.
| 2005 | 196 | 2,821 |
An Alternative Infinite Mixture Of Gaussian Process Experts

Edward Meeds and Simon Osindero
Department of Computer Science, University of Toronto, Toronto, M5S 3G4
{ewm,osindero}@cs.toronto.edu

Abstract

We present an infinite mixture model in which each component comprises a multivariate Gaussian distribution over an input space, and a Gaussian Process model over an output space. Our model is neatly able to deal with non-stationary covariance functions, discontinuities, multimodality and overlapping output signals. The work is similar to that by Rasmussen and Ghahramani [1]; however, we use a full generative model over input and output space rather than just a conditional model. This allows us to deal with incomplete data, to perform inference over inverse functional mappings as well as for regression, and also leads to a more powerful and consistent Bayesian specification of the effective ‘gating network’ for the different experts.

1 Introduction

Gaussian process (GP) models are powerful tools for regression, function approximation, and predictive density estimation. However, despite their power and flexibility, they suffer from several limitations. The computational requirements scale cubically with the number of data points, thereby necessitating a range of approximations for large datasets. Another problem is that it can be difficult to specify priors and perform learning in GP models if we require non-stationary covariance functions, multi-modal output, or discontinuities. There have been several attempts to circumvent some of these lacunae, for example [2, 1]. In particular, the Infinite Mixture of Gaussian Process Experts (IMoGPE) model proposed by Rasmussen and Ghahramani [1] neatly addresses the aforementioned key issues. In a single GP model, an n by n matrix must be inverted during inference.
However, if we use a model composed of multiple GP's, each responsible only for a subset of the data, then the computational complexity of inverting an n by n matrix is replaced by several inversions of smaller matrices — for large datasets this can result in a substantial speed-up and may allow one to consider large-scale problems that would otherwise be unwieldy. Furthermore, by combining multiple stationary GP experts, we can easily accommodate non-stationary covariance and noise levels, as well as distinctly multi-modal outputs. Finally, by placing a Dirichlet process prior over the experts we can allow the data and our prior beliefs (which may be rather vague) to automatically determine the number of components to use. In this work we present an alternative infinite model that is strongly inspired by the work in [1], but which uses a different formulation for the mixture of experts, in the style presented in, for example, [3, 4]. This alternative approach effectively uses posterior responsibilities from a mixture distribution as the gating network.

Figure 1: Left: Graphical model for the standard MoE model [6]. The expert indicators {z(i)} are specified by a gating network applied to the inputs {x(i)}. Right: An alternative view of the MoE model using a full generative model [4]. The distribution of input locations is now given by a mixture model, with components for each expert. Conditioned on the input locations, the posterior responsibilities for each mixture component behave like a gating network.

Even if the task at hand is simply output density estimation or regression, we suggest a full generative model over inputs and outputs might be preferable to a purely conditional model. The generative approach retains all the strengths of [1] and also has a number of potential advantages, such as being able to deal with partially specified data (e.g. missing input co-ordinates) and being able to infer inverse functional mappings (i.e. the input space given an output value). The generative approach also affords us a richer and more consistent way of specifying our prior beliefs about how the covariance structure of the outputs might vary as we move within input space. An example of the type of generative model which we propose is shown in figure 2. We use a Dirichlet process prior over a countably infinite number of experts, and each expert comprises two parts: a density over input space describing the distribution of input points associated with that expert, and a Gaussian Process model over the outputs associated with that expert. In this preliminary exposition, we restrict our attention to experts whose input space densities are given by a single full-covariance Gaussian. Even this simple approach demonstrates interesting performance and capabilities. However, in a more elaborate setup the input density associated with each expert might itself be an infinite mixture of simpler distributions (for instance, an infinite mixture of Gaussians [5]) to allow for the most flexible partitioning of input space amongst the experts. The structure of the paper is as follows. We begin in section 2 with a brief overview of two ways of thinking about Mixtures of Experts. Then, in section 3, we give the complete specification and graphical depiction of our generative model, and in section 4 we outline the steps required to perform Monte Carlo inference and prediction. In section 5 we present the results of several simple simulations that highlight some of the salient features of our proposal, and finally in section 6, we discuss our work and place it in relation to similar techniques.

2 Mixtures of Experts

In the standard mixture of experts (MoE) model [6], a gating network probabilistically mixes regression components.
One subtlety in using GP's in a mixture of experts model is that IID assumptions on the data no longer hold, and we must specify joint distributions for each possible assignment of experts to data. Let {x(i)} be the set of d-dimensional input vectors, {y(i)} be the set of scalar outputs, and {z(i)} be the set of expert indicators which assign data points to experts. The likelihood of the outputs, given the inputs, is specified in equation 1, where θGP_r represents the GP parameters of the rth expert, θg represents the parameters of the gating network, and the summation is over all possible configurations of indicator variables:

P({y(i)} | {x(i)}, θ) = Σ_Z P({z(i)} | {x(i)}, θg) Π_r P({y(i) : z(i) = r} | {x(i) : z(i) = r}, θGP_r)   (1)

Figure 2: The graphical model representation of the alternative infinite mixture of GP experts (AiMoGPE) model proposed in this paper. We have used xr(i) to represent the ith data point in the set of input data whose expert label is r, and Yr to represent the set of all output data whose expert label is r. In other words, input data are IID given their expert label, whereas the sets of output data are IID given their corresponding sets of input data. The lightly shaded boxes with rounded corners represent hyper-hyperparameters that are fixed (Ω in the text). The DP concentration parameter α0, the expert indicator variables {z(i)}, the gate hyperparameters φx = {µ0, Σ0, νc, S}, the gate component parameters ψx_r = {µr, Σr}, and the GP expert parameters θGP_r = {v0r, v1r, wjr} are all updated for all r and j.

There is an alternative view of the MoE model in which the experts also generate the inputs, rather than simply being conditioned on them [3, 4] (see figure 1).
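In this alternative view the gate is implicit: the posterior responsibilities of the input-space mixture components act as the gating network. Below is a hedged toy sketch of that idea in one dimension with scalar variances (the model itself uses full-covariance Gaussians); all names are illustrative.

```python
import numpy as np

def gate_responsibilities(x, weights, means, variances):
    """Return p(z = r | x) for each expert r, i.e. the implicit gate values."""
    w = np.asarray(weights, dtype=float)
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    # Gaussian density of x under each expert's input-space component
    dens = np.exp(-0.5 * (x - means) ** 2 / variances) / np.sqrt(2 * np.pi * variances)
    post = w * dens
    return post / post.sum()          # normalized posterior responsibilities
```

An input near one expert's input-space mode receives nearly all of the gating mass, so that expert dominates the predictive mixture at that location.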
This alternative view employs a joint mixture model over input and output space, even though the objective is still primarily that of estimating conditional densities, i.e. outputs given inputs. The gating network effectively gets specified by the posterior responsibilities of each of the different components in the mixture. An advantage of this perspective is that it can easily accommodate partially observed inputs, and it also allows ‘reverse-conditioning’, should we wish to estimate where in input space a given output value is likely to have originated. For a mixture model using Gaussian Process experts, the likelihood is given by

P({x(i)}, {y(i)} | θ) = Σ_Z P({z(i)} | θg) × Π_r P({y(i) : z(i) = r} | {x(i) : z(i) = r}, θGP_r) P({x(i) : z(i) = r} | θg)   (2)

where the description of the density over input space is encapsulated in θg.

3 Infinite Mixture of Gaussian Processes: A Joint Generative Model

The graphical structure for our full generative model is shown in figure 2. Our generative process does not produce IID data points and is therefore most simply formulated either as a joint distribution over a dataset of a given size, or as a set of conditionals in which we incrementally add data points. To construct a complete set of N sample points from the prior (specified by top-level hyperparameters Ω) we would perform the following operations:

1. Sample the Dirichlet process concentration variable α0 given the top-level hyperparameters.
2. Construct a partition of N objects into at most N groups using a Dirichlet process. This assignment of objects is denoted by the set of indicator variables {z(i)}, i = 1..N.
3. Sample the gate hyperparameters φx given the top-level hyperparameters.
4. For each grouping of indicators {z(i) : z(i) = r}, sample the input space parameters ψx_r conditioned on φx. ψx_r defines the density in input space, in our case a full-covariance Gaussian.
5. Given the parameters ψx_r for each group, sample the locations of the input points Xr ≡ {x(i) : z(i) = r}.
6. For each group, sample the hyperparameters for the GP expert associated with that group, θGP_r.
7. Using the input locations Xr and hyperparameters θGP_r for the individual groups, formulate the GP output covariance matrix and sample the set of output values, Yr ≡ {y(i) : z(i) = r}, from this joint Gaussian distribution.

We write the full joint distribution of our model as follows:

P({x(i), y(i)}, {z(i)}, {ψx_r}, {θGP_r}, α0, φx | N, Ω) =
  [ Π_{r=1}^{N} ( HN_r P(ψx_r | φx) P(Xr | ψx_r) P(θGP_r | Ω) P(Yr | Xr, θGP_r) + (1 − HN_r) D0(ψx_r, θGP_r) ) ]
  × P({z(i)} | N, α0) P(α0 | Ω) P(φx | Ω)   (3)

where we have used the supplementary notation: HN_r = 0 if {z(i) : z(i) = r} is the empty set and HN_r = 1 otherwise; and D0(ψx_r, θGP_r) is a delta function on an (irrelevant) dummy set of parameters to ensure proper normalisation. For the GP components, we use a standard, stationary covariance function of the form

Q(x(i), x(h)) = v0 exp( −(1/2) Σ_{j=1}^{D} (x(i)j − x(h)j)² / wj² ) + δ(i, h) v1   (4)

The individual distributions in equation 3 are defined as follows1:

P(α0 | Ω) = G(α0; aα0, bα0)   (5)
P({z(i)} | N, Ω) = PU(α0, N)   (6)
P(φx | Ω) = N(µ0; µx, Σx/f0) W(Σ0⁻¹; ν0, f0 Σx⁻¹/ν0) G(νc; aνc, bνc) W(S⁻¹; νS, fS Σx/νS)   (7)
P(ψx_r | Ω) = N(µr; µ0, Σ0) W(Σr⁻¹; νc, S/νc)   (8)
P(Xr | ψx_r) = N(Xr; µr, Σr)   (9)
P(θGP_r | Ω) = G(v0r; a0, b0) G(v1r; a1, b1) Π_{j=1}^{D} LN(wjr; aw, bw)   (10)
P(Yr | Xr, θGP_r) = N(Yr; µQr, σ²Qr)   (11)

1We use the notation N, W, G, and LN to represent the normal, Wishart, gamma, and log-normal distributions, respectively; we use the parameterizations found in [7] (Appendix A). The notation PU refers to the Polya urn distribution [8].

In an approach similar to Rasmussen [5], we use the input data mean µx and covariance Σx to provide an automatic normalisation of our dataset.
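The stationary covariance in equation (4) is a squared-exponential with signal variance v0, per-dimension length scales wj, and noise variance v1 on the diagonal. A minimal sketch (function name and vectorized form are ours):

```python
import numpy as np

def gp_covariance(X, v0, v1, w):
    """Equation (4): X is (n, D) inputs, w is (D,) length scales; returns (n, n) Q."""
    Z = X / w                                        # scale each dimension by w_j
    sq = np.sum(Z ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T   # pairwise squared distances
    return v0 * np.exp(-0.5 * d2) + v1 * np.eye(len(X))
```

Sampling the outputs Yr for one expert then amounts to drawing from a joint Gaussian with this covariance built from that expert's input locations Xr.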
We also incorporate additional hyperparameters f0 and fS, which allow prior beliefs about the variation in location of µr and size of Σr, relative to the data covariance.

4 Monte Carlo Updates

Almost all the integrals and summations required for inference and learning operations within our model are analytically intractable, and therefore necessitate Monte Carlo approximations. Fortunately, all the necessary updates are relatively straightforward to carry out using a Markov chain Monte Carlo (MCMC) scheme employing Gibbs sampling and Hybrid Monte Carlo. We also note that in our model the predictive density depends on the entire set of test locations (in input space). This transductive behaviour follows from the non-IID nature of the model and the influence that test locations have on the posterior distribution over mixture parameters. Consequently, the marginal predictive distribution at a given location can depend on the other locations for which we are making simultaneous predictions. This may or may not be desired. In some situations the ability to incorporate the additional information about the input density at test time may be beneficial. However, it is also straightforward to effectively ‘ignore’ this new information and simply compute a set of independent single-location predictions. Given a set of test locations {x*(t)}, along with training data pairs {x(i), y(i)} and top-level hyperparameters Ω, we iterate through the following conditional updates to produce our predictive distribution for the unknown outputs {y*(t)}. The parameter updates are all conjugate with the prior distributions, except where noted:

1. Update the indicators {z(i)} by cycling through the data and sampling one indicator variable at a time. We use algorithm 8 from [9] with m = 1 to explore new experts.
2. Update the input space parameters.
3. Update the GP hyperparameters using Hybrid Monte Carlo [10].
4. Update the gate hyperparameters. Note that νc is updated using slice sampling [11].
5. Update the DP hyperparameter α0 using the data augmentation technique of Escobar and West [12].
6. Resample missing output values by cycling through the experts, and jointly sampling the missing outputs associated with each GP.

We perform some preliminary runs to estimate the longest auto-covariance time, τmax, for our posterior estimates, and then use a burn-in period that is about 10 times this timescale before taking samples every τmax iterations.2 For our simulations the auto-covariance time was typically 40 complete update cycles, so we use a burn-in period of 500 iterations and collect samples every 50.

5 Experiments

5.1 Samples From The Prior

In figure 3 (A) we give an example of data drawn from our model which is multi-modal and non-stationary. We also use this artificial dataset to confirm that our MCMC algorithm performs well and is able to recover sensible posterior distributions. Posterior histograms for some of the inferred parameters are shown in figure 3 (B), and we see that they are well clustered around the ‘true’ values.

2This is primarily for convenience. It would also be valid to use all the samples after the burn-in period, and although they could not be considered independent, they could be used to obtain a more accurate estimator.

Figure 3: (A) A set of samples from our model prior. The different marker styles are used to indicate the sets of points from different experts. (B) The posterior distribution of log α0 with its true value indicated by the dashed line (top) and the distribution of occupied experts (bottom). We note that the posterior mass is located in the vicinity of the true values.

5.2 Inference On Toy Data

To illustrate some of the features of our model we constructed a toy dataset consisting of 4 continuous functions, to which we added different levels of noise.
The functions used were:

f1(a1) = 0.25 a1² − 40,  a1 ∈ (0 … 15),  noise SD: 7   (12)
f2(a2) = −0.0625 (a2 − 18)² + 0.5 a2 + 20,  a2 ∈ (35 … 60),  noise SD: 7   (13)
f3(a3) = 0.008 (a3 − 60)³ − 70,  a3 ∈ (45 … 80),  noise SD: 4   (14)
f4(a4) = −sin(0.25 a4) − 6,  a4 ∈ (80 … 100),  noise SD: 2   (15)

The resulting data has non-stationary noise levels, non-stationary covariance, discontinuities and significant multi-modality. Figure 4 shows our results on this dataset along with those from a single GP for comparison. We see that in order to account for the entire dataset with a single GP, we are forced to infer an unnecessarily high level of noise in the function. Also, a single GP is unable to capture the multi-modality or non-stationarity of the data distribution. In contrast, our model seems much more able to deal with these challenges. Since we have a full generative model over both input and output space, we are also able to use our model to infer likely input locations given a particular output value. There are a number of applications for which this might be relevant, for example if one wanted to sample candidate locations at which to evaluate a function we are trying to optimise. We provide a simple illustration of this in figure 4 (B). We choose three output levels and, conditioned on the output having these values, we sample for the input location. The inference seems plausible, and our model is able to suggest locations in input space for a maximal output value (+40) that was not seen in the training data.

5.3 Regression on a simple “real-world” dataset

We also apply our model and algorithm to the motorcycle dataset of [13]. This is a commonly used dataset in the GP community and therefore serves as a useful basis for comparison. In particular, it also makes it easy to see how our model compares with standard GP's and with the work of [1]. Figure 5 compares the performance of our model with that of a single GP.
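For reference, the toy data of equations (12)-(15) can be generated as below. The number of samples per segment is an assumption, as the paper does not state it.

```python
import numpy as np

def make_toy_data(n_per_segment=50, seed=0):
    """Sample the four noisy segments of equations (12)-(15)."""
    rng = np.random.default_rng(seed)
    segments = [
        (lambda a: 0.25 * a**2 - 40,                      (0, 15),   7.0),
        (lambda a: -0.0625 * (a - 18)**2 + 0.5 * a + 20,  (35, 60),  7.0),
        (lambda a: 0.008 * (a - 60)**3 - 70,              (45, 80),  4.0),
        (lambda a: -np.sin(0.25 * a) - 6,                 (80, 100), 2.0),
    ]
    xs, ys = [], []
    for f, (lo, hi), sd in segments:
        a = rng.uniform(lo, hi, n_per_segment)            # inputs in the segment range
        xs.append(a)
        ys.append(f(a) + rng.normal(0.0, sd, n_per_segment))
    return np.concatenate(xs), np.concatenate(ys)
```

Note the overlapping ranges of f2 and f3 (45-60) and the differing noise levels, which produce the multi-modality and non-stationarity discussed above.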
In particular, we note that although the median of our model closely resembles the mean of the single GP, our model is able to more accurately model the low noise level on the left side of the dataset. For the remainder of the dataset, the noise levels modeled by our model and by a single GP are very similar, although our model is better able to capture the behaviour of the data at around 30 ms. It is difficult to make an exact comparison to [1]; however, we can speculate that our model more realistically models the noise at the beginning of the dataset by not inferring an overly “flat” GP expert at that location. We can also report that our expert adjacency matrix closely resembles that of [1].

Figure 4: Results on a toy dataset. (A) The training data is shown along with the predictive mean of a stationary covariance GP and the median of the predictive distribution of our model. (B) The small dots are samples from the model (160 samples per location) evaluated at 80 equally spaced locations across the range (but plotted with a small amount of jitter to aid visualisation). These illustrate the predictive density from our model. The solid lines show the ± 2 SD interval from a regular GP. The circular markers at ordinates of 40, 10 and −100 show samples from ‘reverse-conditioning’, where we sample likely abscissa locations given the test ordinate and the set of training data.

6 Discussion

We have presented an alternative framework for an infinite mixture of GP experts. We feel that our proposed model carries over the strengths of [1] and augments these with several desirable additional features.
The pseudo-likelihood objective function used to adapt the gating network defined in [1] is not guaranteed to lead to a self-consistent distribution, and therefore the results may depend on the order in which the updates are performed; our model incorporates a consistent Bayesian density formulation for both input and output spaces by definition. Furthermore, in our most general framework we are more naturally able to specify priors over the partitioning of space between the different expert components. Also, since we have a full joint model, we can infer inverse functional mappings. There should be considerable gains to be made by allowing the input density models to be more powerful. This would make it easier for arbitrary regions of space to share the same covariance structures; at present the areas ‘controlled’ by a particular expert tend to be local. Consequently, a potentially undesirable aspect of the current model is that strong clustering in input space can lead us to infer several expert components even if a single GP would do a good job of modelling the data. An elegant way of extending the model might be to use a separate infinite mixture distribution for the input density of each expert, perhaps incorporating a hierarchical DP prior across the infinite set of experts to allow information to be shared. With regard to applications, it might be interesting to further explore our model's capability to infer inverse functional mappings; perhaps this could be useful in an optimisation or active learning context.
Finally, we note that although we have focused on rather small examples so far, it seems that the inference techniques should scale well to larger problems 0 10 20 30 40 50 60 −150 −100 −50 0 50 100 Time (ms) Acceleration (g) Training Data AiMoGPE SingleGP 0 10 20 30 40 50 60 −150 −100 −50 0 50 100 Time (ms) Acceleration (g) (A) (B) Figure 5: (A) Motorcycle impact data together with the median of our model’s point-wise predictive distribution and the predictive mean of a stationary covariance GP model. (B) The small dots are samples from our model (160 samples per location) evaluated at 80 equally spaced locations across the range (but plotted with a small amount of jitter to aid visualisation). The solid lines show the ± 2 SD interval from a regular GP. and more practical tasks. Acknowledgments Thanks to Ben Marlin for sharing slice sampling code and to Carl Rasmussen for making minimize.m available. References [1] C.E. Rasmussen and Z. Ghahramani. Infinite mixtures of Gaussian process experts. In Advances in Neural Information Processing Systems 14, pages 881–888. MIT Press, 2002. [2] V. Tresp. Mixture of Gaussian processes. In Advances in Neural Information Processing Systems, volume 13. MIT Press, 2001. [3] Z. Ghahramani and M. I. Jordan. Supervised learning from incomplete data via an EM approach. In Advances in Neural Information Processing Systems 6, pages 120–127. MorganKaufmann, 1995. [4] L. Xu, M. I. Jordan, and G. E. Hinton. An alternative model for mixtures of experts. In Advances in Neural Information Processing Systems 7, pages 633–640. MIT Press, 1995. [5] C. E. Rasmussen. The infinite Gaussian mixture model. In Advances in Neural Information Processing Systems, volume 12, pages 554–560. MIT Press, 2000. [6] R.A. Jacobs, M.I. Jordan, and G.E. Hinton. Adaptive mixture of local experts. Neural Computation, 3, 1991. [7] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis. Chapman and Hall, 2nd edition, 2004. [8] D. 
Blackwell and J. B. MacQueen. Ferguson distributions via Polya urn schemes. The Annals of Statistics, 1(2):353–355, 1973.
[9] R. M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9:249–265, 2000.
[10] R. M. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, University of Toronto, 1993.
[11] R. M. Neal. Slice sampling (with discussion). Annals of Statistics, 31:705–767, 2003.
[12] M. Escobar and M. West. Computing Bayesian nonparametric hierarchical models. In Practical Nonparametric and Semiparametric Bayesian Statistics, number 133 in Lecture Notes in Statistics. Springer-Verlag, 1998.
[13] B. W. Silverman. Some aspects of the spline smoothing approach to non-parametric regression curve fitting. Journal of the Royal Statistical Society, Series B, 47:1–52, 1985.
|
2005
|
197
|
2,822
|
Silicon Growth Cones Map Silicon Retina Brian Taba and Kwabena Boahen∗ Department of Bioengineering University of Pennsylvania Philadelphia, PA 19104 {btaba,boahen}@seas.upenn.edu Abstract We demonstrate the first fully hardware implementation of retinotopic self-organization, from photon transduction to neural map formation. A silicon retina transduces patterned illumination into correlated spike trains that drive a population of silicon growth cones to automatically wire a topographic mapping by migrating toward sources of a diffusible guidance cue that is released by postsynaptic spikes. We varied the pattern of illumination to steer growth cones projected by different retinal ganglion cell types to self-organize segregated or coordinated retinotopic maps. 1 Introduction Engineers have long admired the brain’s ability to effortlessly adapt to novel situations without instruction, and sought to endow digital computers with a similar capacity for unsupervised self-organization. One prominent example is Kohonen’s self-organizing map [1], which achieved popularity by distilling neurophysiological insights into a simple set of mathematical equations. Although these algorithms are readily simulated in software, previous hardware implementations have required high precision components that are expensive in chip area (e.g. [2, 3]). By contrast, neurobiological systems can self-organize components possessing remarkably heterogeneous properties. To pursue this biological robustness against component mismatch, we designed circuits that mimic neurophysiological function down to the subcellular level. In this paper, we demonstrate topographic refinement of connections between a silicon retina and the first neuromorphic self-organizing map chip, previously reported in [5], which is based on axon migration in the developing brain. During development, neurons wire themselves into their mature circuits by extending axonal and dendritic precursors called neurites. 
Each neurite is tipped by a motile sensory structure called a growth cone that guides the elongating neurite based on local chemical cues. Growth cones move by continually sprouting and retracting finger-like extensions called filopodia whose dynamics can be biased by diffusible ligands in an activity-dependent manner [4]. Based on these observations, we designed and fabricated the Neurotrope1 chip to implement a population of silicon growth cones [5]. We interfaced Neurotrope1 directly to a spiking silicon retina to illustrate its applicability to larger neuromorphic systems. (∗www.neuroengineering.upenn.edu/boahen)
Figure 1: Neurotropic axon guidance. a. Active source cells (grey) relay spikes down their axons to their growth cones, which excite nearby target cells. b. Active target cell bodies secrete neurotropin. c. Neurotropin spreads laterally, establishing a spatial concentration gradient that is sampled by active growth cones. d. Active growth cones climb the local neurotropin gradient, translating temporal activity coincidence into spatial position coincidence. Growth cones move by displacing other growth cones.
This paper is organized as follows. In Section 2, we present an algorithm for axon migration under the guidance of a diffusible chemical whose release and uptake is gated by activity. In Section 3, we describe our hardware implementation of this algorithm. In Section 4, we examine the Neurotrope1 system’s performance on a topographic refinement task when driven by spike trains generated by a silicon retina in response to several types of illumination stimuli.
2 Neurotropic axon guidance
We model the self-organization of connections between two layers of neurons (Fig. 1). Cells in the source layer innervate cells in the target layer with excitatory axons that are tipped by motile growth cones. 
Growth cones tow their axons within the target layer as directed by a diffusible guidance factor called neurotropin that they bind from the local extracellular environment. Neurotropin is released by postsynaptically active target cell bodies and bound by presynaptically active growth cones, so the retrograde transfer of neurotropin from a target cell to a source cell measures the temporal coincidence of their spike activities. Growth cones move to maximize their neurotropic uptake, a Hebbian-like learning rule that causes cells that fire at the same time to wire to the same place. To prevent the population of growth cones from attempting to trivially maximize their uptake by all exciting the same target cell, we impose a synaptic density constraint that requires a migrating growth cone to displace any other growth cone occupying its path. To state the model more formally, source cell bodies occupy nodes of a regular two-dimensional (2D) lattice embedded in the source layer, while growth cones and target cell bodies occupy nodes on separate 2D lattices that are interleaved in the target layer. We index nodes by their positions in their respective layers, using Greek letters for source layer positions (e.g., α ∈ Z²) and Roman letters for target layer positions (e.g., x, c ∈ Z²). Each source cell α fires spikes at a rate a_SC(α) and conveys this presynaptic activity down an axon that elaborates an excitatory arbor in the target layer centered on c(α). In principle, every branch of this arbor is tipped by its own motile growth cone, but to facilitate efficient hardware implementation, we abstract the collection of branch growth cones into a single central growth cone that tows the arbor’s trunk around the target layer, dragging the rest of the arbor with it. The arbor overlaps nearby target cells with a branch density A(x − c(α)) that diminishes with distance ∥x − c(α)∥ from the arbor center. The postsynaptic activity a_TC(x) of target cell x is proportional to the linear sum of its excitation:

a_TC(x) = Σ_α a_SC(α) A(x − c(α))    (1)

Postsynaptically active target cell bodies release neurotropin, which spreads laterally until consumed by constitutive decay processes. The neurotropin n(x′) present at target site x′ is assembled from contributions from all active release sites. The contribution of each target cell x is proportional to its postsynaptic activity and weighted by a spreading kernel N(x − x′) that is a decreasing function of its distance ∥x − x′∥ from the measurement site x′:

n(x′) = Σ_x a_TC(x) N(x − x′)    (2)

A presynaptically active growth cone located at c(α) computes the direction of the local neurotropin gradient by identifying the adjacent lattice node c′(α) ∈ C(c(α)) with the most neurotropin, where C(c(α)) includes c(α) and its nearest neighbors:

c′(α) = arg max_{x′ ∈ C(c(α))} n(x′)    (3)

Once the growth cone has identified c′(α), it swaps positions with the growth cone already located at c′(α), increasing its own neurotropic uptake while preserving a constant synaptic density. Growth cones compute position updates independently, at a rate λ(α) ∝ a_SC(α) max_{x′ ∈ C(c(α))} n(x′). Updates are executed asynchronously, in order of their arrival. Software simulation of a similar set of equations generates self-organized feature maps when driven by appropriately correlated source cell activity [6].
Figure 2: a. Neurotrope1 system. Spike communication is by address-events (AER). b. Neurotrope1 cell mosaic. The extracellular medium (grey) is laid out as a monolithic honeycomb lattice. Growth cones (GC) occupy nodes of this lattice and extend filopodia to the adjacent nodes. Neurotropin receptors (black) are located at the tip of each filopodium and at the growth cone body. Target cells (N) occupy nodes of an interleaved triangular lattice. c. Detail of chip layout. 
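The guidance rule (postsynaptic summation, neurotropin spreading, and greedy gradient-climbing swaps) can be sketched on a toy one-dimensional lattice. This is our own illustrative reduction, not the chip's implementation: the function name `guidance_step`, the exponential kernels, and the serial update order are all assumptions.

```python
import numpy as np

def guidance_step(c, a_sc, A, N, L):
    """One pass of growth cone updates on a 1-D target lattice of L nodes.

    c[alpha] : target-layer position of growth cone alpha (a permutation of 0..L-1)
    a_sc     : presynaptic firing rate of each source cell
    A, N     : arbor-density and neurotropin-spreading kernels (decreasing in |d|)
    """
    x = np.arange(L)
    # Eq. (1): postsynaptic activity is the arbor-weighted sum of presynaptic rates
    a_tc = np.array([np.sum(a_sc * A(xi - c)) for xi in x])
    # Eq. (2): neurotropin at each site sums contributions from all release sites
    n = np.array([np.sum(a_tc * N(x - xp)) for xp in x])
    for alpha in np.argsort(-a_sc):               # more active cones update sooner
        nbrs = [p for p in (c[alpha] - 1, c[alpha], c[alpha] + 1) if 0 <= p < L]
        best = max(nbrs, key=lambda p: n[p])      # Eq. (3): richest adjacent node
        other = int(np.argmax(c == best))         # occupant of the winning node
        c[alpha], c[other] = c[other], c[alpha]   # swap keeps synaptic density constant
    return c
```

The update rates λ(α) and true asynchrony are abstracted away here; each cone simply updates once per pass, in order of activity, and the swap guarantees the positions remain a permutation.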
Here, we illustrate topographic map formation in hardware using correlated spike trains generated by a silicon retina.
Figure 3: Virtual axon remapping. a. Cell bodies tag their spikes with their own source layer addresses, which the forward lookup table translates into target layer destinations. b. Axon updates are computed by growth cones, which decode their own target layer addresses through the reverse lookup table to obtain the source layer addresses of their cell bodies that identify their entries in the forward lookup table. c. Growth cones move by modifying their entries in the forward and reverse lookup tables to reroute their spikes to updated locations.
3 Neurotrope1 system
Our hardware implementation splits the model into three stages: the source layer, the target layer, and the intervening axons (Fig. 2a). Any population of spiking neurons can act as a source layer; in this paper we employ the silicon retina of [7]. The target layer is implemented by a full custom VLSI chip that interleaves a 48 × 20 array of growth cone circuits with a 24 × 20 array of target cell circuits. There is also a spreading network that represents the intervening medium for propagating neurotropin. The Neurotrope1 chip was fabricated by MOSIS using the TSMC 0.35 µm process and has an area of 11.5 mm². Connections are specified as entries in a pair of lookup tables, stored in an off-chip RAM, that are updated by a Ubicom ip2022 microcontroller as instructed by the Neurotrope1 chip. The ip2022 also controls a USB link that allows a computer to write and read the contents of the RAM. Subsection 3.1 explains how updates are computed by the Neurotrope1 chip and Subsection 3.2 describes the procedure for executing these updates.
3.1 Axon updates
Axon updates are computed by the Neurotrope1 chip using the transistor circuits described in [5]. 
Figure 4: Retinotopic self-organization of ON-center RGCs. a. Silicon retina color map of ON-center RGC body positions. A representative RGC body is outlined in white, as are the RGC neighbors that participate in its topographic order parameter Φ(n). b. Target layer color map of growth cone positions for sample n = 0, colored by the retinal positions of their cell bodies. Growth cones projected by the representative RGC and its nearest neighbors are outlined in white. Grey lines denote target layer distances used to compute Φ(n). c. Target layer color map at n = 85. d. Order parameter evolution.
Here, we provide a brief description. The Neurotrope1 chip represents neurotropin as charge spreading through a monolithic transistor channel laid out as a honeycomb lattice. Each growth cone occupies one node of this lattice and extends filopodia to the three adjacent nodes, expressing neurotropin receptors at all four locations (Fig. 2b-c). When a growth cone receives a presynaptic spike, its receptor circuits tap charge from all four nodes onto separate capacitors. The first capacitor voltage to integrate to a threshold resets all of the growth cone’s capacitors and transmits a request off-chip to update the growth cone’s position by swapping locations with the growth cone currently occupying the winning node.
3.2 Address-event remapping
Chips in the Neurotrope1 system exchange spikes encoded in the address-event representation (AER) [8], an asynchronous communication protocol that merges spike trains from every cell on the same chip onto a single shared data link instead of requiring a dedicated wire for each connection. Each spike is tagged with the address of its originating cell for transmission off-chip. Between chips, spikes are routed through a forward lookup table that translates their original source layer addresses into their destined target layer addresses on the receiving chip (Fig. 3a). 
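The forward/reverse lookup-table bookkeeping behind this virtual axon remapping can be sketched in a few lines. The class name `VirtualAxons` and the toy addresses are hypothetical; in the real system the tables live in off-chip RAM and are maintained by the ip2022, not by Python dictionaries.

```python
class VirtualAxons:
    """Forward/reverse lookup tables for rerouting address-events (a toy sketch)."""

    def __init__(self, mapping):
        # forward table: source-layer address -> growth cone (target-layer) address
        self.fwd = dict(mapping)
        # reverse table: growth cone address -> source-layer address
        self.rev = {t: s for s, t in self.fwd.items()}

    def route(self, src_addr):
        """Translate a spike's source-layer address into its target-layer destination."""
        return self.fwd[src_addr]

    def swap(self, gc_a, gc_b):
        """Execute one axon update: exchange two growth cone positions in both tables."""
        src_a, src_b = self.rev[gc_a], self.rev[gc_b]
        self.fwd[src_a], self.fwd[src_b] = gc_b, gc_a
        self.rev[gc_a], self.rev[gc_b] = src_b, src_a
```

A growth cone that wins an occupied node simply triggers `swap`; spikes from both affected source cells are rerouted on their next table lookup.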
An axon entry in this forward lookup table is indexed by the source layer address of its cell body and contains the target layer address of its growth cone. The virtual axon moves by updating this entry. Axon updates are computed by growth cone circuits on the Neurotrope1 chip, encoded as address-events, and sent to the ip2022 for processing. Each update identifies a pair of axon terminals to be swapped. These growth cone addresses are translated through a reverse lookup table into the source layer addresses that index the relevant forward lookup table entries (Fig. 3b). Modification of the affected entries in each lookup table completes the axon migration (Fig. 3c).
4 Retinotopic self-organization
We programmed the growth cone population to self-organize retinotopic maps by driving them with correlated spike trains generated by the silicon retina. The silicon retina translates patterned illumination in real-time into spike trains that are fed into the Neurotrope1 chip as presynaptic input from different retinal ganglion cell (RGC) types. An ON-center RGC is excited by a spot of light in the center of its receptive field and inhibited by light in the surrounding annulus, while an OFF-center RGC responds analogously to the absence of light. There is an ON-center and an OFF-center RGC located at every retinal coordinate. To generate appropriately correlated RGC spike trains, we illuminated the silicon retina with various mixtures of light and dark spot stimuli. Each spot stimulus was presented against a uniformly grey background for 100 ms and covered a contiguous cluster of RGCs centered on a pseudorandomly selected position in the retinal plane, eliciting overlapping bursts of spikes whose coactivity established a spatially restricted presynaptic correlation kernel containing enough information to instruct topographic ordering [9]. 
Strongly driven RGCs could fire at nearly 1 kHz, which was the highest mean rate at which the silicon retina could still be tuned to roughly balance ON- and OFF-center RGC excitability. We tracked the evolution of the growth cone population by reading out the contents of the lookup table every five minutes, a sampling interval selected to include enough patch stimuli to allow each of the 48 × 20 possible patches to be activated on average at least once per sample.
Figure 5: Segregation by cell type under separate light and dark spot stimulation. Top: ON-center; bottom: OFF-center. a. Silicon retina image of representative spot stimulus. Light or dark intensity denotes relative ON- or OFF-center RGC output rate. b. Spike rates for ON-center (grey) and OFF-center (black) RGCs in column x of a cross-section of a representative spot stimulus. c. Target layer color maps of RGC growth cones at sample n = 0. Black indicates the absence of a growth cone projected by an RGC of this cell type. Other colors as in Fig. 4. d. Target layer color maps at n = 310. e. Order parameter evolution for ON-center (grey) and OFF-center (black) RGCs.
We first induced retinotopic self-organization within a single RGC cell type by illuminating the silicon retina with a sequence of randomly centered spots of light presented against a grey background, selectively activating only ON-center RGCs. Each of the 960 growth cones was randomly assigned to a different ON-center RGC, creating a scrambled map from retina to target layer (Fig. 4a-b). The ON-center RGC growth cone population visibly refined the topography of the nonretinotopic initial state (Fig. 4c). We quantify this observation by introducing an order parameter Φ(n) whose value measures the instantaneous retinotopy for an RGC at the nth sample. 
The definition of retinotopy is that adjacent RGCs innervate adjacent target cells, so we define Φ(n) for a given RGC to be the average target layer distance separating its growth cone from the growth cones projected by the six adjacent RGCs of the same cell type. The population average ⟨Φ(n)⟩ converges to a value that represents the achievable performance on this task (Fig. 4d). We next induced growth cones projected by each cell type to self-organize disjoint topographic maps by illuminating the silicon retina with a sequence of randomly centered light or dark spots presented against a grey background (Fig. 5a-b). Half the growth cones were assigned to ON-center RGCs and the other half were assigned to the corresponding OFF-center RGCs. We seeded the system with a random projection that evenly distributed growth cones of both cell types across the entire target layer (Fig. 5c). Since only RGCs of the same cell type were coactive, growth cones segregated into ON- and OFF-center clusters on opposite sides of the target layer (Fig. 5d). OFF-center RGCs were slightly more excitable on average than ON-center RGCs, so their growth cones refined their topography more quickly (Fig. 5e) and clustered in the right half of the target layer, which was also more excitable due to poor power distribution on the Neurotrope1 chip. Finally, we induced growth cones of both cell types to self-organize coordinated retinotopic maps by illuminating the retina with center-surround stimuli that oscillate radially from light to dark or vice versa (Fig. 6). The light-dark oscillation injected enough coactivity between neighboring ON- and OFF-center RGCs to prevent their growth cones from segregating by cell type into disjoint clusters. Instead, both subpopulations developed and maintained coarse retinotopic maps that cover the entire target layer and are oriented in register with one another, properties sufficient to seed more interesting circuits such as oriented receptive fields [10]. 
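The order parameter is a nested average of target-layer distances between neighboring cells' growth cones. A minimal sketch, with hypothetical names (`order_parameter`, and a neighbor map supplied by the caller rather than derived from the retinal lattice):

```python
import numpy as np

def order_parameter(gc_pos, retinal_neighbors):
    """Population-average retinotopy <Phi(n)>.

    gc_pos maps each RGC id to its growth cone's (x, y) target-layer position;
    retinal_neighbors maps each RGC id to the ids of its adjacent same-type RGCs.
    Phi for one RGC is the mean target-layer distance from its growth cone to
    the growth cones projected by its retinal neighbors.
    """
    phi = [np.mean([np.linalg.norm(np.subtract(gc_pos[r], gc_pos[m]))
                    for m in nbrs])
           for r, nbrs in retinal_neighbors.items()]
    return float(np.mean(phi))
```

Under perfect retinotopy on a unit lattice this evaluates to the lattice spacing, which is the floor toward which ⟨Φ(n)⟩ would decrease as the map refines.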
Figure 6: Coordinated retinotopy under center-surround stimulation. Top: ON-center; bottom: OFF-center. a. Silicon retina image of a representative center-surround stimulus. Light or dark intensity denotes relative ON- or OFF-center RGC output rate. b. Spike rates for ON-center (grey) and OFF-center (black) RGCs in column x of a cross-section of a representative center-surround stimulus. c. Target layer color maps of RGC growth cones for sample n = 0. Colors as in Fig. 5. d. Target layer color maps at n = 335. e. Order parameter evolution for ON-center (grey) and OFF-center (black) RGCs.
Performance in this hardware implementation is limited mainly by variability in the behavior of nominally identical circuits on the Neurotrope1 chip and the silicon retina. In the silicon retina, the wide variance of the RGC output rates [7] limits both the convergence speed and the final topographic level achieved by the spot-driven growth cone population. Growth cones move faster when stimulated at higher rates, but elevating the mean output rate of the RGC population allows more excitable RGCs to fire spontaneously at a sustained rate, swamping growth cone-specific guidance signals with stimulus-independent postsynaptic activity that globally attracts all growth cones. The mean RGC output rate must remain low enough to suppress these spontaneous distractors, limiting convergence speed. Variance in the output rates of neighboring RGCs also distorts the shape of the spot stimulus, eroding the fidelity of the correlation-encoded instructions received by the growth cones. Variability in the Neurotrope1 chip further limits topographic convergence. Migrating growth cones are directed by the local neurotropin landscape, which forms an image of recent presynaptic activity correlations as filtered through the postsynaptic activation of the target cell population. 
This image is distorted by variations between the properties of individual target cell and neurotropin circuits that are introduced during fabrication. In particular, poor power distribution on the Neurotrope1 chip creates a systematic gradient in target cell excitability that warps a growth cone’s impression of the relative coactivity of its neighbors, attracting it preferentially toward the more excitable target cells on the right side of the array.
5 Conclusions
In this paper, we demonstrated a completely neuromorphic implementation of retinotopic self-organization. This is the first time every stage of the process has been implemented entirely in hardware, from photon transduction through neural map formation. The only comparable system was described in [11], which processed silicon retina data offline using a software model of neurotrophic guidance running on a workstation. Our system computes results in real time at low power, two prerequisites for autonomous mobile applications. The novel infrastructure developed to implement virtual axon migration allows silicon growth cones to directly interface with an existing family of AER-compliant devices, enabling a host of multimodal neuromorphic self-organizing applications. In particular, the silicon retina’s ability to translate arbitrary visual stimuli into growth cone-compatible spike trains in real-time opens the door to more ambitious experiments such as using natural video correlations to automatically wire more complicated visual feature maps. Our faithful adherence to cellular level details yields an algorithm that is well suited to physical implementation. In contrast to all previous self-organizing map chips (e.g. [2, 3]), which implemented a global winner-take-all function to induce competition, our silicon growth cones compute their own updates using purely local information about the neurotropin gradient, a cellular approach that scales effortlessly to larger populations. 
Performance might be improved by supplementing our purely morphogenetic model with additional physiologically-inspired mechanisms to prune outliers and consolidate well-placed growth cones into permanent synapses.
Acknowledgments
We would like to thank J. Arthur for developing a USB system to facilitate data collection. This project was funded by the David and Lucile Packard Foundation and the NSF/BITS program (EIA0130822).
References
[1] T. Kohonen (1982), “Self-organized formation of topologically correct feature maps,” Biol. Cybernetics, vol. 43, no. 1, pp. 59-69.
[2] W.-C. Fang, B.J. Sheu, O.T.-C. Chen, and J. Choi (1992), “A VLSI neural processor for image data compression using self-organization networks,” IEEE Trans. Neural Networks, vol. 3, no. 3, pp. 506-518.
[3] S. Rovetta and R. Zunino (1999), “Efficient training of neural gas vector quantizers with analog circuit implementation,” IEEE Trans. Circ. & Sys. II, vol. 46, no. 6, pp. 688-698.
[4] E.W. Dent and F.B. Gertler (2003), “Cytoskeletal dynamics and transport in growth cone mobility and axon guidance,” Neuron, vol. 40, pp. 209-227.
[5] B. Taba and K. Boahen (2003), “Topographic map formation by silicon growth cones,” in: Advances in Neural Information Processing Systems 15 (MIT Press, Cambridge, eds. S. Becker, S. Thrun, and K. Obermayer), pp. 1163-1170.
[6] S.Y.M. Lam, B.E. Shi, and K.A. Boahen (2005), “Self-organized cortical map formation by guiding connections,” Proc. 2005 IEEE Int. Symp. Circ. & Sys., in press.
[7] K.A. Zaghloul and K. Boahen (2004), “Optic nerve signals in a neuromorphic chip I: Outer and inner retina models,” IEEE Trans. Bio-Med. Eng., vol. 51, no. 4, pp. 657-666.
[8] K. Boahen (2000), “Point-to-point connectivity between neuromorphic chips using address-events,” IEEE Trans. Circ. & Sys. II, vol. 47, pp. 416-434.
[9] K. 
Miller (1994), “A model for the development of simple cell receptive fields and the ordered arrangement of orientation columns through activity-dependent competition between on- and off-center inputs,” J. Neurosci., vol. 14, no. 1, pp. 409-441.
[10] D. Ringach (2004), “Haphazard wiring of simple receptive fields and orientation columns in visual cortex,” J. Neurophys., vol. 92, no. 1, pp. 468-476.
[11] T. Elliott and J. Kramer (2002), “Coupling an aVLSI neuromorphic vision chip to a neurotrophic model of synaptic plasticity: the development of topography,” Neural Comp., vol. 14, no. 10, pp. 2353-2370.
|
2005
|
198
|
2,823
|
Bayesian models of human action understanding Chris L. Baker, Joshua B. Tenenbaum & Rebecca R. Saxe {clbaker,jbt,saxe}@mit.edu Department of Brain and Cognitive Sciences Massachusetts Institute of Technology Abstract We present a Bayesian framework for explaining how people reason about and predict the actions of an intentional agent, based on observing its behavior. Action-understanding is cast as a problem of inverting a probabilistic generative model, which assumes that agents tend to act rationally in order to achieve their goals given the constraints of their environment. Working in a simple sprite-world domain, we show how this model can be used to infer the goal of an agent and predict how the agent will act in novel situations or when environmental constraints change. The model provides a qualitative account of several kinds of inferences that preverbal infants have been shown to perform, and also fits quantitative predictions that adult observers make in a new experiment. 1 Introduction A woman is walking down the street. Suddenly, she turns 180 degrees and begins running in the opposite direction. Why? Did she suddenly realize she was going the wrong way, or change her mind about where she should be headed? Did she remember something important left behind? Did she see someone she is trying to avoid? These explanations for the woman’s behavior derive from taking the intentional stance: treating her as a rational agent whose behavior is governed by beliefs, desires or other mental states that refer to objects, events, or states of the world [5]. Both adults and infants have been shown to make robust and rapid intentional inferences about agents’ behavior, even from highly impoverished stimuli. In “sprite-world” displays, simple shapes (e.g., circles) move in ways that convey a strong sense of agency to adults, and that lead to the formation of expectations consistent with goal-directed reasoning in infants [9, 8, 14]. 
The importance of the intentional stance in interpreting everyday situations, together with its robust engagement even in preverbal infants and with highly simplified perceptual stimuli, suggests that it is a core capacity of human cognition. In this paper we describe a computational framework for modeling intentional reasoning in adults and infants. Interpreting an agent’s behavior via the intentional stance poses a highly underconstrained inference problem: there are typically many configurations of beliefs and desires consistent with any sequence of behavior. We define a probabilistic generative model of an agent’s behavior, in which behavior is dependent on hidden variables representing beliefs and desires. We then model intentional reasoning as a Bayesian inference about these hidden variables given observed behavior sequences. It is often said that “vision is inverse graphics” – the inversion of a causal physical process of scene formation. By analogy, our analysis of intentional reasoning might be called “inverse planning”, where the observer infers an agent’s intentions, given observations of the agent’s behavior, by inverting a model of how intentions cause behavior. The intentional stance assumes that an agent’s actions depend causally on mental states via the principle of rationality: rational agents tend to act to achieve their desires as optimally as possible, given their beliefs. To achieve their desired goals, agents must typically not only select single actions but must construct plans, or sequences of intended actions. The standard of an “optimal plan” may vary with agent or circumstance: possibilities include achieving goals “as quickly as possible”, “as cheaply ...”, “as reliably ...”, and so on. We assume a soft, probabilistic version of the rationality principle, allowing that agents can often only approximate the optimal sequence of actions, and occasionally act in unexpected ways. The paper is organized as follows. 
We first review several theoretical accounts of intentional reasoning from the cognitive science and artificial intelligence literatures, along with some motivating empirical findings. We then present our computational framework, grounding the discussion in a specific sprite-world domain. Lastly, we present results of our model on two sprite-world examples inspired by previous experiments in developmental psychology, and results of the model on our own experiments. 2 Empirical studies of intentional reasoning in infants and adults 2.1 Inferring an invariant goal The ability to predict how an agent’s behavior will adapt when environmental circumstances change, such as when an obstacle is inserted or removed, is a critical aspect of intentional reasoning. Gergely, Csibra and colleagues [8, 4] showed that preverbal infants can infer an agent’s goal that appears to be invariant across different circumstances, and can predict the agent’s future behavior by effectively assuming that it will act to achieve its goal in an efficient way, subject to the constraints of its environment. Their experiments used a looking-time (violation-of-expectation) paradigm with sprite-world stimuli. Infant participants were assigned to one of two groups. In the “obstacle” condition, infants were habituated to a sprite (a colored circle) moving (“jumping”) in a curved path over an obstacle to reach another object. The size of the obstacle varied across trials, but the sprite always followed a near-shortest path over the obstacle to reach the other object. In the “no obstacle” group, infants were habituated to the sprite following the same curved “jumping” trajectory to the other object, but without an obstacle blocking its path. Both groups were then presented with the same test conditions, in which the obstacle was placed out of the sprite’s way, and the sprite followed either the old, curved path or a new direct path to the other object. 
Infants from the “obstacle” group looked longer at the sprite following the unobstructed curved path, which (in the test condition) was now far from the most efficient route to the other object. Infants in the “no obstacle” group looked equally at both test stimuli. That is, infants in the “obstacle” condition appeared to interpret the sprite as moving in a rational goal-directed fashion, with the other object as its goal. They expected the sprite to plan a path to the goal that was maximally efficient, subject to environmental constraints when present. Infants in the “no obstacle” group appeared more uncertain about whether the sprite’s movement was actually goal-directed or about what its goal was: was it simply to reach the other object, or something more complex, such as reaching the object via a particular curved path? 2.2 Inferring goals of varying complexity: rational means-ends analysis Gergely et al. [6], expanding on work by Meltzoff [11], showed that infants can infer goals of varying complexity, again by interpreting agents’ behaviors as rational responses to environmental constraints. In two conditions, infants saw an adult demonstrate an unfamiliar complex action: illuminating a light-box by pressing its top with her forehead. In the “hands occupied” condition, the demonstrator pretended to be cold and wrapped a blanket around herself, so that she was incapable of using a more typical means (i.e., her hands) to achieve the same goal. In the “hands free” condition the demonstrator had no such constraint. Most infants in the “hands free” condition spontaneously performed the head-press action when shown the light-box one week later, but only a few infants in the “hands occupied” condition did so; the others illuminated the light-box simply by pressing it with their hands. 
Thus infants appear to assume that rational agents will take the most efficient path to their goal, and that if an agent appears to systematically employ an inefficient means, it is likely because the agent has adopted a more complex goal that includes not only the end state but also the means by which that end should be achieved. 2.3 Inductive inference in intentional reasoning Gergely and colleagues interpret their findings as if infants are reasoning about intentional action in an almost logical fashion, deducing the goal of an agent from its observed behavior, the rationality principle, and other implicit premises. However, from a computational point of view, it is surely oversimplified to think that the intentional stance could be implemented in a deductive system. There are too many sources of uncertainty and the inference problem is far too underconstrained for a logical approach to be successful. In contrast, our model posits that intentional reasoning is probabilistic. People’s inferences about an agent’s goal should be graded, reflecting a tradeoff between the prior probability of a candidate goal and its likelihood in light of the agent’s observed behavior. Inferences should become more confident as more of the agent’s behavior is observed. To test whether human intentional reasoning is consistent with a probabilistic account, it is necessary to collect data in greater quantities and with greater precision than infant studies allow. Hence we designed our own sprite-world experimental paradigm, to collect richer quantitative judgments from adult observers. Many experiments are possible in this paradigm, but here we describe just one study of statistical effects on goal inference. Figure 1: (a) Training stimuli in complex and simple goal conditions. (b) Test stimuli 1 and 2.
Test stimuli were the same for each group. (c) Mean of subjects’ ratings with standard error bars (n=16). Sixteen observers were told that they would be watching a series of animations of a mouse running in a simple maze (a box with a single internal wall). The displays were shown from an overhead perspective, with an animated schematic trace of the mouse’s path as it ran through the box. In each display, the mouse was placed in a different starting location and ran to recover a piece of cheese at a fixed, previously learned location. Observers were told that the mouse had learned to follow a more-or-less direct path to the cheese, regardless of its starting location. Subjects saw two conditions in counterbalanced order. In one condition (“simple goal”), observers saw four displays consistent with this prior knowledge. In another condition (“complex goal”), observers saw movements suggestive of a more complex, path-dependent goal for the mouse: it first ran directly to a particular location in the middle of the box (the “via-point”), and only then ran to the cheese. Fig. 1(a) shows the mouse’s four trajectories in each of these conditions. Note that the first trajectory was the same in both conditions, while the next three were different. Also, all four trajectories in both conditions passed through the same hypothetical via-point in the middle of the box, which was not marked in any conspicuous way. Hence both the simple goal (“get to the cheese”) and complex goal (“get to the cheese via point X”) were logically possible interpretations in both conditions. Observers’ interpretations were assessed after viewing each of the four trajectories, by showing them diagrams of two test paths (Fig. 1(b)) running from a novel starting location to the cheese.
They were asked to rate the probability of the mouse taking one or the other test path using a 1-7 scale: 1 = definitely path 1, 7 = definitely path 2, with intermediate values expressing intermediate degrees of confidence. Observers in the simple-goal condition always leaned towards path 1, the direct route that was consistent with the given prior knowledge. Observers in the complex-goal condition initially leaned just as much towards path 1, but after seeing additional trajectories they became increasingly confident that the mouse would follow path 2 (Fig. 1(c)). Importantly, the latter group increased its average confidence in path 2 with each subsequent trajectory viewed, consistent with the notion that goal inference results from something like a Bayesian integration process: prior probability favors the simple goal, but successive observations are more likely under the complex goal. 3 Previous models of intentional reasoning The above phenomena highlight two capacities that any model of intentional reasoning should capture. First, representations of agents’ mental states should include at least primitive planning capacities, with a constrained space of candidate goals and subgoals (or intended paths) that can refer to objects or locations in space, and the tendency to choose action sequences that achieve goals as efficiently as possible. Second, inferences about agents’ goals should be probabilistic, and be sensitive both to prior knowledge about likely goals and to statistical evidence for more complex or less likely goals that better account for observed actions. These two components are clearly not sufficient for a complete account of human intentional reasoning, but most previous accounts do not include even these capacities.
Gergely, Csibra and colleagues [7] have proposed an informal (noncomputational) model in which agents are essentially treated as rational planners, but inferences about agents’ goals are purely deductive, without a role for probabilistic expectations or gradations of confidence. A more statistically sophisticated computational framework for inferring goals from behavior has been proposed by [13], but this approach does not incorporate planning capacities. In this framework, the observer learns to represent an agent’s policies, conditional on the agent’s goals. Within a static environment, this knowledge allows an observer to infer the goal of an agent’s actions, predict subsequent actions, and perform imitation, but it does not support generalization to new environments where the agent’s policy must adapt in response. Further, because generalization is not based on strong prior knowledge such as the principle of rationality, many observations are needed for good performance. Likewise, probabilistic approaches to plan recognition in AI (e.g., [3, 10]) typically represent plans in terms of policies (state-action pairs) that do not generalize when the structure of the environment changes in some unexpected way, and that require much data to learn from observations of behavior. Perhaps closest to how people reason with the intentional stance are methods for inverse reinforcement learning (IRL) [12], or methods for learning an agent’s utility function [2]. Both approaches assume a rational agent who maximizes expected utility, and attempt to infer the agent’s utility function from observations of its behavior. However, the utility functions that people attribute to intentional agents are typically much more structured and constrained than in conventional IRL. Goals are typically defined as relations towards objects or other agents, and may include subgoals, preferred paths, or other elements. 
In the next section we describe a Bayesian framework for modeling intentional reasoning that is similar in spirit to IRL, but more focused on the kinds of goal structures that are cognitively natural to human adults and infants. 4 The Bayesian framework We propose to model intentional reasoning by combining the inferential power of statistical approaches to action understanding [12, 2, 13] with simple versions of the representational structures that psychologists and philosophers [5, 7] have argued are essential in theory of mind. This section first presents our general approach, and then presents a specific mathematical model for the “mouse” sprite-world introduced above. Most generally, we assume a world that can be represented in terms of entities, attributes, and relations. Some attributes and relations are dynamic, indexed by a time dimension. Some entities are agents, who can perform actions at any time t with the potential to change the world state at time t+1. We distinguish between environmental state, denoted W, and agent states, denoted S. For simplicity, we will assume that there is exactly one intentional agent in the world, and that the agent’s actions can only affect its own state s ∈ S. Let s_{0:T} be a sequence of T+1 agent states. Typically, observations of multiple state sequences of the agent are available, and in general each may occur in a separate environment. Let s^{1:N}_{0:T} be a set of N state sequences, and let w^{1:N} be a set of N corresponding environments. Let A_s be the set of actions available to the agent from state s, and let C(a) be the cost to the agent of action a ∈ A_s. Let P(s_{t+1}|a_t, s_t, w) be the distribution over the agent’s next state s_{t+1}, given the current state s_t, an action a_t ∈ A_{s_t}, and the environmental state w. The agent’s actions are assumed to depend on mental states such as beliefs and desires. In our context, beliefs correspond to knowledge about the environmental state. Desires may be simple or complex.
A simple desire is an end goal: a world state or class of states that the agent will act to bring about. There are many possibilities for more complex goals, such as achieving a certain end by means of a certain route, achieving a certain sequence of states in some order, and so on. We specify a particular goal space G of simple and complex goals for sprite-worlds in the next subsection. The agent draws goals g ∈ G from a prior distribution P(g|w^{1:N}), which constrains goals to be feasible in the environments w^{1:N} from which observations of the agent’s behavior are available. Given the agent’s goal g and an environment w, we can define a value V_{g,w}(s) for each state s. The value function can be defined in various ways depending on the domain, task, and agent type. We specify a particular value function in the next subsection that reflects the goal structure of our sprite-world agent. The agent is assumed to choose actions according to a probabilistic policy, with a preference for actions with greater expected increases in value. Let Q_{g,w}(s, a) = \sum_{s'} P(s'|a, s, w) V_{g,w}(s') − C(a) be the expected value of the state resulting from action a, minus the cost of the action. The agent’s policy is P(a_t|s_t, g, w) ∝ exp(β Q_{g,w}(s_t, a_t)). (1) The parameter β controls how likely the agent is to select the most valuable action. This policy embodies a “soft” principle of rationality, which allows for inevitable sources of suboptimal planning, or unexplained deviations from the direct path. A graphical model illustrating the relationship between the environmental state, and the agent’s goals, actions, and states is shown in Fig. 2. The observer’s task is to infer g from the agent’s behavior. We assume that state sequences are independent given the environment and the goal. The observer infers g from s^{1:N}_{0:T} via Bayes’ rule, conditional on w^{1:N}: P(g|s^{1:N}_{0:T}, w^{1:N}) ∝ P(g|w^{1:N}) \prod_{i=1}^{N} P(s^i_{0:T}|g, w^i). (2) We assume that state transition probabilities and action probabilities are conditionally independent given the agent’s goal g, the agent’s current state s_t, and the environment w. The likelihood of a state sequence s_{0:T} given a goal g and an environment w is computed by marginalizing over possible actions generating state transitions: P(s_{0:T}|g, w) = \prod_{t=0}^{T−1} \sum_{a_t ∈ A_{s_t}} P(s_{t+1}|a_t, s_t, w) P(a_t|s_t, g, w). (3) Figure 2: Two time-slice dynamic Bayes net representation of our model, where W is the environmental state, G is the agent’s goal, S_t is the agent’s state at time t, and A_t is the agent’s action at time t. Beliefs, desires, and actions intuitively map onto W, G and A, respectively. 4.1 Modeling sprite-world inferences Several additional assumptions are necessary to apply the above framework to any specific domain, such as the sprite-worlds discussed in §2. The size of the grid, the location of obstacles, and likely goal points (such as the location of the cheese in our experimental stimuli) are represented by W, and assumed to be known to both the agent and the observer. The agent’s state space S consists of valid locations in the grid. All state sequences are assumed to be of the same length. The action space A_s consists of moves in all compass directions {N, S, E, W, NE, NW, SE, SW}, except where blocked by an obstacle, and action costs are Euclidean. The agent can also choose to remain still with cost 1. We assume P(s_{t+1}|a_t, s_t, w) takes the agent to the desired adjacent grid point deterministically. The set of possible goals G includes both simple and complex goals. Simple goals will just be specific end states in S. While many kinds of complex goals are possible, we assume here that a complex goal is just the combination of a desired end state with a desired means to achieving that end.
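The core inference in Eqs. 1 and 2 reduces to a softmax policy over actions plus a weighted Bayesian update over goals. Below is a minimal Python sketch; the dictionary encodings for Q-values, goal priors, and accumulated per-goal log-likelihoods (and the function names) are our own illustrative choices, not from the paper:

```python
import math

def policy(Q, beta, actions):
    """Soft-rational policy of Eq. 1: P(a | s, g, w) proportional to
    exp(beta * Q(s, a)). `Q` is a hypothetical table mapping each action
    to its expected resulting value minus action cost."""
    weights = {a: math.exp(beta * Q[a]) for a in actions}
    z = sum(weights.values())
    return {a: w / z for a, w in weights.items()}

def goal_posterior(prior, loglik):
    """Bayes' rule of Eq. 2: P(g | data) proportional to the goal prior
    times the product of per-sequence likelihoods (here given as a
    summed log-likelihood per goal)."""
    unnorm = {g: prior[g] * math.exp(loglik[g]) for g in prior}
    z = sum(unnorm.values())
    return {g: u / z for g, u in unnorm.items()}
```

For instance, a prior strongly favoring a simple goal can still be overturned once observed trajectories are sufficiently more likely under a complex, via-point goal, which is the qualitative pattern in the behavioral data above.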
In our sprite-worlds, we identify “desired means” with a constraint that the agent must pass through an additional specified location en route, such as the via-point in the experiment from §2.3. Because the number of complex goals defined in this way is much larger than the number of simple goals, the likelihood of each complex goal is small relative to the likelihood of individual simple goals. In addition, although path-dependent goals are possible, they should not be likely a priori. We thus set the prior P(g|w^{1:N}) to favor simple goals by a factor of γ. For simplicity, we assume that the agent draws just a single invariant goal g ∈ G from P(g|w^{1:N}), and we assume that this prior distribution is known to the observer. More generally, an agent’s goals may vary across different environments, and the prior P(g|w^{1:N}) may have to be learned. We define the value of a state V_{g,w}(s) as the expected total cost to the agent of achieving g while following the policy given in Eq. 1. We assume the desired end-state is absorbing and cost-free, which implies that the agent attempts the stochastic shortest path (with respect to its probabilistic policy) [1]. If g is a complex goal, V_{g,w}(s) is based on the stochastic shortest path through the specified via-point. The agent’s value function is computed using the value iteration algorithm [1] with respect to the policy given in Eq. 1. Finally, to compare our model’s predictions with behavioral data from human observers, we must specify how to compute the probability of novel trajectories s'_{0:T} in a new environment w', such as the test stimuli in Fig. 1, conditioned on an observed sequence s_{0:T} in environment w. This is just an average over the predictions for each possible goal g: P(s'_{0:T}|s_{0:T}, w, w') = \sum_{g ∈ G} P(s'_{0:T}|g, w') P(g|s_{0:T}, w, w'). (4) 5 Sprite-world simulations 5.1 Inferring an invariant goal As a starting point for testing our model, we return to the experiments of Gergely et al. [8, 4, 7], reviewed in §2.1.
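The value computation of §4.1 (value iteration toward an absorbing, cost-free end state, with complex goals routed through a via-point) can be sketched as follows. This is a deliberately simplified deterministic, hard-min version of the paper's soft-policy stochastic shortest path; the grid encoding and function names are ours:

```python
import math

# Eight compass moves (N, S, E, W and diagonals) with Euclidean costs.
MOVES = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def shortest_costs(goal, width, height, blocked):
    """Cost-to-go to `goal` on a grid by value iteration (a hard-min
    simplification of value iteration under the soft policy of Eq. 1)."""
    INF = float('inf')
    V = {(x, y): INF for x in range(width) for y in range(height)
         if (x, y) not in blocked}
    V[goal] = 0.0  # the desired end state is absorbing and cost-free
    changed = True
    while changed:
        changed = False
        for s in V:
            if s == goal:
                continue
            best = min(math.hypot(dx, dy) + V.get((s[0] + dx, s[1] + dy), INF)
                       for dx, dy in MOVES)
            if best < V[s] - 1e-12:
                V[s] = best
                changed = True
    return V

def value(s, goal, via, width, height, blocked):
    """Negated cost-to-go; a complex goal additionally routes through `via`."""
    to_goal = shortest_costs(goal, width, height, blocked)
    if via is None:
        return -to_goal[s]
    to_via = shortest_costs(via, width, height, blocked)
    return -(to_via[s] + to_goal[via])
```

On an empty 5x5 grid the diagonal route from corner to corner costs 4√2, while forcing a via-point at an adjacent corner raises the cost to 8, which is why via-point goals only become credible after repeated, otherwise inefficient detours.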
Our input to the model, shown in Fig. 3(a,b), differs slightly from the original stimuli used in [8], but the relevant details of interest are preserved: goal-directed action in the presence of constraints. Our model predictions, shown in Fig. 3(c), capture the qualitative results of these experiments, showing a large contrast between the straight path and the curved path in the condition with an obstacle, and a relatively small contrast in the condition with no obstacle. In the “no obstacle” condition, our model infers that the agent has a more complex goal, constrained by a via-point. This significantly increases the probability of the curved test path, to the point where the difference between the probability of observing curved and straight paths is negligible. Figure 3: Inferring an invariant goal. (a) Training input in obstacle and no obstacle conditions. (b) Test input is the same in each condition. (c) Model predictions: negative log likelihoods of test paths 1 and 2 given data from training condition. In the obstacle condition, a large dissociation is seen between path 1 and path 2, with path 1 being much more likely. In the no obstacle condition, there is not a large preference for either path 1 or path 2, qualitatively matching Gergely et al.’s results [8]. 5.2 Inferring goals of varying complexity: rational means-ends analysis Our next example is inspired by the studies of Gergely et al. [6] described in §2.2. In our sprite-world version of the experiment, we varied the amount of evidence for a simple versus a complex goal, by inputting the same three trajectories with and without an obstacle present (Fig. 4(a)). In the “obstacle” condition, the trajectories were all approximately shortest paths to the goal, because the agent was forced to take indirect paths around the obstacle.
In the “no obstacle” condition, no such constraint was present to explain the curved paths. Thus a more complex goal is inferred, with a path constrained to pass through a via-point. Given a choice of test paths, shown in Fig. 4(b), the model shows a double dissociation between the probability of the direct path and the curved path through the putative via-point, given each training condition (Fig. 4(c)), similar to the results in [6]. Figure 4: Inferring goals of varying complexity. (a) Training input in obstacle and no obstacle conditions. (b) Test input in each condition. (c) Model predictions: a double dissociation between probability of test paths 1 and 2 in the two conditions. This reflects a preference for the straight path in the first condition, where there is an obstacle to explain the agent’s deflections in the training input, and a preference for the curved path in the second condition, where a complex goal is inferred. 5.3 Inductive inference in intentional reasoning Lastly, we present the results of our model on our own behavioral experiment, first described in §2.3 and shown in Fig. 1. These data demonstrated the statistical nature of people’s intentional inferences. Fig. 5 compares people’s judgments of the probability that the agent takes a particular test path with our model’s predictions. To place model predictions and human judgments on a comparable scale, we fit a sigmoidal psychometric transformation to the computed log posterior odds for the curved test path versus the straight path. The Bayesian model captures the graded shift in people’s expectations in the “complex goal” condition, as evidence accumulates that the agent always seeks to pass through an arbitrary via-point en route to the end state.
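The psychometric mapping described above, a scaled sigmoid with range (1, 7) applied to the model's log posterior odds, can be written directly; `bias` and `gain` are the two free parameters fit to the mean ratings (parameter and function names are ours):

```python
import math

def rating_prediction(log_odds, bias, gain):
    """Map the model's log posterior odds (curved vs. straight test path)
    onto the 1-7 rating scale with a scaled sigmoid of range (1, 7)."""
    return 1.0 + 6.0 / (1.0 + math.exp(-gain * (log_odds - bias)))
```

At even odds (log odds at the bias point) this returns the scale midpoint of 4, and it saturates toward 1 or 7 as the posterior strongly favors one path.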
Figure 5: Experimental results: model fit for behavioral data. Mean ratings are plotted as hollow circles. Error bars give standard error. The log posterior odds from the model were fit to subjects’ ratings using a scaled sigmoid function with range (1, 7). The sigmoid function includes bias and gain parameters, which were fit to the human data by minimizing the sum-squared error between the model predictions and mean subject ratings. 6 Conclusion We presented a Bayesian framework to explain several core aspects of intentional reasoning: inferring the goal of an agent based on observations of its behavior, and predicting how the agent will act when constraints or initial conditions for action change. Our model captured basic qualitative inferences that even preverbal infants have been shown to perform, as well as more subtle quantitative inferences that adult observers made in a novel experiment. Two future challenges for our computational framework are: representing and learning multiple agent types (e.g. rational, irrational, random, etc.), and representing and learning hierarchically structured goal spaces that vary across environments, situations and even domains. These extensions will allow us to further test the power of our computational framework, and will support its application to the wide range of intentional inferences that people constantly make in their everyday lives. Acknowledgments: We thank Whitman Richards, Konrad Körding, Kobi Gal, Vikash Mansinghka, Charles Kemp, and Pat Shafto for helpful comments and discussions. References [1] D. P. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, Belmont, MA, 2nd edition, 2001. [2] U. Chajewska, D. Koller, and D. Ormoneit. Learning an agent’s utility function by observing behavior. In Proc. of the 18th Intl. Conf. on Machine Learning (ICML), pages 35–42, 2001. [3] E. Charniak and R. Goldman.
A probabilistic model of plan recognition. In Proc. AAAI, 1991. [4] G. Csibra, G. Gergely, S. Biró, O. Koós, and M. Brockbank. Goal attribution without agency cues: the perception of ‘pure reason’ in infancy. Cognition, 72:237–267, 1999. [5] D. C. Dennett. The Intentional Stance. Cambridge, MA: MIT Press, 1987. [6] G. Gergely, H. Bekkering, and I. Király. Rational imitation in preverbal infants. Nature, 415:755, 2002. [7] G. Gergely and G. Csibra. Teleological reasoning in infancy: the naïve theory of rational action. Trends in Cognitive Sciences, 7(7):287–292, 2003. [8] G. Gergely, Z. Nádasdy, G. Csibra, and S. Biró. Taking the intentional stance at 12 months of age. Cognition, 56:165–193, 1995. [9] F. Heider and M. A. Simmel. An experimental study of apparent behavior. American Journal of Psychology, 57:243–249, 1944. [10] L. Liao, D. Fox, and H. Kautz. Learning and inferring transportation routines. In Proc. AAAI, pages 348–353, 2004. [11] A. N. Meltzoff. Infant imitation after a 1-week delay: Long-term memory for novel acts and multiple stimuli. Developmental Psychology, 24:470–476, 1988. [12] A. Y. Ng and S. Russell. Algorithms for inverse reinforcement learning. In Proc. of the 17th Intl. Conf. on Machine Learning (ICML), pages 663–670, 2000. [13] R. P. N. Rao, A. P. Shon, and A. N. Meltzoff. A Bayesian model of imitation in infants and robots. In Imitation and Social Learning in Robots, Humans, and Animals. (in press). [14] B. J. Scholl and P. D. Tremoulet. Perceptual causality and animacy. Trends in Cognitive Sciences, 4(8):299–309, 2000.
2005
On Local Rewards and Scaling Distributed Reinforcement Learning J. Andrew Bagnell Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213 dbagnell@ri.cmu.edu Andrew Y. Ng Computer Science Department Stanford University Stanford, CA 94305 ang@cs.stanford.edu Abstract We consider the scaling of the number of examples necessary to achieve good performance in distributed, cooperative, multi-agent reinforcement learning, as a function of the number of agents n. We prove a worst-case lower bound showing that algorithms that rely solely on a global reward signal to learn policies confront a fundamental limit: They require a number of real-world examples that scales roughly linearly in the number of agents. For settings of interest with a very large number of agents, this is impractical. We demonstrate, however, that there is a class of algorithms that, by taking advantage of local reward signals in large distributed Markov Decision Processes, are able to ensure good performance with a number of samples that scales as O(log n). This makes them applicable even in settings with a very large number of agents n. 1 Introduction Recently there has been great interest in distributed reinforcement learning problems where a collection of agents with independent action choices attempts to optimize a joint performance metric. Imagine, for instance, a traffic engineering application where each traffic signal may independently decide when to switch colors, and performance is measured by aggregating the throughput at all traffic stops. Problems with such factorizations, where the global reward decomposes into a sum of local rewards, are common and have been studied in the RL literature. [10] The most straightforward and common approach to solving these problems is to apply one of the many well-studied single agent algorithms to the global reward signal. Effectively, this treats the multi-agent problem as a single agent problem with a very large action space. Peshkin et al.
[9] establish that policy gradient learning factorizes into independent policy gradient learning problems for each agent using the global reward signal. Chang et al. [3] use global reward signals to estimate effective local rewards for each agent. Guestrin et al. [5] consider coordinating agent actions using the global reward. We argue from an information theoretic perspective that such algorithms are fundamentally limited in their scalability. In particular, we show in Section 3 that as a function of the number of agents n, such algorithms will need to see Ω̃(n) trajectories in the worst case to achieve good performance. (Big-Ω̃ notation omits logarithmic terms, similar to how big-Ω notation drops constant values.) We suggest an alternate line of inquiry, pursued as well by other researchers (including notably [10]), of developing algorithms that capitalize on the availability of local reward signals to improve performance. Our results show that such local information can dramatically reduce the number of examples necessary for learning to O(log n). One approach that the results suggest to solving such distributed problems is to estimate model parameters from all local information available, and then to solve the resulting model offline. Although this clearly still carries a high computational burden, it is much preferable to requiring a large amount of real-world experience. Further, useful approximate multiple agent Markov Decision Process (MDP) solvers that take advantage of local reward structure have been developed. [4] 2 Preliminaries We consider distributed reinforcement learning problems, modeled as MDPs, in which there are n (cooperative) agents, each of which can directly influence only a small number of its neighbors. More formally, let there be n agents, each with a finite state space S of size |S| states and a finite action space A of size |A|. The joint state space of all the agents is therefore S^n, and the joint action space A^n.
If s_t ∈ S^n is the joint state of the agents at time t, we will use s^{(i)}_t to denote the state of agent i. Similarly, let a^{(i)}_t denote the action of agent i. For each agent i ∈ {1, . . . , n}, we let neigh(i) ⊆ {1, . . . , n} denote the subset of agents that i’s state directly influences. For notational convenience, we assume that if i ∈ neigh(j), then j ∈ neigh(i), and that i ∈ neigh(i). Thus, the agents can be viewed as living on the vertices of a graph, where agents have a direct influence on each other’s state only if they are connected by an edge. This is similar to the graphical games formalism of [7], and is also similar to the Dynamic Bayes Net (DBN)-MDP formalisms of [6] and [2]. (Figure 1 depicts a DBN and an agent influence graph.) DBN formalisms allow the more refined notion of directionality in the influence between neighbors. More formally, each agent i is associated with a CPT (conditional probability table) P_i(s^{(i)}_{t+1} | s^{(neigh(i))}_t, a^{(i)}_t), where s^{(neigh(i))}_t denotes the state of agent i’s neighbors at time t. Given the joint action a of the agents, the joint state evolves according to p(s_{t+1}|s_t, a_t) = \prod_{i=1}^{n} p(s^{(i)}_{t+1} | s^{(neigh(i))}_t, a^{(i)}_t). (1) For simplicity, we have assumed that agent i’s state is directly influenced by the states of neigh(i) but not their actions; the generalization offers no difficulties. The initial state s_1 is distributed according to some initial-state distribution D. A policy is a map π : S^n → A^n. Writing π out explicitly as a vector-valued function, we have π(s) = (π_1(s), . . . , π_n(s)), where π_i : S^n → A is the local policy of agent i. For some applications, we may wish to consider only policies in which agent i chooses its local action as a function of only its local state s^{(i)} (and possibly its neighbors); in this case, π_i can be restricted to depend only on s^{(i)}. Each agent has a local reward function R_i(s^{(i)}, a^{(i)}), which takes values in the unit interval [0, 1].
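The factored dynamics of Eq. 1 can be sampled agent-by-agent, since each agent's next state depends only on its neighbors' current states and its own action. A minimal sketch follows; the dictionary-based CPT encoding and the function name are hypothetical illustrations, not from the paper:

```python
import random

def step(state, actions, cpts, neigh):
    """Sample the joint next state under the factored dynamics of Eq. 1.
    `cpts[i]` maps a (neighbor-state tuple, action) pair to a
    {next_state: probability} table; `neigh[i]` lists agent i's neighbors."""
    next_state = []
    for i in range(len(state)):
        context = tuple(state[j] for j in neigh[i])
        dist = cpts[i][(context, actions[i])]
        outcomes = list(dist.keys())
        probs = list(dist.values())
        next_state.append(random.choices(outcomes, probs)[0])
    return tuple(next_state)
```

Note that the joint transition probability is the product of the per-agent CPT entries, which is exactly why the model's parameter count grows with the neighborhood size B rather than with the full joint state space S^n.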
The total payoff in the MDP at each step is R(s, a) = (1/n) \sum_{i=1}^{n} R_i(s^{(i)}, a^{(i)}). We call this R(s, a) the global reward function, since it reflects the total reward received by the joint set of agents. We will consider the finite-horizon setting, in which the MDP terminates after T steps. Thus, the utility of a policy π in an MDP M is U(π) = U_M(π) = E_{s_1 ∼ D}[V^π(s_1)] = E[(1/n) \sum_{t=1}^{T} \sum_{i=1}^{n} R_i(s^{(i)}_t, a^{(i)}_t) | π]. In the reinforcement learning setting, the dynamics (CPTs) and rewards of the problem are unknown, and a learning algorithm has to take actions in the MDP and use the resulting observations of state transitions and rewards to learn a good policy. Each “trial” taken by a reinforcement learning algorithm shall consist of a T-step sequence in the MDP. Figure 1: (Left) A DBN description of a multi-agent MDP. Each row of (round) nodes in the DBN corresponds to one agent. (Right) A graphical depiction of the influence effects in a multi-agent MDP. A connection between nodes in the graph implies arrows connecting the nodes in the DBN. Our goal is to characterize the scaling of the sample complexity for various reinforcement learning approaches (i.e., how many trials they require in order to learn a near-optimal policy) for large numbers of agents n. Thus, in our bounds below, no serious attempt has been made to make our bounds tight in variables other than n. 3 Global rewards hardness result Below we show that if an RL algorithm uses only the global reward signal, then there exists a very simple MDP—one with horizon T = 1, only one state/trivial dynamics, and two actions per agent—on which the learning algorithm will require Ω̃(n) trials to learn a good policy. Thus, such algorithms do not scale well to large numbers of agents. For example, consider learning in the traffic signal problem described in the introduction with n = 100,000 traffic lights. Such an algorithm may then require on the order of 100,000 days of experience (trials) to learn.
In contrast, in Section 4, we show that if a reinforcement learning algorithm is given access to the local rewards, it can be possible to learn in such problems with an exponentially smaller O(log n) sample complexity. Theorem 3.1: Let any 0 < ǫ < 0.05 be fixed. Let any reinforcement learning algorithm L be given that only uses the global reward signal R(s), and does not use the local rewards R_i(s^{(i)}) to learn (other than through their sum). Then there exists an MDP with time horizon T = 1, so that: 1. The MDP is very “simple” in that it has only one state (|S| = 1, |S^n| = 1); trivial state transition probabilities (since T = 1); two actions per agent (|A| = 2); and deterministic binary (0/1)-valued local reward functions. 2. In order for L to output a policy π̂ that is near-optimal, satisfying U(π̂) ≥ max_π U(π) − ǫ (for randomized algorithms we consider instead the expectation of U(π̂) under the algorithm’s randomization), it is necessary that the number of trials m be at least m ≥ (0.32n + log(1/4)) / log(n + 1) = Ω̃(n). Proof. For simplicity, we first assume that L is a deterministic learning algorithm, so that in each of the m trials, its choice of action is some deterministic function of the outcomes of the earlier trials. Thus, in each of the m trials, L chooses a vector of actions a ∈ A^n, and receives the global reward signal R(s, a) = (1/n) \sum_{i=1}^{n} R_i(s^{(i)}, a^{(i)}). In our MDP, each local reward R_i(s^{(i)}, a^{(i)}) will take values only 0 and 1. Thus, R(s, a) can take only n + 1 different values (namely, 0/n, 1/n, . . . , n/n). Since T = 1, the algorithm receives only one such reward value in each trial. Let r_1, . . . , r_m be the m global reward signals received by L in the m trials. Since L is deterministic, its output policy π̂ will be chosen as some deterministic function of these rewards r_1, . . . , r_m. But the vector (r_1, . . .
, rm) can take on only (n+1)m different values (since each rt can take only n + 1 different values), and thus ˆπ itself can also take only at most (n + 1)m different values. Let Πm denote this set of possible values for ˆπ. (|Πm| ≤ (n + 1)m). Call each local agent’s two actions a1, a2. We will generate an MDP with randomly chosen parameters. Specifically, each local reward Ri(s(i), a(i)) function is randomly chosen with equal probability to either give reward 1 for action a1 and reward 0 for action a2; or vice versa. Thus, each local agent has one “right” action that gives reward 1, but the algorithm has to learn which of the two actions this is. Further, by choosing the right actions, the optimal policy π∗attains U(π∗) = 1. Fix any policy π. Then UM(π) = 1 n Pn i=1 R(s(i), π(s(i))) is the mean of n independent Bernoulli(0.5) random variables (since the rewards are chosen randomly), and has expected value 0.5. Thus, by the Hoeffding inequality, P(UM(π) ≥1−2ǫ) ≤exp(−2(0.5−2ǫ)2n). Thus, taking a union bound over all policies π ∈ΠM, we have P(∃π ∈ΠM s.t. UM(π) ≥1 −2ǫ) ≤ |ΠM| exp(−2(0.5 −2ǫ)2n) (2) ≤ (n + 1)m exp(−2(0.5 −2ǫ)2n) (3) Here, the probability is over the random MDP M. But since L outputs a policy in ΠM, the chance of L outputting a policy ˆπ with UM(ˆπ) ≥1 −2ǫ is bounded by the chance that there exists such a policy in ΠM. Thus, P(UM(ˆπ) ≥1 −2ǫ) ≤(n + 1)m exp(−2(0.5 −2ǫ)2n). (4) By setting the right hand side to 1/4 and solving for m, we see that so long as m < 2(0.5 −2ǫ)2n + log(1/4) log(n + 1) ≤0.32n + log(1/4) log(n + 1) , (5) we have that P(UM(ˆπ) ≥1 −2ǫ) < 1/4. (The second equality above follows by taking ǫ < 0.05, ensuring that no policy will be within 0.1 of optimal.) Thus, under this condition, by the standard probabilistic method argument [1], there must be at least one such MDP under which L fails to find an ǫ-optimal policy. 
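The random construction used in this proof is easy to simulate; a minimal sketch (helper names are ours, not the paper's):

```python
import random

def make_hard_mdp(n, seed=0):
    """Theorem 3.1 construction: each agent has a randomly chosen 'right'
    action (local reward 1); the other action gives local reward 0."""
    rng = random.Random(seed)
    return [rng.randrange(2) for _ in range(n)]  # right_action[i]

def global_reward(right_action, joint_action):
    """The only signal a global-reward learner observes per trial: one of
    the n + 1 values 0/n, 1/n, ..., n/n."""
    n = len(right_action)
    return sum(int(a == r) for a, r in zip(joint_action, right_action)) / n
```

Playing the right action everywhere attains utility 1, while a fixed guess concentrates near 0.5, exactly the gap the Hoeffding step above exploits.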
For randomized algorithms $L$, we can define, for each string $\omega$ of input random numbers to the algorithm, a deterministic algorithm $L_\omega$. Given the $m$ samples above, the expected performance of algorithm $L_\omega$ over the distribution of MDPs satisfies
$$E_{p(M)}[U_M(L_\omega)] \leq \Pr(U_M(L_\omega) \geq 1 - 2\epsilon) \cdot 1 + (1 - \Pr(U_M(L_\omega) \geq 1 - 2\epsilon))(1 - 2\epsilon) < \frac{1}{4} + \frac{3}{4}(1 - 2\epsilon) < 1 - \epsilon.$$
Since $E_{p(M)} E_{p(\omega)}[U_M(L_\omega)] = E_{p(\omega)} E_{p(M)}[U_M(L_\omega)] < E_{p(\omega)}[1 - \epsilon] = 1 - \epsilon$, it follows again from the probabilistic method that there must be at least one MDP for which $L$ has expected performance less than $1 - \epsilon$. □

4 Learning with local rewards

Assuming the existence of a good exploration policy, we now show a positive result: if our learning algorithm has access to the local rewards, then it is possible to learn a near-optimal policy after a number of trials that grows only logarithmically in the number of agents $n$. In this section, we will assume that the neighborhood structure (encoded by $\mathrm{neigh}(i)$) is known, but that the CPT parameters of the dynamics and the reward functions are unknown. We also assume that the size of the largest neighborhood is bounded: $\max_i |\mathrm{neigh}(i)| = B$.

Definition. A policy $\pi_{\mathrm{explore}}$ is a $(\rho, \nu)$-exploration policy if, given any $i$, any configuration of states $s^{(\mathrm{neigh}(i))} \in S^{|\mathrm{neigh}(i)|}$, and any action $a^{(i)} \in A$, on a trial of length $T$ the policy $\pi_{\mathrm{explore}}$ has at least a probability $\nu \cdot \rho^B$ of executing action $a^{(i)}$ while $i$ and its neighbors are in state $s^{(\mathrm{neigh}(i))}$.

Proposition 4.1: Suppose the MDP's initial state distribution is random, so that the initial state $s^{(i)}_1$ of each agent $i$ is chosen independently from some distribution $D_i$. Further, assume that $D_i$ assigns probability at least $\rho > 0$ to each possible state value $s \in S$. Then the "random" policy $\pi$ (that on each time-step chooses each agent's action uniformly at random over $A$) is a $(\rho, \frac{1}{|A|})$-exploration policy.

Proof.
For any agent $i$, the initial state $s^{(\mathrm{neigh}(i))}$ has at least a $\rho^B$ chance of being any particular vector of values, and the random action policy has a $1/|A|$ chance of taking any particular action from this state. □

In general, it is a fairly strong assumption to assume that we have an exploration policy. However, this assumption serves to decouple the problem of exploration from the "sample complexity" question of how much data we need from the MDP. Specifically, it guarantees that we visit each local configuration sufficiently often to have a reasonable amount of data to estimate each CPT.$^3$

In the envisioned procedure, we will execute an exploration policy for $m$ trials, and then use the resulting data to obtain the maximum-likelihood estimates for the CPT entries and the rewards. We call the resulting estimates $\hat{p}(s^{(i)}_{t+1} \mid s^{(\mathrm{neigh}(i))}_t, a^{(i)}_t)$ and $\hat{R}(s^{(i)}, a^{(i)})$.$^4$ The following simple lemma shows that, with a number of trials that grows only logarithmically in $n$, this procedure will give us good estimates for all CPTs and local rewards.

Lemma 4.2: Let any $\epsilon_0 > 0$, $\delta > 0$ be fixed. Suppose $|\mathrm{neigh}(i)| \leq B$ for all $i$, and let a $(\rho, \nu)$-exploration policy be executed for $m$ trials. Then in order to guarantee that, with probability at least $1 - \delta$, the CPT and reward estimates are $\epsilon_0$-accurate:
$$|\hat{p}(s^{(i)}_{t+1} \mid s^{(\mathrm{neigh}(i))}_t, a^{(i)}_t) - p(s^{(i)}_{t+1} \mid s^{(\mathrm{neigh}(i))}_t, a^{(i)}_t)| \leq \epsilon_0 \ \text{ for all } i, s^{(i)}_{t+1}, s^{(\mathrm{neigh}(i))}_t, a^{(i)}_t$$
$$|\hat{R}(s^{(i)}, a^{(i)}) - R(s^{(i)}, a^{(i)})| \leq \epsilon_0 \ \text{ for all } i, s^{(i)}, a^{(i)}, \quad (6)$$
it suffices that the number of trials be $m = O\big((\log n) \cdot \mathrm{poly}(\frac{1}{\epsilon_0}, \frac{1}{\delta}, |S|, |A|, 1/(\nu \rho^B), B, T)\big)$.

Proof (Sketch). Given $c$ examples to estimate a particular CPT entry (or a reward table entry), the probability that this estimate differs from the true value by more than $\epsilon_0$ can be controlled by the Hoeffding bound:
$$P(|\hat{p}(s^{(i)}_{t+1} \mid s^{(\mathrm{neigh}(i))}_t, a^{(i)}_t) - p(s^{(i)}_{t+1} \mid s^{(\mathrm{neigh}(i))}_t, a^{(i)}_t)| \geq \epsilon_0) \leq 2 \exp(-2 \epsilon_0^2\, c).$$
Each CPT has at most $|A| |S|^{B+1}$ entries, and there are $n$ such tables.
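The maximum-likelihood estimation step of the envisioned procedure, with the uniform fallback of footnote 4 for never-observed configurations, can be sketched as follows (the data encoding is our illustration, not the paper's):

```python
from collections import Counter, defaultdict

def estimate_cpt(observations, num_states):
    """ML estimate of one agent's CPT p(s'_i | s_neigh, a_i) from observed
    (parent_config, next_state) pairs; unvisited parent configurations
    fall back to the uniform distribution over next states."""
    counts = defaultdict(Counter)
    for parent, nxt in observations:
        counts[parent][nxt] += 1

    def p_hat(next_state, parent):
        c = counts.get(parent)
        if not c:
            return 1.0 / num_states  # never observed: uniform fallback
        return c[next_state] / sum(c.values())

    return p_hat
```

The same counting scheme, with a default of 0 instead of uniform, gives the reward estimates $\hat{R}$.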
There are also $n |S| |A|$ possible local reward values. Taking a union bound over them, setting our probability of incorrectly estimating any CPTs or rewards to $\delta/2$, and solving for $c$ gives $c \geq \frac{2}{\epsilon_0^2} \log\big(\frac{4 n |A| |S|^{B+1}}{\delta}\big)$. For each agent $i$ we see each local configuration of states and actions $(s^{(\mathrm{neigh}(i))}, a^{(i)})$ with probability $\geq \rho^B \nu$, so for $m$ trajectories the expected number of samples we see for each CPT entry is at least $m \rho^B \nu$. Call $S^{(s^{(\mathrm{neigh}(i))}, a^{(i)})}_m$ the number of samples we have seen of a configuration $(s^{(\mathrm{neigh}(i))}, a^{(i)})$ in $m$ trajectories. Note then that
$$P(S^{(s^{(\mathrm{neigh}(i))}, a^{(i)})}_m \leq c) \leq P(S^{(s^{(\mathrm{neigh}(i))}, a^{(i)})}_m - E[S^{(s^{(\mathrm{neigh}(i))}, a^{(i)})}_m] \leq c - m \rho^B \nu),$$
and another application of Hoeffding's bound ensures that
$$P(S^{(s^{(\mathrm{neigh}(i))}, a^{(i)})}_m - E[S^{(s^{(\mathrm{neigh}(i))}, a^{(i)})}_m] \leq c - m \rho^B \nu) \leq \exp\Big(\frac{-2}{m T^2} (c - m \rho^B \nu)^2\Big).$$
Applying again the union bound to ensure that the probability of failure here is $\leq \delta/2$ and solving for $m$ gives the result. □

$^3$Further, it is possible to show a stronger version of our result than that stated below, showing that a random action policy can always be used as our exploration policy, to obtain a sample complexity bound with the same logarithmic dependence on $n$ (but significantly worse dependencies on $T$ and $B$). This result uses ideas from the random trajectory method of [8], with the key observation that local configurations that are not visited reasonably frequently by the random exploration policy will not be visited frequently by any policy, and thus inaccuracies in our estimates of their CPT entries will not significantly affect the result.

$^4$We let $\hat{p}(s^{(i)}_{t+1} \mid s^{(\mathrm{neigh}(i))}_t, a^{(i)}_t)$ be the uniform distribution if $(s^{(\mathrm{neigh}(i))}_t, a^{(i)}_t)$ was never observed in the training data, and similarly let $\hat{R}(s^{(i)}, a^{(i)}) = 0$ if $(s^{(i)}, a^{(i)})$ was never observed.

Definition. Define the radius of influence $r(t)$ after $t$ steps to be the maximum number of nodes that are within $t$ steps, in the neighborhood graph, of any single node.
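The radius of influence just defined can be computed by breadth-first search over the neighborhood graph; a small sketch (the adjacency-dict encoding is ours):

```python
from collections import deque

def radius_of_influence(neigh, t):
    """r(t): the maximum, over nodes v, of the number of nodes within
    t steps of v in the neighborhood graph (given as an adjacency dict)."""
    def ball_size(v):
        seen, frontier = {v}, deque([(v, 0)])
        while frontier:
            u, d = frontier.popleft()
            if d == t:
                continue  # do not expand beyond distance t
            for w in neigh[u]:
                if w not in seen:
                    seen.add(w)
                    frontier.append((w, d + 1))
        return len(seen)
    return max(ball_size(v) for v in neigh)
```

On a chain this gives $r(t) = 2t + 1 = O(t)$, and on a 2-d lattice $r(t) = 2t^2 + 2t + 1 = O(t^2)$, consistent with the examples in the text.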
Viewed differently, $r(t)$ upper bounds the number of nodes in the $t$-th timeslice of the DBN (as in Figure 1) which are descendants of any single node in the 1st timeslice. In a DBN as shown in Figure 1, we have $r(t) = O(t)$. If the neighborhood graph is a 2-d lattice in which each node has at most 4 neighbors, then $r(t) = O(t^2)$. More generally, we might expect to have $r(t) = O(t^2)$ for "most" planar neighborhood graphs. Note that, even in the worst case, by our assumption of each node having at most $B$ neighbors, we still have the bound $r(t) \leq B^t$, which is a bound independent of the number of agents $n$.

Theorem 4.3: Let any $\epsilon > 0$, $\delta > 0$ be fixed. Suppose $|\mathrm{neigh}(i)| \leq B$ for all $i$, and let a $(\rho, \nu)$-exploration policy be executed for $m$ trials in the MDP $M$. Let $\hat{M}$ be the maximum likelihood MDP, estimated from data from these $m$ trials. Let $\Pi$ be a policy class, and let $\hat{\pi} = \arg\max_{\pi \in \Pi} U_{\hat{M}}(\pi)$ be the best policy in the class, as evaluated on $\hat{M}$. Then to ensure that, with probability $1 - \delta$, we have that $\hat{\pi}$ is near-optimal within $\Pi$, i.e., that $U_M(\hat{\pi}) \geq \max_{\pi \in \Pi} U_M(\pi) - \epsilon$, it suffices that the number of trials be
$$m = O\big((\log n) \cdot \mathrm{poly}(1/\epsilon, 1/\delta, |S|, |A|, 1/(\nu \rho^B), B, T, r(T))\big).$$

Proof. Our approach is essentially constructive: we show that for any policy, finite-horizon value iteration using approximate CPTs and rewards in its backups will correctly estimate the true value function for that policy within $\epsilon/2$. For simplicity, we assume that the initial state distribution is known (and thus the same in $\hat{M}$ and $M$); the generalization offers no difficulties. By Lemma 4.2, with $m$ samples we can know both CPTs and rewards to within any required $\epsilon_0$, with the probability required. Note also that for any MDP with the given DBN or neighborhood graph structure (including both $M$ and $\hat{M}$), the value function for every policy $\pi$ and at each time-step has a property of bounded variation:
$$|\hat{V}_t(s^{(1)}, \ldots, s^{(n)}) - \hat{V}_t(s^{(1)}, \ldots, s^{(i-1)}, s^{(i)}_{\mathrm{changed}}, s^{(i+1)}, \ldots, s^{(n)})| \leq \frac{r(T)\, T}{n}.$$
This follows since a change in one agent's state can affect at most $r(T)$ agents' states, so the resulting change in utility must be bounded by $r(T) T / n$.

To compute a bound on the error in our estimate of overall utility, we compute a bound on the error induced by a one-step Bellman backup, $\|B \hat{V} - \hat{B} \hat{V}\|_\infty$. This quantity can be bounded in turn by considering the sequence of partially correct backup operators $\hat{B}_0, \ldots, \hat{B}_n$, where $\hat{B}_i$ is defined as the Bellman operator for policy $\pi$ using the exact rewards and transitions for agents $1, 2, \ldots, i$, and the estimated rewards and transitions for agents $i + 1, \ldots, n$.

Figure 2: (Left) Scaling of performance as a function of the number of trajectories seen, for the global reward and local reward algorithms (200 agents, 20% noise in observed rewards). (Right) Scaling of the number of samples necessary to achieve near-optimal reward as a function of the number of agents.

From this definition it is immediate that the total error is equivalent to the telescoping sum
$$\|B \hat{V} - \hat{B} \hat{V}\|_\infty = \|\hat{B}_0 \hat{V} - \hat{B}_1 \hat{V} + \hat{B}_1 \hat{V} - \ldots + \hat{B}_{n-1} \hat{V} - \hat{B}_n \hat{V}\|_\infty \quad (7)$$
That sum is upper-bounded by the sum of term-by-term errors $\sum_{i=0}^{n-1} \|\hat{B}_i \hat{V} - \hat{B}_{i+1} \hat{V}\|_\infty$. We can show that each of the terms in the sum is less than $\epsilon_0\, r(T)(T + 1)|S|/n$: the Bellman operators $\hat{B}_i$ and $\hat{B}_{i+1}$ differ in the immediate reward contribution of agent $i + 1$ by at most $\epsilon_0$, and differ in computing the expected future value by
$$E_{\prod_{j=1}^{i+1} p(s^{j}_{t+1} \mid s_t, \pi)\, \prod_{j=i+2}^{n} p(s^{j}_{t+1} \mid s_t, \pi)}\Big[\sum_{s^{i+1}} \Delta p(s^{i+1}_{t+1} \mid s_t, \pi)\, \hat{V}_{t+1}(s)\Big],$$
with $\Delta p(s^{i+1}_{t+1} \mid s_t, \pi) \leq \epsilon_0$ the difference in the CPTs between $\hat{B}_i$ and $\hat{B}_{i+1}$. By the bounded variation argument this total is then less than $\epsilon_0\, r(T)\, T\, |S| / n$.
It follows then that $\sum_i \|\hat{B}_i \hat{V} - \hat{B}_{i+1} \hat{V}\|_\infty \leq \epsilon_0\, r(T)\, (T + 1)\, |S|$. We now appeal to finite-horizon bounds on the error induced by Bellman backups [11] to show that $\|\hat{V} - V\|_\infty \leq T \|B \hat{V} - \hat{B} \hat{V}\|_\infty \leq T (T + 1)\, \epsilon_0\, r(T)\, |S|$. Taking the expectation of $\hat{V}$ with respect to the initial state distribution $D$ and setting $m$ according to Lemma 4.2 with $\epsilon_0 = \frac{\epsilon}{2 |S|\, r(T)\, T (T + 1)}$ completes the proof. □

5 Demonstration

We first present an experimental domain that hews closely to the theory in Section 3 above, to demonstrate the importance of local rewards. In our simple problem there are $n = 400$ independent agents who each choose an action in $\{0, 1\}$. Each agent has a "correct" action that earns it reward $R_i = 1$ with probability 0.8, and reward 0 with probability 0.2. Likewise, if an agent chooses the wrong action, it earns reward $R_i = 1$ with probability 0.2. We compare two methods on this problem. Our first, global algorithm uses only the global rewards $R$, builds from them a model of the local rewards, and finally solves the resulting estimated MDP exactly. The local reward functions are learnt by a least-squares procedure with basis functions for each agent. The second algorithm also learns a local reward function, but does so taking advantage of the local rewards it observes, as opposed to only the global signal. Figure 2 demonstrates the advantages of learning using the local reward signal.$^5$ On the right in Figure 2, we compute the time required to achieve 1/4 of optimal reward for each algorithm, as a function of the number of agents.

In our next example, we consider a simple variant of the multi-agent SYSADMIN problem [4].$^6$ Again, we consider two algorithms: a global REINFORCE [9] learner, and a REINFORCE algorithm run using only local rewards, even though the local REINFORCE algorithm run in this way is not guaranteed to converge to the globally optimal (cooperative) solution. We note that the local algorithm learns much more quickly than using the global reward (Figure 3). The learning speed we observed for the global algorithm correlates well with the observations in [5] that the number of samples needed scales roughly linearly in the number of agents. The local algorithm continued to require essentially the same number of examples for all sizes used (up to over 100 agents) in our experiments.

$^5$A gradient-based model-free approach using the global reward signal was also tried, but its performance was significantly poorer than that of the two algorithms depicted in Figure 2 (left).

$^6$In SYSADMIN there is a network of computers that fail randomly. A computer is more likely to fail if a neighboring computer (arranged in a ring topology) fails. The goal is to reboot machines in such a fashion as to maximize the number of running computers.

Figure 3: REINFORCE applied to the multi-agent SYSADMIN problem. "Local" refers to REINFORCE applied using only neighborhood (local) rewards, while "global" refers to standard REINFORCE (applied to the global reward signal). (Left) Averaged reward performance as a function of the number of iterations for 10 agents. (Right) The performance for 20 agents.

References
[1] N. Alon and J. Spencer. The Probabilistic Method. Wiley, 2000.
[2] C. Boutilier, T. Dean, and S. Hanks. Decision theoretic planning: Structural assumptions and computational leverage. Journal of Artificial Intelligence Research, 1999.
[3] Y. Chang, T. Ho, and L. Kaelbling. All learning is local: Multi-agent learning in global reward games. In Advances in NIPS 14, 2004.
[4] C. Guestrin, D. Koller, and R. Parr. Multi-agent planning with factored MDPs. In NIPS-14, 2002.
[5] C. Guestrin, M. Lagoudakis, and R. Parr. Coordinated reinforcement learning. In ICML, 2002.
[6] M. Kearns and D. Koller.
Efficient reinforcement learning in factored MDPs. In IJCAI-16, 1999.
[7] M. Kearns, M. Littman, and S. Singh. Graphical models for game theory. In UAI, 2001.
[8] M. Kearns, Y. Mansour, and A. Ng. Approximate planning in large POMDPs via reusable trajectories. (Extended version of paper in NIPS 12), 1999.
[9] L. Peshkin, K.-E. Kim, N. Meuleau, and L. Kaelbling. Learning to cooperate via policy search. In UAI-16, 2000.
[10] J. Schneider, W. Wong, A. Moore, and M. Riedmiller. Distributed value functions. In ICML, 1999.
[11] R. Williams and L. Baird. Tight performance bounds on greedy policies based on imperfect value functions. Technical report, Northeastern University, 1993.
Bayesian Sets

Zoubin Ghahramani* and Katherine A. Heller
Gatsby Computational Neuroscience Unit, University College London, London WC1N 3AR, U.K.
{zoubin,heller}@gatsby.ucl.ac.uk

Abstract

Inspired by "Google™ Sets", we consider the problem of retrieving items from a concept or cluster, given a query consisting of a few items from that cluster. We formulate this as a Bayesian inference problem and describe a very simple algorithm for solving it. Our algorithm uses a model-based concept of a cluster and ranks items using a score which evaluates the marginal probability that each item belongs to a cluster containing the query items. For exponential family models with conjugate priors this marginal probability is a simple function of sufficient statistics. We focus on sparse binary data and show that our score can be evaluated exactly using a single sparse matrix multiplication, making it possible to apply our algorithm to very large datasets. We evaluate our algorithm on three datasets: retrieving movies from EachMovie, finding completions of author sets from the NIPS dataset, and finding completions of sets of words appearing in the Grolier encyclopedia. We compare to Google™ Sets and show that Bayesian Sets gives very reasonable set completions.

1 Introduction

What do Jesus and Darwin have in common? Other than being associated with two different views on the origin of man, they also have colleges at Cambridge University named after them. If these two names are entered as a query into Google™ Sets (http://labs.google.com/sets) it returns a list of other colleges at Cambridge. Google™ Sets is a remarkably useful tool which encapsulates a very practical and interesting problem in machine learning and information retrieval.¹ Consider a universe of items D. Depending on the application, the set D may consist of web pages, movies, people, words, proteins, images, or any other object we may wish to form queries on.
The user provides a query in the form of a very small subset of items $D_c \subset D$. The assumption is that the elements in $D_c$ are examples of some concept / class / cluster in the data. The algorithm then has to provide a completion to the set $D_c$, that is, some set $D'_c \subset D$ which presumably includes all the elements in $D_c$ and other elements in $D$ which are also in this concept / class / cluster.²

*ZG is also at CALD, Carnegie Mellon University, Pittsburgh PA 15213.
¹Google™ Sets is a large-scale clustering algorithm that uses many millions of data instances extracted from web data (Simon Tong, personal communication). We are unable to describe any details of how the algorithm works due to its proprietary nature.
²From here on, we will use the term "cluster" to refer to the target concept.

We can view this problem from several perspectives. First, the query can be interpreted as elements of some unknown cluster, and the output of the algorithm is the completion of that cluster. Whereas most clustering algorithms are completely unsupervised, here the query provides supervised hints or constraints as to the membership of a particular cluster. We call this view clustering on demand, since it involves forming a cluster once some elements of that cluster have been revealed. An important advantage of this approach over traditional clustering is that the few elements in the query can give useful information as to the features which are relevant for forming the cluster. For example, the query "Bush", "Nixon", "Reagan" suggests that the features republican and US President are relevant to the cluster, while the query "Bush", "Putin", "Blair" suggests that current and world leader are relevant. Given the huge number of features in many real world data sets, such hints as to feature relevance can produce much more sensible clusters. Second, we can think of the goal of the algorithm as solving a particular information retrieval problem [2, 3, 4].
As in other retrieval problems, the output should be relevant to the query, and it makes sense to limit the output to the top few items ranked by relevance to the query. In our experiments, we take this approach and report items ranked by relevance. Our relevance criterion is closely related to a Bayesian framework for understanding patterns of generalization in human cognition [5].

2 Bayesian Sets

Let $D$ be a data set of items, and $x \in D$ be an item from this set. Assume the user provides a query set $D_c$ which is a small subset of $D$. Our goal is to rank the elements of $D$ by how well they would "fit into" a set which includes $D_c$. Intuitively, the task is clear: if the set $D$ is the set of all movies, and the query set consists of two animated Disney movies, we expect other animated Disney movies to be ranked highly.

We use a model-based probabilistic criterion to measure how well items fit into $D_c$. Having observed $D_c$ as belonging to some concept, we want to know how probable it is that $x$ also belongs with $D_c$. This is measured by $p(x|D_c)$. Ranking items simply by this probability is not sensible since some items may be more probable than others, regardless of $D_c$. For example, under most sensible models, the probability of a string decreases with the number of characters, the probability of an image decreases with the number of pixels, and the probability of any continuous variable decreases with the precision to which it is measured. We want to remove these effects, so we compute the ratio:
$$\mathrm{score}(x) = \frac{p(x|D_c)}{p(x)} \quad (1)$$
where the denominator is the prior probability of $x$ and under most sensible models will scale exactly correctly with the number of pixels, characters, discretization level, etc. Using Bayes rule, this score can be re-written as:
$$\mathrm{score}(x) = \frac{p(x, D_c)}{p(x)\, p(D_c)} \quad (2)$$
which can be interpreted as the ratio of the joint probability of observing $x$ and $D_c$, to the probability of independently observing $x$ and $D_c$.
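As a tiny numeric sanity check of definitions (1) and (2), consider a single binary feature with a Beta(1, 1) prior (our own illustration, anticipating the sparse-binary model of Section 3):

```python
import math

def log_beta_bernoulli_marginal(k, n, a=1.0, b=1.0):
    """log marginal probability of n Bernoulli observations with k ones,
    under a Beta(a, b) prior on the Bernoulli parameter."""
    logB = lambda p, q: math.lgamma(p) + math.lgamma(q) - math.lgamma(p + q)
    return logB(a + k, b + n - k) - logB(a, b)

def score(x, k_c, n_c):
    """Eq. (2): p(x, Dc) / (p(x) p(Dc)) for one binary feature, where the
    query Dc has n_c items of which k_c are ones."""
    return math.exp(log_beta_bernoulli_marginal(k_c + x, n_c + 1)
                    - log_beta_bernoulli_marginal(k_c, n_c)
                    - log_beta_bernoulli_marginal(x, 1))
```

For a query of three 1s and one 0, a candidate $x = 1$ scores 4/3 (above 1, so it fits the cluster) while $x = 0$ scores 2/3.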
Intuitively, this ratio compares the probability that $x$ and $D_c$ were generated by the same model with the same, though unknown, parameters $\theta$, to the probability that $x$ and $D_c$ came from models with different parameters $\theta$ and $\theta'$ (see figure 1). Finally, up to a multiplicative constant independent of $x$, the score can be written as $\mathrm{score}(x) = p(D_c|x)$, which is the probability of observing the query set given $x$ (i.e. the likelihood of $x$).

Figure 1: Our Bayesian score compares the hypotheses that the data was generated by each of the above graphical models.

From the above discussion, it is still not clear how one would compute quantities such as $p(x|D_c)$ and $p(x)$. A natural model-based way of defining a cluster is to assume that the data points in the cluster all come independently and identically distributed from some simple parameterized statistical model. Assume that the parameterized model is $p(x|\theta)$ where $\theta$ are the parameters. If the data points in $D_c$ all belong to one cluster, then under this definition they were generated from the same setting of the parameters; however, that setting is unknown, so we need to average over possible parameter values weighted by some prior density on parameter values, $p(\theta)$. Using these considerations and the basic rules of probability we arrive at:
$$p(x) = \int p(x|\theta)\, p(\theta)\, d\theta \quad (3)$$
$$p(D_c) = \int \prod_{x_i \in D_c} p(x_i|\theta)\, p(\theta)\, d\theta \quad (4)$$
$$p(x|D_c) = \int p(x|\theta)\, p(\theta|D_c)\, d\theta \quad (5)$$
$$p(\theta|D_c) = \frac{p(D_c|\theta)\, p(\theta)}{p(D_c)} \quad (6)$$
We are now fully equipped to describe the "Bayesian Sets" algorithm:

Bayesian Sets Algorithm
background: a set of items D, a probabilistic model p(x|θ) where x ∈ D, a prior on the model parameters p(θ)
input: a query Dc = {xi} ⊂ D
for all x ∈ D do
    compute score(x) = p(x|Dc) / p(x)
end for
output: return elements of D sorted by decreasing score

We mention two properties of this algorithm to assuage two common worries with Bayesian methods (tractability and sensitivity to priors):
1. For the simple models we will consider, the integrals (3)-(5) are analytical.
In fact, for the model we consider in section 3, computing all the scores can be reduced to a single sparse matrix multiplication.
2. Although it clearly makes sense to put some thought into choosing sensible models $p(x|\theta)$ and priors $p(\theta)$, we will show in section 5 that even with very simple models and almost no tuning of the prior one can get very competitive retrieval results. In practice, we use a simple empirical heuristic which sets the prior to be vague but centered on the mean of the data in $D$.

3 Sparse Binary Data

We now derive in more detail the application of the Bayesian Sets algorithm to sparse binary data. This type of data is a very natural representation for the large datasets we used in our evaluations (section 5). Applications of Bayesian Sets to other forms of data (real-valued, discrete, ordinal, strings) are also possible, and especially practical if the statistical model is a member of the exponential family (section 4).

Assume each item $x_i \in D_c$ is a binary vector $x_i = (x_{i1}, \ldots, x_{iJ})$ where $x_{ij} \in \{0, 1\}$, and that each element of $x_i$ has an independent Bernoulli distribution:
$$p(x_i|\theta) = \prod_{j=1}^{J} \theta_j^{x_{ij}} (1 - \theta_j)^{1 - x_{ij}} \quad (7)$$
The conjugate prior for the parameters of a Bernoulli distribution is the Beta distribution:
$$p(\theta|\alpha, \beta) = \prod_{j=1}^{J} \frac{\Gamma(\alpha_j + \beta_j)}{\Gamma(\alpha_j)\Gamma(\beta_j)}\, \theta_j^{\alpha_j - 1} (1 - \theta_j)^{\beta_j - 1} \quad (8)$$
where $\alpha$ and $\beta$ are hyperparameters, and the Gamma function is a generalization of the factorial function. For a query $D_c = \{x_i\}$ consisting of $N$ vectors it is easy to show that:
$$p(D_c|\alpha, \beta) = \prod_j \frac{\Gamma(\alpha_j + \beta_j)}{\Gamma(\alpha_j)\Gamma(\beta_j)}\, \frac{\Gamma(\tilde{\alpha}_j)\Gamma(\tilde{\beta}_j)}{\Gamma(\tilde{\alpha}_j + \tilde{\beta}_j)} \quad (9)$$
where $\tilde{\alpha}_j = \alpha_j + \sum_{i=1}^{N} x_{ij}$ and $\tilde{\beta}_j = \beta_j + N - \sum_{i=1}^{N} x_{ij}$. For an item $x = (x_{\cdot 1}, \ldots, x_{\cdot J})$ the score, written with the hyperparameters explicit, can be computed as follows:
$$\mathrm{score}(x) = \frac{p(x|D_c, \alpha, \beta)}{p(x|\alpha, \beta)} = \prod_j \frac{\frac{\Gamma(\alpha_j + \beta_j + N)}{\Gamma(\alpha_j + \beta_j + N + 1)}\, \frac{\Gamma(\tilde{\alpha}_j + x_{\cdot j})\, \Gamma(\tilde{\beta}_j + 1 - x_{\cdot j})}{\Gamma(\tilde{\alpha}_j)\, \Gamma(\tilde{\beta}_j)}}{\frac{\Gamma(\alpha_j + \beta_j)}{\Gamma(\alpha_j + \beta_j + 1)}\, \frac{\Gamma(\alpha_j + x_{\cdot j})\, \Gamma(\beta_j + 1 - x_{\cdot j})}{\Gamma(\alpha_j)\, \Gamma(\beta_j)}} \quad (10)$$
This daunting expression can be dramatically simplified. We use the fact that $\Gamma(x) = (x - 1)\, \Gamma(x - 1)$ for $x > 1$.
For each $j$ we can consider the two cases $x_{\cdot j} = 0$ and $x_{\cdot j} = 1$ separately. For $x_{\cdot j} = 1$ we have a contribution $\frac{\alpha_j + \beta_j}{\alpha_j + \beta_j + N}\, \frac{\tilde{\alpha}_j}{\alpha_j}$. For $x_{\cdot j} = 0$ we have a contribution $\frac{\alpha_j + \beta_j}{\alpha_j + \beta_j + N}\, \frac{\tilde{\beta}_j}{\beta_j}$. Putting these together we get:
$$\mathrm{score}(x) = \prod_j \frac{\alpha_j + \beta_j}{\alpha_j + \beta_j + N} \left(\frac{\tilde{\alpha}_j}{\alpha_j}\right)^{x_{\cdot j}} \left(\frac{\tilde{\beta}_j}{\beta_j}\right)^{1 - x_{\cdot j}} \quad (11)$$
The log of the score is linear in $x$:
$$\log \mathrm{score}(x) = c + \sum_j q_j\, x_{\cdot j} \quad (12)$$
where
$$c = \sum_j \log(\alpha_j + \beta_j) - \log(\alpha_j + \beta_j + N) + \log \tilde{\beta}_j - \log \beta_j \quad (13)$$
and
$$q_j = \log \tilde{\alpha}_j - \log \alpha_j - \log \tilde{\beta}_j + \log \beta_j \quad (14)$$
If we put the entire data set $D$ into one large matrix $X$ with $J$ columns, we can compute the vector $s$ of log scores for all points using a single matrix-vector multiplication
$$s = c + X q \quad (15)$$
For sparse data sets this linear operation can be implemented very efficiently. Each query $D_c$ corresponds to computing the vector $q$ and scalar $c$. This can also be done efficiently if the query itself is sparse, since most elements of $q$ will equal $\log \beta_j - \log(\beta_j + N)$, which is independent of the query.

4 Exponential Families

We generalize the above result to models in the exponential family. The distribution for such models can be written in the form $p(x|\theta) = f(x)\, g(\theta)\, \exp\{\theta^\top u(x)\}$, where $u(x)$ is a $K$-dimensional vector of sufficient statistics, $\theta$ are the natural parameters, and $f$ and $g$ are non-negative functions. The conjugate prior is $p(\theta|\eta, \nu) = h(\eta, \nu)\, g(\theta)^\eta \exp\{\theta^\top \nu\}$, where $\eta$ and $\nu$ are hyperparameters, and $h$ normalizes the distribution.

Given a query $D_c = \{x_i\}$ with $N$ items, and a candidate $x$, it is not hard to show that the score for the candidate is:
$$\mathrm{score}(x) = \frac{h(\eta + 1, \nu + u(x))\; h(\eta + N, \nu + \sum_i u(x_i))}{h(\eta, \nu)\; h(\eta + N + 1, \nu + u(x) + \sum_i u(x_i))} \quad (16)$$
This expression helps us understand when the score can be computed efficiently. First of all, the score only depends on the size of the query ($N$), the sufficient statistics computed from each candidate, and those from the whole query. It therefore makes sense to precompute $U$, a matrix of sufficient statistics corresponding to $X$.
Second, whether the score is a linear operation on $U$ depends on whether $\log h$ is linear in the second argument. This is the case for the Bernoulli distribution, but not for all exponential family distributions. However, for many distributions, such as diagonal covariance Gaussians, even though the score is nonlinear in $U$, it can be computed by applying the nonlinearity elementwise to $U$. For sparse matrices, the score can therefore still be computed in time linear in the number of non-zero elements of $U$.

5 Results

We ran our Bayesian Sets algorithm on three different datasets: the Groliers Encyclopedia dataset, consisting of the text of the articles in the Encyclopedia; the EachMovie dataset, consisting of movie ratings by users of the EachMovie service; and the NIPS authors dataset, consisting of the text of articles published in NIPS volumes 0-12 (spanning the 1987-1999 conferences).

The Groliers dataset is 30991 articles by 15276 words, where the entries are the number of times each word appears in each document. We preprocess (binarize) the data by column normalizing each word, and then thresholding so that an (article, word) entry is 1 if that word has a frequency of more than twice the article mean. We do essentially no tuning of the hyperparameters. We use broad empirical priors, where $\alpha = c \times m$, $\beta = c \times (1 - m)$, where $m$ is a mean vector over all articles, and $c = 2$. The analogous priors are used for both other datasets.

The EachMovie dataset was preprocessed first by removing movies rated by fewer than 15 people, and people who rated fewer than 200 movies. Then the dataset was binarized so that a (person, movie) entry had value 1 if the person gave the movie a rating above 3 stars (from a possible 0-5 stars). The data was then column normalized to account for overall movie popularity. The size of the dataset after preprocessing was 1813 people by 1532 movies.
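The Groliers-style binarization just described (column-normalize each word, then threshold at twice the article mean) can be sketched as follows; the dense-matrix helper is our illustration of the preprocessing, not the authors' code:

```python
def binarize(counts):
    """Column-normalize a count matrix, then set entry (i, j) to 1 iff
    the normalized value exceeds twice the mean of row i."""
    n_rows, n_cols = len(counts), len(counts[0])
    col_tot = [sum(counts[i][j] for i in range(n_rows)) or 1 for j in range(n_cols)]
    norm = [[counts[i][j] / col_tot[j] for j in range(n_cols)] for i in range(n_rows)]
    out = []
    for row in norm:
        thresh = 2.0 * sum(row) / n_cols  # twice the row (article) mean
        out.append([1 if v > thresh else 0 for v in row])
    return out
```

Only words that are unusually frequent for a given article (relative to that word's overall usage) survive the thresholding, which is what keeps the binary matrix sparse.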
Finally, the NIPS author dataset (13649 words by 2037 authors) was preprocessed very similarly to the Grolier dataset. It was binarized by column normalizing each author, and then thresholding so that a (word, author) entry is 1 if the author uses that word more frequently than twice the word mean across all authors.

The results of our experiments, and comparisons with Google Sets for word and movie queries, are given in tables 2 and 3. Unfortunately, NIPS authors have not yet achieved the kind of popularity on the web necessary for Google Sets to work effectively. Instead we list the top words associated with the cluster of authors given by our algorithm (table 4). The running times of our algorithm on all three datasets are given in table 1. All experiments were run in Matlab on a 2GHz Pentium 4 Toshiba laptop. Our algorithm is very fast both at pre-processing the data and at answering queries (about 1 sec per query).

                     GROLIERS        EACHMOVIE      NIPS
  SIZE               30991 × 15276   1813 × 1532    13649 × 2037
  NON-ZERO ELEMENTS  2,363,514       517,709        933,295
  PREPROCESS TIME    6.1s            0.56s          3.22s
  QUERY TIME         1.1s            0.34s          0.47s

Table 1: For each dataset we give the size of that dataset along with the time taken to do the (one-time) preprocessing and the time taken to make a query (both in seconds).
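To make the scoring of Section 3 concrete, here is a self-contained pure-Python sketch of eqs. (11)-(15) with the empirical prior $\alpha = c \cdot m$, $\beta = c \cdot (1 - m)$ described above; in the actual experiments $X$ is sparse and $s = c + Xq$ is a single sparse matrix-vector product rather than the dense loops used here:

```python
import math

def bayesian_sets_scores(X, query_idx, c_prior=2.0):
    """Log scores (eqs. 11-15) for every row of the binary matrix X, given
    the row indices of the query Dc."""
    J, N = len(X[0]), len(query_idx)
    m = [sum(row[j] for row in X) / len(X) for j in range(J)]
    alpha = [max(c_prior * mj, 1e-9) for mj in m]          # guard all-zero columns
    beta = [max(c_prior * (1.0 - mj), 1e-9) for mj in m]   # guard all-one columns
    q_sum = [sum(X[i][j] for i in query_idx) for j in range(J)]
    a_t = [alpha[j] + q_sum[j] for j in range(J)]          # alpha-tilde
    b_t = [beta[j] + N - q_sum[j] for j in range(J)]       # beta-tilde
    c = sum(math.log(alpha[j] + beta[j]) - math.log(alpha[j] + beta[j] + N)
            + math.log(b_t[j]) - math.log(beta[j]) for j in range(J))   # eq (13)
    q = [math.log(a_t[j]) - math.log(alpha[j])
         - math.log(b_t[j]) + math.log(beta[j]) for j in range(J)]      # eq (14)
    return [c + sum(q[j] * row[j] for j in range(J)) for row in X]      # eq (15)
```

Sorting the items by these scores in decreasing order gives the completion; the linear form can be checked term-by-term against the unsimplified Gamma-function expression (10) using `math.lgamma`.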
QUERY: WARRIOR, SOLDIER
  Google Sets: WARRIOR, SOLDIER, SPY, ENGINEER, MEDIC, SNIPER, DEMOMAN, PYRO, SCOUT, PYROMANIAC, HWGUY
  Bayes Sets: SOLDIER, WARRIOR, MERCENARY, CAVALRY, BRIGADE, COMMANDING, SAMURAI, BRIGADIER, INFANTRY, COLONEL, SHOGUNATE

QUERY: ANIMAL
  Google Sets: ANIMAL, PLANT, FREE, LEGAL, FUNGAL, HUMAN, HYSTERIA, VEGETABLE, MINERAL, INDETERMINATE, FOZZIE BEAR
  Bayes Sets: ANIMAL, ANIMALS, PLANT, HUMANS, FOOD, SPECIES, MAMMALS, AGO, ORGANISMS, VEGETATION, PLANTS

QUERY: FISH, WATER, CORAL
  Google Sets: FISH, WATER, CORAL, AGRICULTURE, FOREST, RICE, SILK ROAD, RELIGION, HISTORY, POLITICS, DESERT, ARTS
  Bayes Sets: WATER, FISH, SURFACE, SPECIES, WATERS, MARINE, FOOD, TEMPERATURE, OCEAN, SHALLOW, FT

Table 2: Clusters of words found by Google Sets and Bayesian Sets based on the given queries. The top few are shown for each query and each algorithm. Bayesian Sets was run using Grolier Encyclopedia data.

It is very difficult to objectively evaluate our results since there is no ground truth for this task. One person's idea of a good query cluster may differ drastically from another person's. We chose to compare our algorithm to Google Sets since it was our main inspiration and it is currently the most public and commonly used algorithm for performing this task. Since we do not have access to the Google Sets algorithm it was impossible for us to run their method on our datasets. Moreover, Google Sets relies on vast amounts of web data, which we do not have. Despite those two important caveats, Google Sets clearly "knows" a lot about movies³ and words, and the comparison to Bayesian Sets is informative. We found that Google Sets performed very well when the query consisted of items which can be found listed on the web (e.g. Cambridge colleges). On the other hand, for more abstract concepts (e.g. "soldier" and "warrior", see Table 2) our algorithm returned more sensible completions. While we believe that most of our results are self-explanatory, there are a few details that we would like to elaborate on.
The top query in table 3 consists of two classic romantic movies,³ and while most of the movies returned by Bayesian Sets are also classic romances, hardly any of the movies returned by Google Sets are romances, and it would be difficult to call "Ernest Saves Christmas" either a romance or a classic. Both "Cutthroat Island" and "Last Action Hero" are action movie flops, as are many of the movies given by our algorithm for that query.

³ In fact, one of the example queries on the Google Sets website is a query of movie titles.

    QUERY: GONE WITH THE WIND, CASABLANCA
    GOOGLE SETS                      BAYES SETS
    CASABLANCA (1942)                GONE WITH THE WIND (1939)
    GONE WITH THE WIND (1939)        CASABLANCA (1942)
    ERNEST SAVES CHRISTMAS (1988)    THE AFRICAN QUEEN (1951)
    CITIZEN KANE (1941)              THE PHILADELPHIA STORY (1940)
    PET DETECTIVE (1994)             MY FAIR LADY (1964)
    VACATION (1983)                  THE ADVENTURES OF ROBIN HOOD (1938)
    WIZARD OF OZ (1939)              THE MALTESE FALCON (1941)
    THE GODFATHER (1972)             REBECCA (1940)
    LAWRENCE OF ARABIA (1962)        SINGING IN THE RAIN (1952)
    ON THE WATERFRONT (1954)         IT HAPPENED ONE NIGHT (1934)

    QUERY: MARY POPPINS, TOY STORY
    GOOGLE SETS                BAYES SETS
    TOY STORY                  MARY POPPINS
    MARY POPPINS               TOY STORY
    TOY STORY 2                WINNIE THE POOH
    MOULIN ROUGE               CINDERELLA
    THE FAST AND THE FURIOUS   THE LOVE BUG
    PRESQUE RIEN               BEDKNOBS AND BROOMSTICKS
    SPACED                     DAVY CROCKETT
    BUT I'M A CHEERLEADER      THE PARENT TRAP
    MULAN                      DUMBO
    WHO FRAMED ROGER RABBIT    THE SOUND OF MUSIC

    QUERY: CUTTHROAT ISLAND, LAST ACTION HERO
    GOOGLE SETS         BAYES SETS
    LAST ACTION HERO    CUTTHROAT ISLAND
    CUTTHROAT ISLAND    LAST ACTION HERO
    GIRL                KULL THE CONQUEROR
    END OF DAYS         VAMPIRE IN BROOKLYN
    HOOK                SPRUNG
    THE COLOR OF NIGHT  JUDGE DREDD
    CONEHEADS           WILD BILL
    ADDAMS FAMILY I     HIGHLANDER III
    ADDAMS FAMILY II    VILLAGE OF THE DAMNED
    SINGLES             FAIR GAME

Table 3: Clusters of movies found by Google Sets and Bayesian Sets based on the given queries. The top 10 are shown for each query and each algorithm. Bayesian Sets was run using the EachMovie dataset.
All the Bayes Sets movies associated with the query "Mary Poppins" and "Toy Story" are children's movies, while 5 of Google Sets' movies are not. "But I'm a Cheerleader", while appearing to be a children's movie, is actually an R-rated movie involving lesbian and gay teens.

    QUERY: A.SMOLA, B.SCHOLKOPF
    TOP MEMBERS      TOP WORDS
    A.SMOLA          VECTOR
    B.SCHOLKOPF      SUPPORT
    S.MIKA           KERNEL
    G.RATSCH         PAGES
    R.WILLIAMSON     MACHINES
    K.MULLER         QUADRATIC
    J.WESTON         SOLVE
    J.SHAWE-TAYLOR   REGULARIZATION
    V.VAPNIK         MINIMIZING
    T.ONODA          MIN

    QUERY: L.SAUL, T.JAAKKOLA
    TOP MEMBERS      TOP WORDS
    L.SAUL           LOG
    T.JAAKKOLA       LIKELIHOOD
    M.RAHIM          MODELS
    M.JORDAN         MIXTURE
    N.LAWRENCE       CONDITIONAL
    T.JEBARA         PROBABILISTIC
    W.WIEGERINCK     EXPECTATION
    M.MEILA          PARAMETERS
    S.IKEDA          DISTRIBUTION
    D.HAUSSLER       ESTIMATION

    QUERY: A.NG, R.SUTTON
    TOP MEMBERS      TOP WORDS
    R.SUTTON         DECISION
    A.NG             REINFORCEMENT
    Y.MANSOUR        ACTIONS
    B.RAVINDRAN      REWARDS
    D.KOLLER         REWARD
    D.PRECUP         START
    C.WATKINS        RETURN
    R.MOLL           RECEIVED
    T.PERKINS        MDP
    D.MCALLESTER     SELECTS

Table 4: NIPS authors found by Bayesian Sets based on the given queries. The top 10 are shown for each query, along with the top 10 words associated with that cluster of authors. Bayesian Sets was run using NIPS data from vol 0-12 (1987-1999 conferences).

The NIPS author dataset is rather small, and co-authors of NIPS papers appear very similar to each other. Therefore, many of the authors found by our algorithm are co-authors of a NIPS paper with one or more of the query authors. An example where this is not the case is Wim Wiegerinck, who we do not believe ever published a NIPS paper with Lawrence Saul or Tommi Jaakkola, though he did have a NIPS paper on variational learning and graphical models. As part of the evaluation of our algorithm, we showed 30 naïve subjects the unlabeled results of Bayesian Sets and Google Sets for the queries shown from the EachMovie and Grolier Encyclopedia datasets, and asked them to choose which they preferred. The results of this study are given in table 5.
    QUERY                % BAYES SETS    P-VALUE
    WARRIOR              96.7            < 0.0001
    ANIMAL               93.3            < 0.0001
    FISH                 90.0            < 0.0001
    GONE WITH THE WIND   86.7            < 0.0001
    MARY POPPINS         96.7            < 0.0001
    CUTTHROAT ISLAND     81.5            0.0008

Table 5: For each evaluated query (listed by first query item), we give the percentage of respondents who preferred the results given by Bayesian Sets, and the p-value rejecting the null hypothesis that Google Sets is preferable to Bayesian Sets on that particular query.

Since, in the case of binary data, our method reduces to a matrix-vector multiplication, we also came up with ten heuristic matrix-vector methods, which we ran on the same queries using the same datasets. Descriptions and results can be found in the supplemental material on the authors' websites.

6 Conclusions

We have described an algorithm which takes a query consisting of a small set of items and returns additional items which belong in this set. Our algorithm computes a score for each item by comparing the posterior probability of that item given the set to the prior probability of that item. These probabilities are computed with respect to a statistical model for the data, and since the parameters of this model are unknown, they are marginalized out. For exponential family models with conjugate priors, our score can be computed exactly and efficiently. In fact, we show that for sparse binary data, scoring all items in a large data set can be accomplished using a single sparse matrix-vector multiplication. Thus, we get a very fast and practical Bayesian algorithm without needing to resort to approximate inference. For example, a sparse data set with over 2 million non-zero entries (Grolier) can be queried in just over 1 second. Our method does well when compared to Google Sets in terms of set completions, demonstrating that this Bayesian criterion can be useful in realistic problem domains. One of the problems we have not yet addressed is deciding on the size of the response set.
Since the scores have a probabilistic interpretation, it should be possible to find a suitable threshold on these probabilities. In the future, we will incorporate such a threshold into our algorithm. The problem of retrieving sets of items is clearly relevant to many application domains. Our algorithm is very flexible in that it can be combined with a wide variety of types of data (e.g. sequences, images, etc.) and probabilistic models. We plan to explore efficient implementations of some of these extensions. We believe that with even larger datasets the Bayesian Sets algorithm will be a very useful tool for many application areas.

Acknowledgements: Thanks to Avrim Blum and Simon Tong for useful discussions, and to Sam Roweis for some of the data. ZG was partially supported at CMU by the DARPA CALO project.
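As a concrete illustration of the claim that, for sparse binary data, scoring reduces to a single matrix-vector multiplication, here is a sketch under the standard Beta-Bernoulli conjugacy. The empirical prior scaling `c` and the function name are assumptions, not the paper's exact settings:

```python
import numpy as np

def bayesian_sets_scores(X, query_idx, c=2.0):
    """Log-score of every item against a query set under a Beta-Bernoulli
    model (sketch). X: (items x features) binary matrix; query_idx: indices
    of the query items. Priors are set empirically from the data means,
    alpha = c * mean, beta = c * (1 - mean); the scaling c is an assumption.
    For binary data the score is a constant plus one matrix-vector product."""
    m = X.mean(axis=0)
    alpha, beta = c * m, c * (1.0 - m)
    Nq = len(query_idx)
    s = X[query_idx].sum(axis=0)            # per-feature counts in the query
    alpha_t, beta_t = alpha + s, beta + Nq - s
    q = np.log(alpha_t) - np.log(alpha) - np.log(beta_t) + np.log(beta)
    const = np.sum(np.log(alpha + beta) - np.log(alpha + beta + Nq)
                   + np.log(beta_t) - np.log(beta))
    return const + X @ q                    # the (sparse) matrix-vector multiply
```

With `X` stored as a sparse matrix, the final line is exactly the one sparse matrix-vector product that makes querying millions of non-zero entries take about a second.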
|
2005
|
20
|
2,826
|
Learning Influence among Interacting Markov Chains

Dong Zhang, IDIAP Research Institute, CH-1920 Martigny, Switzerland. zhang@idiap.ch
Daniel Gatica-Perez, IDIAP Research Institute, CH-1920 Martigny, Switzerland. gatica@idiap.ch
Samy Bengio, IDIAP Research Institute, CH-1920 Martigny, Switzerland. bengio@idiap.ch
Deb Roy, Massachusetts Institute of Technology, Cambridge, MA 02142, USA. dkroy@media.mit.edu

Abstract

We present a model that learns the influence of interacting Markov chains within a team. The proposed model is a dynamic Bayesian network (DBN) with a two-level structure: an individual level and a group level. The individual level models the actions of each player, and the group level models the actions of the team as a whole. Experiments on synthetic multi-player games and a multi-party meeting corpus show the effectiveness of the proposed model.

1 Introduction

In multi-agent systems, individuals within a group coordinate and interact to achieve a goal. For instance, consider a basketball game where a team of players with different roles, such as attack and defense, collaborate and interact to win the game. Each player performs a set of individual actions, evolving based on their own dynamics. A group of players interact to form a team. The actions of the team and its players are strongly correlated, and different players have different influence on the team. As another example, in conversational settings, some people seem particularly capable of driving the conversation and dominating its outcome. These people, skilled at establishing leadership, have the largest influence on the group decisions, and often shift the focus of the meeting when they speak [8]. In this paper, we quantitatively investigate the influence of individual players on their team using a dynamic Bayesian network that we call the two-level influence model. The proposed model explicitly learns the influence of each individual player on the team with a two-level structure.
At the first level, we model the actions of individual players. At the second, we model the actions of the team as a whole. The model is then applied to determine (a) the influence of players in multi-player games, and (b) the influence of participants in meetings. The paper is organized as follows. Section 2 introduces the two-level influence model. Section 3 reviews related models. Section 4 presents results on multi-player games, and Section 5 presents results on a meeting corpus. Section 6 provides concluding remarks.

Figure 1: (a) Markov model for an individual player. (b) Two-level influence model (for simplicity, we omit the observation variables of the individual Markov chains and the switching-parent variable Q). (c) Switching parents. Q is called a switching parent of S^G, and {S^1, ..., S^N} are the conditional parents of S^G. When Q = i, S^i is the only parent of S^G.

2 Two-level Influence Model

The proposed model, called the two-level influence model, is a dynamic Bayesian network (DBN) with a two-level structure: the player level and the team level (Fig. 1). The player level represents the actions of individual players, which evolve based on their own Markovian dynamics (Fig. 1(a)). The team level represents group-level actions (the action belongs to the team as a whole, not to a particular player). In Fig. 1(b), the upward arrows (from players to team) represent the influence of the individual actions on the group actions, and the downward arrows (from team to players) represent the influence of the group actions on the individual actions. Let O^i and S^i denote the observation and state of the ith player, respectively, and let S^G denote the team state.
For N players and observation sequences of identical length T, the joint distribution of our model is given by

$$P(S, O) = \prod_{i=1}^{N} P(S^i_1) \prod_{t=1}^{T} \prod_{i=1}^{N} P(O^i_t \mid S^i_t) \prod_{t=1}^{T} P(S^G_t \mid S^1_t \cdots S^N_t) \prod_{t=2}^{T} \prod_{i=1}^{N} P(S^i_t \mid S^i_{t-1}, S^G_{t-1}). \qquad (1)$$

Regarding the player level, we model the actions of each individual with a first-order Markov model (Fig. 1(a)) with one observation variable O^i and one state variable S^i. Furthermore, to capture the dynamics of all the players interacting as a team, we add a hidden variable S^G (the team state), which models the group-level actions. Unlike the individual player states, which have their own Markovian dynamics, the team state is not directly influenced by its previous value. S^G can be seen as the aggregate behavior of the individuals, yet it provides a useful level of description beyond individual actions. There are two kinds of relationships between the team and the players: (1) The team state at time t influences the players' states at the next time step (downward arrows in Fig. 1(b)). In other words, the state of the ith player at time t+1 depends on its own previous state as well as on the team state, i.e., P(S^i_{t+1} | S^i_t, S^G_t). (2) The team state at time t is influenced by all the players' states at the current time (upward arrows in Fig. 1(b)), resulting in a conditional state transition distribution P(S^G_t | S^1_t ⋯ S^N_t). To reduce the model complexity, we add one hidden variable Q to the model, which switches parents for S^G. The idea of switching parents (also called Bayesian multi-nets in [3]) is as follows: a variable (S^G in this case) has a set of parents {Q, S^1, ..., S^N} (Fig. 1(c)). Q is the switching parent that determines which of the other parents to use. {S^1, ..., S^N} are the conditional parents. In Fig.
1(c), Q switches the parents of S^G among {S^1, ..., S^N}, corresponding to the distribution

$$P(S^G_t \mid S^1_t \cdots S^N_t) = \sum_{i=1}^{N} P(S^G_t, Q = i \mid S^1_t \cdots S^N_t) \qquad (2)$$
$$= \sum_{i=1}^{N} P(Q = i \mid S^1_t \cdots S^N_t)\, P(S^G_t \mid S^1_t \cdots S^N_t, Q = i) \qquad (3)$$
$$= \sum_{i=1}^{N} P(Q = i)\, P(S^G_t \mid S^i_t) = \sum_{i=1}^{N} \alpha_i P(S^G_t \mid S^i_t). \qquad (4)$$

From Eq. 3 to Eq. 4, we made two assumptions: (i) Q is independent of {S^1, ..., S^N}; and (ii) when Q = i, S^G_t depends only on S^i_t. The distribution over the switching-parent variable, P(Q), essentially describes how much influence the state transitions of each player variable have on the state transitions of the team variable. We refer to α_i = P(Q = i) as the influence value of the ith player. Obviously, Σ_{i=1}^{N} α_i = 1. If we further assume that all player variables have the same number of states N_S, and the team variable has N_G possible states, the joint log probability is given by

$$\log P(S, O) = \underbrace{\sum_{i=1}^{N}\sum_{j=1}^{N_S} z^i_{j,1} \log P(S^i_1 = j)}_{\text{initial probability}} + \underbrace{\sum_{t=1}^{T}\sum_{i=1}^{N}\sum_{j=1}^{N_S} z^i_{j,t} \log P(O^i_t \mid S^i_t = j)}_{\text{emission probability}}$$
$$+ \underbrace{\sum_{t=2}^{T}\sum_{i=1}^{N}\sum_{j=1}^{N_S}\sum_{k=1}^{N_S}\sum_{g=1}^{N_G} z^i_{j,t}\, z^i_{k,t-1}\, z^G_{g,t-1} \log P(S^i_t = j \mid S^i_{t-1} = k, S^G_{t-1} = g)}_{\text{group influence on individual transitions}}$$
$$+ \underbrace{\sum_{t=1}^{T}\sum_{k=1}^{N_S}\sum_{g=1}^{N_G} z^G_{g,t}\, z^i_{k,t} \log\Big\{\sum_{i=1}^{N} \alpha_i P(S^G_t = g \mid S^i_t = k)\Big\}}_{\text{individual influence on the group}}, \qquad (5)$$

where the indicator variable z_{j,t} = 1 if S_t = j, and z_{j,t} = 0 otherwise. We can see that the model has complexity O(T · N · N_G · N_S²). For T = 2000, N_S = 10, N_G = 5, and N = 4, on the order of 10^6 operations are required, which is still tractable. For the model implementation, we used the Graphical Models Toolkit (GMTK) [4], a DBN system for speech, language, and time-series data. Specifically, we used the switching-parents feature of GMTK, which greatly facilitates the implementation of the two-level model to learn the influence values using the Expectation Maximization (EM) algorithm.
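The switching-parent combination in Eq. 4 is simply a convex mixture of per-player conditional tables. A minimal sketch (the array shapes and the function name are assumptions for illustration):

```python
import numpy as np

def team_transition(alpha, cond, states):
    """P(S^G_t | S^1_t ... S^N_t) = sum_i alpha_i * P(S^G_t | S^i_t)  (Eq. 4).
    alpha:  (N,) influence values, summing to 1.
    cond:   (N, NS, NG) tables, cond[i][s] = P(S^G | S^i = s) for player i.
    states: length-N sequence with each player's current state index.
    Returns a distribution over the NG team states."""
    return sum(a * cond[i][s] for i, (a, s) in enumerate(zip(alpha, states)))
```

Because each `cond[i][s]` is itself a distribution and the `alpha` weights sum to 1, the mixture is automatically a valid distribution over team states; learning the `alpha` values is exactly what recovers each player's influence.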
Since EM has the problem of local maxima, good initialization is very important. To initialize the emission probability distribution in Eq. 5, we first train individual action models (Fig. 1(a)) by pooling all observation sequences together. Then we use the trained emission distribution from the individual action model to initialize the emission distribution of the two-level influence model. This procedure is beneficial because we use data from all individual streams together, and thus have a larger amount of training data for learning.

3 Related Models

The proposed two-level influence model is related to a number of models, namely the mixed-memory Markov model (MMM) [14, 11], the coupled HMM (CHMM) [13], the influence model [1, 2, 6], and dynamical systems trees (DSTs) [10]. MMMs decompose a complex model into mixtures of simpler ones, for example a K-order Markov model into a mixture of first-order models: $P(S_t \mid S_{t-1} S_{t-2} \cdots S_{t-K}) = \sum_{i=1}^{K} \alpha_i P(S_t \mid S_{t-i})$. The CHMM models the interactions of multiple Markov chains by directly linking the current state of one stream with the previous states of all the streams (including itself): $P(S^i_t \mid S^1_{t-1} S^2_{t-1} \cdots S^N_{t-1})$. However, the model becomes computationally intractable for more than two streams. The influence model [1, 2, 6] simplifies the state transition distribution of the CHMM into a convex combination of pairwise conditional distributions, i.e., $P(S^i_t \mid S^1_{t-1} S^2_{t-1} \cdots S^N_{t-1}) = \sum_{j=1}^{N} \alpha_{ji} P(S^i_t \mid S^j_{t-1})$. We can see that the influence model and the MMM take the same strategy of reducing complex models with large state spaces to combinations of simpler ones with smaller state spaces. In [2, 6], the influence model was used to analyze speaking patterns in conversations (i.e., turn-taking) to determine how much influence one participant has on others.

Figure 2: (a) A snapshot of the multi-player games: four players move along the paths labeled in the map. (b) A snapshot of four-participant meetings.
In such a model, α_{ji} is regarded as the influence of the jth player on the ith player. All these models, however, limit themselves to modeling the interactions between individual players, i.e., the influence of one player on another. The proposed two-level influence model extends these models by using the group-level variable S^G, which allows modeling the influence between all the players and the team, $P(S^G_t \mid S^1_t S^2_t \cdots S^N_t) = \sum_{i=1}^{N} \alpha_i P(S^G_t \mid S^i_t)$, and by additionally conditioning the dynamics of each player on the team state, $P(S^i_{t+1} \mid S^i_t, S^G_t)$. DSTs [10] have a tree structure that models interacting processes through parent hidden Markov chains. There are two differences between DSTs and our model: (1) In DSTs, the parent chain has its own Markovian dynamics, while the team state of our model is not directly influenced by the previous team state. Thus, our model captures emergent phenomena in which the group action is "nothing more" than the aggregate behavior of the individuals, yet it provides a useful level of representation beyond individual actions. (2) The influence between players and team in our model is bi-directional (upward and downward arrows in Fig. 1(b)). In DSTs, the influence between child and parent chains is uni-directional: parent chains can influence child chains, but child chains cannot influence their parent chains.

4 Experiments on Synthetic Data

We first test our model on synthetic multi-player games, in which four players (labeled A-D) move along a number of predetermined paths manually labeled in a map (Fig. 2(a)), based on the following rules:

• Game I: Player A moves randomly. Players B and C meticulously follow player A. Player D moves randomly.
• Game II: Player A moves randomly. Player B meticulously follows player A. Player C moves randomly. Player D meticulously follows player C.
• Game III: All four players, A, B, C and D, move randomly.
A follower moves randomly until it lies on the same path as its target; after that, it tries to reach the target by following the target's direction. The initial positions and speeds of the players are randomly generated. The observation of an individual player is its motion trajectory in the form of a sequence of positions, (x_1, y_1), (x_2, y_2), ..., (x_t, y_t), each of which belongs to one of 20 predetermined paths in the map. Therefore, we set N_S = 20. The number of team states is set to N_G = 5; in experiments, we found that the final results were not sensitive to the specific number of team states for this dataset over a wide range. The length of each game sequence is T = 2000 frames. EM iterations were stopped once the relative difference in the global log likelihood was less than 2%.

Figure 3: Influence values with respect to the EM iterations in the different games.

Fig. 3 shows the learned influence value for each of the four players in the different games with respect to the number of EM iterations. For Game I, player A is the leader based on the defined rules; the final learned influence value for player A is almost 1, while the influence values of the other three players are almost 0. For Game II, players A and C are both leaders based on the defined rules; their learned influence values are indeed close to 0.5, which indicates they have similar influence on the team. For Game III, the four players move randomly, and the learned influence values are around 0.25, which indicates that all players have similar influence on the team.
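These EM runs depend on sensible starting points. The pooling heuristic from Section 2 (fit one shared set of per-state emission parameters on all players' pooled observation sequences, then copy them into the two-level model) might be sketched as follows, with a simple 1-D k-means standing in for the actual individual-model EM training:

```python
import numpy as np

def init_emissions_from_pooled(streams, n_states, n_iter=20):
    """Sketch of the initialization heuristic: pool all players' observation
    sequences, fit one shared set of per-state Gaussian emission parameters
    on the pooled data (a simple 1-D k-means stands in for EM here), and
    reuse them as the starting emissions of the two-level influence model."""
    X = np.concatenate([np.ravel(s) for s in streams]).astype(float)
    mu = np.linspace(X.min(), X.max(), n_states)   # spread the initial means
    for _ in range(n_iter):
        assign = np.argmin(np.abs(X[:, None] - mu[None, :]), axis=1)
        for k in range(n_states):
            if np.any(assign == k):
                mu[k] = X[assign == k].mean()
    sigma = np.array([X[assign == k].std() if np.any(assign == k) else 1.0
                      for k in range(n_states)])
    return mu, sigma + 1e-6
```

Pooling all streams gives the shared emission model far more training data than any single player's sequence, which is exactly the benefit the initialization procedure relies on.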
The results on these toy data suggest that our model is capable of learning sensible values for {α_i}, in good agreement with the concept of influence described above.

5 Experiments on Meeting Data

As an application of the two-level influence model, we investigate the influence of participants in meetings. Status, dominance, and influence are important concepts in social psychology, for which our model could be particularly suitable in a (dynamic) conversational setting [8]. We used a public meeting corpus (available at http://mmm.idiap.ch), which consists of 30 five-minute four-participant meetings collected in a room equipped with synchronized multi-channel audio and video recorders [12]. A snapshot of a meeting is shown in Fig. 2(b). These meetings have pre-defined topics and an action agenda, designed to ensure discussions and monologues. Manual speech transcripts are also available. We first describe how we manually collected influence judgements and the performance measure we used. We then report our results using audio and language features, compared with simple baseline methods.

5.1 Manually Labeling Influence Values and the Performance Measure

The manual annotation of the influence of meeting participants is to some degree a subjective task, as definite ground truth does not exist. In our case, each meeting was labeled by three independent annotators who had no access to any information about the participants (e.g. job titles and names). This was enforced to avoid any bias based on prior knowledge of the meeting participants (e.g. a student would probably assign a large influence value to his supervisor). After watching an entire meeting, the three annotators were asked to assign a probability-based value (ranging from 0 to 1, all values adding up to 1) to each meeting participant, indicating their influence in the meeting (Fig. 5(b-d)).
From the three annotations, we computed the pairwise Kappa statistics [7], a commonly used measure of inter-rater agreement. The obtained pairwise Kappa ranges between 0.68 and 0.72, which demonstrates good agreement among the different annotators. We estimated the ground-truth influence values by averaging the results from the three annotators (Fig. 5(a)). We use the Kullback-Leibler (KL) divergence to evaluate the results. For the jth meeting, given an automatically determined influence distribution P̃(Q) and the ground-truth influence distribution P(Q), the KL divergence is given by

$$D_j(\tilde{P} \,\|\, P) = \sum_{i=1}^{N} \tilde{P}(Q = i) \log_2 \frac{\tilde{P}(Q = i)}{P(Q = i)},$$

where N is the number of participants. The smaller D_j, the better the performance (if P̃ = P, then D_j = 0). Note that the KL divergence is not symmetric. We calculate the average KL divergence over all meetings, $D = \frac{1}{M} \sum_{j=1}^{M} D_j(\tilde{P} \,\|\, P)$, where M is the number of meetings.

Figure 4: Illustration of state sequences using audio and language features, respectively. Using audio, there are two states: speaking and silence. Using language, the number of states equals the number of PLSA topics plus one silence state.

5.2 Audio and Language Features

We first extract audio features useful for detecting speaking turns in conversations. We compute the SRP-PHAT measure using the signals from an 8-microphone array [12], a continuous value indicating the speech activity of a particular participant. We use a Gaussian emission probability and set N_S = 2, with the states corresponding to speaking and non-speaking (silence), respectively (Fig. 4). Additionally, language features were extracted from the manual transcripts. After removing stop words, the meeting corpus contains 2175 unique terms.
We then employed probabilistic latent semantic analysis (PLSA) [9], a language model that projects documents from the high-dimensional bag-of-words space into a lower-dimensional topic-based space. Each dimension in this new space represents a "topic", and each document is represented as a mixture of topics. In our case, a document corresponds to one speech utterance (t_s, t_e, w_1 w_2 ⋯ w_k), where t_s is the start time, t_e is the end time, and w_1 w_2 ⋯ w_k is the sequence of words. PLSA is thus used as a feature extractor that could potentially capture "topic turns" in meetings. We embedded PLSA into our model by treating the states of individual players as instances of PLSA topics (similar to [5]). The PLSA model therefore determines the emission probability in Eq. 5. We repeat the PLSA topic within the same utterance (t_s ≤ t ≤ t_e). The topic for silence segments was set to 0 (Fig. 4). Note that using audio-only features can be seen as a special case of using language features, obtained by using only one topic in the PLSA model (i.e., all utterances belong to the same topic). We set 10 topics in PLSA (N_S = 10) and N_G = 5 using simple a priori knowledge. EM iterations were stopped once the relative difference in the global log likelihood was less than 2%.

5.3 Results and Discussion

We compare our model with a method based on speaking length (how much time each participant speaks). In this case, the influence value of a meeting participant is defined to be proportional to his or her speaking length: $P(Q = i) = L_i / \sum_{i=1}^{N} L_i$, where L_i is the speaking length of participant i. As a second baseline, we randomly generated 1000 combinations of influence values (under the constraint that the four values sum to 1) and report the average performance. The results are shown in Table 1 (left) and Fig. 5(e-h). We can see that the results of the three methods (model + language, model + audio, and speaking length) (Fig.
5(e-g)) are significantly better than the result of randomization (Fig. 5(h)). Using language features with our model achieves the best performance.

Figure 5: Influence values of the 4 participants (y-axis) in the 30 meetings (x-axis). (a) ground truth (average of the three human annotations A1, A2, A3); (b) A1: human annotation 1; (c) A2: human annotation 2; (d) A3: human annotation 3; (e) our model + language; (f) our model + audio; (g) speaking length; (h) randomization.

Table 1: Results on meetings ("model" denotes the two-level influence model).

    Method             KL divergence      Human Annotation   KL divergence
    model + Language   0.106              Ai vs. Aj          0.090
    model + Audio      0.135              Ai vs. Ai          0.053
    Speaking length    0.226              Ai vs. GT          0.037
    Randomization      0.863

Our model (using either audio or language features) outperforms the speaking-length baseline, which suggests that the learned influence distributions are in better accordance with the influence distributions from human judgements. As shown in Fig. 4, using audio features can be seen as a special case of using language features: language features capture "topic turns" by factorizing the two states "speaking, silence" into more states, "topic 1, topic 2, ..., silence". We can see that the result using language features is better than that using audio features. In other words, compared with "speaking turns", "topic turns" improve the ability of our model to learn the influence of participants in meetings. It is also interesting to look at the KL divergence between any pair of the three human annotations (Ai vs. Aj), between any one annotation and the average of the others (Ai vs. Ai), and between any one annotation and the ground truth (Ai vs. GT). The average results are shown in Table 1 (right).
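The KL-based comparison used throughout this section is a direct implementation of the criterion from Sec. 5.1; a sketch (the array shapes and function name are assumptions):

```python
import numpy as np

def avg_influence_kl(p_est, p_true):
    """Average KL divergence (base 2) between estimated and ground-truth
    influence distributions: D = (1/M) sum_j D_j(P_est || P_true), with
    D_j = sum_i P_est(Q=i) * log2(P_est(Q=i) / P_true(Q=i)).
    p_est, p_true: (M, N) arrays; each row is one meeting's distribution
    over its N participants."""
    p_est = np.asarray(p_est, dtype=float)
    p_true = np.asarray(p_true, dtype=float)
    d = np.sum(p_est * np.log2(p_est / p_true), axis=1)
    return d.mean()
```

Note the asymmetry: the estimated distribution appears first, matching the D_j(P̃‖P) convention in the text, and identical distributions give a score of 0.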
We can see that the result of "Ai vs. GT" is the best, which is reasonable since "GT" is the average of A1, A2, and A3. Fig. 6(a) shows the histogram of KL divergence between any pair of human annotations over the 30 meetings. The histogram has a distribution with µ = 0.09 and σ = 0.11. The results of our model (language: 0.106, audio: 0.135) are very close to this mean, which indicates that our model is comparable to human performance. With our model, we can also calculate the cumulative influence of each meeting participant over time. Fig. 6(b) shows such an example using the two-level influence model with audio features. We can see that the cumulative influence is related to the meeting agenda: the meeting starts with the monologue of person1 (monologue1), during which the influence of person1 is almost 1 while the influences of the other persons are nearly 0. When the four participants are involved in a discussion, the influence of person1 decreases and the influences of the other three persons increase. The influence of person4 increases quickly during monologue4. The influence values of the participants become stable in the second discussion.

Figure 6: (a) Histogram of KL divergence between any pair of the human annotations (Ai vs. Aj) for the 30 meetings. (b) The evolution of cumulative influence over time (5 minutes). The dotted vertical lines indicate the predefined meeting agenda.

6 Conclusions

We have presented a two-level influence model that learns the influence of all players within a team. The model has a two-level structure: an individual level and a group level. The individual level models the actions of individual players, and the group level models the group as a whole.
Experiments on synthetic multi-player games and a multi-party meeting corpus showed the effectiveness of the proposed model. More generally, we anticipate that our approach to multi-level influence modeling may provide a means for analyzing a wide range of social dynamics to infer patterns of emergent group behavior.

Acknowledgements

This work was supported by the Swiss National Center of Competence in Research on Interactive Multimodal Information Management (IM2) and the EC project AMI (Augmented Multi-Party Interaction) (pub. AMI-124). We thank Florent Monay (IDIAP) and Jeff Bilmes (University of Washington) for sharing the PLSA code and GMTK. We also thank the annotators for their efforts.

References

[1] C. Asavathiratham. The influence model: A tractable representation for the dynamics of networked Markov chains. Ph.D. dissertation, Dept. of EECS, MIT, Cambridge, 2000.
[2] S. Basu, T. Choudhury, B. Clarkson, and A. Pentland. Learning human interactions with the influence model. MIT Media Laboratory Technical Note No. 539, 2001.
[3] J. Bilmes. Dynamic Bayesian multinets. In Uncertainty in Artificial Intelligence, 2000.
[4] J. Bilmes and G. Zweig. The graphical models toolkit: An open source software system for speech and time series processing. In Proc. ICASSP, vol. 4, pages 3916-3919, 2002.
[5] D. Blei and P. Moreno. Topic segmentation with an aspect hidden Markov model. In Proc. of the ACM SIGIR Conference on Research and Development in Information Retrieval, pages 343-348, 2001.
[6] T. Choudhury and S. Basu. Modeling conversational dynamics as a mixed memory Markov process. In Proc. of Neural Information Processing Systems (NIPS), 2004.
[7] J. A. Cohen. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20:37-46, 1960.
[8] S. L. Ellyson and J. F. Dovidio, editors. Power, Dominance, and Nonverbal Behavior. Springer-Verlag, 1985.
[9] T. Hofmann. Unsupervised learning by probabilistic latent semantic analysis.
In Machine Learning, 42:177–196, 2001. [10] A. Howard and T. Jebara. Dynamical systems trees. In Uncertainty in Artificial Intelligence’01. [11] K. Kirchhoff, S. Parandekar, and J. Bilmes. Mixed-memory markov models for automatic language identification. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, 2000. [12] I. McCowan, D. Gatica-Perez, S. Bengio, G. Lathoud, M. Barnard, and D. Zhang. Automatic analysis of multimodal group actions in meetings. In IEEE Transactions on PAMI, volume 27(3), 2005. [13] N. Oliver, B. Rosario, and A. Pentland. Graphical models for recognizing human interactions. Proc. of Intl. Conference on Neural Information and Processing Systems (NIPS), 1998. [14] L. K. Saul and M. I. Jordan. Mixed memory markov models: Decomposing complex stochastic processes as mixtures of simpler ones. Machine Learning, 37(1):75–87, 1999.
|
2005
|
200
|
2,827
|
Off-policy Learning with Options and Recognizers

Doina Precup, McGill University, Montreal, QC, Canada; Richard S. Sutton, University of Alberta, Edmonton, AB, Canada; Cosmin Paduraru, University of Alberta, Edmonton, AB, Canada; Anna Koop, University of Alberta, Edmonton, AB, Canada; Satinder Singh, University of Michigan, Ann Arbor, MI, USA

Abstract

We introduce a new algorithm for off-policy temporal-difference learning with function approximation that has lower variance and requires less knowledge of the behavior policy than prior methods. We develop the notion of a recognizer, a filter on actions that distorts the behavior policy to produce a related target policy with low-variance importance-sampling corrections. We also consider target policies that are deviations from the state distribution of the behavior policy, such as potential temporally abstract options, which further reduces variance. This paper introduces recognizers and their potential advantages, then develops a full algorithm for linear function approximation and proves that its updates are in the same direction as on-policy TD updates, which implies asymptotic convergence. Even though our algorithm is based on importance sampling, we prove that it requires absolutely no knowledge of the behavior policy for the case of state-aggregation function approximators.

Off-policy learning is learning about one way of behaving while actually behaving in another way. For example, Q-learning is an off-policy learning method because it learns about the optimal policy while taking actions in a more exploratory fashion, e.g., according to an ε-greedy policy. Off-policy learning is of interest because only one way of selecting actions can be used at any time, but we would like to learn about many different ways of behaving from the single resultant stream of experience. For example, the options framework for temporal abstraction involves considering a variety of different ways of selecting actions.
For each such option one would like to learn a model of its possible outcomes suitable for planning and other uses. Such option models have been proposed as fundamental building blocks of grounded world knowledge (Sutton, Precup & Singh, 1999; Sutton, Rafols & Koop, 2005). Using off-policy learning, one would be able to learn predictive models for many options at the same time from a single stream of experience. Unfortunately, off-policy learning using temporal-difference methods has proven problematic when used in conjunction with function approximation. Function approximation is essential in order to handle the large state spaces that are inherent in many problem domains. Q-learning, for example, has been proven to converge to an optimal policy in the tabular case, but is unsound and may diverge in the case of linear function approximation (Baird, 1995). Precup, Sutton, and Dasgupta (2001) introduced and proved convergence for the first off-policy learning algorithm with linear function approximation. They addressed the problem of learning the expected value of a target policy based on experience generated using a different behavior policy. They used importance sampling techniques to reduce the off-policy case to the on-policy case, where existing convergence theorems apply (Tsitsiklis & Van Roy, 1997; Tadic, 2001). There are two important difficulties with that approach. First, the behavior policy needs to be stationary and known, because it is needed to compute the importance sampling corrections. Second, the importance sampling weights are often ill-conditioned. In the worst case, the variance could be infinite and convergence would not occur. The conditions required to prevent this were somewhat awkward and, even when they applied and asymptotic convergence was assured, the variance could still be high and convergence could be slow. In this paper we address both of these problems in the context of off-policy learning for options.
We introduce the notion of a recognizer. Rather than specifying an explicit target policy (for instance, the policy of an option), about which we want to make predictions, a recognizer specifies a condition on the actions that are selected. For example, a recognizer for the temporally extended action of picking up a cup would not specify which hand is to be used, or what the motion should be at all different positions of the cup. The recognizer would recognize a whole variety of directions of motion and poses as part of picking up the cup. The advantage of this strategy is not that one might prefer a multitude of different behaviors, but that the behavior may be based on a variety of different strategies, all of which are relevant, and we would like to learn from any of them. In general, a recognizer is a function that recognizes or accepts a space of different ways of behaving and thus can learn from a wider range of data. Recognizers have two advantages over direct specification of a target policy: 1) they are a natural and easy way to specify a target policy for which importance sampling will be well conditioned, and 2) they do not require the behavior policy to be known. The latter is important because in many cases we may have little knowledge of the behavior policy, or a stationary behavior policy may not even exist. We show that for the case of state aggregation, even if the behavior policy is unknown, convergence to a good model is achieved.

1 Non-sequential example

The benefits of using recognizers in off-policy learning can be most easily seen in a non-sequential context with a single continuous action. Suppose you are given a sequence of sample actions aᵢ ∈ [0,1], selected i.i.d. according to probability density b : [0,1] → ℜ⁺ (the behavior density). For example, suppose the behavior density is of the oscillatory form shown as a red line in Figure 1.
For each action, aᵢ, we observe a corresponding outcome, zᵢ ∈ ℜ, a random variable whose distribution depends only on aᵢ. Thus the behavior density induces an outcome density. The on-policy problem is to estimate the mean m_b of the outcome density. This problem can be solved simply by averaging the sample outcomes: $\hat{m}_b = \frac{1}{n}\sum_{i=1}^{n} z_i$. The off-policy problem is to use this same data to learn what the mean would be if actions were selected in some way other than b, for example, if the actions were restricted to a designated range, such as between 0.7 and 0.9. There are two natural ways to pose this off-policy problem. The most straightforward way is to be equally interested in all actions within the designated region. One professes to be interested in actions selected according to a target density π : [0,1] → ℜ⁺, which in the example would be 5.0 between 0.7 and 0.9, and zero elsewhere, as in the dashed line in Figure 1 (left).

Figure 1: The left panel shows the behavior policy and the target policies for the formulations of the problem with and without recognizers. The right panel shows empirical estimates of the variances for the two formulations as a function of the number of sample actions (averages of 200 sample variances). The lowest line is for the formulation using empirically-estimated recognition probabilities.

The importance-sampling estimate of the mean outcome is
$$\hat{m}_\pi = \frac{1}{n}\sum_{i=1}^{n} \frac{\pi(a_i)}{b(a_i)}\, z_i. \qquad (1)$$
This approach is problematic if there are parts of the region of interest where the behavior density is zero or very nearly so, such as near 0.72 and 0.85 in the example. Here the importance sampling ratios are exceedingly large and the estimate is poorly conditioned (large variance).
The upper curve in Figure 1 (right) shows the empirical variance of this estimate as a function of the number of samples. The spikes and uncertain decline of the empirical variance indicate that the distribution is very skewed and that the estimates are very poorly conditioned. The second way to pose the problem uses recognizers. One professes to be interested in actions to the extent that they are both selected by b and within the designated region. This leads to the target policy shown in blue in the left panel of Figure 1 (it is taller because it still must sum to 1). For this problem, the variance of (1) is much smaller, as shown in the lower two lines of Figure 1 (right). To make this way of posing the problem clear, we introduce the notion of a recognizer function c : A → ℜ⁺. The action space in the example is A = [0,1] and the recognizer is c(a) = 1 for a between 0.7 and 0.9 and is zero elsewhere. The target policy is defined in general by
$$\pi(a) = \frac{c(a)b(a)}{\sum_x c(x)b(x)} = \frac{c(a)b(a)}{\mu}, \qquad (2)$$
where µ = ∑ₓ c(x)b(x) is a constant, equal to the probability of recognizing an action from the behavior policy. Given π, $\hat{m}_\pi$ from (1) can be rewritten in terms of the recognizer as
$$\hat{m}_\pi = \frac{1}{n}\sum_{i=1}^{n} z_i \frac{\pi(a_i)}{b(a_i)} = \frac{1}{n}\sum_{i=1}^{n} z_i \frac{c(a_i)b(a_i)}{\mu}\,\frac{1}{b(a_i)} = \frac{1}{n}\sum_{i=1}^{n} z_i \frac{c(a_i)}{\mu}. \qquad (3)$$
Note that the target density does not appear at all in the last expression and that the behavior distribution appears only in µ, which is independent of the sample action. If this constant is known, then this estimator can be computed with no knowledge of π or b. The constant µ can easily be estimated as the fraction of recognized actions in the sample. The lowest line in Figure 1 (right) shows the variance of the estimator using this fraction in place of the recognition probability. Its variance is low, no worse than that of the exact algorithm, and apparently slightly lower.
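The recognition-fraction estimator of Eq. (3) can be sketched as follows (a minimal Monte Carlo illustration; the designated range [0.7, 0.9] is from the example above, but the Beta-shaped behavior density, the outcome function, and the sample size are hypothetical stand-ins for the paper's oscillatory example):

```python
import random

random.seed(0)

def sample_behavior():
    # Stand-in behavior density on [0, 1]; any strictly positive b works here.
    return random.betavariate(2, 2)

def recognizer(a):
    # Binary recognizer: c(a) = 1 for a in [0.7, 0.9], zero elsewhere.
    return 1.0 if 0.7 <= a <= 0.9 else 0.0

def outcome(a):
    # Outcome z whose distribution depends only on the action a.
    return 10.0 * a + random.gauss(0.0, 0.1)

n = 50000
actions = [sample_behavior() for _ in range(n)]
outcomes = [outcome(a) for a in actions]

# mu estimated as the fraction of recognized actions -- no knowledge of b needed.
mu_hat = sum(recognizer(a) for a in actions) / n

# Eq. (3): m_hat = (1/n) * sum_i z_i c(a_i) / mu
m_hat = sum(z * recognizer(a) for a, z in zip(actions, outcomes)) / (n * mu_hat)
print(round(m_hat, 2))  # approximately the mean outcome over recognized actions
```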
Because this algorithm does not use the behavior density, it can be applied when the behavior density is unknown or does not even exist. For example, suppose actions were selected in some deterministic, systematic way that in the long run produced an empirical distribution like b. This would be problematic for the other algorithms but would require no modification of the recognition-fraction algorithm.

2 Recognizers improve conditioning of off-policy learning

The main use of recognizers is in formulating a target density π about which we can successfully learn predictions, based on the current behavior being followed. Here we formalize this intuition.

Theorem 1 Let A = {a₁, ..., a_k} ⊆ A be a subset of all the possible actions. Consider a fixed behavior policy b and let π_A be the class of policies that only choose actions from A, i.e., if π(a) > 0 then a ∈ A. Then the policy induced by b and the binary recognizer c_A is the policy with minimum-variance one-step importance sampling corrections, among those in π_A:
$$\pi \text{ as given by (2)} \;=\; \arg\min_{\pi \in \pi_A} E_b\!\left[\left(\frac{\pi(a_i)}{b(a_i)}\right)^{2}\right] \qquad (4)$$

Proof: Denote π(aᵢ) = πᵢ, b(aᵢ) = bᵢ. Then the expected variance of the one-step importance sampling corrections is:
$$E_b\!\left[\left(\frac{\pi_i}{b_i}\right)^{2}\right] - E_b^{2}\!\left[\frac{\pi_i}{b_i}\right] = \sum_i b_i \left(\frac{\pi_i}{b_i}\right)^{2} - 1 = \sum_i \frac{\pi_i^{2}}{b_i} - 1,$$
where the summation (here and everywhere below) is such that the action aᵢ ∈ A. We want to find πᵢ that minimizes this expression, subject to the constraint that ∑ᵢ πᵢ = 1. This is a constrained optimization problem. To solve it, we write down the corresponding Lagrangian:
$$L(\pi_i, \beta) = \sum_i \frac{\pi_i^{2}}{b_i} - 1 + \beta\left(\sum_i \pi_i - 1\right)$$
We take the partial derivatives with respect to πᵢ and β and set them to 0:
$$\frac{\partial L}{\partial \pi_i} = \frac{2\pi_i}{b_i} + \beta = 0 \;\Rightarrow\; \pi_i = -\frac{\beta b_i}{2} \qquad (5)$$
$$\frac{\partial L}{\partial \beta} = \sum_i \pi_i - 1 = 0 \qquad (6)$$
By taking (5) and plugging into (6), we get the following expression for β:
$$-\frac{\beta}{2}\sum_i b_i = 1 \;\Rightarrow\; \beta = -\frac{2}{\sum_i b_i}$$
By substituting β into (5) we obtain:
$$\pi_i = \frac{b_i}{\sum_i b_i}$$
This is exactly the policy induced by the recognizer defined by c(aᵢ) = 1 iff aᵢ ∈ A.
⋄ We also note that it is advantageous, from the point of view of minimizing the variance of the updates, to have recognizers that accept a broad range of actions:

Theorem 2 Consider two binary recognizers c₁ and c₂, such that µ₁ > µ₂. Then the importance sampling corrections for c₁ have lower variance than the importance sampling corrections for c₂.

Proof: From the previous theorem, we have the variance of a recognizer c_A:
$$\mathrm{Var} = \sum_i \frac{\pi_i^{2}}{b_i} - 1 = \sum_i \left(\frac{b_i}{\sum_{j \in A} b_j}\right)^{2} \frac{1}{b_i} - 1 = \frac{1}{\sum_{j \in A} b_j} - 1 = \frac{1}{\mu} - 1 \quad \diamond$$

3 Formal framework for sequential problems

We turn now to the full case of learning about sequential decision processes with function approximation. We use the standard framework in which an agent interacts with a stochastic environment. At each time step t, the agent receives a state s_t and chooses an action a_t. We assume for the moment that actions are selected according to a fixed behavior policy, b : S × A → [0,1], where b(s,a) is the probability of selecting action a in state s. The behavior policy is used to generate a sequence of experience (observations, actions and rewards). The goal is to learn, from this data, predictions about different ways of behaving. In this paper we focus on learning predictions about expected returns, but other predictions can be tackled as well (for instance, predictions of transition models for options (Sutton, Precup & Singh, 1999), or predictions specified by a TD-network (Sutton & Tanner, 2005; Sutton, Rafols & Koop, 2006)). We assume that the state space is large or continuous, and function approximation must be used to compute any values of interest. In particular, we assume a space of feature vectors Φ and a mapping φ : S → Φ. We denote by φ_s the feature vector associated with s. An option is defined as a triple o = ⟨I, π, β⟩, where I ⊆ S is the set of states in which the option can be initiated, π is the internal policy of the option and β : S → [0,1] is a stochastic termination condition.
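The closed form Var = 1/µ − 1 derived in Theorem 2 above is easy to check numerically (a minimal sketch over a discrete action set; the behavior probabilities and the recognized subset are arbitrary):

```python
# Behavior probabilities over four discrete actions, and a binary recognizer
# accepting the subset A = {1, 3}.
b = [0.1, 0.4, 0.2, 0.3]
A = {1, 3}

mu = sum(b[i] for i in A)                      # recognition probability
pi = [b[i] / mu if i in A else 0.0 for i in range(len(b))]

# Variance of the one-step correction pi/b under b: sum_i pi_i^2 / b_i - 1.
var = sum(pi[i] ** 2 / b[i] for i in A) - 1.0
print(abs(var - (1.0 / mu - 1.0)) < 1e-12)  # True: matches 1/mu - 1
```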
In the option work (Sutton, Precup & Singh, 1999), each of these elements has to be explicitly specified and fixed in order for an option to be well defined. Here, we will instead define options implicitly, using the notion of a recognizer. A recognizer is defined as a function c : S × A → [0,1], where c(s,a) indicates to what extent the recognizer allows action a in state s. An important special case, which we treat in this paper, is that of binary recognizers. In this case, c is an indicator function, specifying a subset of actions that are allowed, or recognized, given a particular state. Note that recognizers do not specify policies; instead, they merely give restrictions on the policies that are allowed or recognized. A recognizer c together with a behavior policy b generates a target policy π, where:
$$\pi(s,a) = \frac{b(s,a)c(s,a)}{\sum_x b(s,x)c(s,x)} = \frac{b(s,a)c(s,a)}{\mu(s)} \qquad (7)$$
The denominator of this fraction, µ(s) = ∑ₓ b(s,x)c(s,x), is the recognition probability at s, i.e., the probability that an action will be accepted at s when behavior is generated according to b. The policy π is only defined at states for which µ(s) > 0. The numerator gives the probability that action a is produced by the behavior and recognized in s. Note that if the recognizer accepts all state-action pairs, i.e., c(s,a) = 1, ∀s,a, then π is the same as b. Since a recognizer and a behavior policy can specify together a target policy, we can use recognizers as a way to specify policies for options, using (7). An option can only be initiated at a state for which at least one action is recognized, so µ(s) > 0, ∀s ∈ I. Similarly, the termination condition of such an option, β, is defined as β(s) = 1 if µ(s) = 0. In other words, the option must terminate if no actions are recognized at a given state. At all other states, β can be defined between 0 and 1 as desired. We will focus on computing the reward model of an option o, which represents the expected total return.
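The recognizer-induced policy of Eq. (7), together with the forced termination β(s) = 1 when µ(s) = 0, can be sketched as follows (a minimal sketch; the two-state example, its action names, and the probabilities are hypothetical):

```python
def option_policy(b, c, s):
    """Target policy pi(s, .) induced by behavior b and recognizer c (Eq. 7).

    b[s][a] is the behavior probability of action a in state s; c[s][a] is the
    recognizer value (1 = recognized, 0 = not, for a binary recognizer).
    Returns (pi, terminate): pi is None and termination is forced when mu(s) = 0.
    """
    mu = sum(b[s][a] * c[s][a] for a in b[s])
    if mu == 0.0:
        return None, True                      # beta(s) = 1: option must terminate
    return {a: b[s][a] * c[s][a] / mu for a in b[s]}, False

# Hypothetical two-state, three-action example.
b = {"s0": {"left": 0.5, "right": 0.3, "stay": 0.2},
     "s1": {"left": 0.6, "right": 0.4, "stay": 0.0}}
c = {"s0": {"left": 1, "right": 1, "stay": 0},  # recognize movement only
     "s1": {"left": 0, "right": 0, "stay": 1}}  # recognize only "stay", never taken by b

pi0, term0 = option_policy(b, c, "s0")
print(pi0["left"] > pi0["right"], term0)   # True False
pi1, term1 = option_policy(b, c, "s1")
print(term1)                               # True: mu(s1) = 0, option terminates
```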
The expected values of different features at the end of the option can be estimated similarly. The quantity that we want to compute is
$$E_o\{R(s)\} = E\{r_1 + r_2 + \ldots + r_T \mid s_0 = s, \pi, \beta\}$$
where s ∈ I, experience is generated according to the policy of the option, π, and T denotes the random variable representing the time step at which the option terminates according to β. We assume that linear function approximation is used to represent these values, i.e.
$$E_o\{R(s)\} \approx \theta^{T}\phi_s$$
where θ is a vector of parameters.

4 Off-policy learning algorithm

In this section we present an adaptation of the off-policy learning algorithm of Precup, Sutton & Dasgupta (2001) to the case of learning about options. Suppose that an option’s policy π was used to generate behavior. In this case, learning the reward model of the option is a special case of temporal-difference learning of value functions. The forward view of this algorithm is as follows. Let $\bar{R}^{(n)}_t$ denote the truncated n-step return starting at time step t and let $y_t$ denote the 0-step truncated return, $\bar{R}^{(0)}_t$. By the definition of the n-step truncated return, we have:
$$\bar{R}^{(n)}_t = r_{t+1} + (1 - \beta_{t+1})\,\bar{R}^{(n-1)}_{t+1}.$$
This is similar to the case of value functions, but it accounts for the possibility of terminating the option at time step t + 1. The λ-return is defined in the usual way:
$$\bar{R}^{\lambda}_t = (1 - \lambda) \sum_{n=1}^{\infty} \lambda^{n-1}\,\bar{R}^{(n)}_t.$$
The parameters of the linear function approximator are updated on every time step proportionally to:
$$\Delta\bar{\theta}_t = \left[\bar{R}^{\lambda}_t - y_t\right] \nabla_\theta y_t\, (1 - \beta_1) \cdots (1 - \beta_t).$$
In our case, however, trajectories are generated according to the behavior policy b. The main idea of the algorithm is to use importance sampling corrections in order to account for the difference in the state distribution of the two policies. Let $\rho_t = \frac{\pi(s_t, a_t)}{b(s_t, a_t)}$ be the importance sampling ratio at time step t. The truncated n-step return, $R^{(n)}_t$, satisfies:
$$R^{(n)}_t = \rho_t \left[r_{t+1} + (1 - \beta_{t+1})\, R^{(n-1)}_{t+1}\right].$$
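The n-step recursion and the λ-return above can be sketched for a single trajectory as follows (a minimal on-policy sketch that assumes a zero value estimate, i.e. y_t = 0, and truncates the infinite sum at n_max; the trajectory is hypothetical):

```python
def truncated_returns(rewards, betas, n_max):
    """n-step truncated returns R^(n)_t = r_{t+1} + (1 - beta_{t+1}) R^(n-1)_{t+1}.

    rewards[t] holds r_{t+1} and betas[t] holds beta_{t+1}; the 0-step return
    y_t is taken to be 0 here (an untrained value estimate).
    Returns [R^(0)_0, R^(1)_0, ..., R^(n_max)_0].
    """
    T = len(rewards)
    R = [[0.0] * (T + 1) for _ in range(n_max + 1)]   # R[n][t], with R[n][T] = 0
    for n in range(1, n_max + 1):
        for t in range(T - 1, -1, -1):
            R[n][t] = rewards[t] + (1.0 - betas[t]) * R[n - 1][t + 1]
    return [R[n][0] for n in range(n_max + 1)]

def lambda_return(rewards, betas, lam, n_max=50):
    # Truncation of R^lambda_0 = (1 - lam) * sum_{n>=1} lam^(n-1) R^(n)_0.
    Rn = truncated_returns(rewards, betas, n_max)
    return (1.0 - lam) * sum(lam ** (n - 1) * Rn[n] for n in range(1, n_max + 1))

# Three-step trajectory; the option terminates for sure after the third reward.
print(truncated_returns([1.0, 1.0, 1.0], [0.0, 0.0, 1.0], 3))  # [0.0, 1.0, 2.0, 3.0]
print(round(lambda_return([1.0, 1.0, 1.0], [0.0, 0.0, 1.0], 0.5), 6))  # 1.75
```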
The update to the parameter vector is proportional to:
$$\Delta\theta_t = \left[R^{\lambda}_t - y_t\right] \nabla_\theta y_t\, \rho_0 (1 - \beta_1) \cdots \rho_{t-1} (1 - \beta_t).$$
The following result shows that the expected updates of the on-policy and off-policy algorithms are the same.

Theorem 3 For every time step t ≥ 0 and any initial state s, $E_b[\Delta\theta_t \mid s] = E_\pi[\Delta\bar{\theta}_t \mid s]$.

Proof: First we will show by induction that $E_b\{R^{(n)}_t \mid s\} = E_\pi\{\bar{R}^{(n)}_t \mid s\}, \forall n$ (which implies that $E_b\{R^{\lambda}_t \mid s\} = E_\pi\{\bar{R}^{\lambda}_t \mid s\}$). For n = 0, the statement is trivial. Assuming that it is true for n − 1, we have
$$E_b\!\left\{R^{(n)}_t \mid s\right\} = \sum_a b(s,a) \sum_{s'} P^{a}_{ss'}\, \rho(s,a) \left[r^{a}_{ss'} + (1 - \beta(s'))\, E_b\!\left\{R^{(n-1)}_{t+1} \mid s'\right\}\right]$$
$$= \sum_a \sum_{s'} P^{a}_{ss'}\, b(s,a) \frac{\pi(s,a)}{b(s,a)} \left[r^{a}_{ss'} + (1 - \beta(s'))\, E_\pi\!\left\{\bar{R}^{(n-1)}_{t+1} \mid s'\right\}\right]$$
$$= \sum_a \pi(s,a) \sum_{s'} P^{a}_{ss'} \left[r^{a}_{ss'} + (1 - \beta(s'))\, E_\pi\!\left\{\bar{R}^{(n-1)}_{t+1} \mid s'\right\}\right] = E_\pi\!\left\{\bar{R}^{(n)}_t \mid s\right\}.$$
Now we are ready to prove the theorem’s main statement. Defining $\Omega_t$ to be the set of all trajectory components up to state $s_t$, we have:
$$E_b\{\Delta\theta_t \mid s\} = \sum_{\omega \in \Omega_t} P_b(\omega \mid s)\, E_b\!\left\{(R^{\lambda}_t - y_t)\nabla_\theta y_t \mid \omega\right\} \prod_{i=0}^{t-1} \rho_i (1 - \beta_{i+1})$$
$$= \sum_{\omega \in \Omega_t} \left(\prod_{i=0}^{t-1} b_i P^{a_i}_{s_i s_{i+1}}\right) \left[E_b\!\left\{R^{\lambda}_t \mid s_t\right\} - y_t\right] \nabla_\theta y_t \prod_{i=0}^{t-1} \frac{\pi_i}{b_i} (1 - \beta_{i+1})$$
$$= \sum_{\omega \in \Omega_t} \left(\prod_{i=0}^{t-1} \pi_i P^{a_i}_{s_i s_{i+1}}\right) \left[E_\pi\!\left\{\bar{R}^{\lambda}_t \mid s_t\right\} - y_t\right] \nabla_\theta y_t\, (1 - \beta_1) \cdots (1 - \beta_t)$$
$$= \sum_{\omega \in \Omega_t} P_\pi(\omega \mid s)\, E_\pi\!\left\{(\bar{R}^{\lambda}_t - y_t)\nabla_\theta y_t \mid \omega\right\} (1 - \beta_1) \cdots (1 - \beta_t) = E_\pi\!\left\{\Delta\bar{\theta}_t \mid s\right\}.$$
Note that we are able to use $s_t$ and ω interchangeably because of the Markov property. ⋄

Since we have shown that $E_b[\Delta\theta_t \mid s] = E_\pi[\Delta\bar{\theta}_t \mid s]$ for any state s, it follows that the expected updates will also be equal for any distribution of the initial state s. When learning the model of options with data generated from the behavior policy b, the starting state distribution with respect to which the learning is performed, $I_0$, is determined by the stationary distribution of the behavior policy, as well as the initiation set of the option I. We note also that the importance sampling corrections only have to be performed for the trajectory since the initiation of the option. No corrections are required for the experience prior to this point.
This should generate updates that have significantly lower variance than in the case of learning values of policies (Precup, Sutton & Dasgupta, 2001). Because of the termination condition of the option, β, Δθ can quickly decay to zero. To avoid this problem, we can use a restart function g : S → [0,1], such that g(s_t) specifies the extent to which the updating episode is considered to start at time t. Adding restarts generates a new forward update:
$$\Delta\theta_t = (R^{\lambda}_t - y_t)\,\nabla_\theta y_t \sum_{i=0}^{t} g_i\, \rho_i \cdots \rho_{t-1}\, (1 - \beta_{i+1}) \cdots (1 - \beta_t), \qquad (8)$$
where $R^{\lambda}_t$ is the same as above. With an adaptation of the proof in Precup, Sutton & Dasgupta (2001), we can show that we get the same expected value of updates by applying this algorithm from the original starting distribution as we would by applying the algorithm without restarts from a starting distribution defined by $I_0$ and g. We can turn this forward algorithm into an incremental, backward view algorithm in the following way:
• Initialize $k_0 = g_0$, $e_0 = k_0 \nabla_\theta y_0$
• At every time step t:
$$\delta_t = \rho_t \left(r_{t+1} + (1 - \beta_{t+1})\, y_{t+1}\right) - y_t$$
$$\theta_{t+1} = \theta_t + \alpha\, \delta_t\, e_t$$
$$k_{t+1} = \rho_t k_t (1 - \beta_{t+1}) + g_{t+1}$$
$$e_{t+1} = \lambda \rho_t (1 - \beta_{t+1})\, e_t + k_{t+1} \nabla_\theta y_{t+1}$$
Using a similar technique to that of Precup, Sutton & Dasgupta (2001) and Sutton & Barto (1998), we can prove that the forward and backward algorithms are equivalent (omitted due to lack of space). This algorithm is guaranteed to converge if the variance of the updates is finite (Precup, Sutton & Dasgupta, 2001). In the case of options, the termination condition β can be used to ensure that this is the case.

5 Learning when the behavior policy is unknown

In this section, we consider the case in which the behavior policy is unknown. This case is generally problematic for importance sampling algorithms, but the use of recognizers will allow us to define importance sampling corrections, as well as a convergent algorithm.
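The incremental backward-view updates of Section 4 can be sketched for linear function approximation as follows (the trajectory encoding, a list of per-step dictionaries, is a hypothetical interface of our own, not one fixed by the paper):

```python
def backward_view_updates(theta, episode, alpha, lam):
    """Backward-view updates for an option's reward model, with y_t = theta . phi_t.

    Each step t of `episode` carries: phi (features of s_t), rho (importance
    ratio rho_t), r (reward r_{t+1}), beta (termination beta_{t+1}), and
    g (restart weight g_t). This data layout is an illustrative assumption.
    """
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    k = episode[0]["g"]                         # k_0 = g_0
    e = [k * x for x in episode[0]["phi"]]      # e_0 = k_0 * grad y_0
    for t in range(len(episode) - 1):
        st, nxt = episode[t], episode[t + 1]
        y_t, y_next = dot(theta, st["phi"]), dot(theta, nxt["phi"])
        delta = st["rho"] * (st["r"] + (1.0 - st["beta"]) * y_next) - y_t
        theta = [th + alpha * delta * ei for th, ei in zip(theta, e)]
        k = st["rho"] * k * (1.0 - st["beta"]) + nxt["g"]
        e = [lam * st["rho"] * (1.0 - st["beta"]) * ei + k * x
             for ei, x in zip(e, nxt["phi"])]
    return theta

# A tiny single-feature episode with rho = 1 (on-policy) and a restart only at t = 0.
episode = [
    {"phi": [1.0], "rho": 1.0, "r": 1.0, "beta": 0.0, "g": 1.0},
    {"phi": [1.0], "rho": 1.0, "r": 1.0, "beta": 0.0, "g": 0.0},
    {"phi": [1.0], "rho": 1.0, "r": 0.0, "beta": 1.0, "g": 0.0},
]
theta = backward_view_updates([0.0], episode, alpha=0.1, lam=0.5)
print(round(theta[0], 6))  # 0.25
```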
Recall that when using a recognizer, the target policy of the option is defined as:
$$\pi(s,a) = \frac{c(s,a)\, b(s,a)}{\mu(s)}$$
and the importance sampling ratio becomes:
$$\rho(s,a) = \frac{\pi(s,a)}{b(s,a)} = \frac{c(s,a)}{\mu(s)}$$
Of course, µ(s) depends on b. If b is unknown, instead of µ(s), we will use a maximum likelihood estimate $\hat{\mu} : S \to [0,1]$. The structure used to compute $\hat{\mu}$ will have to be compatible with the feature space used to represent the reward model. We will make this more precise below. Likewise, the recognizer c(s,a) will have to be defined in terms of the features used to represent the model. We will then define the importance sampling corrections as:
$$\hat{\rho}(s,a) = \frac{c(s,a)}{\hat{\mu}(s)}$$
We consider the case in which the function approximator used to model the option is actually a state aggregator. In this case, we will define recognizers which behave consistently in each partition, i.e., c(s,a) = c(p,a), ∀s ∈ p. This means that an action is either recognized or not recognized in all states of the partition. The recognition probability $\hat{\mu}$ will have one entry for every partition p of the state space. Its value will be:
$$\hat{\mu}(p) = \frac{N(p, c=1)}{N(p)}$$
where N(p) is the number of times partition p was visited, and N(p, c=1) is the number of times the action taken in p was recognized. In the limit, w.p.1, $\hat{\mu}$ converges to
$$\sum_s d^{b}(s \mid p) \sum_a c(p,a)\, b(s,a)$$
where $d^{b}(s \mid p)$ is the probability of visiting state s from partition p under the stationary distribution of b. At this limit, $\hat{\pi}(s,a) = \hat{\rho}(s,a)\, b(s,a)$ will be a well-defined policy (i.e., $\sum_a \hat{\pi}(s,a) = 1$). Using Theorem 3, off-policy updates using importance sampling corrections $\hat{\rho}$ will have the same expected value as on-policy updates using $\hat{\pi}$. Note though that the learning algorithm never uses $\hat{\pi}$; the only quantities needed are $\hat{\rho}$, which are learned incrementally from data. For the case of general linear function approximation, we conjecture that a similar idea can be used, where the recognition probability is learned using logistic regression.
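The count-based estimate µ̂(p) = N(p, c=1)/N(p) and the resulting corrections ρ̂ can be sketched as follows (a minimal sketch; the class and method names are illustrative, not from the paper):

```python
from collections import defaultdict

class RecognitionProbability:
    """Maximum-likelihood estimate mu_hat(p) = N(p, c=1) / N(p) per partition."""

    def __init__(self):
        self.visits = defaultdict(int)       # N(p)
        self.recognized = defaultdict(int)   # N(p, c=1)

    def update(self, p, action_recognized):
        self.visits[p] += 1
        if action_recognized:
            self.recognized[p] += 1

    def mu_hat(self, p):
        return self.recognized[p] / self.visits[p] if self.visits[p] else 0.0

    def rho_hat(self, p, c_value):
        # Importance sampling correction rho_hat = c(s, a) / mu_hat(s).
        return c_value / self.mu_hat(p)

est = RecognitionProbability()
for recognized in [True, True, False, True]:   # four visits to partition p0
    est.update("p0", recognized)
print(est.mu_hat("p0"))        # 0.75
print(est.rho_hat("p0", 1.0))  # 1.333...: recognized actions are up-weighted
```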
The development of this part is left for future work.

Acknowledgements

The authors gratefully acknowledge the ideas and encouragement they have received in this work from Eddie Rafols, Mark Ring, Lihong Li and other members of the rlai.net group. We thank Csaba Szepesvari and the reviewers of the paper for constructive comments. This research was supported in part by iCore, NSERC, Alberta Ingenuity, and CFI.

References

Baird, L. C. (1995). Residual algorithms: Reinforcement learning with function approximation. In Proceedings of ICML.
Precup, D., Sutton, R. S. and Dasgupta, S. (2001). Off-policy temporal-difference learning with function approximation. In Proceedings of ICML.
Sutton, R. S., Precup, D. and Singh, S. (1999). Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, vol. 112, pp. 181–211.
Sutton, R. S. and Tanner, B. (2005). Temporal-difference networks. In Proceedings of NIPS-17.
Sutton, R. S., Rafols, E. and Koop, A. (2006). Temporal abstraction in temporal-difference networks. In Proceedings of NIPS-18.
Tadic, V. (2001). On the convergence of temporal-difference learning with linear function approximation. Machine Learning, vol. 42, pp. 241–267.
Tsitsiklis, J. N. and Van Roy, B. (1997). An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 42:674–690.
|
2005
|
201
|
2,828
|
A Probabilistic Interpretation of SVMs with an Application to Unbalanced Classification

Yves Grandvalet*, Heudiasyc, CNRS/UTC, 60205 Compiègne cedex, France, grandval@utc.fr; Johnny Mariéthoz and Samy Bengio, IDIAP Research Institute, 1920 Martigny, Switzerland, {marietho,bengio}@idiap.ch

Abstract

In this paper, we show that the hinge loss can be interpreted as the neg-log-likelihood of a semi-parametric model of posterior probabilities. From this point of view, SVMs represent the parametric component of a semi-parametric model fitted by a maximum a posteriori estimation procedure. This connection makes it possible to derive a mapping from SVM scores to estimated posterior probabilities. Unlike previous proposals, the suggested mapping is interval-valued, providing a set of posterior probabilities compatible with each SVM score. This framework offers a new way to adapt the SVM optimization problem to unbalanced classification, when decisions result in unequal (asymmetric) losses. Experiments show improvements over state-of-the-art procedures.

1 Introduction

In this paper, we show that support vector machines (SVMs) are the solution of a relaxed maximum a posteriori (MAP) estimation problem. This relaxed problem results from fitting a semi-parametric model of posterior probabilities. This model is decomposed into two components: the parametric component, which is a function of the SVM score, and the non-parametric component, which we call a nuisance function. Given a proper binding of the nuisance function adapted to the considered problem, this decomposition makes it possible to concentrate on selected ranges of the probability spectrum. The estimation process can thus allocate model capacity to the neighborhoods of decision boundaries. The connection to semi-parametric models provides a probabilistic interpretation of SVM scores, which may have several applications, such as estimating confidences over the predictions, or dealing with unbalanced losses.
(which occur in domains such as diagnosis, intruder detection, etc.). Several mappings relating SVM scores to probabilities have already been proposed (Sollich 2000, Platt 2000), but they are subject to arbitrary choices, which are avoided here by their integration into the nuisance function. The paper is organized as follows. Section 2 presents the semi-parametric modeling approach; Section 3 shows how we reformulate SVMs in this framework; Section 4 proposes several outcomes of this formulation, including a new method to handle unbalanced losses, which is tested empirically in Section 5. Finally, Section 6 briefly concludes the paper.

*This work was supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence IST-2002-506778. This publication only reflects the authors’ views.

2 Semi-Parametric Classification

We address the binary classification problem of estimating a decision rule from a learning set $L_n = \{(x_i, y_i)\}_{i=1}^{n}$, where the ith example is described by the pattern $x_i \in X$ and the associated response $y_i \in \{-1, 1\}$. In the framework of maximum likelihood estimation, classification can be addressed either via generative models, i.e. models of the joint distribution P(X, Y), or via discriminative methods modeling the conditional P(Y|X).

2.1 Complete and Marginal Likelihood, Nuisance Functions

Let p(1|x; θ) denote the model of P(Y = 1|X = x), p(x; ψ) the model of P(X), and $t_i$ the binary response variable such that $t_i = 1$ when $y_i = 1$ and $t_i = 0$ when $y_i = -1$. Assuming independent examples, the complete log-likelihood can be decomposed as
$$L(\theta, \psi; L_n) = \sum_i t_i \log(p(1|x_i; \theta)) + (1 - t_i)\log(1 - p(1|x_i; \theta)) + \log(p(x_i; \psi)), \qquad (1)$$
where the two first terms of the right-hand side represent the marginal or conditional likelihood, that is, the likelihood of p(1|x; θ). For classification purposes, the parameter ψ is not relevant, and may thus be qualified as a nuisance parameter (Lindsay 1985).
When θ can be estimated independently of ψ, maximizing the marginal likelihood provides the estimate returned by maximizing the complete likelihood with respect to θ and ψ. In particular, when no assumption whatsoever is made on P(X), maximizing the conditional likelihood amounts to maximizing the joint likelihood (McLachlan 1992). The density of inputs is then considered as a nuisance function.

2.2 Semi-Parametric Models

Again, for classification purposes, estimating P(Y|X) may be considered as too demanding. Indeed, taking a decision only requires the knowledge of sign(2P(Y = 1|X = x) − 1). We may thus consider looking for the decision rule minimizing the empirical classification error, but this problem is intractable for non-trivial models of discriminant functions. Here, we briefly explore how semi-parametric models (Oakes 1988) may be used to reduce the modelization effort as compared to the standard likelihood approach. For this, we consider a two-component semi-parametric model of P(Y = 1|X = x), defined as p(1|x; θ) = g(x; θ) + ε(x), where the parametric component g(x; θ) is the function of interest, and where the non-parametric component ε is a constrained nuisance function. Then, we address the maximum likelihood estimation of the semi-parametric model p(1|x; θ):
$$\min_{\theta, \varepsilon} \; -\sum_i t_i \log(p(1|x_i; \theta)) + (1 - t_i)\log(1 - p(1|x_i; \theta))$$
$$\text{s.t.} \quad p(1|x; \theta) = g(x; \theta) + \varepsilon(x), \qquad 0 \le p(1|x; \theta) \le 1, \qquad \varepsilon^-(x) \le \varepsilon(x) \le \varepsilon^+(x) \qquad (2)$$
where ε⁻ and ε⁺ are user-defined functions, which place constraints on the non-parametric component ε. According to these constraints, one pursues different objectives, which can be interpreted as either weakened or focused versions of the original problem of estimating precisely P(Y|X) on the whole range [0, 1]. At the one extreme, when ε⁻ = ε⁺, one recovers a parametric maximum likelihood problem, where the estimate of posterior probabilities p(1|x; θ) is simply g(x; θ) shifted by the baseline function ε.
At the other extreme, when ε⁻(x) ≤ −g(x) and ε⁺(x) ≥ 1 − g(x), p(1|·; θ) perfectly explains (interpolates) any training sample for any θ, and the optimization problem in θ is ill-posed. Note that the optimization problem in ε is always ill-posed, but this is not of concern as we do not wish to estimate the nuisance function.

Figure 1: Two examples of ε⁻(x) (dashed) and ε⁺(x) (plain) vs. g(x) and the resulting ǫ-tube of possible values for the estimate of P(Y = 1|X = x) (gray zone) vs. g(x).

Generally, as ε is not estimated, the estimate of posterior probabilities p(1|x; θ) is only known to lie within the interval [g(x; θ) + ε⁻(x), g(x; θ) + ε⁺(x)]. In what follows, we only consider functions ε⁻ and ε⁺ expressed as functions of the argument g(x), for which the interval can be recovered from g(x) alone. We also require ε⁻(x) ≤ 0 ≤ ε⁺(x), in order to ensure that g(x; θ) is an admissible value of p(1|x; θ). Two simple examples are displayed in Figure 1. The two first graphs represent ε⁻ and ε⁺ designed to estimate posterior probabilities up to precision ǫ, and the corresponding ǫ-tube of admissible estimates knowing g(x). The two last graphs represent the same functions for ε⁻ and ε⁺ defined to focus on the only relevant piece of information regarding decision: estimating where P(Y|X) is above 1/2.¹

2.3 Estimation of the Parametric Component

The definitions of ε⁻ and ε⁺ affect the estimation of the parametric component. Regarding θ, when the values of g(x; θ) + ε⁻(x) and g(x; θ) + ε⁺(x) lie within [0, 1], problem (2) is equivalent to the following relaxed maximum likelihood problem
$$\min_{\theta, \varepsilon} \; -\sum_i t_i \log(g(x_i; \theta) + \varepsilon_i) + (1 - t_i)\log(1 - g(x_i; \theta) - \varepsilon_i)$$
$$\text{s.t.} \quad \varepsilon^-(x_i) \le \varepsilon_i \le \varepsilon^+(x_i), \qquad i = 1, \ldots, n \qquad (3)$$
where ε is an n-dimensional vector of slack variables.
The problem is qualified as relaxed compared to the maximum likelihood estimation of posterior probabilities by g(x_i; θ), because modeling posterior probabilities by g(x_i; θ) + ε_i is a looser objective. The monotonicity of the objective function with respect to ε_i implies that the constraints ε−(x_i) ≤ ε_i and ε_i ≤ ε+(x_i) are saturated at the solution of (3) for t_i = 0 and t_i = 1 respectively. Thus, the loss in (3) is the neg-log-likelihood of the lower or the upper bound on p(1|x_i; θ), respectively. Provided that g, ε− and ε+ are defined such that ε−(x) ≤ ε+(x), 0 ≤ g(x) + ε−(x) ≤ 1 and 0 ≤ g(x) + ε+(x) ≤ 1, the optimization problem with respect to θ reduces to

min_θ  −∑_i [ t_i log(g(x_i; θ) + ε+(x_i)) + (1 − t_i) log(1 − g(x_i; θ) − ε−(x_i)) ] .   (4)

Figure 2 displays the losses for positive examples corresponding to the choices of ε− and ε+ depicted in Figure 1 (the losses are symmetrical around 0.5 for negative examples). Note that the convexity of the objective function with respect to g depends on the choices of ε− and ε+. One can show that, provided ε+ and ε− are respectively concave and convex functions of g, the loss (4) is convex in g. When ε−(x) ≤ 0 ≤ ε+(x), g(x) is an admissible estimate of P(Y = 1|x). However, the relaxed loss (4) is optimistic, below the neg-log-likelihood of g. This optimism usually results in a non-consistent estimation of posterior probabilities (i.e., g(x) does not converge towards P(Y = 1|X = x) as the sample size goes to infinity), a common situation in semi-parametric modeling (Lindsay 1985).

¹ Of course, this naive attempt to minimize the training classification error is doomed to failure. Reformulating the problem does not affect its complexity: it remains NP-hard.

Figure 2: Losses for positive examples (solid) and neg-log-likelihood of g(x) (dotted) vs. g(x). Left: for the function ε+ displayed on the left-hand side of Figure 1; right: for the function ε+ displayed on the right-hand side of Figure 1.
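To make the saturated form of the relaxed loss (4) concrete, here is a minimal numerical sketch (the function name and the constant-width ε-tube with half-width 0.1, as in the left-hand example of Figure 1, are illustrative choices, not the authors' implementation):

```python
import math

def relaxed_loss(g, t, eps=0.1):
    """Neg-log-likelihood of one example under the relaxed model (3)/(4).

    The slack saturates at its bound: positives (t = 1) are scored with the
    upper probability g + eps+, negatives (t = 0) with the lower probability
    g + eps-.  Here eps-/eps+ form a constant-width tube of half-width `eps`,
    clipped so that the probabilities stay in [0, 1]; this particular tube is
    an illustrative assumption.
    """
    eps_plus = min(eps, 1.0 - g)    # ensures g + eps_plus <= 1
    eps_minus = max(-eps, -g)       # ensures g + eps_minus >= 0
    if t == 1:
        return -math.log(g + eps_plus)
    return -math.log(1.0 - (g + eps_minus))

# The relaxed loss is "optimistic": never above the plain neg-log-likelihood.
for g in (0.05, 0.3, 0.5, 0.7, 0.95):
    assert relaxed_loss(g, 1) <= -math.log(g) + 1e-12
```

The final loop checks numerically the optimism noted above: the relaxed loss lies below the neg-log-likelihood of g everywhere.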
This lack of consistency should not be a concern here, since the non-parametric component is purposely introduced to address a looser estimation problem. We should therefore restrict consistency requirements to the primary goal of having posterior probabilities in the ε-tube [g(x) + ε−(x), g(x) + ε+(x)].

3 Semi-Parametric Formulation of SVMs

Several authors have pointed out the closeness of SVMs and the MAP approach to Gaussian processes (Sollich (2000) and references therein). However, this similarity does not provide a proper mapping from SVM scores to posterior probabilities. Here, we resolve this difficulty thanks to the additional degrees of freedom provided by semi-parametric modelling.

3.1 SVMs and Gaussian Processes

In its primal Lagrangian formulation, the SVM optimization problem reads

min_{f,b}  (1/2)‖f‖²_H + C ∑_i [1 − y_i(f(x_i) + b)]₊ ,              (5)

where H is a reproducing kernel Hilbert space with norm ‖·‖_H, C is a regularization parameter and [u]₊ = max(u, 0). The penalization term in (5) can be interpreted as a Gaussian prior on f, with a covariance function proportional to the reproducing kernel of H (Sollich 2000). Then, the interpretation of the hinge loss as a marginal log-likelihood requires identifying an affine function of the last term of (5) with the first two terms of (1). We thus look for two constants c₀ and c₁ ≠ 0 such that, for all values of f(x) + b, there exists a value 0 ≤ p(1|x) ≤ 1 such that

p(1|x) = exp(−(c₀ + c₁[1 − (f(x) + b)]₊))
1 − p(1|x) = exp(−(c₀ + c₁[1 + (f(x) + b)]₊)) .                      (6)

The system (6) has a solution over the whole range of possible values of f(x) + b if and only if c₀ = log(2) and c₁ = 0. Thus, the SVM optimization problem does not implement the MAP approach to Gaussian processes. To proceed with a probabilistic interpretation of SVMs, Sollich (2000) proposed a normalized probability model. The normalization functional was chosen arbitrarily, and the consequences of this choice on the probabilistic interpretation were not evaluated.
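The claim about system (6) is easy to check numerically (a sketch on an arbitrary score grid; the helper names are ours):

```python
import math

def pos_part(u):
    return max(u, 0.0)

def normalization_defect(s, c0, c1):
    """How far the two candidate probabilities in (6) are from summing to one,
    as a function of the score s = f(x) + b."""
    p1 = math.exp(-(c0 + c1 * pos_part(1.0 - s)))   # candidate p(1|x)
    p0 = math.exp(-(c0 + c1 * pos_part(1.0 + s)))   # candidate 1 - p(1|x)
    return p1 + p0 - 1.0

# c0 = log 2, c1 = 0 normalizes for every score ...
for s in (-5.0, -1.0, 0.0, 1.0, 5.0):
    assert abs(normalization_defect(s, math.log(2.0), 0.0)) < 1e-12
# ... but any c1 > 0 breaks normalization once |s| > 1.
assert normalization_defect(3.0, math.log(2.0), 0.5) < -1e-3
```

The second assertion illustrates why the hinge loss cannot, by itself, be read as a marginal neg-log-likelihood.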
In what follows, we derive an imprecise mapping, with interval-valued estimates of probabilities, representing the set of all admissible semi-parametric formulations of SVM scores.

3.2 SVMs and Semi-Parametric Models

With the semi-parametric models of Section 2.2, one has to identify an affine function of the hinge loss with the two terms of (4). Compared to the previous situation, one has the freedom to define the slack functions ε− and ε+. The identification problem is now

g(x) + ε+(x) = exp(−(c₀ + c₁[1 − (f(x) + b)]₊))
1 − g(x) − ε−(x) = exp(−(c₀ + c₁[1 + (f(x) + b)]₊))
s.t.  0 ≤ g(x) + ε−(x) ≤ 1,  0 ≤ g(x) + ε+(x) ≤ 1,  ε−(x) ≤ ε+(x) .   (7)

Provided c₀ = 0 and 0 < c₁ ≤ log(2), there are functions g, ε− and ε+ such that the above problem has a solution. Hence, we obtain a set of probabilistic interpretations fully compatible with SVM scores. The solutions indexed by c₁ are nested, in the sense that, for any x, the length of the uncertainty interval, ε+(x) − ε−(x), is monotonically decreasing in c₁: the interpretation of SVM scores as posterior probabilities gets tighter as c₁ increases. The most restricted subset of admissible interpretations, with the shortest uncertainty intervals, is obtained for c₁ = log(2) and is represented in the left-hand graph of Figure 3. The loss incurred by a positive example is represented in the central graph, where the gray zone represents the neg-log-likelihood of all admissible solutions of g(x).

Figure 3: Left: lower (dashed) and upper (solid) posterior probabilities [g(x) + ε−(x), g(x) + ε+(x)] vs. SVM scores f(x) + b; center: corresponding neg-log-likelihood of g(x) for positive examples vs. f(x) + b; right: lower (dashed) and upper (solid) posterior probabilities vs. g(x), for g defined in (8).
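These nesting claims can be verified numerically (a sketch with c₀ = 0; the helper names and the score grid are ours):

```python
import math

def pos_part(u):
    return max(u, 0.0)

def interval(s, c1):
    """Admissible interval [g + eps-, g + eps+] implied by (7) with c0 = 0,
    as a function of the score s = f(x) + b."""
    upper = math.exp(-c1 * pos_part(1.0 - s))         # g(x) + eps+(x)
    lower = 1.0 - math.exp(-c1 * pos_part(1.0 + s))   # g(x) + eps-(x)
    return lower, upper

scores = [k / 4.0 - 3.0 for k in range(25)]           # grid on [-3, 3]

# With c1 = log 2 the interval is well defined at every score, and it shrinks
# to the single point 0.5 at the boundary s = 0 ("accurate at 0.5").
for s in scores:
    lo, up = interval(s, math.log(2.0))
    assert lo >= -1e-12 and up <= 1.0 + 1e-12 and lo <= up + 1e-12
lo0, up0 = interval(0.0, math.log(2.0))
assert abs(lo0 - 0.5) < 1e-12 and abs(up0 - 0.5) < 1e-12

# With c1 > log 2 the identification fails: lower exceeds upper at s = 0.
lo_bad, up_bad = interval(0.0, 1.0)
assert lo_bad > up_bad
```

This reproduces the left-hand graph of Figure 3 in tabular form: the interval is tight at the decision boundary and widens away from it.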
Note that the hinge loss is proportional to the neg-log-likelihood of the upper posterior probability g(x) + ε+(x), which is the loss for positive examples in the semi-parametric model (4). Conversely, the hinge loss for negative examples is reached for g(x) + ε−(x). An important observation, which will be useful in Section 4.2, is that the neg-log-likelihood of any admissible function g(x) is tangent to the hinge loss at f(x) + b = 0. The solution is unique in terms of the admissible interval [g + ε−, g + ε+], but many definitions of (ε−, ε+, g) solve (7). For example, g may be defined as

g(x; θ) = 2^{−[1−(f(x)+b)]₊} / ( 2^{−[1+(f(x)+b)]₊} + 2^{−[1−(f(x)+b)]₊} ) ,   (8)

which is essentially the posterior probability model proposed by Sollich (2000), represented dotted in the first two graphs of Figure 3. The last graph of Figure 3 displays the mapping from g(x) to admissible values of p(1|x) which results from the choice (8). Although the interpretation of SVM scores does not require specifying g, it may be worth listing some features common to all options. First, g(x) + ε−(x) = 0 for all g(x) below some threshold g₀ > 0, and conversely, g(x) + ε+(x) = 1 for all g(x) above some threshold g₁ < 1. These two features are responsible for the sparsity of the SVM solution. Second, the estimation of posterior probabilities is accurate at 0.5, and the length of the uncertainty interval on p(1|x) monotonically decreases on [g₀, 0.5] and then monotonically increases on [0.5, g₁]. Hence, the training objective of SVMs is intermediate between the accurate estimation of posterior probabilities on the whole range [0, 1] and the minimization of the classification risk.

4 Outcomes of the Probabilistic Interpretation

This section gives two consequences of our probabilistic interpretation of SVMs. Further outcomes, still reserved for future research, are listed in Section 6.
4.1 Pointwise Posterior Probabilities from SVM Scores

Platt (2000) proposed to estimate posterior probabilities from SVM scores by fitting a logistic function to the SVM scores. The only logistic function compatible with the most stringent interpretation of SVMs in the semi-parametric framework,

g(x; θ) = 1 / (1 + 4^{−(f(x)+b)}) ,                                  (9)

is identical to the model (8) of Sollich (2000) when f(x) + b lies in the interval [−1, 1]. Other logistic functions are compatible with the looser interpretations obtained by letting c₁ < log(2), but their use as pointwise estimates is questionable, since the associated confidence interval is wider. In particular, the looser interpretations do not ensure that f(x) + b = 0 corresponds to g(x) = 0.5; the decision function based on the posterior probabilities estimated by g(x) may then differ from the SVM decision function. Being based on an arbitrary choice of g(x), pointwise estimates of posterior probabilities derived from SVM scores should be handled with caution. As discussed by Zhang (2004), they may only be consistent at f(x) + b = 0, where they may converge towards 0.5.

4.2 Unbalanced Classification Losses

SVMs are known to perform well with respect to misclassification error, but they provide skewed decision boundaries for unbalanced classification losses, where the losses associated with incorrect decisions differ according to the true label. The mainstream approach to this problem consists in using different losses for positive and negative examples (Morik et al. 1999, Veropoulos et al. 1999), i.e.,

min_{f,b}  (1/2)‖f‖²_H + C₊ ∑_{i | y_i=1} [1 − (f(x_i) + b)]₊ + C₋ ∑_{i | y_i=−1} [1 + (f(x_i) + b)]₊ ,   (10)

where the coefficients C₊ and C₋ are constants whose ratio equals the ratio of the losses ℓ_FN and ℓ_FP pertaining to false negatives and false positives, respectively (Lin et al. 2002).² Bayes' decision theory defines the optimal decision rule as classifying positive when P(y = 1|x) > P₀, where P₀ = ℓ_FP / (ℓ_FP + ℓ_FN).
We may thus rewrite C₊ = C · (1 − P₀) and C₋ = C · P₀. With such definitions, the optimization problem may be interpreted as an upper bound on the classification risk defined from ℓ_FN and ℓ_FP. However, the machinery of Section 3.2 unveils a major problem: the SVM decision function sign(f(x) + b) is not consistent with the probabilistic interpretation of SVM scores. We address this problem by deriving another criterion, requiring that the neg-log-likelihood of any admissible function g(x) be tangent to the hinge loss at f(x) + b = 0. This leads to the following problem:

min_{f,b}  (1/2)‖f‖²_H + C ( ∑_{i | y_i=1} [−log(P₀) − (1 − P₀)(f(x_i) + b)]₊ + ∑_{i | y_i=−1} [−log(1 − P₀) + P₀(f(x_i) + b)]₊ ) .   (11)

² False negatives/positives respectively designate positive/negative examples incorrectly classified.

Figure 4: Left: lower (dashed) and upper (solid) posterior probabilities [g(x) + ε−(x), g(x) + ε+(x)] vs. SVM scores f(x) + b obtained from (11) with P₀ = 0.25; center: corresponding neg-log-likelihood of g(x) for positive examples vs. f(x) + b; right: lower (dashed) and upper (solid) posterior probabilities vs. g(x), for g defined by ε+(x) = 0 for f(x) + b ≤ 0 and ε−(x) = 0 for f(x) + b ≥ 0.

This loss differs from (10) in that the margin for positive examples is smaller than the one for negative examples when P₀ < 0.5. In particular, (10) does not affect the SVM solution for separable problems, while in (11) the decision boundary moves towards positive support vectors when P₀ decreases. The analogue of Figure 3, displayed in Figure 4, shows that one recovers the characteristics of the standard SVM loss, except that the focus is now on the posterior probability P₀ defined by Bayes' decision rule.

5 Experiments with Unbalanced Classification Losses

It is straightforward to implement (11) in standard SVM packages.
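Indeed, the per-example costs are simple to write down; a minimal sketch (helper names are ours, not the authors' implementation; the overall factor C is omitted):

```python
import math

def loss_eq10(s, y, P0):
    """Per-example cost in (10), with C+ = 1 - P0 and C- = P0; s = f(x) + b."""
    if y == 1:
        return (1.0 - P0) * max(0.0, 1.0 - s)
    return P0 * max(0.0, 1.0 + s)

def loss_eq11(s, y, P0):
    """Per-example cost in (11); s = f(x) + b."""
    if y == 1:
        return max(0.0, -math.log(P0) - (1.0 - P0) * s)
    return max(0.0, -math.log(1.0 - P0) + P0 * s)

P0 = 0.25
# At the boundary s = 0, (11) charges -log(P0) to positives and -log(1 - P0)
# to negatives: the neg-log-likelihood of the posterior at Bayes' threshold.
assert abs(loss_eq11(0.0, 1, P0) + math.log(P0)) < 1e-12
assert abs(loss_eq11(0.0, -1, P0) + math.log(1.0 - P0)) < 1e-12
# For P0 = 0.5 the two classes are treated symmetrically, as in the plain SVM.
assert loss_eq11(0.0, 1, 0.5) == loss_eq11(0.0, -1, 0.5)
```

Both losses vanish for confidently correct scores, so either can replace the standard hinge in a generic SVM solver; they differ only in how the kinks are placed relative to P₀.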
For experimenting with difficult unbalanced two-class problems, we used the Forest database, the largest available UCI dataset (http://kdd.ics.uci.edu/databases/covertype/). We consider the subproblem of discriminating the positive class Krummholz (20,510 examples) against the negative class Spruce/Fir (211,840 examples). The ratio of negative to positive examples is high, a feature commonly encountered with unbalanced classification losses. The training set was built by random selection of size 11,000 (1,000 and 10,000 examples from the positive and negative class, respectively); a validation set of size 11,000 was drawn identically among the other examples; finally, the test set, of size 99,000, was drawn among the remaining examples. Performance was measured by the weighted risk R = (1/n)(N_FN ℓ_FN + N_FP ℓ_FP), where N_FN and N_FP are the numbers of false negatives and false positives, respectively. The loss ℓ_FP was set to one, and ℓ_FN was successively set to 1, 10 and 100, in order to penalize more and more heavily errors on the under-represented class. All approaches were tested using SVMs with a Gaussian kernel on normalized data. The hyper-parameters were tuned on the validation set for each of the ℓ_FN values. We additionally considered three tunings of the bias b: b̂ is the bias returned by the algorithm; b̂_v is the bias obtained by minimizing R on the validation set, which is an optimistic estimate of the bias that could be computed by cross-validation. We also provide results for b*, the optimal bias computed on the test set. This "crystal ball" tuning may not represent an achievable goal, but it shows how far we are from the optimum. Table 1 compares the risk R obtained with the three approaches for the different values of ℓ_FN. The first line, with ℓ_FN = 1, corresponds to the standard classification error, where all training criteria are equivalent in theory and in practice. The bias returned by the algorithm is very close to the optimal one.
For ℓ_FN = 10 and ℓ_FN = 100, the models obtained by optimizing C₊/C₋ (10) and P₀ (11) achieve better results than the baseline, even with the crystal-ball bias. While the solutions returned by C₊/C₋ can be significantly improved by tuning the bias, our criterion provides results that are very close to the optimum, in the range of the performances obtained with the bias optimized on an independent validation set. The new optimization criterion can thus outperform standard approaches for highly unbalanced problems.

Table 1: Errors for 3 different criteria and 3 different models on the Forest database.

        Baseline, problem (5)   C₊/C₋, problem (10)     P₀, problem (11)
ℓ_FN       b̂       b*            b̂      b̂_v     b*       b̂      b̂_v     b*
1        0.027   0.026         0.027   0.027   0.026    0.027   0.027   0.026
10       0.167   0.108         0.105   0.104   0.094    0.095   0.104   0.094
100      1.664   0.406         0.403   0.291   0.289    0.295   0.291   0.289

6 Conclusion

This paper introduced a semi-parametric model for classification which provides an interesting viewpoint on SVMs. The non-parametric component provides an intuitive means of transforming the likelihood into a decision-oriented criterion. This framework was used here to propose a new parameterization of the hinge loss, dedicated to unbalanced classification problems, yielding significant improvements over the classical procedure. Among other prospects, we plan to apply the same framework to investigate hinge-like criteria for decision rules including a reject option, where the classifier abstains when a pattern is ambiguous. We also aim at defining losses encouraging sparsity in probabilistic models, such as kernelized logistic regression. We could thus build sparse probabilistic classifiers, providing an accurate estimation of posterior probabilities on a (limited) predefined range of posterior probabilities. In particular, we could derive decision-oriented criteria for multi-class probabilistic classifiers.
For example, minimizing classification error only requires finding the class with the highest posterior probability, and this search does not require precise estimates of probabilities outside the interval [1/K, 1/2], where K is the number of classes.

References

Y. Lin, Y. Lee, and G. Wahba. Support vector machines for classification in non-standard situations. Machine Learning, 46:191-202, 2002.
B. G. Lindsay. Nuisance parameters. In S. Kotz, C. B. Read, and D. L. Banks, editors, Encyclopedia of Statistical Sciences, volume 6. Wiley, 1985.
G. J. McLachlan. Discriminant Analysis and Statistical Pattern Recognition. Wiley, 1992.
K. Morik, P. Brockhausen, and T. Joachims. Combining statistical learning with a knowledge-based approach - a case study in intensive care monitoring. In Proceedings of ICML, 1999.
D. Oakes. Semi-parametric models. In S. Kotz, C. B. Read, and D. L. Banks, editors, Encyclopedia of Statistical Sciences, volume 8. Wiley, 1988.
J. C. Platt. Probabilities for SV machines. In A. J. Smola, P. L. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 61-74. MIT Press, 2000.
P. Sollich. Probabilistic methods for support vector machines. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 349-355, 2000.
K. Veropoulos, C. Campbell, and N. Cristianini. Controlling the sensitivity of support vector machines. In T. Dean, editor, Proceedings of the IJCAI, pages 55-60, 1999.
T. Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, 32(1):56-85, 2004.
Nonparametric inference of prior probabilities from Bayes-optimal behavior

Liam Paninski*
Department of Statistics, Columbia University
liam@stat.columbia.edu; http://www.stat.columbia.edu/~liam

Abstract

We discuss a method for obtaining a subject's a priori beliefs from his/her behavior in a psychophysics context, under the assumption that the behavior is (nearly) optimal from a Bayesian perspective. The method is nonparametric in the sense that we do not assume that the prior belongs to any fixed class of distributions (e.g., Gaussian). Despite this increased generality, the method is relatively simple to implement, being based in the simplest case on a linear programming algorithm, and more generally on a straightforward maximum likelihood or maximum a posteriori formulation, which turns out to be a convex optimization problem (with no non-global local maxima) in many important cases. In addition, we develop methods for analyzing the uncertainty of these estimates. We demonstrate the accuracy of the method in a simple simulated coin-flipping setting; in particular, the method is able to precisely track the evolution of the subject's posterior distribution as more and more data are observed. We close by briefly discussing an interesting connection to recent models of neural population coding.

Introduction

Bayesian methods have become quite popular in psychophysics and neuroscience (1-5); in particular, a recent trend has been to interpret observed biases in perception and/or behavior as optimal, in a Bayesian (average) sense, under ecologically-determined prior distributions on the stimuli or behavioral contexts under study. For example, (2) interpret visual motion illusions in terms of a prior weighted towards slow, smooth movements of objects in space.
In an experimental context, it is clearly desirable to empirically obtain estimates of the prior the subject is operating under; the idea would be to then compare these experimental estimates of the subject's prior with the ecological prior he or she "should" have been using. Conversely, such an approach would have the potential to establish that the subject is not behaving Bayes-optimally under any prior, but rather is in fact using a different, non-Bayesian strategy. Such tools would also be quite useful in the context of studies of learning and generalization, in which we would like to track the time course of a subject's adaptation to an experimentally-chosen prior distribution (5). Such estimates of the subject's prior have in the past been rather qualitative, and/or limited to simple parametric families (e.g., the width of a Gaussian may be fit to the experimental data, but the actual Gaussian identity of the prior is not examined systematically). We present a more quantitative method here. We first discuss the method in the general case of an arbitrarily-chosen loss function (the "cost" which we assume the subject is attempting to minimize, on average), then examine a few special important cases (e.g., mean-square and mean-absolute error) in which the technique may be simplified somewhat. The algorithms for determining the subject's prior distributions turn out to be surprisingly quick and easy to code: the basic idea is that each observed stimulus-response pair provides a set of constraints on what the actual prior could be.

* We thank N. Daw, P. Hoyer, S. Inati, K. Koerding, I. Nemenman, E. Simoncelli, A. Stocker, and D. Wolpert for helpful suggestions, and in particular P. Dayan for pointing out the connection to neural population coding models. This work was supported by funding from the Howard Hughes Medical Institute, the Gatsby Charitable Trust, and a Royal Society International Fellowship.
In the simplest case, these constraints are linear, and the resulting algorithm is simply a version of linear programming, for which very efficient algorithms exist. More generally, the constraints are probabilistic, and we discuss likelihood-based methods for combining these noisy constraints (and in particular when the resulting maximum likelihood, or maximum a posteriori, problem can be solved efficiently via ascent methods, without fear of getting trapped in non-global local maxima). Finally, we discuss Bayesian methods for representing the uncertainty in our estimates. We should point out that related problems have appeared in the statistics literature, particularly under the subject of elicitation of expert opinion (6-8); in the machine learning literature, most recently in the area of "inverse reinforcement learning" (9); and in the economics/game-theory literature on utility learning (10). The experimental economics literature in particular is quite vast (where the relevance to gambling, price setting, etc. is discussed at length, particularly in settings in which "rational", expected utility-maximizing, behavior seems to break down); see, e.g., Wakker's recent bibliography (www1.fee.uva.nl/creed/wakker/refs/rfrncs.htm) for further references. Finally, it is worth noting that the question of determining a subject's (or more precisely, an opponent's) priors in a gambling context (in particular, in the binary case of whether or not an opponent will accept a bet, given a fixed table of outcomes vs. payoffs) has received attention going back to the foundations of decision theory, most prominently in the discussions of de Finetti and Savage. Nevertheless, we are unaware of any previous application of similar techniques (both for estimating a subject's true prior and for analyzing the uncertainty associated with these estimates) in the psychophysical or neuroscience literature.
General case

Our technique for determining the subject's prior is based on several assumptions (some of which will be relaxed below). To begin, we assume that the subject is behaving optimally in a Bayesian sense. To be precise, we have four ingredients: a prior distribution on some hidden parameter θ; observed input (stimulus) data, dependent in some probabilistic way on θ; the subject's corresponding output estimates of the underlying θ, given the input data; and finally a loss function D(·,·) that penalizes bad estimates of θ. The fundamental assumption is that, on each trial i, the subject is choosing the estimate θ̂_i of the underlying parameter, given data x_i, to minimize the posterior average error

∫ p(θ|x_i) D(θ̂_i, θ) dθ  ∝  ∫ p(θ) p(x_i|θ) D(θ̂_i, θ) dθ ,          (1)

where p(θ) is the prior on hidden parameters (the unknown object the experimenter is trying to estimate), and p(x_i|θ) is the likelihood of data x_i given θ. For example, in the visual motion example, θ could be the true underlying velocity of an object moving through space, the observed data x_i could be a short, noise-contaminated movie of the object's motion, and the subject would be asked to estimate the true motion θ given the data x_i and any prior conceptions, p(θ), of how one expects objects to move. Note that we have also implicitly assumed, in this simplest case, that both the loss D(·,·) and the likelihood functions p(x_i|θ) are known, both to the subject and to the experimenter (perhaps from a preceding set of "learning" trials). So how can the experimenter actually estimate p(θ), given the likelihoods p(x|θ), the loss function D(·,·), and some set of data {x_i} with corresponding estimates {θ̂_i} minimizing the posterior expected loss (1)? This turns out to be a linear programming problem (11), for which very efficient algorithms exist (e.g., "linprog.m" in Matlab). To see why, first note that the right-hand side of expression (1) is linear in the prior p(θ).
Second, we have a large collection of linear constraints on p(θ): we know that

p(θ) ≥ 0  ∀θ                                                        (2)
∫ p(θ) dθ = 1                                                       (3)
∫ p(θ) p(x_i|θ) [ D(θ̂_i, θ) − D(z, θ) ] dθ ≤ 0  ∀z                  (4)

where (2)-(3) are satisfied by any proper prior distribution and (4) is the optimality condition (1) expressed in slightly different language. (See also (10), who noted the same linear programming structure in an application to cost function estimation, rather than the prior estimation examined here.) The solution to the linear programming problem defined by (2)-(4) isn't necessarily unique; it corresponds to an intersection of half-spaces, which is convex in general. To come up with a unique solution, we could maximize a concave "regularizing" function on this convex set; possible such functions include, e.g., the entropy of p(θ), or its negative mean-square derivative (this function is strictly concave on the space of all functions whose integral is held fixed, as is the case here given constraint (3)); more generally, if we have some prior information on the form of the priors the subject might be using, and this information can be expressed in the "energy" form P[p(θ)] ∝ e^{q[p(θ)]}, for a concave functional q[·], we could use the log of this "prior on priors" P. An alternative solution would be to modify constraint (4) to

∫ p(θ) p(x_i|θ) [ D(θ̂_i, θ) − D(z, θ) ] dθ ≤ −ǫ  ∀z ,

where we can then adjust the slack variable ǫ until the constraint set shrinks to a single point. This leads directly to another linear programming problem (where we want to make the linear function ǫ as large as possible, under the above constraints). Note that for this last approach to work (for the linear programming problem to have a solution) we need to ensure that the set defined by the constraints (2)-(4) is compact; this basically means that the constraint set (4) needs to be sufficiently rich, which, in turn, means that sufficient data (or sufficiently strong prior constraints) are required. We will return to this point below.
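Rather than calling an LP solver, the constraint set (2)-(4) can be illustrated on a small grid (a pure-Python sketch; the Gaussian observation model and squared-error loss are assumed example choices, not prescribed by the text):

```python
import math

GRID = [k / 10.0 for k in range(11)]   # discretized theta in [0, 1]

def likelihood(x, theta):
    """Assumed example observation model: Gaussian noise around theta."""
    return math.exp(-0.5 * ((x - theta) / 0.2) ** 2)

def expected_loss(prior, x, z):
    """Prior-weighted expected squared-error loss of reporting z after seeing x."""
    return sum(p * likelihood(x, th) * (z - th) ** 2
               for p, th in zip(prior, GRID))

def satisfies_constraints(prior, trials, tol=1e-9):
    """Check (2): p >= 0, (3): sums to one, (4): each report minimizes the loss."""
    if any(p < -tol for p in prior) or abs(sum(prior) - 1.0) > 1e-6:
        return False
    return all(expected_loss(prior, x, est) <= expected_loss(prior, x, z) + tol
               for x, est in trials for z in GRID)

uniform = [1.0 / len(GRID)] * len(GRID)
# Under a uniform prior with squared error the optimum is the posterior mean,
# so the report 0.5 after observing x = 0.5 is consistent ...
assert satisfies_constraints(uniform, [(0.5, 0.5)])
# ... while a strongly biased report rules the uniform prior out.
assert not satisfies_constraints(uniform, [(0.5, 0.9)])
```

Each stimulus-response pair carves away candidate priors in exactly this way; the LP formulation simply searches the surviving convex set.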
Finally, what if our primary assumption is not met? That is, what if subjects are not quite behaving optimally with respect to p(θ)? It is possible to detect this situation in the above framework, for example if the slack variable ǫ above is found to be negative. However, a different, more probabilistic viewpoint can be taken. Assume the value of the choice θ̂_i is optimal under some "comparison" noise, that is,

∫ p(θ) p(x_i|θ) [ D(θ̂_i, θ) − D(z, θ) ] dθ ≤ σ η_i(z)  ∀z ,

with η_i(z) a random variable of scale σ > 0 (assume η to be i.i.d. for now, although this may be generalized). If we assume this decision noise η has a log-concave density (i.e., the log of the density is a concave function; e.g., Gaussian, or exponential), then so does its integral (12), and the resulting maximum likelihood problem has no non-global local maxima and is therefore solvable by ascent methods. To see this, write the log-likelihood of (p, σ) given data {x_i, θ̂_i} as

L_{x_i, θ̂_i}(p, σ) = ∑ log ∫_{−∞}^{u_i(z)} dP(η) ,

with the sum over the set of all the constraints in (4), and

u_i(z) ≡ (1/σ) ∫ p(θ) p(x_i|θ) [ D(θ̂_i, θ) − D(z, θ) ] dθ .

L is a sum of concave functions of the u_i, and hence is concave itself, with no non-global local maxima in these variables; since σ and p are linearly related through u_i (and (p, σ) live in a convex set), L has no non-global local maxima in (p, σ) either. Once again, this maximum likelihood problem may be regularized by prior information¹, maximizing the a posteriori likelihood L(p) − q[p] instead of L(p); this problem is similarly tractable by ascent methods, by the concavity of −q[·] (note that this "soft-constraint" problem reduces exactly to the "hard"-constraint problem (4) as the noise σ → 0)².
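The log-concavity argument can be illustrated for Gaussian noise, where each per-constraint term is log Φ(u) with Φ the Gaussian CDF (a numerical midpoint-concavity check on a grid, not a proof):

```python
import math

def log_ncdf(u):
    """log of the standard Gaussian CDF: the log of a log-concave density's
    integral, which should itself be concave in u."""
    return math.log(0.5 * (1.0 + math.erf(u / math.sqrt(2.0))))

# Midpoint-concavity check:
# log Phi((a + b) / 2) >= (log Phi(a) + log Phi(b)) / 2.
pts = [-3.0, -1.5, 0.0, 1.5, 3.0]
for a in pts:
    for b in pts:
        mid = log_ncdf(0.5 * (a + b))
        assert mid >= 0.5 * (log_ncdf(a) + log_ncdf(b)) - 1e-12
```

Since each u_i is an affine function of the optimization variables, the sum of such concave terms stays concave, which is what makes the ascent methods safe.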
Note that the estimated value of the noise scale σ plays a similar role to that of the slack variable ǫ above, with the difference that ǫ can be much more sensitive to the worst trial (that is, the trial on which the subject behaves most suboptimally); we can use either of these slack variables to go back and ask how close to optimally the subjects were actually performing: large values of σ, for example, imply sub-optimal performance. An additional interesting idea is to use the computed value of η as a kind of outlier test; a large η implies the trial was particularly suboptimal.

Special cases

Maximum a posteriori estimation: The maximum a posteriori (MAP) estimator corresponds to the Hamming distance loss function D(i, j) = 1(i ≠ j); this implies that the constraints (4) take the simple form

p(θ̂_i) − p(z) L(θ̂_i, z) ≥ 0 ,

with L(θ̂_i, z) defined as the largest observed likelihood ratio for θ̂_i and z, that is,

L(θ̂_i, z) ≡ max_{x_i} p(x_i|z) / p(x_i|θ̂_i) ,

¹ Overfitting here is a symptom of the fact that in some cases, particularly when few data samples have been observed, many priors (even highly implausible priors) can explain the observed data fairly well; in this case, it is often quite useful to penalize these "implausible" priors, thus effectively regularizing our estimates. Similar observations have appeared in the context of medical applications of Markov random field methods (13).

² Another possible application of this regularization idea is as follows. We may incorporate improper priors, that is, priors which may not integrate to unity (such priors frequently arise in the analysis of reparameterization-invariant decision procedures, for example), without any major conceptual modification in our analysis, simply by removing the normalization constraint (3). However, a problem arises: the zero measure, p(θ) ≡ 0, will always trivially satisfy the remaining constraints (2) and (4).
This problem could potentially be ameliorated by introducing a convex regularizing term (or equivalently, a log-concave prior) on the total mass ∫ p(θ) dθ.

The maximum above is taken over all x_i which led to the estimate θ̂_i. This setup is perhaps most appropriate for a two-alternative forced-choice situation, where the problem is one of classification or discrimination, not estimation.

Mean-square and absolute-error regression: Our discussion assumes an even simpler form when the loss function D(·,·) is taken to be squared error, D(x, y) = (x − y)², or absolute error, D(x, y) = |x − y|. In this case it is convenient to work with a slightly different noise model than the classification noise discussed above; instead, we may model the subject's responses as optimal plus estimation noise. For squared error, the optimal θ̂_i is known to be uniquely defined as the conditional mean of θ given x_i. Thus we may replace the collection of linear inequality constraints (4) with a much smaller set of linear equalities (a single equality per trial, instead of a single inequality per trial per z):

∫ p(x_i|θ) (θ − θ̂_i) p(θ) dθ = σ η_i ;                               (5)

the corresponding likelihood, again, has no non-global local maxima if η has a log-concave density. In the simplest case of Gaussian η, the maximum likelihood problem may be solved by standard nonnegative least squares (e.g., "lsqnonneg" or "quadprog" in Matlab). In the absolute-error case, the optimal θ̂_i is given by the conditional median of θ given x_i (although recall that the median is not necessarily unique here); thus, the inequality constraints (4) may again be replaced by equalities which are linear in p(θ):

∫_{−∞}^{θ̂_i} p(θ) p(x_i|θ) dθ − ∫_{θ̂_i}^{∞} p(θ) p(x_i|θ) dθ = σ η_i ;

again, for Gaussian η this may be solved via standard nonnegative regression, albeit with a different constraint matrix. In each case, η_i retains its utility as an outlier score.
A worked example: learning the fairness of a coin

In this section we will work through a concrete example, to show how to put the ideas discussed above into practice. We take perhaps the simplest possible example, for clarity: the subject observes some number N of independent, identically distributed coin flips, and on each trial i tells us his/her probability of observing tails on the next trial, given that t = t(i) tails were observed in the first i trials (footnote 3). Here the likelihood functions $p(x_i \mid \theta)$ take the standard binomial form

$$p(t(i) \mid p_{\mathrm{tails}}) = \binom{i}{t} p_{\mathrm{tails}}^{t} (1 - p_{\mathrm{tails}})^{i-t}$$

(note that it is reasonable to assume that these likelihoods are known to the subject, at least approximately, due to the ubiquity of binomial data). Under our assumptions, the subject's estimates $\hat{p}_{\mathrm{tails},i}$ are given as the posterior mean of $p_{\mathrm{tails}}$ given the number of tails observed up to trial i. This puts us directly in the mean-square framework discussed in equation (5); we assume Gaussian estimation noise η, and construct a regression matrix A of N rows, with the i-th row given by $p(t(i) \mid p_{\mathrm{tails}})\,(p_{\mathrm{tails}} - \hat{p}_{\mathrm{tails},i})$. To regularize our estimates, we add a small square-difference penalty of the form $q[p] = \int |dp(\theta)/d\theta|^2\, d\theta$. Finally, we estimate

$$\hat{p}(\theta) = \arg\min_{p \geq 0,\; \int_0^1 p(\theta)\, d\theta = 1} \|Ap\|_2^2 + \epsilon\, q[p],$$

for ε ≈ 10⁻⁷; this estimate is equivalent to MAP estimation under a (weak) Gaussian prior on the function p(θ) (truncated so that p(θ) ≥ 0), and is computed using quadprog.m.

Footnote 3: We note in passing that this simple binomial paradigm has potential applications to ideal-observer analysis of classical neuroscientific tasks (e.g., synaptic release detection, or photon counting in the retina), in addition to potential applications in psychophysics.
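The forward model in this coin example (an ideal Bayesian observer reporting posterior means over a discretized fairness grid) can be simulated directly; the grid and prior used below are illustrative choices, not the simulation settings used in the figures:

```python
import numpy as np
from math import comb

def simulate_coin_subject(prior, grid, flips):
    """Simulate an ideal observer reporting posterior means of p_tails.

    prior -- discretized prior over p_tails on `grid` (sums to 1)
    flips -- sequence of 0/1 outcomes (1 = tails)
    Returns the binomial likelihood rows p(t(i) | p_tails) and the
    reports (posterior means after each trial).
    """
    liks, reports = [], []
    t = 0
    for i, flip in enumerate(flips, start=1):
        t += flip
        # binomial likelihood of seeing t tails in i flips, on the grid
        lik = comb(i, t) * grid ** t * (1.0 - grid) ** (i - t)
        post = lik * prior
        post /= post.sum()
        liks.append(lik)
        reports.append(float((post * grid).sum()))  # posterior mean
    return np.array(liks), np.array(reports)
```

The likelihood rows and reports returned here are exactly the quantities needed to build the regression matrix A of the estimation step above.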
Figure 1: Learning the fairness of a coin (numerical simulation). Top panel: True prior distribution on coin fairness. The bimodal nature of this prior indicates that the subject expects coins to be unfair (skewed towards heads, $p_{\mathrm{tails}} < .5$, or tails, $p_{\mathrm{tails}} > .5$) more often than fair ($p_{\mathrm{tails}} = .5$). Second: Observed data. Open circles indicate the fraction of observed tails t = t(i) as a function of trial number i (the maximum likelihood estimate, MLE, of the fairness and a minimal sufficient statistic for this problem); + symbols indicate the subject's estimate of the coin's fairness, assumed to correspond to the posterior mean of the fairness under the subject's prior. Note the systematic deviations of the subject's estimate from the MLE; these deviations shrink as i increases and the strength of the prior relative to the likelihood term decreases. Third: Binomial likelihood terms $\binom{i}{t} p_{\mathrm{tails}}^{t} (1 - p_{\mathrm{tails}})^{i-t}$. Color of trace corresponds to trial number i, as indicated in the previous panel (traces are normalized for clarity). Fourth: Estimate of the prior given 150 trials. Black trace indicates the true prior (as in top panel); red indicates the estimate ±1 posterior standard error (computed via importance sampling). Bottom: Tracking the evolution of the posterior. Black traces indicate the subject's true posterior after observing 0 (thin trace), 50 (medium trace), and 100 (thick trace) sample coin flips; as more data are observed, the subject becomes more and more confident about the true fairness of the coin (p = .5), and the posteriors match the likelihood terms (cf. third panel) more closely. Red traces indicate the estimated posterior given the full 150 or just the last 100 or 50 trials, respectively (error bars omitted for visibility). Note that the procedure tracks the evolution of the subject's posterior quite accurately, given relatively few trials.
To place Bayesian confidence intervals around our estimate, we sample from the corresponding (truncated) Gaussian posterior distribution on p(θ) (via importance sampling with a suitably shifted, rescaled truncated Gaussian proposal density; similar methods are applicable more generally in the non-Gaussian case via the usual posterior approximation techniques, e.g., the Laplace approximation). Figs. 1-2 demonstrate the accuracy of the estimated $\hat{p}(\theta)$; in particular, the bottom panels show that the method accurately tracks the evolution of the model subjects' posteriors as an increasing amount of data are observed.

Figure 2: Learning an unfair coin ($p_{\mathrm{tails}} = .25$). Conventions as in Fig. 1.

Connection to neural population coding

It is interesting to note a connection to the neural population coding model studied in (14) (with more recent work reviewed in (15)). The basic idea is that neural populations encode not just stimuli, but probability distributions over stimuli (where the distribution describes the uncertainty in the state of the encoded object). Here the experimentally observed data are neural firing rates, which provide constraints on the underlying encoded "prior" distribution in terms of the individual tuning function of each cell in the observed population.
The simplest model is as follows: the observed spikes $n_i$ from the i-th cell are Poisson-distributed, with rate a nonlinear function of a linear functional of some prior distribution,

$$n_i \sim \mathrm{Poiss}\!\left( g\!\left( \int p(\theta)\, f(x_i, \theta)\, d\theta \right) \right),$$

where the kernel f is considered as the cell's "tuning function"; the log-concavity of the likelihood of p is preserved for any nonlinearity g that is convex and log-concave, a class including linear rectifiers, exponentials, and power laws (and studied more extensively in (16)). Alternately, a simplified model is often used, in which the density of $n_i$ is

$$q\!\left( \frac{n_i - \int p(\theta)\, f(x_i, \theta)\, d\theta}{\sigma} \right),$$

with q a log-concave density (typically Gaussian), to preserve the concavity of the log-likelihood; in this case, the scale σ of the noise does not vary with the mean firing rate, as it does in the Poisson model. In both cases, the observed firing rates act as constraints oriented linearly with respect to p; in the latter case, the noise scale σ sets the strength, or confidence, of each such constraint (2, 3). Thus, under this framework, given the simultaneously recorded activity of many cells $\{n_i\}$ and some model for the tuning functions $f(x_i, \theta)$, we can infer p(θ) (and represent the uncertainty in these estimates) using methods quite similar to those developed above.

Directions

The obvious open avenue for future research (aside from application to experimental data) is to relax the assumptions: that the likelihood and cost function are both known, and that the data are observed directly (without any noise). It seems fair to conjecture that the subject can learn the likelihood and cost functions given enough data, but one would like to test this directly, e.g., by estimating $D(\cdot, \cdot)$ and p together, perhaps under restrictions on the form of $D(\cdot, \cdot)$.
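Under the simplified (Gaussian-noise) population-coding model above, inferring the encoded distribution from observed rates is again a nonnegative linear regression. A hypothetical sketch, with the discretization and the heavily weighted normalization row as implementation assumptions:

```python
import numpy as np
from scipy.optimize import nnls

def decode_distribution(rates, tuning, dtheta, norm_weight=1e3):
    """Decode an encoded distribution p(theta) from firing rates.

    Simplified model: rates[i] ~ sum_j tuning[i, j] * p_j * dtheta + noise,
    where tuning[i, j] = f(x_i, theta_j) is cell i's tuning function on a
    theta grid. Each observed rate is one linear constraint on p.
    """
    A = np.vstack([tuning * dtheta,
                   norm_weight * dtheta * np.ones(tuning.shape[1])])
    b = np.concatenate([rates, [norm_weight]])
    p, _ = nnls(A, b)
    return p
```

Structurally this is the same regression as in the prior-estimation problem; only the constraint matrix (tuning curves rather than likelihood-weighted deviations) changes.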
As emphasized above, the utility estimation problem has received a great deal of attention, and it is plausible to expect that the methods proposed here for estimation of the prior might be combined with previously studied methods for utility elicitation and estimation. It is also interesting to consider these elicitation methods in the context of experimental design (8, 17, 18), in which we might actively seek stimuli $x_i$ to maximally constrain the possible form of the prior and/or cost function.

References

1. D. Knill, W. Richards, eds., Perception as Bayesian Inference (Cambridge University Press, 1996).
2. Y. Weiss, E. Simoncelli, E. Adelson, Nature Neuroscience 5, 598 (2002).
3. Y. Weiss, D. Fleet, Statistical Theories of the Cortex (MIT Press, 2002), chap. Velocity likelihoods in biological and machine vision, pp. 77-96.
4. D. Kersten, P. Mamassian, A. Yuille, Annual Review of Psychology 55, 271 (2004).
5. K. Koerding, D. Wolpert, Nature 427, 244 (2004).
6. R. Hogarth, Journal of the American Statistical Association 70, 271 (1975).
7. J. Oakley, A. O'Hagan, Biometrika, under review (2003).
8. P. Garthwaite, J. Kadane, A. O'Hagan, Handbook of Statistics (2004), chap. Elicitation.
9. A. Ng, S. Russell, ICML-17 (2000).
10. J. Blythe, AAAI-02 (2002).
11. G. Strang, Linear Algebra and Its Applications (Harcourt Brace, New York, 1988).
12. Y. Rinott, Annals of Probability 4, 1020 (1976).
13. M. Henrion, et al., Why is diagnosis using belief networks insensitive to imprecision in probabilities?, Tech. Rep. SMI-96-0637, Stanford (1996).
14. R. Zemel, P. Dayan, A. Pouget, Neural Computation 10, 403 (1998).
15. A. Pouget, P. Dayan, R. Zemel, Annual Review of Neuroscience 26, 381 (2003).
16. L. Paninski, Network: Computation in Neural Systems 15, 243 (2004).
17. K. Chaloner, I. Verdinelli, Statistical Science 10, 273 (1995).
18. L. Paninski, Advances in Neural Information Processing Systems 16 (2003).
Oblivious Equilibrium: A Mean Field Approximation for Large-Scale Dynamic Games

Gabriel Y. Weintraub, Lanier Benkard, and Benjamin Van Roy
Stanford University
{gweintra,lanierb,bvr}@stanford.edu

Abstract

We propose a mean-field approximation that dramatically reduces the computational complexity of solving stochastic dynamic games. We provide conditions that guarantee our method approximates an equilibrium as the number of agents grows. We then derive a performance bound to assess how well the approximation performs for any given number of agents. We apply our method to an important class of problems in applied microeconomics. We show with numerical experiments that we are able to greatly expand the set of economic problems that can be analyzed computationally.

1 Introduction

In this paper we consider a class of infinite-horizon, non-zero-sum stochastic dynamic games. At each period of time, each agent has a given state and can make a decision. These decisions, together with random shocks, determine the evolution of the agents' states. Additionally, agents receive profits depending on the current states and decisions. There is a literature on such models which focuses on computation of Markov perfect equilibria (MPE) using dynamic programming algorithms. A major shortcoming of this approach, however, is the computational complexity associated with solving for the MPE. When there are more than a few agents participating in the game and/or more than a few states per agent, the curse of dimensionality renders dynamic programming algorithms intractable. In this paper we consider a class of stochastic dynamic games where the state of an agent captures its competitive advantage. Our main motivation is to consider an important class of models in applied economics, namely, dynamic industry models of imperfect competition. However, we believe our methods can be useful in other contexts as well.
To clarify the type of models we consider, let us describe a specific example of a dynamic industry model. Consider an industry where a group of firms can invest to improve the quality of their products over time. The state of a given firm represents its quality level. The evolution of quality is determined by investment and random shocks. Finally, at every period, given their qualities, firms compete in the product market and receive profits. Many real-world industries where, for example, firms invest in R&D or advertising are well described by this model. In this context, we propose a mean-field approximation approach that dramatically reduces the computational complexity of stochastic dynamic games. We propose a simple algorithm for computing an "oblivious" equilibrium in which each agent is assumed to make decisions based only on its own state and knowledge of the long-run equilibrium distribution of states, but where agents ignore current information about rivals' states. We prove that, if the distribution of agents obeys a certain "light-tail" condition, then as the number of agents becomes large the oblivious equilibrium approximates an MPE. We then derive an error bound that is simple to compute, to assess how well the approximation performs for any given number of agents. We apply our method to analyze dynamic industry models of imperfect competition. We conduct numerical experiments that show that our method works well when there are several hundred firms, and sometimes even tens of firms. Our method, which uses simple code that runs in a couple of minutes on a laptop computer, greatly expands the set of economic problems that can be analyzed computationally.

2 A Stochastic Dynamic Game

In this section, we formulate a non-zero-sum stochastic dynamic game. The system evolves over discrete time periods and an infinite horizon. We index time periods with nonnegative integers $t \in \mathbb{N}$ ($\mathbb{N} = \{0, 1, 2, \ldots\}$).
All random variables are defined on a probability space (Ω, F, P) equipped with a filtration $\{F_t : t \geq 0\}$. We adopt a convention of indexing by t variables that are $F_t$-measurable. There are n agents indexed by S = {1, ..., n}. The state of each agent captures its ability to compete in the environment. At time t, the state of agent $i \in S$ is denoted by $x_{it} \in \mathbb{N}$. We define the system state $s_t$ to be a vector over individual states that specifies, for each state $x \in \mathbb{N}$, the number of agents at state x in period t. We define the state space

$$S = \left\{ s \in \mathbb{N}^{\infty} \,\middle|\, \sum_{x=0}^{\infty} s(x) = n \right\}.$$

For each $i \in S$, we define $s_{-i,t} \in S$ to be the state of the competitors of agent i; that is, $s_{-i,t}(x) = s_t(x) - 1$ if $x_{it} = x$, and $s_{-i,t}(x) = s_t(x)$ otherwise.

In each period, each agent earns profits. An agent's single-period expected profit $\pi_m(x_{it}, s_{-i,t})$ depends on its state $x_{it}$, its competitors' state $s_{-i,t}$, and a parameter $m \in \Re_+$. For example, in the context of an industry model, m could represent the total number of consumers, that is, the size of the pie to be divided among all agents. We assume that for all $x \in \mathbb{N}$, $s \in S$, $m \in \Re_+$, $\pi_m(x, s) > 0$ and is increasing in x. Hence, agents in larger states earn more profits.

In each period, each agent makes a decision. We interpret this decision as an investment to improve the state at the next period. If an agent invests $\mu_{it} \in \Re_+$, then the agent's state at time t + 1 is given by

$$x_{i,t+1} = x_{it} + w(\mu_{it}, \zeta_{i,t+1}),$$

where the function w captures the impact of investment on the state and $\zeta_{i,t+1}$ reflects uncertainty in the outcome of investment. For example, in the context of an industry model, uncertainty may arise due to the risk associated with a research endeavor or a marketing campaign. We assume that for all ζ, w(µ, ζ) is nondecreasing in µ. Hence, if the amount invested is larger, it is more likely the agent will transit next period to a better state. The random variables $\{\zeta_{it} \mid t \geq 0, i \geq 1\}$ are i.i.d. We denote the unit cost of investment by d.
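For intuition, the state dynamics $x_{i,t+1} = x_{it} + w(\mu_{it}, \zeta_{i,t+1})$ can be simulated directly. In the sketch below the particular form of w (success probability $a\mu/(1+a\mu)$, plus an independent one-step depreciation) is an illustrative assumption in the spirit of quality-ladder models, not the model's required specification:

```python
import numpy as np

def simulate_states(n_agents, T, invest_fn, rng, a=1.0, delta=0.7):
    """Simulate trajectories x_{i,t+1} = x_{i,t} + w(mu_{i,t}, zeta_{i,t+1}).

    invest_fn(x) -- a stationary investment strategy mu(x); in the oblivious
                    case it depends only on the agent's own state.
    w: +1 with probability a*mu / (1 + a*mu) (investment success), and an
    independent -1 with probability 1 - delta (depreciation). Illustrative.
    """
    x = np.zeros((T + 1, n_agents), dtype=int)
    for t in range(T):
        mu = np.array([invest_fn(xi) for xi in x[t]])
        up = (rng.random(n_agents) < a * mu / (1.0 + a * mu)).astype(int)
        down = (rng.random(n_agents) < 1.0 - delta).astype(int)
        x[t + 1] = np.maximum(x[t] + up - down, 0)  # states stay nonnegative
    return x
```

A long run of such a simulation is also how the long-run expected system state used below can be estimated in practice.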
Each agent aims to maximize expected net present value. The interest rate is assumed to be positive and constant over time, resulting in a constant discount factor of β ∈ (0, 1) per time period. The equilibrium concept we will use builds on the notion of a Markov perfect equilibrium (MPE), in the sense of [3]. We further assume that equilibrium is symmetric, such that all agents use a common stationary strategy. In particular, there is a function µ such that at each time t, each agent $i \in S$ makes a decision $\mu_{it} = \mu(x_{it}, s_{-i,t})$. Let M denote the set of strategies such that an element µ ∈ M is a function $\mu : \mathbb{N} \times S \to \Re_+$.

We define the value function $V(x, s \mid \mu', \mu)$ to be the expected net present value for an agent at state x when its competitors' state is s, given that its competitors each follow a common strategy µ ∈ M, and the agent itself follows strategy µ′ ∈ M. In particular,

$$V(x, s \mid \mu', \mu) = \mathbb{E}_{\mu',\mu}\left[ \sum_{k=t}^{\infty} \beta^{k-t} \left( \pi(x_{ik}, s_{-i,k}) - d\,\iota_{ik} \right) \,\middle|\, x_{it} = x,\ s_{-i,t} = s \right],$$

where i is taken to be the index of an agent at state x at time t, and the subscripts of the expectation indicate the strategy followed by agent i and the strategy followed by its competitors. In an abuse of notation, we will use the shorthand $V(x, s \mid \mu) \equiv V(x, s \mid \mu, \mu)$ to refer to the expected discounted value of profits when agent i follows the same strategy µ as its competitors.

An equilibrium to our model comprises a strategy µ ∈ M that satisfies the following condition:

$$\sup_{\mu' \in M} V(x, s \mid \mu', \mu) = V(x, s \mid \mu) \quad \forall x \in \mathbb{N},\ \forall s \in S. \qquad (2.1)$$

Under some technical conditions, one can establish existence of an equilibrium in pure strategies [4]. With respect to uniqueness, in general we presume that our model may have multiple equilibria. Dynamic programming algorithms can be used to optimize agent strategies, and equilibria to our model can be computed via their iterative application.
However, these algorithms require compute time and memory that grow proportionately with the number of relevant system states, which is often intractable in contexts of practical interest. This difficulty motivates our alternative approach.

3 Oblivious Equilibrium

We will propose a method for approximating MPE based on the idea that when there are a large number of agents, simultaneous changes in individual agent states can average out because of a law of large numbers, such that the normalized system state remains roughly constant over time. In this setting, each agent can potentially make near-optimal decisions based only on its own state and the long-run average system state. With this motivation, we consider restricting agent strategies so that each agent's decisions depend only on the agent's state. We call such restricted strategies oblivious, since they involve decisions made without full knowledge of the circumstances — in particular, the state of the system. Let $\tilde{M} \subset M$ denote the set of oblivious strategies. Since each strategy $\mu \in \tilde{M}$ generates decisions µ(x, s) that do not depend on s, with some abuse of notation, we will often drop the second argument and write µ(x).

Let $\tilde{s}_\mu$ be the long-run expected system state when all agents use an oblivious strategy $\mu \in \tilde{M}$. For an oblivious strategy $\mu \in \tilde{M}$ we define an oblivious value function

$$\tilde{V}(x \mid \mu', \mu) = \mathbb{E}_{\mu'}\left[ \sum_{k=t}^{\infty} \beta^{k-t} \left( \pi(x_{ik}, \tilde{s}_\mu) - d\,\iota_{ik} \right) \,\middle|\, x_{it} = x \right].$$

This value function should be interpreted as the expected net present value of an agent that is at state x and follows oblivious strategy µ′, under the assumption that its competitors' state will be $\tilde{s}_\mu$ for all time. Again, we abuse notation by using $\tilde{V}(x \mid \mu) \equiv \tilde{V}(x \mid \mu, \mu)$ to refer to the oblivious value function when agent i follows the same strategy µ as its competitors. We now define a new solution concept: an oblivious equilibrium consists of a strategy $\mu \in \tilde{M}$ that satisfies the following condition:

$$\sup_{\mu' \in \tilde{M}} \tilde{V}(x \mid \mu', \mu) = \tilde{V}(x \mid \mu) \quad \forall x \in \mathbb{N}. \qquad (3.1)$$
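The fixed-point structure of this definition (best-respond to a conjectured long-run state, then update the conjecture) suggests a simple iteration. The sketch below is an illustration of that idea, not the specific algorithm of [5]: the finite own-state and investment grids, the user-supplied transition kernel `trans(mu)`, and the iteration counts are all hypothetical assumptions:

```python
import numpy as np

def oblivious_equilibrium(profit, beta, d, n_states, trans, n_agents,
                          mus=None, outer=100, inner=300, tol=1e-6):
    """Fixed-point iteration for an oblivious equilibrium (sketch).

    profit(x, s_tilde) -- one-period profit at own state x given the
                          long-run expected competitor state s_tilde
    trans(mu)          -- n_states x n_states own-state transition matrix
                          under investment level mu
    Alternates (i) a best response via value iteration against a fixed
    s_tilde and (ii) recomputing s_tilde from the induced invariant
    distribution of the own-state chain.
    """
    if mus is None:
        mus = np.linspace(0.0, 2.0, 21)           # assumed investment grid
    P_mu = np.stack([trans(mu) for mu in mus])    # precompute transitions
    s_tilde = np.full(n_states, n_agents / n_states)  # initial conjecture
    for _ in range(outer):
        pi = np.array([profit(x, s_tilde) for x in range(n_states)])
        V = np.zeros(n_states)
        for _ in range(inner):                    # (i) value iteration
            Q = (pi[:, None] - d * mus[None, :]
                 + beta * np.einsum('mxy,y->xm', P_mu, V))
            V = Q.max(axis=1)
        policy = Q.argmax(axis=1)
        P = P_mu[policy, np.arange(n_states), :]  # induced own-state chain
        f = np.full(n_states, 1.0 / n_states)
        for _ in range(inner):                    # (ii) invariant distribution
            f = f @ P
        s_new = n_agents * f
        if np.max(np.abs(s_new - s_tilde)) < tol:
            s_tilde = s_new
            break
        s_tilde = s_new
    return policy, s_tilde
```

Convergence of such an iteration is not guaranteed in general (the corresponding open question for the actual algorithm is noted in Section 6.1), so the loop simply stops after a fixed number of rounds if the conjecture does not settle.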
In an oblivious equilibrium, firms optimize an oblivious value function under the assumption that their competitors' state will be $\tilde{s}_\mu$ for all time, and the optimal strategy obtained must be µ itself. It is straightforward to show that an oblivious equilibrium exists under mild technical conditions. With respect to uniqueness, we have been unable to find multiple oblivious equilibria in any of the applied problems we have considered but, as in the case of MPE, we have no reason to believe that in general there is a unique oblivious equilibrium.

4 Asymptotic Results

In this section, we establish asymptotic results that provide conditions under which oblivious equilibria offer close approximations to MPE as the number of agents, n, grows. We consider a sequence of systems indexed by the one-period profit parameter m, and we assume that the number of agents in system m is given by n(m) = am, for some a > 0. Recall that m represents, for example, the total pie to be divided by the agents, so it is reasonable to increase n(m) and m at the same rate. We index functions and random variables associated with system m with a superscript (m). From this point onward we let $\tilde{\mu}^{(m)}$ denote an oblivious equilibrium for system m. Let $V^{(m)}$ and $\tilde{V}^{(m)}$ represent the value function and oblivious value function, respectively, when the system is m. To further abbreviate notation we denote the expected system state associated with $\tilde{\mu}^{(m)}$ by $\tilde{s}^{(m)} \equiv \tilde{s}_{\tilde{\mu}^{(m)}}$. The random variable $s_t^{(m)}$ denotes the system state at time t when every agent uses strategy $\tilde{\mu}^{(m)}$. We denote the invariant distribution of $\{s_t^{(m)} : t \geq 0\}$ by $q^{(m)}$. In order to simplify our analysis, we assume that the initial system state $s_0^{(m)}$ is sampled from $q^{(m)}$. Hence, $s_t^{(m)}$ is a stationary process; $s_t^{(m)}$ is distributed according to $q^{(m)}$ for all t ≥ 0. It will be helpful to decompose $s_t^{(m)}$ according to $s_t^{(m)} = f_t^{(m)}\, n(m)$, where $f_t^{(m)}$ is the random vector that represents the fraction of agents in each state.
Similarly, let $\tilde{f}^{(m)} \equiv \mathbb{E}[f_t^{(m)}]$ denote the expected fraction of agents in each state. With some abuse of notation, we define $\pi_m(x_{it}, f_{-i,t}, n) \equiv \pi_m(x_{it}, n \cdot f_{-i,t})$. We assume that for all $x \in \mathbb{N}$, $f \in S_1$, $\pi_m(x, f, n(m)) = \Theta(1)$, where $S_1 = \{ f \in \Re_+^{\infty} \mid \sum_{x \in \mathbb{N}} f(x) = 1 \}$. If m and n(m) grow at the same rate, one-period profits remain positive and bounded.

Our aim is to establish that, under certain conditions, oblivious equilibria well-approximate MPE as m grows. We define the following concept to formalize the sense in which this approximation becomes exact.

Definition 4.1. A sequence $\tilde{\mu}^{(m)} \in M$ possesses the asymptotic Markov equilibrium (AME) property if for all $x \in \mathbb{N}$,

$$\lim_{m \to \infty} \mathbb{E}_{\tilde{\mu}^{(m)}}\left[ \sup_{\mu' \in M} V^{(m)}(x, s_t^{(m)} \mid \mu', \tilde{\mu}^{(m)}) - V^{(m)}(x, s_t^{(m)} \mid \tilde{\mu}^{(m)}) \right] = 0.$$

The definition of AME assesses approximation error at each agent state x in terms of the amount by which an agent at state x can increase its expected net present value by deviating from the oblivious equilibrium strategy $\tilde{\mu}^{(m)}$ and instead following an optimal (non-oblivious) best response that keeps track of the true system state. The system states are averaged according to the invariant distribution. It may seem that the AME property is always obtained because n(m) is growing to infinity. However, recall that each agent's state reflects its competitive advantage, and if there are agents that are too "dominant" this is not necessarily the case. To make this idea more concrete, let us go back to our industry example where firms invest in quality. Even when there are a large number of firms, if the market tends to be concentrated — for example, if the market is usually dominated by a single firm with an extremely high quality — the AME property is unlikely to hold. To ensure the AME property, we need to impose a "light-tail" condition that rules out this kind of domination. Note that $\frac{d \ln \pi_m(y, f, n)}{d f(x)}$ is the semi-elasticity of one-period profits with respect to the fraction of agents in state x.
We define the maximal absolute semi-elasticity function:

$$g(x) = \max_{m \in \Re_+,\; y \in \mathbb{N},\; f \in S_1,\; n \in \mathbb{N}} \left| \frac{d \ln \pi_m(y, f, n)}{d f(x)} \right|.$$

For each x, g(x) is the maximum rate of relative change of any agent's single-period profit that could result from a small change in the fraction of agents at state x. Since larger competitors tend to have greater influence on agent profits, g(x) typically increases with x, and can be unbounded.

Finally, we introduce our light-tail condition. For each m, let $\tilde{x}^{(m)} \sim \tilde{f}^{(m)}$; that is, $\tilde{x}^{(m)}$ is a random variable with probability mass function $\tilde{f}^{(m)}$. $\tilde{x}^{(m)}$ can be interpreted as the state of an agent that is randomly sampled from among all agents while the system state is distributed according to its invariant distribution.

Assumption 4.1. For all states x, g(x) < ∞. For all ϵ > 0, there exists a state z such that

$$\mathbb{E}\left[ g(\tilde{x}^{(m)})\, 1\{\tilde{x}^{(m)} > z\} \right] \leq \epsilon,$$

for all m.

Put simply, the light-tail condition requires that states where a small change in the fraction of agents has a large impact on the profits of other agents must have a small probability under the invariant distribution. In the previous example of an industry where firms invest in quality, this typically means that very large firms (and hence high concentration) rarely occur under the invariant distribution.

Theorem 4.1. Under Assumption 4.1 and some other regularity conditions¹, the sequence $\tilde{\mu}^{(m)}$ of oblivious equilibria possesses the AME property.

5 Error Bounds

While the asymptotic results from Section 4 provide conditions under which the approximation will work well as the number of agents grows, in practice one would also like to know how the approximation performs for a particular system. For that purpose we derive performance bounds on the approximation error that are simple to compute via simulation and can be used to assess the accuracy of the approximation for a particular problem instance. We consider a system m and, to simplify notation, we suppress the index m. Consider an oblivious strategy $\tilde{\mu}$.
We will quantify approximation error at each agent state $x \in \mathbb{N}$ by

$$\mathbb{E}\left[ \sup_{\mu' \in M} V(x, s_t \mid \mu', \tilde{\mu}) - V(x, s_t \mid \tilde{\mu}) \right].$$

The expectation is over the invariant distribution of $s_t$. The next theorem provides a bound on the approximation error. Recall that $\tilde{s}$ is the long-run expected state in oblivious equilibrium ($\mathbb{E}[s_t]$). Let $a_x(y)$ be the expected discounted sum of an indicator of visits to state y for an agent starting at state x that uses strategy $\tilde{\mu}$.

Theorem 5.1. For any oblivious equilibrium $\tilde{\mu}$ and state $x \in \mathbb{N}$,

$$\mathbb{E}[\Delta V] \leq \frac{1}{1 - \beta}\, \mathbb{E}[\Delta \pi(s_t)] + \sum_{y \in \mathbb{N}} a_x(y) \left( \pi(y, \tilde{s}) - \mathbb{E}[\pi(y, s_t)] \right), \qquad (5.1)$$

where $\Delta V = \sup_{\mu' \in M} V(x, s_t \mid \mu', \tilde{\mu}) - V(x, s_t \mid \tilde{\mu})$ and $\Delta \pi(s) = \max_{y \in \mathbb{N}} \left( \pi(y, s) - \pi(y, \tilde{s}) \right)$.

Footnote 1: In particular, we require that the single-period profit function is "smooth" as a function of its arguments. See [5] for details.

The error bound can be easily estimated via simulation algorithms. In particular, note that the bound is not a function of the true MPE or even of the optimal non-oblivious best-response strategy.

6 Application: Industry Dynamics

Many problems in applied economics are dynamic in nature. For example, models involving the entry and exit of firms, collusion among firms, mergers, advertising, investment in R&D or capacity, network effects, durable goods, consumer learning, learning by doing, and transaction or adjustment costs are inherently dynamic. [1] (hereafter EP) introduced an approach to modeling industry dynamics. See [6] for an overview. Computational complexity has been a limiting factor in the use of this modeling approach. In this section we use our method to expand the set of dynamic industries that can be analyzed computationally. Even though our results apply to more general models where, for example, firms make exit and entry decisions, here we consider a particular case of an EP model, which itself is a particular case of the model introduced in Section 2. We consider a model of a single-good industry with quality differentiation.
The agents are firms that can invest to improve the quality of their product over time. In particular, $x_{it}$ is the quality level of firm i at time t, and $\mu_{it}$ represents the amount of money invested by firm i at time t to improve its quality. We assume the one-period profit function is derived from a logit demand system in which firms compete by setting prices. In this case, m represents the market size. See [5] for more details about the model.

6.1 Computational Experiments

In this section, we discuss computational results that demonstrate how our approximation method significantly expands the range of relevant EP-type models, like the one previously introduced, that can be studied computationally. First, we propose an algorithm to compute oblivious equilibrium [5]. Whether this algorithm is guaranteed to terminate in a finite number of iterations remains an open issue. However, in over 90% of the numerical experiments we present in this section, it converged in less than five minutes (and often much less than this); in the rest, it converged in less than fifteen minutes.

Our first set of results investigates the behavior of the approximation error bound under several different model specifications. A wide range of parameters for our model could reasonably represent different real-world industries of interest. In practice the parameters would either be estimated using data from a particular industry or chosen to reflect an industry under study. We begin by investigating a particular set of representative parameter values; see [5] for the specifications. For each set of parameters, we use the approximation error bound to compute an upper bound on the percentage error in the value function,

$$\frac{\mathbb{E}\left[ \sup_{\mu' \in M} V(x, s \mid \mu', \tilde{\mu}) - V(x, s \mid \tilde{\mu}) \right]}{\mathbb{E}\left[ V(x, s \mid \tilde{\mu}) \right]},$$

where $\tilde{\mu}$ is the OE strategy and the expectations are taken with respect to s. We estimate the expectations using simulation.
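The right-hand side of the bound (5.1) can be estimated by Monte Carlo over simulated system states. In the sketch below, the sampled states, the profit function `pi`, and the discounted visit weights `a_x` are user-supplied inputs (the particular forms used in any test of this routine are hypothetical):

```python
import numpy as np

def oe_error_bound(samples, pi, s_tilde, beta, a_x):
    """Monte Carlo estimate of the right-hand side of the bound (5.1).

    samples  -- draws of the system state s_t under the oblivious strategy
                (e.g. from a long simulation approximating the invariant
                distribution)
    pi(y, s) -- one-period profit at own state y, competitor state s
    a_x[y]   -- expected discounted number of visits to state y for an
                agent starting at x under the oblivious strategy
    """
    states = range(len(a_x))
    # E[ max_y ( pi(y, s_t) - pi(y, s_tilde) ) ]
    d_pi = np.mean([max(pi(y, s) - pi(y, s_tilde) for y in states)
                    for s in samples])
    # sum_y a_x(y) * ( pi(y, s_tilde) - E[pi(y, s_t)] )
    corr = sum(a_x[y] * (pi(y, s_tilde)
                         - np.mean([pi(y, s) for s in samples]))
               for y in states)
    return d_pi / (1.0 - beta) + corr
```

As noted above, nothing here requires the true MPE or a non-oblivious best response; only quantities available from the oblivious equilibrium itself appear.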
We compute the previously mentioned percentage approximation error bound for different market sizes m and numbers of firms n(m). As the market size increases, the number of firms increases and the approximation error bound decreases. In our computational experiments we found that the most important parameter affecting the approximation error bounds was the degree of vertical product differentiation, which indicates the importance consumers assign to product quality. In Figure 1 we present our results. When the parameter that measures the level of vertical differentiation is low, the approximation error bound is less than 0.5% with just 5 firms; when the parameter is high, it is 5% for 5 firms, less than 3% with 40 firms, and less than 1% with 400 firms.

Figure 1: Percentage approximation error bound for fixed numbers of firms.

Most economic applications would involve from fewer than ten to several hundred firms. These results show that the approximation error bound may sometimes be small (<2%) in these cases, though this would depend on the model and parameter values for the industry under study. Having gained some insight into what features of the model lead to low values of the approximation error bound, the question arises as to what value of the error bound is required to obtain a good approximation. To shed light on this issue we compare long-run statistics for the same industry primitives under oblivious equilibrium and MPE strategies. A major constraint on this exercise is that it requires the ability to actually compute the MPE, so to keep computation manageable we use four firms here. We compare the average values of several economic statistics of interest under the oblivious equilibrium and the MPE invariant distributions. The quantities compared are: average investment, average producer surplus, average consumer surplus, average share of the largest firm, and average share of the largest two firms.
We also computed the actual benefit from deviating and keeping track of the industry state, i.e., the actual difference

$$\frac{\mathbb{E}\left[ \sup_{\mu' \in M} V(x, s \mid \mu', \tilde{\mu}) - V(x, s \mid \tilde{\mu}) \right]}{\mathbb{E}\left[ V(x, s \mid \tilde{\mu}) \right]}.$$

Note that the latter quantity should always be smaller than the approximation error bound. From the computational experiments we conclude the following (see [5] for a table with the results):

1. When the bound is less than 1%, the long-run quantities estimated under oblivious equilibrium and MPE strategies are very close.

2. Performance of the approximation depends on the richness of the equilibrium investment process. Industries with a relatively low cost of investment tend to have a symmetric average distribution over quality levels, reflecting a rich investment process. In these cases, when the bound is between 1% and 20%, the long-run quantities estimated under oblivious equilibrium and MPE strategies are still quite close. In industries with high investment cost the industry (system) state tends to be skewed, reflecting low levels of investment. When the bound is above 1% and there is little investment, the long-run quantities can be quite different on a percentage basis (5% to 20%), but still remain fairly close in absolute terms.

3. The performance bound is not tight. For a wide range of parameters the performance bound is as much as 10 to 20 times larger than the actual benefit from deviating.

These results suggest that MPE dynamics are well approximated by oblivious equilibrium strategies when the approximation error bound is small (less than 1-2%, and in some cases even up to 20%). Our results demonstrate that the oblivious equilibrium approximation significantly expands the range of applied problems that can be analyzed computationally.

7 Conclusions and Future Research

The goal of this paper has been to increase the set of applied problems that can be addressed using stochastic dynamic games.
Due to the curse of dimensionality, the applicability of these models has been severely limited. As an alternative, we proposed a method for approximating MPE behavior using an oblivious equilibrium, in which agents make decisions based only on their own state and the long-run average system state. We began by showing that the approximation works well asymptotically, where asymptotics were taken in the number of agents. We also introduced a simple algorithm to compute an oblivious equilibrium. To facilitate using oblivious equilibrium in practice, we derived approximation error bounds that indicate how good the approximation is in any particular problem under study. These approximation error bounds are quite general and thus can be used in a wide class of models. We used our methods to analyze dynamic industry models of imperfect competition and showed that oblivious equilibrium often yields a good approximation of MPE behavior for industries with a couple hundred firms, and sometimes even with just tens of firms. We have considered very simple strategies that are functions only of an agent's own state and the long-run average system state. While our results show that these simple strategies work well in many cases, there remains a set of problems where exact computation is not possible and yet our approximation will not work well either. For such cases, our hope is that our methods will serve as a basis for developing better approximations that use additional information, such as the states of the dominant agents. Solving for equilibria of this type would be more difficult than solving for oblivious equilibria, but is still likely to be computationally feasible. Since showing that such an approach would provide a good approximation is not a simple extension of our results, this will be a subject of future research. References [1] R. Ericson and A. Pakes. Markov-perfect industry dynamics: A framework for empirical work. Review of Economic Studies, 62(1):53–82, 1995.
[2] R. L. Goettler, C. A. Parlour, and U. Rajan. Equilibrium in a dynamic limit order market. Forthcoming, Journal of Finance, 2004. [3] E. Maskin and J. Tirole. A theory of dynamic oligopoly, I and II. Econometrica, 56(3):549–570, 1988. [4] U. Doraszelski and M. Satterthwaite. Foundations of Markov-perfect industry dynamics: Existence, purification, and multiplicity. Working Paper, Hoover Institution, 2003. [5] G. Y. Weintraub, C. L. Benkard, and B. Van Roy. Markov perfect industry dynamics with many firms. Submitted for publication, 2005. [6] A. Pakes. A framework for applied dynamic analysis in I.O. NBER Working Paper 8024, 2000.
|
2005
|
204
|
2,831
|
Dynamical Synapses Give Rise to a Power-Law Distribution of Neuronal Avalanches Anna Levina3,4, J. Michael Herrmann1,2, Theo Geisel1,2,4 1 Bernstein Center for Computational Neuroscience Göttingen 2 Georg-August University Göttingen, Institute for Nonlinear Dynamics 3 Graduate School Identification in Mathematical Models 4 Max Planck Institute for Dynamics and Self-Organization Bunsenstr. 10, 37073 Göttingen, Germany anna|michael|geisel@chaos.gwdg.de Abstract There is experimental evidence that cortical neurons show avalanche activity with the intensity of firing events being distributed as a power-law. We present a biologically plausible extension of a neural network which exhibits a power-law avalanche distribution for a wide range of connectivity parameters. 1 Introduction Power-law distributions of event sizes have been observed in a number of seemingly diverse systems such as piles of granular matter [8], earthquakes [9], the game of life [1], friction [7], and sound generated in the lung during breathing. Because it is unlikely that the specific parameter values at which the critical behavior occurs are assumed by chance, the question arises as to what mechanisms may tune the parameters towards the critical state. Furthermore, it is known that criticality brings about optimal computational capabilities [10], improves mixing, or enhances the sensitivity to unpredictable stimuli [5]. Therefore, it is interesting to search for mechanisms that entail criticality in biological systems, for example in the nervous tissue. In [6] a simple model of a fully connected neural network of non-leaky integrate-and-fire neurons was studied. This study not only presented the first example of a globally coupled system that shows criticality, but also predicted the critical exponent as well as some extra-critical dynamical phenomena, which were later observed in experimental research.
Recently, Beggs and Plenz [3] studied the propagation of spontaneous neuronal activity in slices of rat cortex and neuronal cultures using multi-electrode arrays. Thereby, they found avalanche-like activity where the avalanche sizes were distributed according to a power-law with an exponent of -3/2. This distribution was stable over a long period of time. The authors suggested that such a distribution is optimal in terms of transmission and storage of information. The network in [6] consisted of a set of N identical threshold elements characterized by the membrane potential $u \ge 0$ and was driven by a slowly delivered random input. When the potential exceeds a threshold θ = 1, the neuron spikes and relaxes. All connections in the network are described by a single parameter α representing the evoked synaptic potential which a spiking neuron transmits to all postsynaptic neurons. The simplicity of that model allows analytical treatment: an explicit formula for the probability distribution of avalanche sizes depending on the parameter α was derived. A major drawback of the model was the lack of any true self-organization. Only at an externally well-tuned critical value α = αcr did the distribution take the form of a power-law, with an exponent of precisely -3/2 (in the limit of a large system). The term critical will be applied here also to finite systems. While true criticality requires the thermodynamic limit $N \to \infty$, we consider approximate power-law behavior characterized by an exponent and an error that describes the remaining deviation from the best-matching exponent. The model of [6] is displayed for comparison in Fig. 3. In Fig. 1 (a-c) it is visible that the system may also exhibit other types of behavior, such as small avalanches with a finite mean (even in the thermodynamic limit) at α < αcr.
On the other hand, at α > αcr the distribution becomes non-monotonous, which indicates that avalanches of the size of the system occur frequently. Generally speaking, in order to drive the system towards criticality it therefore suffices to suppress the large avalanches and to enhance the small ones. Most interestingly, synaptic connections among real neurons show a similar tendency, which thus deserves further study. We will consider the standard model of short-term dynamics in synaptic efficacies [11, 13] and thereafter discuss several numerically determined quantities. Our studies imply that dynamical synapses indeed may drive the neural activity of a small homogeneous neural system towards criticality. 2 The model We consider a network of integrate-and-fire neurons with dynamical synapses. Each synapse is described by two parameters: the amount of available neurotransmitter and the fraction of it which is ready to be used at the next synaptic event. Both parameters change in time depending on the state of the presynaptic neuron. Such a system keeps a long memory of previous events and is known to exert a regulatory effect on the network dynamics, which will turn out to be beneficial. Our approach is based on the model of dynamical synapses which was shown by Tsodyks and Markram to reliably reproduce the synaptic responses between pyramidal neurons [11, 13]. Consider a set of N integrate-and-fire neurons characterized by a membrane potential $h_i \ge 0$, and two connectivity parameters for each synapse: $J_{i,j} \ge 0$, $u_{i,j} \in [0, 1]$. The parameter $J_{i,j}$ characterizes the number of available vesicles on the presynaptic side of the connection from neuron j to neuron i. Each spike leads to the usage of a portion of the resources of the presynaptic neuron; hence, at the next synaptic event fewer transmitters will be available, i.e., activity will be depressed. Between spikes, vesicles slowly recover on a timescale $\tau_1$.
The parameter $u_{i,j}$ denotes the actual fraction of vesicles on the presynaptic side of the connection from neuron j to neuron i which will be used in the synaptic transmission. When a spike arrives at the presynaptic side j, it causes an increase of $u_{i,j}$. Between spikes, $u_{i,j}$ slowly decreases to zero on a timescale $\tau_2$. The combined effect of $J_{i,j}$ and $u_{i,j}$ results in the facilitation or depression of the synapse. The dynamics of the membrane potential $h_i$ consists of the integration of excitatory postsynaptic currents over all synapses of the neuron and the slowly delivered random input. When the membrane potential exceeds the threshold, the neuron emits a spike and $h_i$ resets to a smaller value. Figure 1: Probability distributions of avalanche sizes P(L, N, α). (a) in the subcritical, α = 0.52, (b) the critical, α = 0.53, and (c) supra-critical regime, α = 0.74. In (a-c) the solid lines and symbols denote the numerical results for the avalanche size distributions; dashed lines show the best-matching power-law. Here the curves are temporal averages over $10^6$ avalanches with N = 100, u0 = 0.1, τ1 = τ2 = 0.1. Sub-figure (d) displays P(L, N, α) as a function of L for α varying from 0.34 to 0.98 with step 0.01. The presented curves are temporal averages over $10^6$ avalanches with N = 200, u0 = 0.1, τ1 = τ2 = 0.1. The
joint dynamics can be written as a system of differential equations
$$\dot{J}_{i,j} = \frac{1}{\tau_1\tau_s}\,(J_0 - J_{i,j}) - u_{i,j}J_{i,j}\,\delta(t - t^j_{sp}), \qquad (1)$$
$$\dot{u}_{i,j} = -\frac{1}{\tau_2\tau_s}\,u_{i,j} + u_0(1 - u_{i,j})\,\delta(t - t^j_{sp}), \qquad (2)$$
$$\dot{h}_i = \frac{1}{\tau_s}\,\delta(r(t) - i)\,c\,\xi + \sum_{j=1}^{N} u_{i,j}J_{i,j}\,\delta(t - t^j_{sp}) \qquad (3)$$
Here $\delta(t)$ is the Dirac delta-function, $t^j_{sp}$ is the spiking time of neuron j, $J_0$ is the resting value of $J_{i,j}$, $u_0$ is the minimal value of $u_{i,j}$, and $\tau_s$ is a parameter separating the time-scales of random input and synaptic events. In the following study we will use the discrete version of equations (1-3). Figure 2: The best-matching power-law exponent $\gamma$ as a function of $\alpha$. The black line represents the present model, while the grey one stands for the model of [6]. The average synaptic efficiency α varies from 0.3 to 1.0 with step 0.001. Presented curves are temporal averages over $10^7$ avalanches with N = 200, u0 = 0.1, τ1 = τ2 = 10. Note that for a network of 200 units the absolute critical exponent is smaller than the large-system limit γ = −1.5, and that the step size has been drastically reduced in the vicinity of the phase transition. 3 Discrete version of the model We consider time measured in discrete steps, t = 0, 1, 2, .... Because synaptic values are essentially determined presynaptically, we assume that all synapses of a neuron are identical, i.e., $J_j$, $u_j$ are used instead of $J_{i,j}$ and $u_{i,j}$, respectively. The system is initialized with arbitrary values $h_i \in [0, 1)$, i = 1, ..., N, where the threshold θ is fixed at 1. Depending on the state of the system at time t, the i-th element receives external input $I^{ext}_i(t)$ or internal input $I^{int}_i(t)$ from other neural elements. The two effects result in an activation $\tilde{h}$ at time t + 1,
$$\tilde{h}_i(t+1) = h_i(t) + I^{ext}_i(t) + I^{int}_i(t) \qquad (4)$$
From the activation $\tilde{h}_i(t+1)$, the membrane potential of the i-th element at time t + 1 is computed as
$$h_i(t+1) = \begin{cases} \tilde{h}_i(t+1) & \text{if } \tilde{h}_i(t+1) < 1, \\ \tilde{h}_i(t+1) - 1 & \text{if } \tilde{h}_i(t+1) \ge 1, \end{cases} \qquad (5)$$
i.e.
if the activation exceeds the threshold, it is reset but retains the supra-threshold portion $\tilde{h}_i(t+1) - 1$ of the membrane potential. The external input $I^{ext}_i(t)$ is a random amount $c\,\xi$, received by a randomly chosen neuron. Here, c is the input strength scale, a parameter of the model, and ξ is uniformly distributed on [0, 1] and independent of i. The external input is considered to be delivered slowly compared to the internal relaxation dynamics (which corresponds to $\tau_s \gg 1$), i.e., it occurs only if no element has exceeded the threshold in the previous time step. This corresponds to an infinite separation of the time scales of external driving and avalanche dynamics discussed in the literature on self-organized criticality [12, 14]. The present results, however, are not affected by a continuous external input even during the avalanches. Figure 3: The mean squared deviation from the best-fit power-law. The grey code and parameters are the same as in Fig. 2. For the fit, avalanches of a size larger than 1 and smaller than N/2 have been used. Clearly, error levels above 0.1 indicate that the fitted curve is far from being a candidate for a power-law. Near α = 1, when the non-dynamical model develops supercritical behavior, the range of the power-law is quite limited. Again interesting is the sharp transition of the dynamical model, which is due to the facilitation strength surpassing a critical level. The external input can formally be written as $I^{ext}_i(t) = c\,\delta_{r,i}(t)\,\delta_{|M(t-1)|,0}\,\xi$, where r is an integer random variable between 1 and N indicating the chosen element, M(t−1) is the set of indices of supra-threshold elements in the previous time step, i.e., $M(t) = \{i \,|\, \tilde{h}_i(t) \ge 1\}$, and $\delta_{\cdot,\cdot}$ is the Kronecker delta. We will consider c = J0; thus an external input is comparable with the typical internal input. The internal input $I^{int}_i(t)$ is given by $I^{int}_i(t) = \sum_{j \in M(t-1)} J_j(t)\,u_j(t)$.
The system is initialized with $u_i = u_0$, $J_i = J_0$, where $J_0 = \alpha/(N u_0)$ and α is the connection strength parameter. Similar to the membrane potential dynamics, we can distinguish two situations: either there were supra-threshold neurons at the previous moment of time or not.
$$u_j(t+1) = \begin{cases} u_j(t) - \frac{1}{\tau_2}\,u_0\,u_j(t)\,\delta_{|M(t)|,0} & \text{if } \tilde{h}_j(t) < 1, \\ u_j(t) + (1 - u_j(t))\,u_0 & \text{if } \tilde{h}_j(t) \ge 1, \end{cases} \qquad (6)$$
$$J_j(t+1) = \begin{cases} J_j(t) + \frac{1}{\tau_1}\,(J_0 - J_j(t))\,\delta_{|M(t)|,0} & \text{if } \tilde{h}_j(t) < 1, \\ J_j(t)\,(1 - u_j(t)) & \text{if } \tilde{h}_j(t) \ge 1, \end{cases} \qquad (7)$$
Thus, we have a model with parameters α, u0, τ1, τ2 and N. Our main focus will be on the influence of α on the cumulative dynamics of the network. The dependence on N has been studied in [6], where it was found that the critical parameter of the distribution scales as $\alpha_{cr} = 1 - N^{-1/2}$. In the same way, the exponent will be smaller in modulus than -3/2 for finite systems. Figure 4: Average synaptic efficacy $\sigma_i$ for the parameter α varied from 0.53 to 0.55 with step 0.0005 (left axis). The dashed line depicts the deviation from a power-law (right axis). If at time $t_0$ an element receives an external input and fires, then an avalanche starts and $|M(t_0)| = 1$. The system is globally coupled, such that during an avalanche all elements receive internal input, including the unstable elements themselves. The avalanche duration D ≥ 0 is defined to be the smallest integer for which the stopping condition $|M(t_0 + D)| = 0$ is satisfied. The avalanche size L is given by $L = \sum_{k=0}^{D-1} |M(t_0 + k)|$. The subject of our interest is the probability distribution of avalanche sizes P(L, N, α) depending on the parameter α. 4 Results Similarly to model [6], we considered the avalanche size distribution for different values of α, cf. Fig. 1. Three qualitatively different regimes can be distinguished: subcritical, critical, and supra-critical.
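The discrete model of equations (4)-(7) can be simulated directly. The following is a minimal sketch, not the authors' code: the parameter values, the safety cap on avalanche length, and the clamping of $u_j$ at its minimal value $u_0$ are our own assumptions.

```python
import random

def simulate(N=30, alpha=0.54, u0=0.1, tau1=10.0, tau2=10.0,
             n_avalanches=1000, seed=0):
    """Collect avalanche sizes from the discrete model, eqs. (4)-(7)."""
    rng = random.Random(seed)
    J0 = alpha / (N * u0)          # resting synaptic strength
    c = J0                         # external input scale (c = J0 as in the text)
    h = [rng.random() for _ in range(N)]
    u = [u0] * N                   # per-neuron release fraction u_j
    J = [J0] * N                   # per-neuron vesicle resource J_j
    sizes = []
    while len(sizes) < n_avalanches:
        # no supra-threshold units: slow external drive plus synaptic recovery
        h[rng.randrange(N)] += c * rng.random()
        for j in range(N):
            u[j] = max(u0, u[j] - u[j] * u0 / tau2)  # decay; clamp at u0 (assumption)
            J[j] += (J0 - J[j]) / tau1               # vesicle recovery
        M = [k for k in range(N) if h[k] >= 1.0]
        L = 0
        while M and L < 50 * N:                      # avalanche; cap is a safeguard
            L += len(M)
            inp = sum(J[j] * u[j] for j in M)        # internal input to every unit
            for j in M:                              # synapses of the spiking units
                J[j] *= (1.0 - u[j])                 # depression, eq. (7)
                u[j] += (1.0 - u[j]) * u0            # facilitation, eq. (6)
            for k in range(N):
                if h[k] >= 1.0:
                    h[k] -= 1.0                      # reset, keep the overshoot
                h[k] += inp
            M = [k for k in range(N) if h[k] >= 1.0]
        if L > 0:
            sizes.append(L)
    return sizes

sizes = simulate()
print(len(sizes), min(sizes), max(sizes))
```

Note the ordering: the internal input and the depression step both use the pre-update $u_j$, matching the $u_j(t)$ terms in eqs. (6)-(7), and recovery is applied only on steps without supra-threshold units, matching the $\delta_{|M(t)|,0}$ factors.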
For small values of α, subcritical avalanche-size distributions are observed. The subcriticality is characterized by the negligible number of avalanches of a size close to the system size. At αcr, the system has an avalanche distribution with approximate power-law behavior for L over a range from 1 almost up to the size of the system, where an exponential cut-off is observed (Fig. 1b). Above the critical value αcr, avalanche size distributions become non-monotonous (Fig. 1c). Such supra-critical curves have a minimum at an intermediate avalanche size. There is a sharp transition from the subcritical to the critical regime, followed by a long critical region where the distribution of avalanche sizes stays close to a power-law. For a system of 200 neurons this transition is shown in Fig. 2. To characterize this effect we used the least-squares estimate of the closest power-law parameters $C_{norm}$ and $\gamma$, $p(L, N, \alpha) \approx C_{norm} L^{\gamma}$. The mean squared deviation from the estimated power-law undergoes a fast change near αcr = 0.54, cf. Fig. 3 (bottom). At this point the transition from the subcritical to the critical regime occurs. Then there is a long interval of parameters for which the deviation from the power-law is about 2%. Also, the parameters of the power-law stay approximately constant. For different system sizes, different values of αcr and γ are observed. At large system sizes γ is close to −1.5. In order to develop a more extensive analysis we also considered a number of additional statistical quantities at the beginning and after the avalanche. Figure 5: Difference between the synaptic efficacy after and before an avalanche, averaged over all synapses. Values larger than zero mean facilitation, smaller ones mean depression. Presented curves are temporal averages over $10^6$ avalanches with N = 100, u0 = 0.1, τ1 = τ2 = 10.
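The least-squares power-law fit described here is straightforward to sketch on a log-log scale; the function name and the synthetic test data below are illustrative, not from the paper.

```python
import math
from collections import Counter

def fit_power_law(sizes, l_min=2, l_max=None):
    """Least-squares fit of log P(L) = log C + gamma * log L over [l_min, l_max]
    (cf. the fitting window 1 < L < N/2 used for Fig. 3).
    Returns (gamma, C_norm, mean squared deviation of the fit)."""
    if l_max is None:
        l_max = max(sizes)
    n = len(sizes)
    counts = Counter(sizes)
    pts = [(math.log(L), math.log(cnt / n))
           for L, cnt in counts.items() if l_min <= L <= l_max]
    m = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    gamma = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    log_c = (sy - gamma * sx) / m
    mse = sum((y - (log_c + gamma * x)) ** 2 for x, y in pts) / m
    return gamma, math.exp(log_c), mse

# Sanity check on synthetic counts drawn from an exact power-law with gamma = -1.5.
sizes = []
for L in range(1, 51):
    sizes.extend([L] * round(1e5 * L ** -1.5))
gamma, c_norm, mse = fit_power_law(sizes, l_min=2, l_max=25)
print(gamma, mse)   # gamma close to -1.5, mse close to 0
```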
The average synaptic efficacy $\sigma = \langle\sigma_i\rangle = \langle J_i u_i\rangle$ is determined by taking the average over all neurons participating in an avalanche. This average shows the mean input which neurons receive at each step of an avalanche. This characteristic quantity undergoes a sharp transition together with the avalanche distribution, cf. Fig. 4. The meaning of the quantity σ in the present model is similar to that of the coupling strength α/N in the model discussed in [6]. It is equal to the average EPSP which all postsynaptic neurons receive after a presynaptic neuron spikes. The transition from the subcritical to the critical regime happens when σ jumps into the vicinity of αcr/N of the previous model (for N = 100 and αcr = 0.9). This points to the correspondence between the two models. When α is large, the synaptic efficacy is high and, hence, avalanches are large and the intervals between them are small. Depression during the avalanche then dominates facilitation and decreases the synaptic efficacy, and vice versa: when avalanches are small, facilitation dominates depression. Thus, the synaptic dynamics stabilizes the network near the critical value over a large interval of the parameter α. Fig. 4 shows the averaged effect of an avalanche for different values of the parameter α. For α > αcr, depression during the avalanche is stronger than facilitation, and avalanches on average decrease the synaptic efficacy. When α is very small, the effect of facilitation is washed out during the inter-avalanche period, when the synaptic parameters return to the resting state. To illustrate this, Fig. 5 shows the difference $\Delta\sigma = \langle\sigma_{after}\rangle - \langle\sigma_{before}\rangle$ between the average synaptic efficacies after and before the avalanche, depending on the parameter α. If this difference is larger than zero, synapses are facilitated by the avalanche; if it is smaller than zero, synapses are depressed. For small values of the parameter α avalanches lead to facilitation, while for large values of α avalanches depress the synapses.
In the limit $N \to \infty$, the synaptic dynamics should be rescaled such that the maximum of transmitter available at a time t, divided by the average avalanche size, converges to a value which scales as $1 - N^{-1/2}$. In this way, if the average avalanche size is smaller than critical, synapses will essentially be enhanced, or they will otherwise experience depression. The necessary parameters for the model (such as the time-scales) have been shown to be easily achievable in the small (although time-consuming) simulations presented here. 5 Conclusion We presented a simple biologically plausible complement to a model of a network of non-leaky integrate-and-fire neurons which exhibits a power-law avalanche distribution for a wide range of connectivity parameters. In previous studies [6] we showed that the simplest model, with only one parameter α characterizing the synaptic efficacy of all synapses, exhibits subcritical, critical and supra-critical regimes with a continuous transition from one to another, depending on the parameter α. These main classes are also present here, but the region of critical behavior is immensely enlarged. Both models have a power-law distribution with an exponent approximately equal to -3/2, although the exponent is somewhat smaller for small network sizes. For network sizes close to those in the experiments described in [3] the result is indistinguishable from the limiting value. References [1] P. Bak, K. Chen, and M. Creutz. Self-organized criticality in the 'Game of Life'. Nature, 342:780–782, 1989. [2] P. Bak, C. Tang, and K. Wiesenfeld. Self-organized criticality: an explanation of 1/f noise. Phys. Rev. Lett., 59:381–384, 1987. [3] J. Beggs and D. Plenz. Neuronal avalanches in neocortical circuits. J. Neurosci., 23:11167–11177, 2003. [4] J. Beggs and D. Plenz. Neuronal avalanches are diverse and precise activity patterns that are stable for many hours in cortical slice cultures. J. Neurosci., 24(22):5216–5229, 2004. [5] R. Der, F. Hesse, and R.
Liebscher. Contingent robot behavior from self-referential dynamical systems. Submitted to Autonomous Robots, 2005. [6] C. W. Eurich, M. Herrmann, and U. Ernst. Finite-size effects of avalanche dynamics. Phys. Rev. E, 66, 2002. [7] H. J. S. Feder and J. Feder. Self-organized criticality in a stick-slip process. Phys. Rev. Lett., 66:2669–2672, 1991. [8] V. Frette, K. Christensen, A. M. Malthe-Sørenssen, J. Feder, T. Jøssang, and P. Meakin. Avalanche dynamics in a pile of rice. Nature, 379:49, 1996. [9] B. Gutenberg and C. F. Richter. Magnitude and energy of earthquakes. Ann. Geophys., 9:1, 1956. [10] R. A. Legenstein and W. Maass. Edge of chaos and prediction of computational power for neural microcircuit models. Submitted, 2005. [11] H. Markram and M. Tsodyks. Redistribution of synaptic efficacy between pyramidal neurons. Nature, 382:807–810, 1996. [12] D. Sornette, A. Johansen, and I. Dornic. Mapping self-organized criticality onto criticality. J. Phys. I, 5:325–335, 1995. [13] M. Tsodyks, K. Pawelzik, and H. Markram. Neural networks with dynamic synapses. Neural Computation, 10:821–835, 1998. [14] A. Vespignani and S. Zapperi. Order parameter and scaling fields in self-organized criticality. Phys. Rev. Lett., 78:4793–4796, 1997.
|
2005
|
205
|
2,832
|
From Weighted Classification to Policy Search D. Blatt Department of Electrical Engineering and Computer Science University of Michigan Ann Arbor, MI 48109-2122 dblatt@eecs.umich.edu A. O. Hero Department of Electrical Engineering and Computer Science University of Michigan Ann Arbor, MI 48109-2122 hero@eecs.umich.edu Abstract This paper proposes an algorithm to convert a T-stage stochastic decision problem with a continuous state space into a sequence of supervised learning problems. The optimization problem associated with the trajectory tree and random trajectory methods of Kearns, Mansour, and Ng, 2000, is solved using the Gauss-Seidel method. The algorithm breaks a multistage reinforcement learning problem into a sequence of single-stage reinforcement learning subproblems, each of which is solved via an exact reduction to a weighted classification problem that can be solved using off-the-shelf methods. Thus the algorithm converts a reinforcement learning problem into simpler supervised learning subproblems. It is shown that the method converges in a finite number of steps to a solution that cannot be further improved by componentwise optimization. The implication of the proposed algorithm is that a plethora of classification methods can be applied to find policies in the reinforcement learning problem. 1 Introduction There has been increased interest in applying tools from supervised learning to problems in reinforcement learning. The goal is to leverage techniques and theoretical results from supervised learning for solving the more complex problem of reinforcement learning [3]. In [6] and [4], classification was incorporated into approximate policy iteration. In [2], regression and classification are used to perform dynamic programming. Bounds on the performance of a policy which is built from a sequence of classifiers were derived in [8] and [9].
Similar to [8], we adopt the generative model assumption of [5] and tackle the problem of finding good policies within an infinite class of policies, where performance is evaluated in terms of empirical averages over a set of trajectory trees. In [8] the T-step reinforcement learning problem was converted to a set of weighted classification problems by trying to fit the classifiers to the maximal path on the trajectory tree of the decision process. In this paper we take a different approach. We show that while the task of finding the global optimum within a class of non-stationary policies may be overwhelming, the componentwise search leads to single step reinforcement learning problems which can be reduced to a sequence of weighted classification problems. Our reduction is exact and is different from the one proposed in [8]; it gives more weight to regions of the state space in which the difference between the possible actions in terms of future reward is large, rather than giving more weight to regions in which the maximal future reward is large. The weighted classification problems can be solved by applying weight-sensitive classifiers or by further reducing the weighted classification problem to a standard classification problem using re-sampling methods (see [7], [1], and references therein for a description of both approaches). Based on this observation, an algorithm that converts the policy search problem into a sequence of weighted classification problems is given. It is shown that the algorithm converges in a finite number of steps to a solution which cannot be further improved by changing the control of a single stage while holding the rest of the policy fixed. 2 Problem Formulation The results are presented in the context of MDPs but can be applied to POMDPs and non-Markovian decision processes as well. Consider a T-step MDP $M = \{S, A, D, P_{s,a}\}$, where S is a (possibly continuous) state space, A = {0, . . .
, L − 1} is a finite set of possible actions, D is the distribution of the initial state, and $P_{s,a}$ is the distribution of the next state given that the current state is s and the action taken is a. The reward granted when taking action a at state s and making a transition to state s′ is assumed to be a known deterministic and bounded function of s′, denoted by $r : S \to [-M, M]$. No generality is lost in specifying a known deterministic reward, since it is possible to augment the state variable by an additional random component whose distribution depends on the previous state and action, and specify the function r to extract this random component. Denote by $S_0, S_1, \ldots, S_T$ the random state variables. A non-stationary deterministic policy $\pi = (\pi_0, \pi_1, \ldots, \pi_{T-1})$ is a sequence of mappings $\pi_t : S \to A$, which are called controls. The control $\pi_t$ specifies the action taken at time t as a function of the state at time t. The expected sum of rewards of a non-stationary deterministic policy π is given by
$$V(\pi) = E_\pi\left\{\sum_{t=1}^{T} r(S_t)\right\}, \qquad (1)$$
where the expectation is taken with respect to the distribution over the random state variables induced by the policy π. We call V(π) the value of policy π. Non-stationary deterministic policies are considered since the optimal policy for a finite horizon MDP is non-stationary and deterministic [10]. Usually the optimal policy is defined as the policy that maximizes the value conditioned on the initial state, i.e.,
$$V_\pi(s) = E_\pi\left\{\sum_{t=1}^{T} r(S_t) \,\Big|\, S_0 = s\right\}, \qquad (2)$$
for any realization s of $S_0$ [10]. The policy that maximizes the conditional value given each realization of the initial state also maximizes the value averaged over the initial state, and it is the unique maximizer if the distribution of the initial state D is positive over S. Therefore, when optimizing over all possible policies, the maximizations of (1) and (2) are equivalent.
When optimizing (1) over a restricted class of policies which does not contain the optimal policy, the distribution over the initial state specifies the importance of different regions of the state space in terms of the approximation error. For example, assigning high probability to a certain region of S will favor policies that well approximate the optimal policy over that region. Alternatively, maximizing (1) when D is a point mass at state s is equivalent to maximizing (2). Following the generative model assumption of [5], the initial distribution D and the conditional distribution $P_{s,a}$ are unknown, but it is possible to generate realizations of the initial state according to D and of the next state according to $P_{s,a}$ for arbitrary state-action pairs (s, a). Given the generative model, n trajectory trees are constructed in the following manner. The root of each tree is a realization of $S_0$ generated according to the distribution D. Given the realization of the initial state, realizations of the next state $S_1$ under the L possible actions, denoted by $S_1|a$, $a \in A$, are generated. Note that this notation omits the dependence on the value of the initial state. Each of the L realizations of $S_1$ is now the root of a subtree. These iterations continue to generate a depth-T tree. Denote by $S_t|i_0, i_1, \ldots, i_{t-1}$ the random variable generated at the node that follows the sequence of actions $i_0, i_1, \ldots, i_{t-1}$. Hence, each tree is constructed using a single call to the initial-state generator and $L^T - 2$ calls to the next-state generator. Figure 1: A binary trajectory tree. Consider a class of policies Π, i.e., each element of Π is a sequence of T mappings from S to A. It is possible to estimate the value of any policy in the class from the set of trajectory trees by simply averaging the sum of rewards on each tree along the path that agrees with the policy [5]. Denote by $\hat{V}^i(\pi)$ the observed value on the i-th tree along the path that corresponds to the policy π.
Then the value of the policy π is estimated by
$$\hat{V}_n(\pi) = n^{-1}\sum_{i=1}^{n} \hat{V}^i(\pi). \qquad (3)$$
In [5], the authors show that with high probability (over the data set) $\hat{V}_n(\pi)$ converges uniformly to V(π) (1) with rates that depend on the VC-dimension of the policy class. This result motivates the use of policies π with high $\hat{V}_n(\pi)$, since with high probability these policies have high values of V(π). In this paper, we consider the problem of finding policies that obtain high values of $\hat{V}_n(\pi)$. 3 A Reduction From a Single Step Reinforcement Learning Problem to Weighted Classification The building block of the proposed algorithm is an exact reduction from single step reinforcement learning to a weighted classification problem. Consider the single step decision process. An initial state $S_0$ generated according to the distribution D is followed by one of L possible actions $A \in \{0, 1, \ldots, L-1\}$, which leads to a transition to state $S_1$ whose conditional distribution, given that the initial state is s and the action is a, is $P_{s,a}$. Given a class of policies Π, where each policy in Π is a map from S to A, the goal is to find
$$\hat{\pi} \in \arg\max_{\pi\in\Pi} \hat{V}_n(\pi). \qquad (4)$$
In this single step problem the data are n realizations of the random element $\{S_0, S_1|0, S_1|1, \ldots, S_1|L-1\}$. Denote the i-th realization by $\{s^i_0, s^i_1|0, s^i_1|1, \ldots, s^i_1|L-1\}$. In this case, $\hat{V}_n(\pi)$ can be written explicitly as
$$\hat{V}_n(\pi) = E_n\left\{\sum_{l=0}^{L-1} r(S_1|l)\,I(\pi(S_0) = l)\right\}, \qquad (5)$$
where for a function f, $E_n\{f(S_0, S_1|0, S_1|1, \ldots, S_1|L-1)\}$ is its empirical expectation $n^{-1}\sum_{i=1}^{n} f(s^i_0, s^i_1|0, s^i_1|1, \ldots, s^i_1|L-1)$, and I(·) is the indicator function taking the value one when its argument is true and zero otherwise. The following proposition shows that the problem of maximizing the empirical reward (5) is equivalent to a weighted classification problem.
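The empirical value (5) is a plain average over the single-step samples; a minimal sketch with a hypothetical two-action generative model (the model and all names are illustrative, not from the paper):

```python
import random

def empirical_value(samples, policy):
    """Eq. (5): hat V_n(pi) = E_n[ sum_l r(S1|l) I(pi(S0)=l) ].
    samples is a list of (s0, rewards) pairs, rewards[l] = r(s1|l)."""
    return sum(rewards[policy(s0)] for s0, rewards in samples) / len(samples)

# Hypothetical generative model on S = [0,1): action 0 is better for s0 <= 0.5,
# action 1 is better otherwise (rewards deterministic for simplicity).
rng = random.Random(1)
samples = []
for _ in range(1000):
    s0 = rng.random()
    r0 = 1.0 if s0 <= 0.5 else 0.2   # reward after action 0
    r1 = 0.2 if s0 <= 0.5 else 1.0   # reward after action 1
    samples.append((s0, [r0, r1]))

greedy = lambda s: 0 if s <= 0.5 else 1   # picks the better action everywhere
always0 = lambda s: 0

print(empirical_value(samples, greedy))    # exactly 1.0 for this deterministic model
print(empirical_value(samples, always0))   # roughly 0.6
```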
Proposition 1 Given a class of policies $\Pi$ and a set of $n$ trajectory trees,

$$\arg\max_{\pi \in \Pi} \mathbb{E}_n\left\{\sum_{l=0}^{L-1} r(S_1|l)\, I(\pi(S_0) = l)\right\} = \arg\min_{\pi \in \Pi} \mathbb{E}_n\left\{\sum_{l=0}^{L-1} \left(\max_k r(S_1|k) - r(S_1|l)\right) I(\pi(S_0) = l)\right\}. \qquad (6)$$

The proposition implies that the maximizer of the empirical reward over a class of policies is the output of an optimal weight-dependent classifier for the data set

$$\left\{\left(s^i_0,\ \arg\max_k r(s^i_1|k),\ w^i\right)\right\}_{i=1}^{n},$$

where, for each sample, the first argument is the example, the second is the label, and

$$w^i = \left(\max_k r(s^i_1|k) - r(s^i_1|0),\ \max_k r(s^i_1|k) - r(s^i_1|1),\ \ldots,\ \max_k r(s^i_1|k) - r(s^i_1|L-1)\right)$$

is the realization of the $L$ costs of classifying example $i$ to each of the possible labels. Note that the realizations of the costs are always non-negative and the cost of the correct classification ($\arg\max_k r(s^i_1|k)$) is always zero. The solution to the weighted classification problem is a map from $S$ to $A$ which minimizes the empirical weighted misclassification error (6). The proposition asserts that this mapping is also the control which maximizes the empirical reward (5).

Proof 1 For all $j \in \{0, 1, \ldots, L-1\}$,

$$\sum_{l=0}^{L-1} r(S_1|l)\, I(\pi(S_0) = l) = r(S_1|j) + \sum_{l=0}^{L-1} \left(r(S_1|l) - r(S_1|j)\right) I(\pi(S_0) = l). \qquad (7)$$

In addition,

$$\mathbb{E}_n\left\{\sum_{l=0}^{L-1} r(S_1|l)\, I(\pi(S_0) = l)\right\} = \sum_{j=0}^{L-1} \mathbb{E}_n\left\{I\left(\arg\max_k r(S_1|k) = j\right) \sum_{l=0}^{L-1} r(S_1|l)\, I(\pi(S_0) = l)\right\}.$$

Substituting (7), and noting that inside the $j$'th term $r(S_1|j) = \max_k r(S_1|k)$, we obtain

$$\mathbb{E}_n\left\{\sum_{l=0}^{L-1} r(S_1|l)\, I(\pi(S_0) = l)\right\} = \sum_{j=0}^{L-1} \mathbb{E}_n\left\{I\left(\arg\max_k r(S_1|k) = j\right) r(S_1|j)\right\} - \mathbb{E}_n\left\{\sum_{l=0}^{L-1} \left(\max_k r(S_1|k) - r(S_1|l)\right) I(\pi(S_0) = l)\right\}.$$

The first term on the right-hand side is independent of $\pi$, and the result follows.
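Proposition 1 can be checked numerically. In the sketch below (binary actions), the uniform initial state, the reward model, and the threshold policy class are illustrative choices of ours, not part of the paper; the final assertion verifies that maximizing the empirical reward (5) and minimizing the weighted misclassification error (6) select the same policy.

```python
import random

random.seed(0)

# One-step samples: initial state s0 ~ U[0,1]; rewards of the next states
# reached under actions 0 and 1 (a hypothetical reward model).
samples = []
for _ in range(200):
    s0 = random.random()
    r0 = s0 + 0.1 * random.random()          # reward after action 0
    r1 = (1 - s0) + 0.1 * random.random()    # reward after action 1
    samples.append((s0, r0, r1))

# Threshold policies: pi_theta(s) = 1 if s < theta else 0.
thetas = [i / 50 for i in range(51)]

def empirical_reward(theta):
    """Objective (5): average reward of the action chosen by the policy."""
    return sum((r1 if s0 < theta else r0) for s0, r0, r1 in samples) / len(samples)

def weighted_error(theta):
    """Objective (6): weighted misclassification error with label
    argmax_k r(S1|k) and cost |r(S1|0) - r(S1|1)| per mistake."""
    err = 0.0
    for s0, r0, r1 in samples:
        label = 1 if r1 > r0 else 0
        pred = 1 if s0 < theta else 0
        if pred != label:
            err += abs(r0 - r1)
    return err / len(samples)

best_reward_theta = max(thetas, key=empirical_reward)
best_class_theta = min(thetas, key=weighted_error)
assert best_reward_theta == best_class_theta  # Proposition 1, binary case
```

Because the empirical reward equals the per-sample maximum reward minus the weighted error, the two objectives are exact affine complements, so the equality of the selected thresholds holds for any sample draw, not just this seed.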
In the binary case, the optimization problem is

$$\arg\min_{\pi \in \Pi} \mathbb{E}_n\left\{\left|r(S_1|0) - r(S_1|1)\right|\, I\!\left(\pi(S_0) \neq \arg\max_k r(S_1|k)\right)\right\},$$

i.e., the single-step reinforcement learning problem reduces to the weighted classification problem with samples

$$\left\{\left(s^i_0,\ \arg\max_{k \in \{0,1\}} r(s^i_1|k),\ \left|r(s^i_1|0) - r(s^i_1|1)\right|\right)\right\}_{i=1}^{n},$$

where, for each sample, the first argument is the example, the second is the label, and the third is a realization of the cost incurred when misclassifying the example. Note that this is different from the reduction in [8]. When applying the reduction in [8] to our single-step problem, the costs are taken to be $\max_{k \in \{0,1\}} r(s^i_1|k)$ rather than $|r(s^i_1|0) - r(s^i_1|1)|$. Setting the costs to $\max_{k} r(s^i_1|k)$ instead of $|r(s^i_1|0) - r(s^i_1|1)|$ favors classifiers which perform well in regions where the maximal reward is large (regardless of the difference between the two actions) instead of regions where the difference between the rewards resulting from the two actions is large. It is easy to construct an example of a simple MDP and a restricted class of policies, not including the optimal policy, in which the classifier that minimizes the weighted misclassification problem with costs $\max_k r(s^i_1|k)$ is not equivalent to the optimal policy; with our reduction, they are always equivalent. On the other hand, in [8] the choice $\max_k r(s^i_1|k)$ led to a bound on the performance of the policy in terms of the performance of the classifier. We do not pursue bounds of this type here since, given the classifier, the performance of the resulting policy can be estimated directly from (5). Given a sequence of classifiers, the value of the induced sequence of controls (or policy) can be estimated directly by (3), with generalization guarantees provided by the bounds in [5]. In [2], a certain single-step binary reinforcement learning problem is converted to weighted classification by averaging multiple realizations of the rewards under the two possible actions for each state.
As seen here, this Monte Carlo approach is not necessary; it is sufficient to sample the rewards once for each state.

4 Finding Good Policies for a T-Step Markov Decision Process by Solving a Sequence of Weighted Classification Problems

Given the class of policies $\Pi$, the algorithm updates the controls $\pi_0, \ldots, \pi_{T-1}$ one at a time in a cyclic manner while holding the rest constant. Each update is formulated as a single-step reinforcement learning problem, which is then converted to a weighted classification problem. In practice, if the weighted classification problem is only approximately solved, then the new control is accepted only if it leads to a higher value of $\hat{V}$. When updating $\pi_t$, the trees are pruned from the root to stage $t$ by keeping only the branch which agrees with the controls $\pi_0, \pi_1, \ldots, \pi_{t-1}$. Then a single-step reinforcement learning problem is formulated at time step $t$, where the realization of the reward which follows action $a \in A$ at stage $t$ is the immediate reward obtained at the state which follows action $a$, plus the sum of rewards accumulated along the branch which agrees with the controls $\pi_{t+1}, \pi_{t+2}, \ldots, \pi_{T-1}$. The iterations end after the first complete cycle with no parameter modifications. Note that when updating $\pi_t$, each tree contributes one realization of the state at time $t$. A result of the pruning process is that the ensemble of state realizations is drawn from the distribution induced by the policy up to time $t-1$. In other words, the algorithm relaxes the requirement in [2] to have access to a baseline distribution, i.e., a distribution over the states that is induced by a good policy. Our algorithm automatically generates samples from distributions that are induced by a sequence of monotonically improving policies.

Figure 2: Updating $\pi_1$. In the example: pruning down according to $\pi_0(S_0) = 0$, propagating rewards up according to $\pi_2(S_2|00) = 1$ and $\pi_2(S_2|01) = 0$.
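The evaluation step underlying these updates, following a policy down each trajectory tree and averaging the path sums as in (3), can be sketched as follows; the generator callbacks (`init_state`, `next_state`, `reward`) are hypothetical stand-ins for the paper's unknown $D$, $P_{s,a}$, and reward function.

```python
def build_tree(init_state, next_state, T, L):
    """Recursively sample a depth-T trajectory tree with branching factor L.

    Each node stores its state and, for every action a in {0, ..., L-1},
    the subtree rooted at the sampled next state.
    """
    def expand(s, depth):
        node = {"state": s, "children": {}}
        if depth < T:
            for a in range(L):
                node["children"][a] = expand(next_state(s, a), depth + 1)
        return node
    return expand(init_state(), 0)

def tree_value(tree, policy, reward):
    """Sum of rewards along the unique path that agrees with the policy,
    where policy is a list of T maps from states to actions."""
    total, node, t = 0.0, tree, 0
    while node["children"]:
        a = policy[t](node["state"])
        node = node["children"][a]
        total += reward(node["state"])
        t += 1
    return total

def estimate_value(trees, policy, reward):
    """Empirical value of the policy: average of per-tree path sums (3)."""
    return sum(tree_value(t, policy, reward) for t in trees) / len(trees)
```

A cyclic update of the kind described above would repeatedly replace one `policy[t]` with a better map while `estimate_value` is used to accept or reject the change.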
Proposition 2 The algorithm converges after a finite number of iterations to a policy that cannot be further improved by changing one of the controls while holding the rest fixed.

Proof 2 Writing the empirical average sum of rewards $\hat{V}_n(\pi)$ explicitly as

$$\hat{V}_n(\pi) = \mathbb{E}_n\left\{\sum_{i_0, \ldots, i_{T-1} \in A^T} I(\pi_0(S_0) = i_0)\, I(\pi_1(S_1|i_0) = i_1) \cdots I(\pi_{T-1}(S_{T-1}|i_0, i_1, \ldots, i_{T-2}) = i_{T-1}) \sum_{t=1}^{T} r(S_t|i_0, i_1, \ldots, i_{t-1})\right\},$$

it can be seen that the algorithm is a Gauss-Seidel algorithm for maximizing $\hat{V}_n(\pi)$: at each iteration, optimization of $\pi_t$ is carried out at one of the stages $t$ while keeping $\pi_{t'}$, $t' \neq t$, fixed. At each iteration the previous control is a valid solution, and hence the objective function is non-decreasing. Since $\hat{V}_n(\pi)$ is evaluated using a finite number of trees, it can take only a finite set of values. Therefore, we must reach a cycle with no updates after a finite number of iterations. A cycle with no improvements implies that we cannot increase the empirical average sum of rewards by updating any one of the $\pi_t$'s.

5 Initialization

There are two possible initial policies that can be extracted from the set of trajectory trees. One possible initial policy is the myopic policy, which is computed from the root of the tree downwards. Starting from the root, $\pi_0$ is found by solving the single-stage reinforcement learning problem that results from taking into account only the immediate reward at the next state. Once the weighted classification problem is solved, the trees are pruned by following the action which agrees with $\pi_0$. The remaining realizations of state $S_1$ follow the distribution induced by the myopic control of the first stage. The process is continued to stage $T-1$. The second possible initial policy is computed from the leaves backward to the root. Note that the distribution of the state at a leaf chosen at random is the distribution of the state when a randomized policy is used.
Therefore, to find the best control at stage $T-1$, given that the previous $T-2$ controls choose random actions, we solve the weighted classification problem induced by considering either all the realizations of the state $S_{T-1}$ from all the trees (these are not independent observations) or one realization chosen randomly from each tree (these are independent realizations). Given the classifier, we use the equivalent control $\pi_{T-1}$ to propagate the rewards up to the previous stage and solve the resulting weighted classification problem. This is carried out recursively up to the root of the tree.

6 Extensions

The results presented in this paper generalize to the non-Markovian setting as well. In particular, when the state space, action space, and reward function depend on time, and the distribution over the next state depends on all past states and actions, we deal with non-stationary deterministic policies $\pi = (\pi_0, \pi_1, \ldots, \pi_{T-1})$, where $\pi_t : S_0 \times A_0 \times \cdots \times S_{t-1} \times A_{t-1} \times S_t \to A_t$, $t = 0, 1, \ldots, T-1$. POMDPs can be dealt with in terms of belief states, as a continuous-state-space MDP, or as a non-Markovian process in which policies depend directly on all past observations. While we focused on the trajectory tree method, the algorithm can easily be modified to solve the optimization problem associated with the random trajectory method [5] by adjusting the single-step reinforcement learning reduction and the pruning method presented here.

7 Illustrative Example

The following example illustrates the aspects of the problem and the components of our solution. The simulated system is a two-step MDP with continuous state space $S = [0, 1]$ and binary action space $A = \{0, 1\}$. The distribution over the initial state is uniform. Given state $s$ and action $a$, the next state $s'$ is generated by $s' = \mathrm{mod}(s + 0.33a + 0.1\,\mathrm{randn},\ 1)$, where $\mathrm{mod}(x, 1)$ is the fractional part of $x$ and $\mathrm{randn}$ is a Gaussian random variable independent of the other variables in the problem.
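A minimal simulation of this two-step MDP, assuming the reward $r(s) = s\sin(\pi s)$ and the threshold policy class defined in this example, and substituting a simple grid search over thresholds for the exact weighted-classification step:

```python
import math
import random

random.seed(1)
n = 20  # number of trajectory trees, as in the example

def next_state(s, a):
    return (s + 0.33 * a + 0.1 * random.gauss(0, 1)) % 1.0

def reward(s):
    return s * math.sin(math.pi * s)

def policy(s, theta):
    # pi(s; theta) as defined in the example, theta in [0, 2]
    if theta <= 1:
        return 1 if s > theta else 0
    return 1 if s < theta - 1 else 0

# Build n binary depth-2 trajectory trees: root s0, four leaves s2|a0a1.
trees = []
for _ in range(n):
    s0 = random.random()
    branches = {}
    for a0 in (0, 1):
        s1 = next_state(s0, a0)
        branches[a0] = (s1, {a1: next_state(s1, a1) for a1 in (0, 1)})
    trees.append((s0, branches))

def v_hat(theta0, theta1):
    """Empirical value (3): follow (theta0, theta1) down each tree."""
    total = 0.0
    for s0, branches in trees:
        s1, leaves = branches[policy(s0, theta0)]
        s2 = leaves[policy(s1, theta1)]
        total += reward(s1) + reward(s2)
    return total / n

# Cyclic (Gauss-Seidel) optimization, one threshold at a time.
grid = [i / 100 for i in range(201)]
th = [0.0, 0.0]
for _ in range(5):
    th[0] = max(grid, key=lambda t: v_hat(t, th[1]))
    th[1] = max(grid, key=lambda t: v_hat(th[0], t))
```

Each coordinate update can only increase the empirical value, mirroring the monotone-improvement argument of Proposition 2.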
The reward function is $r(s) = s\sin(\pi s)$. We consider a class of policies parameterized by a continuous parameter: $\Pi = \{\pi(\cdot; \theta)\,|\,\theta = (\theta_0, \theta_1) \in [0, 2]^2\}$, where $\pi_i(s; \theta_i) = 1$ when $\theta_i \leq 1$ and $s > \theta_i$, or when $\theta_i > 1$ and $s < \theta_i - 1$, and zero otherwise, $i = 0, 1$. In Figure 3 the objective function $\hat{V}_n(\pi(\theta))$, estimated from $n = 20$ trees, is presented as a function of $\theta_0$ and $\theta_1$, together with the path taken by the algorithm superimposed on the contour plot of $\hat{V}_n(\pi(\theta))$. Starting from the arbitrary point 0, the algorithm performs optimization with respect to one coordinate at a time and converges after 3 iterations.

Figure 3: The objective function $\hat{V}_n(\pi(\theta))$ and the path taken by the algorithm.

References

[1] N. Abe, B. Zadrozny, and J. Langford. An iterative method for multi-class cost-sensitive learning. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 3-11, 2004.
[2] J. Bagnell, S. Kakade, A. Ng, and J. Schneider. Policy search by dynamic programming. In Advances in Neural Information Processing Systems, volume 16. MIT Press, 2003.
[3] A. G. Barto and T. G. Dietterich. Reinforcement learning and its relationship to supervised learning. In J. Si, A. Barto, W. Powell, and D. Wunsch, editors, Handbook of Learning and Approximate Dynamic Programming. John Wiley and Sons, Inc., 2004.
[4] A. Fern, S. Yoon, and R. Givan. Approximate policy iteration with a policy language bias. In Advances in Neural Information Processing Systems, volume 16, 2003.
[5] M. Kearns, Y. Mansour, and A. Ng. Approximate planning in large POMDPs via reusable trajectories. In Advances in Neural Information Processing Systems, volume 12. MIT Press, 2000.
[6] M. Lagoudakis and R. Parr. Reinforcement learning as classification: Leveraging modern classifiers.
In Proceedings of the Twentieth International Conference on Machine Learning, 2003.
[7] J. Langford and A. Beygelzimer. Sensitive error correcting output codes. In Proceedings of the 18th Annual Conference on Learning Theory, pages 158-172, 2005.
[8] J. Langford and B. Zadrozny. Reducing T-step reinforcement learning to classification. http://hunch.net/~jl/projects/reductions/reductions.html, 2003.
[9] J. Langford and B. Zadrozny. Relating reinforcement learning performance to classification performance. In Proceedings of the Twenty-Second International Conference on Machine Learning, pages 473-480, 2005.
[10] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., 1994.
Off-Road Obstacle Avoidance through End-to-End Learning

Yann LeCun, Courant Institute of Mathematical Sciences, New York University, New York, NY 10004, USA, http://yann.lecun.com
Urs Muller, Net-Scale Technologies, Morganville, NJ 07751, USA, urs@net-scale.com
Jan Ben, Net-Scale Technologies, Morganville, NJ 07751, USA
Eric Cosatto, NEC Laboratories, Princeton, NJ 08540
Beat Flepp, Net-Scale Technologies, Morganville, NJ 07751, USA

Abstract

We describe a vision-based obstacle avoidance system for off-road mobile robots. The system is trained from end to end to map raw input images to steering angles. It is trained in supervised mode to predict the steering angles provided by a human driver during training runs collected in a wide variety of terrains, weather conditions, lighting conditions, and obstacle types. The robot is a 50 cm off-road truck with two forward-pointing wireless color cameras. A remote computer processes the video and controls the robot via radio. The learning system is a large 6-layer convolutional network whose input is a single left/right pair of unprocessed low-resolution images. The robot exhibits an excellent ability to detect obstacles and navigate around them in real time at speeds of 2 m/s.

1 Introduction

Autonomous off-road vehicles have vast potential applications in a wide spectrum of domains such as exploration, search and rescue, transport of supplies, environmental management, and reconnaissance. Building a fully autonomous off-road vehicle that can reliably navigate and avoid obstacles at high speed is a major challenge for robotics, and a new domain of application for machine learning research. The last few years have seen considerable progress toward that goal, particularly in areas such as mapping the environment from active range sensors and stereo cameras [11, 7], simultaneously navigating and building maps [6, 15], and classifying obstacle types.
Among the various sub-problems of off-road vehicle navigation, obstacle detection and avoidance is a subject of prime importance. The wide diversity of appearance of potential obstacles, and the variability of the surroundings, lighting conditions, and other factors, make the problem very challenging. Many recent efforts have attacked the problem by relying on a multiplicity of sensors, including laser range finders and radar [11]. While active sensors make the problem considerably simpler, there seems to be an interest from potential users in purely passive systems that rely exclusively on camera input. Cameras are considerably less expensive, less bulky, less power-hungry, and less detectable than active sensors, allowing levels of miniaturization that are not otherwise possible. More importantly, active sensors can be slow, limited in range, and easily confused by vegetation, despite rapid progress in the area [2]. Avoiding obstacles by relying solely on camera input requires solving a highly complex vision problem. A time-honored approach is to derive range maps from multiple images, obtained through multiple cameras or through motion [6, 5]. Deriving steering angles that avoid obstacles from the range maps is then a simple matter. A large number of techniques have been proposed in the literature to construct range maps from stereo images. Such methods have been used successfully for many years for navigation in indoor environments, where edge features can be reliably detected and matched [1], but navigation in outdoor environments, despite a long history, is still a challenge [14, 3]: real-time stereo algorithms are considerably less reliable in unconstrained outdoor environments. The extreme variability of lighting conditions, and the highly unstructured nature of natural objects such as tall grass, bushes and other vegetation, water surfaces, and objects with repeating textures, conspire to limit the reliability of this approach.
In addition, stereo-based methods have a rather limited range, which dramatically limits the maximum driving speed.

2 End-To-End Learning for Obstacle Avoidance

In general, computing depth from stereo images is an ill-posed problem, but the depth map is only a means to an end. Ultimately, the output of an obstacle avoidance system is a set of possible steering angles that direct the robot toward traversable regions. Our approach is to view the entire problem of mapping input stereo images to possible steering angles as a single indivisible task to be learned from end to end. Our learning system takes raw color images from two forward-pointing cameras mounted on the robot, and maps them to a set of possible steering angles through a single trained function. The training data was collected by recording the actions of a human driver together with the video data. The human driver remotely drives the robot straight ahead until the robot encounters a non-traversable obstacle. The human driver then avoids the obstacle by steering the robot in the appropriate direction. The learning system is trained in supervised mode. It takes a single pair of heavily subsampled images from the two cameras, and is trained to predict the steering angle produced by the human driver at that time. The learning architecture is a 6-layer convolutional network [9]. The network takes the left and right 149×58 color images and produces two outputs. A large value on the first output is interpreted as a left-steering command, while a large value on the second output indicates a right-steering command. Each layer in a convolutional network can be viewed as a set of trainable, shift-invariant linear filters with local support, followed by a point-wise non-linear saturation function. All the parameters of all the filters in the various layers are trained simultaneously. The learning algorithm minimizes the discrepancy between the desired output vector and the output vector produced by the output layer.
The approach is somewhat reminiscent of the ALVINN and MANIAC systems [13, 4]. The main differences with ALVINN are: (1) our system uses stereo cameras; (2) it is trained for off-road obstacle avoidance rather than road following; (3) our trainable system uses a convolutional network rather than a traditional fully-connected neural net. Convolutional networks have two considerable advantages for this application. Their local and sparse connection scheme allows us to handle images of higher resolution than ALVINN while keeping the size of the network within reasonable limits. Convolutional nets are particularly well suited to our task because local feature detectors that combine inputs from the left and right images can be useful for estimating distances to obstacles (possibly by estimating disparities). Furthermore, the local and shift-invariant property of the filters allows the system to learn relevant local features with a limited amount of training data. The key advantage of the approach is that the entire function from raw pixels to steering angles is trained from data, which completely eliminates the need for feature design and selection, geometry, camera calibration, and hand-tuning of parameters. The main motivation for the use of end-to-end learning is, in fact, to eliminate the need for hand-crafted heuristics. Relying on automatic global optimization of an objective function over massive amounts of data may produce systems that are more robust to the unpredictable variability of the real world. Another potential benefit of a pure learning-based approach is that the system may use cues other than stereo disparity to detect obstacles, possibly alleviating the short-sightedness of methods based purely on stereo matching.

3 Vehicle Hardware

We built a small and light-weight vehicle which can be carried by a single person, so as to facilitate data collection and testing in a wide variety of environments.
Using a small, rugged and low-cost robot allowed us to drive at relatively high speed without fear of causing damage to people, property, or the robot itself. The downside of this approach is the limited payload, too limited to hold the computing power necessary for the visual processing. Therefore, the robot has no significant on-board computing power. It is remotely controlled by an off-board computer. A wireless link is used to transmit video and sensor readings to the remote computer. Throttle and steering controls are sent from the computer to the robot through a regular radio control channel. The robot chassis was built around a customized 1/10-th scale remote-controlled, electric-powered, four-wheel-drive truck which was roughly 50 cm in length. The typical speed of the robot during data collection and testing sessions was roughly 2 meters per second. Two forward-pointing low-cost 1/3-inch CCD cameras were mounted 110 mm apart behind a clear lexan window. With 2.5 mm lenses, the horizontal field of view of each camera was about 100 degrees. A pair of 900 MHz analog video transmitters was used to send the camera outputs to the remote computer. The analog video links were subject to high signal noise, color shifts, frequent interferences, and occasional video drop-outs, but the small size, light weight, and low cost provided clear advantages. The vehicle is shown in Figure 1. The remote control station consisted of a 1.4 GHz Athlon PC running Linux with video capture cards, and an interface to an R/C transmitter.

Figure 1: Left: The robot is a modified 50 cm-long truck platform controlled by a remote computer. Middle: sample images from the training data. Right: poor reception occasionally caused bad quality images.

4 Data Collection

During a data collection session, the human operator wears video goggles fed with the video signal from one of the robot's cameras (no stereo), and controls the robot through a joystick connected to the PC.
During each run, the PC records the output of the two video cameras at 15 frames per second, together with the steering angle and throttle setting from the operator. A crucially important requirement of the data collection process was to collect large amounts of data with enough diversity of terrain, obstacles, and lighting conditions. It was also necessary for the human driver to adopt a consistent obstacle avoidance behavior. To ensure this, the human driver was to drive the vehicle straight ahead whenever no obstacle was present within a threatening distance. Whenever the robot approached an obstacle, the human driver had to steer left or right so as to avoid it. The general strategy for collecting training data was as follows: (a) collecting data from as large a variety of off-road training grounds as possible; data was collected from a large number of parks, playgrounds, front yards and backyards of a number of suburban homes, and heavily cluttered construction areas; (b) collecting data under various lighting conditions, i.e., different weather conditions and different times of day; (c) collecting sequences where the vehicle starts driving straight and then is steered left or right as the robot approaches an obstacle; (d) avoiding turns when no obstacles were present; (e) including straight runs with no obstacles and no turns as part of the training set; (f) trying to be consistent in the turning behavior, i.e., always turning at approximately the same distance from an obstacle. Even though great care was taken in collecting the highest quality training data, there were a number of imperfections in the training data that could not be avoided: (a) the small-form-factor, low-cost cameras presented significant differences in their default settings; in particular, the white balance settings of the two cameras were somewhat different; (b) to maximize image quality, the automatic gain control and automatic exposure were activated.
Because of differences in fabrication, the left and right images had slightly different brightness and contrast characteristics. In particular, the AGC adjustments seem to react at different speeds and amplitudes; (c) because of AGC, driving into the sunlight caused the images to become very dark and obstacles to become hard to detect; (d) the wireless video connection caused dropouts and distortions in some frames; approximately 5% of the frames were affected; an example is shown in Figure 1; (e) the cameras were mounted rigidly on the vehicle and were exposed to vibration, despite the suspension. Despite these difficult conditions, the system managed to learn the task quite well, as will be shown later. The data was recorded and archived at a resolution of 320×240 pixels at 15 frames per second. The data was collected on 17 different days during the Winter of 2003/2004 (the sun was very low on the horizon). A total of 1,500 clips were collected with an average length of about 85 frames each. This resulted in a total of about 127,000 individual pairs of frames. Segments during which the robot was driven into position in preparation for a run were edited out. No other manual data cleaning took place. In the end, 95,000 frame pairs were used for training and 32,000 for validation/testing. The training pairs and testing pairs came from different sequences (and often different locations). Figure 1 shows example snapshots from the training data, including an image with poor reception. Note that only one of the two (stereo) images is shown. High noise and frame dropouts occurred in approximately 5% of the frames. It was decided to leave them in the training set and test set so as to train the system under realistic conditions.

5 The Learning System

The entire processing consists of a single convolutional network. The architecture of convolutional nets is somewhat inspired by the structure of biological visual systems.
Convolutional nets have been used successfully in a number of vision applications such as handwriting recognition [9], object recognition [10], and face detection [12]. The input to the convolutional net consists of 6 planes of size 149×58 pixels. The six planes respectively contain the Y, U, and V components for the left camera and the right camera. The input images were obtained by cropping the 320×240 images, and through 2× horizontal low-pass filtering and subsampling and 4× vertical low-pass filtering and subsampling. The horizontal resolution was set higher so as to preserve more accurate image disparity information. Each layer in a convolutional net is composed of units organized in planes called feature maps. Each unit in a feature map takes inputs from a small neighborhood within the feature maps of the previous layer. Neighboring units in a feature map are connected to neighboring (possibly overlapping) windows. Each unit computes a weighted sum of its inputs and passes the result through a sigmoid saturation function. All units within a feature map share the same weights. Therefore, each feature map can be seen as convolving the feature maps of the previous layers with small-size kernels, and passing the sum of those convolutions through sigmoid functions. Units in a feature map detect local features at all locations on the previous layer. The first layer contains 6 feature maps of size 147×56, connected to various combinations of the input maps through 3×3 kernels. The first feature map is connected to the YUV planes of the left image, the second feature map to the YUV planes of the right image, and the other 4 feature maps to all 6 input planes. Those 4 feature maps are binocular, and can learn filters that compare the locations of features in the left and right images. Because of the weight sharing, the first layer has only 276 free parameters (30 kernels of size 3×3 plus 6 biases).
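The free-parameter count just quoted follows directly from the wiring described above; a quick arithmetic check (ours, not code from the paper):

```python
# First-layer kernel wiring as described: 2 monocular feature maps each
# see 3 input planes (YUV of one camera), 4 binocular maps see all 6.
kernels = 2 * 3 + 4 * 6          # 30 kernels in total
params = kernels * 3 * 3 + 6     # 3x3 weights per kernel, one bias per map
assert kernels == 30 and params == 276
```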
The next layer is an averaging/subsampling layer of size 49×14 whose purpose is to reduce the spatial resolution of the feature maps so as to build invariance to small geometric distortions of the input. The subsampling ratios are 3 horizontally and 4 vertically. The 3rd layer contains 24 feature maps of size 45×12. Each feature map is connected to various subsets of maps in the previous layer through a total of 96 kernels of size 5×3. The 4th layer is an averaging/subsampling layer of size 9×4 with 5×3 subsampling ratios. The 5th layer contains 100 feature maps of size 1×1, connected to the 4th layer through 2400 kernels of size 9×4 (full connection). Finally, the output layer contains two units fully connected to the 100 units in the 5th layer. The two outputs respectively code for "turn left" and "turn right" commands. The network has 3.15 million connections and about 72,000 trainable parameters. The bottom half of Figure 2 shows the states of the six layers of the convolutional net. The size of the input, 149×58, was essentially limited by the computing power of the remote computer (a 1.4 GHz Athlon). The network as shown runs in about 60 ms per image pair on the remote computer. Including all the processing, the driving system ran at a rate of 10 cycles per second. The system's output is computed on a frame-by-frame basis, with no memory of the past and no time window. Using multiple successive frames as input would seem like a good idea, since the multiple views resulting from ego-motion facilitate the segmentation and detection of nearby obstacles. Unfortunately, the supervised learning approach precludes the use of multiple frames. The reason is that since the steering is fairly smooth in time (with long, stable periods), the current rate of turn is an excellent predictor of the next desired steering angle. But the current rate of turn is easily derived from multiple successive frames.
Hence, a system trained with multiple frames would merely predict a steering angle equal to the current rate of turn as observed through the camera. This would lead to catastrophic behavior in test mode: the robot would simply turn in circles. The system was trained with a stochastic gradient-based method that automatically sets the relative step sizes of the parameters based on the local curvature of the loss surface [8]. Gradients were computed using the variant of back-propagation appropriate for convolutional nets.

6 Results

Two performance measurements were recorded: the average loss, and the percentage of "correctly classified" steering angles. The average loss is the sum of squared differences between the outputs produced by the system and the target outputs, averaged over all samples. The percentage of correctly classified steering angles measures how often the predicted steering angle, quantized into three bins (left, straight, right), agrees with the steering angle provided by the human driver. Since the thresholds for deciding whether an angle counted as left, center, or right were somewhat arbitrary, the percentages cannot be interpreted in absolute terms, but merely as a relative figure of merit for comparing runs and architectures.

Figure 2: Internal state of the convolutional net for two sample frames. The top row shows left/right image pairs extracted from the test set. The light-blue bars below show the steering angle produced by the system. The bottom halves show the states of the layers of the network, where each column is a layer (the penultimate layer is not shown). Each rectangular image is a feature map in which each pixel represents a unit activation. The YUV components of the left and right input images are in the leftmost column.

With 95,000 training image pairs, training took 18 epochs through the training set. No significant improvements in the error rate occurred thereafter.
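The feature-map sizes quoted in Section 5 are consistent with "valid" convolutions (each dimension shrinks by the kernel size minus one) followed by integer subsampling; a small sketch of ours verifying the chain from the 149×58 input to the 1×1 maps of the 5th layer:

```python
def conv_valid(w, h, kw, kh):
    """'Valid' convolution shrinks each dimension by kernel size minus one."""
    return w - kw + 1, h - kh + 1

def subsample(w, h, sw, sh):
    """Averaging/subsampling divides each dimension by its ratio."""
    return w // sw, h // sh

# Input planes: 149 wide x 58 tall (YUV left + YUV right = 6 planes)
w, h = 149, 58
w, h = conv_valid(w, h, 3, 3)   # layer 1: 3x3 kernels  -> 147 x 56
w, h = subsample(w, h, 3, 4)    # layer 2: 3x4 ratios   -> 49 x 14
w, h = conv_valid(w, h, 5, 3)   # layer 3: 5x3 kernels  -> 45 x 12
w, h = subsample(w, h, 5, 3)    # layer 4: 5x3 ratios   -> 9 x 4
w, h = conv_valid(w, h, 9, 4)   # layer 5: 9x4 kernels  -> 1 x 1
assert (w, h) == (1, 1)         # layer 6: two fully connected outputs
```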
After training, the error rate was 25.1% on the training set and 35.8% on the test set. The average loss (mean-squared error) was 0.88 on the training set and 1.24 on the test set. A complete training session required about four days of CPU time on a 3.0 GHz Pentium/Xeon-based server. Naturally, a classification error rate of 35.8% does not mean that the vehicle crashes into obstacles 35.8% of the time, but merely that the prediction of the system was in a different bin than that of the human driver for 35.8% of the frames. The seemingly high error rate is not an accurate reflection of the actual effectiveness of the robot in the field, for several reasons. First, there may be several legitimate steering angles for a given image pair: turning left or right around an obstacle may both be valid options, but our performance measure would record one of those options as incorrect. In addition, many illegitimate errors are recorded when the system starts turning at a different time than the human driver, or when the precise values of the steering angles are different enough to fall in different bins, but close enough to cause the robot to avoid the obstacle. Perhaps more informative is the diagram in Figure 3. It shows the steering angle produced by the system and the steering angle provided by the human driver for 8,000 frames from the test set. It is clear from the plot that only a small number of obstacles would not have been avoided by the robot. The best performance measure is a set of actual runs through representative testing grounds. Videos of typical test runs are available at http://www.cs.nyu.edu/~yann/research/dave/index.html. Figure 2 shows a snapshot of the trained system in action. The network was presented with a scene that was not present in the training set.
This figure shows that the system can detect obstacles and predict appropriate steering angles in the presence of back-lighting and with wide differences between the automatic gain settings of the left and right cameras. Another visualization of the results can be seen in Figure 4: snapshots of video clips recorded from the vehicle’s cameras while the vehicle was driving itself autonomously. Only one of the two camera outputs is shown here. Each picture also shows the steering angle produced by the system for that particular input. Figure 3: The steering angle produced by the system (black) compared to the steering angle provided by the human operator (red line) for 8000 frames from the test set. Very few obstacles would not have been avoided by the system. 7 Conclusion We have demonstrated the applicability of end-to-end learning methods to the task of obstacle avoidance for off-road robots. A 6-layer convolutional network was trained with massive amounts of data to emulate the obstacle avoidance behavior of a human driver. The architecture of the system allowed it to learn low-level and high-level features that reliably predicted the bearing of traversable areas in the visual field. The main advantage of the system is its robustness to the extreme diversity of situations in off-road environments. Its main design advantage is that it is trained from raw pixels to directly produce steering angles. The approach essentially eliminates the need for manual calibration, adjustments, parameter tuning, etc. Furthermore, the method gets around the need to design and select an appropriate set of feature detectors, as well as the need to design robust and fast stereo algorithms. The construction of a fully autonomous driving system for ground robots will require several other components besides the purely-reactive obstacle detection and avoidance system described here.
The present work is merely one component of a future system that will include map building, visual odometry, spatial reasoning, path finding, and other strategies for the identification of traversable areas. Acknowledgment This project was a preliminary study for the DARPA project “Learning Applied to Ground Robots” (LAGR). The material presented is based upon work supported by the Defense Advanced Research Project Agency Information Processing Technology Office, ARPA Order No. Q458, Program Code No. 3D10, Issued by DARPA/CMO under Contract #MDA972-03-C-0111. References [1] N. Ayache and O. Faugeras. Maintaining representations of the environment of a mobile robot. IEEE Trans. Robotics and Automation, 5(6):804–819, 1989. [2] C. Bergh, B. Kennedy, L. Matthies, and A. Johnson. A compact, low power two-axis scanning laser rangefinder for mobile robots. In The 7th Mechatronics Forum International Conference, 2000. [3] S. B. Goldberg, M. Maimone, and L. Matthies. Stereo vision and rover navigation software for planetary exploration. In IEEE Aerospace Conference Proceedings, March 2002. [4] T. Jochem, D. Pomerleau, and C. Thorpe. Vision-based neural network road and intersection detection and traversal. In Proc. IEEE Conf. Intelligent Robots and Systems, volume 3, pages 344–349, August 1995. Figure 4: Snapshots from the left camera while the robot drives itself through various environments. The black bar beneath each image indicates the steering angle produced by the system. Top row: four successive snapshots showing the robot navigating through a narrow passageway between a trailer, a backhoe, and some construction material. Bottom row, left: narrow obstacles such as table legs and poles (left), and solid obstacles such as fences (center-left) are easily detected and avoided. Highly textured objects on the ground do not distract the system from the correct response (center-right).
One scenario where the vehicle occasionally made wrong decisions is when the sun is in the field of view: the system seems to systematically drive towards the sun whenever the sun is low on the horizon (right). Videos of these sequences are available at http://www.cs.nyu.edu/˜yann/research/dave/index.html. [5] A. Kelly and A. Stentz. Stereo vision enhancements for low-cost outdoor autonomous vehicles. In International Conference on Robotics and Automation, Workshop WS-7, Navigation of Outdoor Autonomous Vehicles (ICRA ’98), May 1998. [6] D.J. Kriegman, E. Triendl, and T.O. Binford. Stereo vision and navigation in buildings for mobile robots. IEEE Trans. Robotics and Automation, 5(6):792–803, 1989. [7] E. Krotkov and M. Hebert. Mapping and positioning for a prototype lunar rover. In Proc. IEEE Int’l Conf. Robotics and Automation, pages 2913–2919, May 1995. [8] Y. LeCun, L. Bottou, G. Orr, and K. Muller. Efficient backprop. In G. Orr and K. Muller, editors, Neural Networks: Tricks of the Trade. Springer, 1998. [9] Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998. [10] Yann LeCun, Fu-Jie Huang, and Leon Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Proceedings of CVPR’04. IEEE Press, 2004. [11] L. Matthies, E. Gat, R. Harrison, B. Wilcox, R. Volpe, and T. Litwin. Mars microrover navigation: Performance evaluation and enhancement. In Proc. IEEE Int’l Conf. Intelligent Robots and Systems, volume 1, pages 433–440, August 1995. [12] R. Osadchy, M. Miller, and Y. LeCun. Synergistic face detection and pose estimation with energy-based model. In Advances in Neural Information Processing Systems (NIPS 2004). MIT Press, 2005. [13] Dean A. Pomerleau. Knowledge-based training of artificial neural networks for autonomous robot driving. In J. Connell and S. Mahadevan, editors, Robot Learning.
Kluwer Academic Publishing, 1993. [14] C. Thorpe, M. Hebert, T. Kanade, and S. Shafer. Vision and navigation for the Carnegie-Mellon Navlab. IEEE Trans. Pattern Analysis and Machine Intelligence, 10(3):362–372, May 1988. [15] S. Thrun. Learning metric-topological maps for indoor mobile robot navigation. Artificial Intelligence, 99(1):21–71, February 1998.
Modeling Neuronal Interactivity using Dynamic Bayesian Networks Lei Zhang†,‡, Dimitris Samaras†, Nelly Alia-Klein‡, Nora Volkow‡, Rita Goldstein‡ † Computer Science Department, SUNY at Stony Brook, Stony Brook, NY ‡ Medical Department, Brookhaven National Laboratory, Upton, NY Abstract Functional Magnetic Resonance Imaging (fMRI) has enabled scientists to look into the active brain. However, interactivity between functional brain regions is still little studied. In this paper, we contribute a novel framework for modeling the interactions between multiple active brain regions, using Dynamic Bayesian Networks (DBNs) as generative models for brain activation patterns. This framework is applied to the modeling of neuronal circuits associated with reward. The novelty of our framework from a Machine Learning perspective lies in the use of DBNs to reveal brain connectivity and interactivity. Such interactivity models, derived from fMRI data, are then validated through a group classification task. We employ and compare four different types of DBNs: Parallel Hidden Markov Models, Coupled Hidden Markov Models, Fully-linked Hidden Markov Models and Dynamically Multi-Linked HMMs (DML-HMMs). Moreover, we propose and compare two schemes of learning DML-HMMs. Experimental results show that by using DBNs, group classification can be performed even if the DBNs are constructed from as few as 5 brain regions. We also demonstrate that, by using the proposed learning algorithms, different DBN structures characterize drug addicted subjects vs. control subjects. This finding provides an independent test for the effect of psychopathology on brain function. In general, we demonstrate that incorporation of computer science principles into functional neuroimaging clinical studies provides a novel approach for probing human brain function. 1.
Introduction Functional Magnetic Resonance Imaging (fMRI) has enabled scientists to look into the active human brain [1] by providing sequences of 3D brain images with intensities representing blood oxygenation level dependent (BOLD) regional activations. This has revealed exciting insights into the spatial and temporal changes underlying a broad range of brain functions, such as how we see, feel, move, understand each other and lay down memories. This fMRI technology offers further promise by imaging the dynamic aspects of the functioning human brain. Indeed, fMRI has encouraged a growing interest in revealing brain connectivity and interactivity within the neuroscience community. It is for example understood that dynamically managed goal-directed behavior requires neural control mechanisms orchestrated to select the appropriate and task-relevant responses while inhibiting irrelevant or inappropriate processes [12]. To date, the analyses and interpretation of fMRI data that are most commonly employed by neuroscientists depend on the cognitive-behavioral probes that are developed to tap regional brain function. Thus, brain responses are a-priori labeled based on the putative underlying task condition and are then used to separate a priori defined groups of subjects. In recent computer science research [18][13][3][19], machine learning methods have been applied to fMRI data analysis. However, in these approaches information on the connectivity and interactivity between brain voxels is discarded and brain voxels are assumed to be independent, which is an inaccurate assumption (see use of statistical maps [3][19] or the mean of each fMRI time interval [13]). In this paper, we exploit Dynamic Bayesian Networks for modeling dynamic (i.e., connecting and interacting) neuronal circuits from fMRI sequences.
We suggest that through incorporation of graphical models into functional neuroimaging studies we will be able to identify neuronal patterns of connectivity and interactivity that will provide invaluable insights into basic emotional and cognitive neuroscience constructs. We further propose that this cross-disciplinary incorporation may provide a valid tool where objective brain imaging data are used for the clinical purpose of diagnosis of psychopathology. Specifically, in our case study we will model neuronal circuits associated with reward processing in drug addiction. We have previously shown loss of sensitivity to the relative value of money in cocaine users [9]. It has also been previously highlighted that the complex mechanism of drug addiction requires the connectivity and interactivity between regions comprising the mesocorticolimbic circuit [12][8]. However, although advancements have been made in studying this circuit’s role in inhibitory control and reward processing, inference about the connectivity and interactivity of these regions is at best indirect. Dynamic causal models have been compared in [16]. Compared with dynamic causal models, DBNs admit a class of nonlinear continuous-time interactions among the hidden states and model both causal relationships between brain regions and temporal correlations among multiple processes, useful for both classification and prediction purposes. Probabilistic graphical models [14][11] are graphs in which nodes represent random variables, and the (lack of) arcs represent conditional independence assumptions. In our case, interconnected brain regions can be considered as nodes of a probabilistic graphical model and interactivity relationships between regions are modeled by probability values on the arcs (or the lack of arcs) between these nodes.
However, the major challenge in such a machine learning approach is the choice of a particular structure that models connectivity and interactivity between brain regions in an accurate and efficient manner. In this work, we contribute a framework that exploits Dynamic Bayesian Networks to model such a structure for the fMRI data. More specifically, instead of modeling each brain region in isolation, we aim to model the interactive pattern of multiple brain regions. Furthermore, the revealed functional information is validated through a group classification case study: separating drug addicted subjects from healthy non-drug-using controls based on trained Dynamic Bayesian Networks. Both conventional BBNs and HMMs are unsuitable for modeling activities underpinned not only by causal but also by clear temporal correlations among multiple processes [10], and Dynamic Bayesian Networks [5][7] are required. Since the state of each brain region is not known (only observations of activation exist), it can be thought of as a hidden variable [15]. An intuitive way to construct a DBN is to extend a standard HMM to a set of interconnected multiple HMMs. For example, Vogler et al. [17] proposed Parallel Hidden Markov Models (PaHMMs) that factorize state space into multiple independent temporal processes without causal connections in-between. Brand et al. [2] exploited Coupled Hidden Markov Models (CHMMs) for complex action recognition. Gong et al. [10] developed a Dynamically Multi-Linked Hidden Markov Model (DML-HMM) for the recognition of group activities involving multiple different object events in a noisy outdoor scene. The DML-HMM is the only one of these models that learns both the structure and parameters of the graphical model, instead of presuming a structure (possibly inaccurate) given the lack of knowledge of human brain connectivity. In order to model the dynamic neuronal circuits underlying reward processing in the human brain, we explore and compare the above DBNs.
We propose and compare two learning schemes for DML-HMMs: one is greedy structure search (Hill-Climbing) and the other is Structural Expectation-Maximization (SEM). To our knowledge, this is the first time that Dynamic Bayesian Networks have been exploited in modeling the connectivity and interactivity among brain regions activated during an fMRI study. Our current experimental classification results show that by using DBNs, group classification can be performed even if the DBNs are constructed from as few as 5 brain regions. We also demonstrate that, by using the proposed learning algorithms, different DBN structures characterize drug addicted subjects vs. control subjects, which provides an independent test for the effects of psychopathology on brain function. From the machine learning point of view, this paper provides an innovative application of Dynamic Bayesian Networks in modeling dynamic neuronal circuits. Furthermore, since the structures to be explored are exclusively represented by hidden (cannot be observed directly) states and their interconnecting arcs, the structure learning of DML-HMMs poses a greater challenge than for other DBNs [5]. From the neuroscientific point of view, drug addiction is a complex disorder characterized by compromised inhibitory control and reward processing. However, individuals with compromised mechanisms of control and reward are difficult to identify unless they are directly subjected to challenging conditions. Modeling the interactive brain patterns is therefore essential since such patterns may be unique to a certain psychopathology and could hence be used for improving diagnosis and prevention efforts (e.g., diagnosis of drug addiction, prevention of relapse or craving).
In addition, the development of this framework can be applied to further our understanding of other human disorders and states, such as those impacting insight and awareness, that similarly to drug addiction are currently identified based mostly on subjective criteria and self-report. Figure 1: Four types of Dynamic Bayesian Networks: PaHMM, CHMM, FHMM and DML-HMM. 2. Dynamic Bayesian Networks In this section, we will briefly describe the general framework of Dynamic Bayesian Networks. DBNs are Bayesian Belief Networks that have been extended to model the stochastic evolution of a set of random variables over time [5][7]. As described in [10], a DBN B can be represented by two sets of parameters (m, Θ), where the first set m represents the structure of the DBN, including the number of hidden state variables S and observation variables O per time instance, the number of states for each hidden state variable, and the topology of the network (the set of directed arcs connecting the nodes). More specifically, the i-th hidden state variable and the j-th observation variable at time instance t are denoted as $S_t^{(i)}$ and $O_t^{(j)}$, with $i \in \{1, \ldots, N_h\}$ and $j \in \{1, \ldots, N_o\}$, where $N_h$ and $N_o$ are the numbers of hidden state variables and observation variables respectively. The second set of parameters Θ includes the state transition matrix A, the observation matrix B and a matrix π modeling the initial state distribution $P(S_1^{(i)})$. More specifically, A and B quantify the transition models $P(S_t^{(i)} \mid Pa(S_t^{(i)}))$ and observation models $P(O_t^{(j)} \mid Pa(O_t^{(j)}))$ respectively, where $Pa(S_t^{(i)})$ denotes the parents of $S_t^{(i)}$ (similarly $Pa(O_t^{(j)})$ for observations).
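The (m, Θ) parameterization above can be sketched concretely for binary hidden chains. This is an illustrative assumption, not the authors' implementation: the structure matrix, uniform probabilities, and the CHMM-like arc pattern are all placeholders.

```python
import numpy as np

Nh = 2                             # number of hidden chains (e.g. brain regions)
# Structure m over hidden nodes: m[j, i] = 1 means an arc from
# S^(j)_{t-1} to S^(i)_t (inter-slice arcs only, as in the text).
m = np.array([[1, 1],              # chain 0 keeps its own history and
              [0, 1]])             # also influences chain 1 (CHMM-like)

pi = np.full((Nh, 2), 0.5)         # initial state distributions P(S^(i)_1)

# Transition model A: one conditional probability table per chain i,
# with one row per joint state of its parents Pa(S^(i)_t) and one
# column per own state; uniform values stand in for learned ones.
A = {i: np.full((2 ** int(m[:, i].sum()), 2), 0.5) for i in range(Nh)}
```

Chain 1 has two binary parents, so its table has 2² = 4 rows, while chain 0 has one parent and 2 rows; the observation models (here omitted) would similarly be indexed by each observation node's discrete parent.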
In this paper, we will examine four types of DBNs: Parallel Hidden Markov Models (PaHMM) [17], Coupled Hidden Markov Models (CHMM) [2], Fully Connected Hidden Markov Models (FHMM) and Dynamically Multi-Linked Hidden Markov Models (DML-HMM) [10], as shown in Fig. 1, where observation nodes are shown as shaded circles, hidden nodes as clear circles, and the causal relationships among hidden state variables are represented by the arcs between hidden nodes. Notice that the first three DBNs are essentially three special cases of the DML-HMM. 2.1. Learning of DBNs Given the form of DBNs in the previous sections, there are two learning problems that must be solved for real-world applications: 1) Parameter Learning: assuming a fixed structure, given the training sequences of observations O, how do we adjust the model parameters of B = (m, Θ) to maximize P(O|B); 2) Structure Learning: for DBNs with unknown structure (i.e. DML-HMMs), how do we learn the structure from the observations O. Parameter learning has been well studied in [17][2]. Given a fixed structure, parameters can be learned iteratively using Expectation-Maximization (EM). The E step, which involves the inference of hidden states given parameters, can be implemented using an exact inference algorithm such as the junction tree algorithm. Then the parameters and the maximal likelihood L(Θ) can be computed iteratively from the M step. In [10], the DML-HMM was selected from a set of candidate structures; however, the selection of candidate structures is non-trivial for most applications, including brain region connectivity. For a DML-HMM with N hidden nodes, the total number of different structures is $2^{N^2-N}$, thus it is impossible to conduct an exhaustive search in most cases.
The learning of DBNs involving both parameter learning and structure learning has been discussed in [5], where the scoring rules for standard probabilistic networks were extended to the dynamic case and the Structural EM (SEM) algorithm was developed for structure learning when some of the variables are hidden. The structure learning of DML-HMMs is more challenging since the structures to be explored are exclusively represented by the hidden states and none of them can be directly observed. In the following, we will explain two learning schemes for the DML-HMMs. One standard way is to perform parametric EM within an outer-loop structural search. Thus, our first scheme is to use an outer loop of the Hill-Climbing algorithm (DML-HMM-HC). At each step of the algorithm, from the current DBN, we first compute a neighbor list by adding, deleting, or reversing one arc. Then we perform parameter learning for each of the neighbors and move to the neighbor with the minimum score, until no neighbor has a better score than the current DBN. Our second learning scheme is similar to the Structural EM algorithm [5] in the sense that the structural and parametric modifications are performed within a single EM process. As described in [5][4], a structural search can be performed efficiently given complete observation data. However, as described above, the structure of DML-HMMs is represented by hidden states which cannot be observed directly. Hence, we develop the DML-HMM-SEM algorithm as follows: given the current structure, we first perform parameter learning and then, for each training sequence, we compute the Most Probable Explanation (MPE), which gives the most likely value for each hidden node (similar to Viterbi decoding in a standard HMM). The MPE thus provides a complete estimation of the hidden states, and a complete-data structural search [4] is then performed to find the best structure. We perform learning iteratively until the structure converges.
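The neighbor step of the Hill-Climbing scheme (add, delete, or reverse one arc) can be sketched over an N × N arc matrix. This is a hedged sketch: the function name is an assumption, and any structural constraints the authors may impose (e.g. bounding the number of parents) are omitted.

```python
import numpy as np

def generate_neighbors(adj):
    """Enumerate all structures one arc-edit away from `adj`, an N x N
    0/1 matrix of inter-slice arcs over the hidden nodes: add or delete
    a single arc i -> j, or reverse an existing arc. Self-arcs are
    excluded, matching the 2^(N^2 - N) structure count in the text."""
    N = adj.shape[0]
    neighbors = []
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            nb = adj.copy()
            nb[i, j] = 1 - nb[i, j]          # add or delete arc i -> j
            neighbors.append(nb)
            if adj[i, j] and not adj[j, i]:  # reverse arc i -> j
                rv = adj.copy()
                rv[i, j], rv[j, i] = 0, 1
                neighbors.append(rv)
    return neighbors
```

With each candidate then scored after parameter learning, the search moves to the lowest-scoring neighbor, which is why exhaustive enumeration of all $2^{N^2-N}$ structures is avoided.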
In this scheme, the structural search is performed in the inner loop, thus making the learning more efficient. Pseudocode for both learning schemes is given in Table 1. In this paper, we use Schwarz’s Bayesian Information Criterion (BIC), $\mathrm{BIC} = -2 \log L(\Theta_B) + K_B \log N$, as our score function, where for a DBN B, $L(\Theta_B)$ is the maximal likelihood under B, $K_B$ is the dimension of the parameters of B and N is the size of the training data. Theoretically, the DML-HMM-SEM algorithm is not guaranteed to converge, since for the same training data the most probable explanations $(S_i, S_j)$ of two DML-HMMs $B_i$, $B_j$ might be different. In the worst case, oscillation between two structures is possible. To guarantee halting of the algorithm, a loop detector can be added so that, once any structure is selected a second time, we stop the learning and select the structure with the minimum score visited during the search. However, in our experiments, the learning algorithm always converged in a few steps.

Procedure DML-HMM-HC:
  Initial_Model(B_0);
  Loop i = 0, 1, ... until convergence:
    [B'_i, score^0_i] = Learn_Parameter(B_i);
    B^{1..J}_i = Generate_Neighbors(B_i);
    for j = 1..J:
      [B^{j'}_i, score^j_i] = Learn_Parameter(B^j_i);
    j = Find_Minscore(score^{1..J}_i);
    if score^j_i > score^0_i: return B'_i;
    else: B_{i+1} = B^j_i;

Procedure DML-HMM-SEM:
  Initial_Model(B_0);
  Loop i = 0, 1, ... until convergence:
    [B'_i, score^0_i] = Learn_Parameter(B_i);
    S = Most_Prob_Expl(B'_i, O);
    B^max_i = Find_Best_Struct(S);
    if B^max_i == B'_i: return B'_i;
    else: B_{i+1} = B^max_i;

Table 1: Two schemes of learning DML-HMMs: DML-HMM-HC (top) and DML-HMM-SEM (bottom). 3. Modeling Reward Neuronal Circuits: A Case Study In this section, we will describe our case study of modeling reward neuronal circuits: by using DBNs, we aim to model the interactive pattern of multiple brain regions for the neuropsychological problem of sensitivity to the relative value of money.
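The BIC score function used for structure selection above maps directly to a one-line computation; under this sign convention, lower scores are better.

```python
import math

def bic_score(log_likelihood, num_params, num_samples):
    """Schwarz's Bayesian Information Criterion as used for model
    selection: BIC = -2 log L(Theta_B) + K_B log N, where K_B is the
    number of free parameters and N the training-set size."""
    return -2.0 * log_likelihood + num_params * math.log(num_samples)
```

The second term penalizes richer structures: at equal likelihood, a DML-HMM with more arcs (hence more transition parameters) scores worse, which is what keeps the greedy search from always preferring the fully connected model.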
Furthermore, we will examine the revealed functional information encapsulated in the trained DBNs through a group classification study: separating drug addicted subjects from healthy non-drug-using controls based on trained DBNs. 3.1. Data Collection and Preprocessing In our experiments, data were collected to study the neuropsychological problem of loss of sensitivity to the relative value of money in cocaine users [9]. MRI studies were performed on a 4T Varian scanner and all stimuli were presented using LCD goggles connected to a PC. Human participants pressed a button or refrained from pressing based on a picture shown to them. They received a monetary reward if they performed correctly. Specifically, three runs were repeated twice (T1, T2, T3; and T1R, T2R, T3R) and in each run, there were three monetary conditions (high money, low money, no money) and a baseline condition; the order of monetary conditions was pseudo-randomized and identical for all participants. Participants were informed about the monetary condition by a 3-sec instruction slide presenting the stimuli: $0.45, $0.01 or $0.00. Feedback for correct responses in each condition consisted of the respective numeral designating the amount of money the subject had earned if correct, or the symbol (X) otherwise. To simulate real-life motivational salience, subjects could gain up to $50 depending on their performance on this task. 16 cocaine dependent individuals, 18-55 years of age, in good health, were matched with 12 non-drug-using controls on sex, race, education and general intellectual functioning. Statistical Parametric Mapping (SPM) [6] was used for fMRI data preprocessing (realignment, normalization/registration and smoothing) and statistical analyses. 3.2. Feature Selection and Neuronal Circuit Modeling The fMRI data are extremely high dimensional (i.e. 53 × 63 × 46 voxels per scan).
Prior to training the DBN, we selected 5 brain regions: Left Inferior Frontal Gyrus (Left IFG), Prefrontal Cortex (PFC, including lateral and medial dorsolateral PFC and the anterior cingulate), Midbrain (including substantia nigra), Thalamus and Cerebellum. These regions were selected based on prior SPM random-effects analyses (ANOVA), where the goal was to differentiate the effect of money (high, low, no) from the effect of group (cocaine, control) on all regions that were activated to monetary reward in all subjects. Figure 2: Learning processes and learned structures from the two algorithms. The leftmost column demonstrates two (superimposed) learned structures, where light gray dashed arcs (long dash) are learned from DML-HMM-HC, dark gray dashed arcs (short dash) from DML-HMM-SEM and black solid arcs from both. The right columns show the transient structures of the learning processes of the two algorithms, where black represents the existence of an arc and white represents no arc. In all these five regions, the monetary main effect was significant as evidenced by region-of-interest follow-up analyses. Of note is the fact that these five regions are part of the mesocorticolimbic reward circuit, previously implicated in addiction. Each of the above brain regions is represented by a k-D feature vector where k is the number of brain voxels selected in this brain region (i.e. k = 3 for Left IFG and k = 8 for PFC). After feature selection, a DML-HMM with 5 hidden nodes can be learned as described in Sec. 2 from the training data. The leftmost image in Fig. 2 shows two superimposed possible structures of such DML-HMMs. The causal relationships discovered among different brain regions are embodied in the topology of the DML-HMM. Each of the five hidden variables has two states (activated or not) and each continuous observation variable (given by a k-D feature vector) represents the observed activation of each brain region.
The probability distribution function (PDF) of each observation variable is a mixture of Gaussians conditioned on the state of its discrete parent node. Figure 3: The left three images show the structures learned from the 3 subsets of Group C and the right three images show those learned from subsets of Group S. The figure shows that some arcs consistently appeared in Group C but not consistently in Group S (marked in dark gray) and vice versa (marked in light gray), which implies such group differences in the interactive brain patterns may correspond to the loss of sensitivity to the relative value of money in cocaine users. 4. Experiments and Results We collected fMRI data of 16 drug addicted subjects and 12 control subjects, 6 runs per participant. Due to head motion, some data could not be used. In our experiments, we used a total of 152 fMRI sequences (87 scans per sequence), with 86 sequences for the drug addicted subjects (Group S) and 66 for the control subjects (Group C). First we compare the two learning schemes for DML-HMMs proposed in Sec. 2. Fig. 2 demonstrates the learning process (initialized with the FHMM) for drug addicted subjects. The leftmost column shows two learned structures where red arcs are learned from DML-HMM-HC, green arcs from DML-HMM-SEM and black arcs from both. The right columns show the learning processes of DML-HMM-SEM (top) and DML-HMM-HC (bottom), with black representing the existence of an arc and white representing no arc. Since in DML-HMM-SEM structure learning is in the inner loop, the learning process is much faster than that of DML-HMM-HC. We also compared the BIC scores of the learned structures and found that DML-HMM-SEM selected better structures than DML-HMM-HC. It is also very interesting to examine the structure learning processes when using different training data. For each participant group, we randomly separated the data set into three subsets, and the trained DBNs are reported in Fig.
3, where the left three images show the structures learned from the 3 subsets of Group C and the right three images show those learned from subsets of Group S. In Fig. 3, we found that the learned structures within each group are similar. We also found that some arcs consistently appeared in Group C but not consistently in Group S (marked in red) and vice versa (marked in green), which implies such group differences in the interactive brain patterns may correspond to the loss of sensitivity to the relative value of money in cocaine users. More specifically, in Fig. 3, the average intra-group similarity scores were 80% and 78.3%, while the cross-group similarity was 56.7%. Figure 4: Classification results: All DBN methods significantly improved classification rates compared to k-Nearest Neighbor, with DML-HMM performing best. The second set of experiments was to apply the trained DBNs to group classification. In our data collection, there were 6 runs of fMRI collection: T1, T2, T3, T1R, T2R and T3R, with the latter three repeating the former three, grouped into 4 data sets {T1, T2, T3, ALL} with ALL containing all the data. We performed classification experiments on each of the 4 data sets, where the data were randomly divided into a training set and a testing set of equal size. During training, the four described DBN types were trained on the training set; during the learning of DML-HMMs, different initial structures (PaHMM, CHMM, FHMM) were used and the structure with the minimum BIC score was selected from the three learned DML-HMMs. For each model, two DBNs $\{B_c, B_s\}$ were trained on the training data of Group C and Group S respectively. During testing, for each testing fMRI sequence $O^{test}$, we computed two likelihoods $P_c^{test} = P(O^{test} \mid B_c)$ and $P_s^{test} = P(O^{test} \mid B_s)$ using the two trained DBNs. Since the two DBNs may have different structures, instead of directly comparing the two likelihoods, we used the difference between these two likelihoods for classification.
More specifically, during training, for each training sequence $TR_i$, we computed the ratio of the two likelihoods $R^{TR_i} = P_c^i / P_s^i$, where $P_c^i = P(TR_i \mid B_c)$ and $P_s^i = P(TR_i \mid B_s)$. As expected, the ratios of Group C training data were generally significantly greater than those of Group S. During testing, the ratio $R^{test} = P_c^{test} / P_s^{test}$ for each test sequence was also computed and compared to the ratios of the training data for classification. Fig. 4 reports the classification rates of the different DBNs on each data set. For comparison, the k-Nearest Neighbor (KNN) algorithm was applied on the fMRI sequences directly, and Fig. 4 shows that by using DBNs, classification rates are significantly better, with DML-HMM outperforming all other models. 5. Conclusions and Future Work In this work, we contributed a framework of exploiting Dynamic Bayesian Networks to model the functional information of fMRI data. We explored four types of DBNs: a Parallel Hidden Markov Model (PaHMM), a Coupled Hidden Markov Model (CHMM), a Fully-linked Hidden Markov Model (FHMM) and a Dynamically Multi-Linked Hidden Markov Model (DML-HMM). Furthermore, we proposed and compared two structural learning schemes of DML-HMMs and applied the DBNs to a group classification problem. To our knowledge, this is the first time that Dynamic Bayesian Networks have been exploited in modeling the connectivity and interactivity among brain regions from fMRI data. This framework of exploring the functional information of fMRI data provides a novel approach to revealing brain connectivity and interactivity and provides an independent test for the effect of psychopathology on brain function. Currently, the DBNs use independently pre-selected brain regions, thus some other important interactivity information may have been discarded in the feature selection step. Our future work will focus on developing a dynamic neuronal circuit modeling framework performing feature selection and DBN learning simultaneously.
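The ratio-based decision rule can be sketched as follows. The text says only that test ratios are "compared to the ratios of the training data", so the nearest-ratio comparison below is an assumption, and log-ratios are used here purely for numerical convenience.

```python
import numpy as np

def classify_by_ratio(test_logratio, train_logratios_c, train_logratios_s):
    """Assign a test sequence to the group whose training log-ratios
    log(P_c / P_s) lie closest to the test sequence's log-ratio.
    A hedged sketch of the paper's comparison step, not its exact rule."""
    d_c = np.min(np.abs(np.asarray(train_logratios_c, float) - test_logratio))
    d_s = np.min(np.abs(np.asarray(train_logratios_s, float) - test_logratio))
    return 'control' if d_c <= d_s else 'addicted'
```

Comparing ratios rather than raw likelihoods sidesteps the issue noted in the text: the two trained DBNs may have different structures, so their absolute likelihoods are not directly comparable.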
Due to computational limits and for clarity purposes, we explored only 5 brain regions and thus another direction of future work is to develop a hierarchical DBN topology to comprehensively model all implicated brain regions efficiently.
A Theoretical Analysis of Robust Coding over Noisy Overcomplete Channels Eizaburo Doi1, Doru C. Balcan2, & Michael S. Lewicki1,2 1Center for the Neural Basis of Cognition, 2Computer Science Department, Carnegie Mellon University, Pittsburgh, PA 15213 {edoi,dbalcan,lewicki}@cnbc.cmu.edu Abstract Biological sensory systems are faced with the problem of encoding a high-fidelity sensory signal with a population of noisy, low-fidelity neurons. This problem can be expressed in information theoretic terms as coding and transmitting a multi-dimensional, analog signal over a set of noisy channels. Previously, we have shown that robust, overcomplete codes can be learned by minimizing the reconstruction error with a constraint on the channel capacity. Here, we present a theoretical analysis that characterizes the optimal linear coder and decoder for one- and two-dimensional data. The analysis allows for an arbitrary number of coding units, thus including both under- and over-complete representations, and provides a number of important insights into optimal coding strategies. In particular, we show how the form of the code adapts to the number of coding units and to different data and noise conditions to achieve robustness. We also report numerical solutions for robust coding of high-dimensional image data and show that these codes are substantially more robust than other image codes such as ICA and wavelets. 1 Introduction In neural systems, the representational capacity of a single neuron is estimated to be as low as 1 bit/spike [1, 2]. The characteristics of the optimal coding strategy under such conditions, however, remain an open question. Recent efficient coding models for sensory coding, such as sparse coding and ICA, have provided many insights into visual sensory coding (for a review, see [3]), but those models made the implicit assumption that the representational capacity of individual neurons was infinite.
Intuitively, such a limit on representational precision should strongly influence the form of the optimal code. In particular, it should be possible to increase the number of limited-capacity units in a population to form a more precise representation of the sensory signal. However, to the best of our knowledge, such a code has not been characterized analytically, even in the simplest case. Here we present a theoretical analysis of this problem for one- and two-dimensional data for arbitrary numbers of units. For simplicity, we assume that the encoder and decoder are both linear, and that the goal is to minimize the mean squared error (MSE) of the reconstruction. In contrast to our previous report, which examined noisy overcomplete representations [4], the cost function does not contain a sparsity prior. This simplification makes the cost depend only on second-order statistics, rendering it analytically tractable while preserving the robustness to noise. 2 The model To define our model, we assume that the data is N-dimensional, has zero mean and covariance matrix $\Sigma_x$, and define two matrices $W \in \mathbb{R}^{M \times N}$ and $A \in \mathbb{R}^{N \times M}$. For each data point x, its representation r in the model is the linear transform of x through the matrix W, perturbed by additive noise (i.e., channel noise) $n \sim \mathcal{N}(0, \sigma_n^2 I_M)$:
$$r = Wx + n = u + n. \quad (1)$$
We refer to W as the encoding matrix and to its row vectors as encoding vectors. The reconstruction of a data point from its representation is simply the linear transform of the latter, using the matrix A:
$$\hat{x} = Ar = AWx + An. \quad (2)$$
We refer to A as the decoding matrix and to its column vectors as decoding vectors. The term $AWx$ in eq. 2 determines how the reconstruction depends on the data, while $An$ reflects the channel noise in the reconstruction. When there is no channel noise ($n = 0$), $AW = I$ is equivalent to perfect reconstruction. A graphical description of this system is shown in Fig. 1.
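As a concrete illustration, the encode–corrupt–decode pipeline of eqs. (1)–(2) can be simulated directly; the dimensions, matrices, and noise level below are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 2, 4                       # data and representation dimensions
W = rng.standard_normal((M, N))   # encoding matrix
A = rng.standard_normal((N, M))   # decoding matrix
sigma_n = 0.1                     # channel noise level

x = rng.standard_normal(N)                     # a data point
u = W @ x                                      # noiseless representation
r = u + sigma_n * rng.standard_normal(M)       # noisy representation (eq. 1)
x_hat = A @ r                                  # reconstruction (eq. 2)
err = x - x_hat                                # per-sample reconstruction error
```

Averaging `err @ err` over many samples would estimate the MSE that the paper minimizes analytically.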
Figure 1: Diagram of the model (data $x$ → encoder $W$ → noiseless representation $u$ → channel noise $n$ → noisy representation $r$ → decoder $A$ → reconstruction $\hat{x}$). The goal of the system is to form an accurate representation of the data that is robust to the presence of channel noise. We quantify the accuracy of the reconstruction by the mean squared error (MSE) over a set of data. The error of each sample is $\epsilon = x - \hat{x} = (I_N - AW)x - An$, and the MSE is expressed in matrix form as
$$E(A, W) = \mathrm{tr}\{(I_N - AW)\Sigma_x(I_N - AW)^T\} + \sigma_n^2\,\mathrm{tr}\{AA^T\}, \quad (3)$$
where we used $E = \langle \epsilon^T \epsilon \rangle = \mathrm{tr}(\langle \epsilon \epsilon^T \rangle)$. Note that, due to the MSE objective along with the zero-mean assumption, the optimal solution depends solely on the second-order statistics of the data and the noise. Since the SNR is limited in the neural representation [1, 2], we assume that each coding unit has a limited variance $\langle u_i^2 \rangle = \sigma_u^2$, so that the SNR is limited to the same constant value $\gamma^2 = \sigma_u^2/\sigma_n^2$. As the channel capacity is defined by $C = \frac{1}{2}\ln(\gamma^2 + 1)$, this is equivalent to limiting the capacity of each unit to the same level. We will call this constraint the channel capacity constraint. Our problem is now to minimize eq. 3 under the channel capacity constraint. To solve it, we include this constraint in the parametrization of W. Let $\Sigma_x = EDE^T$ be the eigenvalue decomposition of the data covariance matrix, and denote $S = D^{1/2} = \mathrm{diag}(\sqrt{\lambda_1}, \cdots, \sqrt{\lambda_N})$, where $\lambda_i \equiv D_{ii}$ are the eigenvalues of $\Sigma_x$. As we will see shortly, it is convenient to define $V \equiv WES/\sigma_u$; then the condition $\langle u_i^2 \rangle = \sigma_u^2$ implies that
$$VV^T = C_u = \langle uu^T \rangle / \sigma_u^2, \quad (4)$$
where $C_u$ is the correlation matrix of the representation u. The problem is now formulated as a constrained optimization: finding the parameters that satisfy eq. 4 and minimize E. 3 The optimal solutions and their characteristics In this section we analyze the optimal solutions in some simple cases, namely for 1-dimensional (1-D) and 2-dimensional (2-D) data. 3.1 1-D data In the 1-D case the MSE (eq.
3) is expressed as
$$E = \sigma_x^2 (1 - aw)^2 + \sigma_n^2 \|a\|_2^2, \quad (5)$$
where $\sigma_x^2 = \Sigma_x \in \mathbb{R}$, $a = A \in \mathbb{R}^{1 \times M}$ and $w = W \in \mathbb{R}^{M \times 1}$. By solving the necessary condition for the minimum, $\partial E/\partial a = 0$, with the channel capacity constraint (eq. 4), the entries of the optimal solutions are
$$w_i = \pm \frac{\sigma_u}{\sigma_x}, \qquad a_i = \frac{1}{w_i} \cdot \frac{\gamma^2}{M\gamma^2 + 1}, \quad (6)$$
and the smallest value of the MSE is
$$E = \frac{\sigma_x^2}{M\gamma^2 + 1}. \quad (7)$$
This minimum depends on the SNR ($\gamma^2$) and on the number of units ($M$), and it is monotonically decreasing with respect to both. Furthermore, we can compensate for a decrease in SNR by an increase in the number of units. Note that the $a_i$ are responsible for this adaptive behavior, as the $w_i$ do not vary with either $\gamma^2$ or $M$ in the 1-D case. The second term in eq. 5 drives the optimal $a$ toward having as small a norm as possible, while the first term prevents it from being arbitrarily small. The optimum is given by the best trade-off between them. 3.2 2-D data In the 2-D case, the channel capacity constraint (eq. 4) restricts V such that the row vectors of V lie on the unit circle. Therefore V can be parameterized as
$$V = \begin{pmatrix} \cos\theta_1 & \sin\theta_1 \\ \vdots & \vdots \\ \cos\theta_M & \sin\theta_M \end{pmatrix}, \quad (8)$$
where $\theta_i \in [0, 2\pi)$ is the angle between the $i$-th row of V and the principal eigenvector of the data $e_1$ ($E = [e_1, e_2]$, $\lambda_1 \ge \lambda_2 > 0$). The necessary condition for the minimum, $\partial E/\partial A = O$, implies
$$A = \sigma_u E S V^T (\sigma_u^2 V V^T + \sigma_n^2 I_M)^{-1}. \quad (9)$$
Using eqs. 8 and 9, the MSE can be expressed as
$$E = \frac{(\lambda_1 + \lambda_2)\left(\frac{M}{2}\gamma^2 + 1\right) - \frac{\gamma^2}{2}(\lambda_1 - \lambda_2)\,\mathrm{Re}(Z)}{\left(\frac{M}{2}\gamma^2 + 1\right)^2 - \frac{1}{4}\gamma^4 |Z|^2}, \quad (10)$$
where by definition
$$Z = \sum_{k=1}^M z_k = \sum_{k=1}^M [\cos(2\theta_k) + i \sin(2\theta_k)]. \quad (11)$$
Now the problem has been reduced to finding simply a complex number Z that minimizes E. Note that Z defines the $\theta_k$ in V, which in turn define W (by definition; see eq. 4) and A (eq. 9). In the following we analyze the problem in two complementary cases: when the data variance is isotropic (i.e., $\lambda_1 = \lambda_2$), and when it is anisotropic ($\lambda_1 > \lambda_2$). As we will see, the solutions are qualitatively different in these two cases.
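The 1-D solution can be checked numerically: plugging the optimal entries of eq. (6) into the objective of eq. (5) reproduces the minimum of eq. (7). The parameter values below are arbitrary illustrative choices.

```python
import numpy as np

sigma_x, gamma2, M = 1.5, 2.0, 3   # illustrative data std, SNR, #units
sigma_n = 1.0
sigma_u = np.sqrt(gamma2) * sigma_n

w = np.full(M, sigma_u / sigma_x)                # encoding entries (eq. 6)
a = (1.0 / w) * gamma2 / (M * gamma2 + 1)        # decoding entries (eq. 6)

def mse_1d(a, w):
    # eq. (5): squared bias plus noise amplification
    return sigma_x**2 * (1 - a @ w)**2 + sigma_n**2 * (a @ a)

E_opt = sigma_x**2 / (M * gamma2 + 1)            # predicted minimum (eq. 7)
assert np.isclose(mse_1d(a, w), E_opt)
```

Perturbing `a` or `w` slightly and re-evaluating `mse_1d` confirms that the stated solution is indeed a minimum.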
3.2.1 Isotropic case Isotropy of the data variance implies $\lambda_1 = \lambda_2 \equiv \sigma_x^2$ and (without loss of generality) $E = I$, which simplifies the MSE (eq. 10) to
$$E = \frac{2\sigma_x^2\left(1 + \frac{M}{2}\gamma^2\right)}{\left(\frac{M}{2}\gamma^2 + 1\right)^2 - \frac{1}{4}\gamma^4 |Z|^2}. \quad (12)$$
Therefore, E is minimized whenever $|Z|^2$ is minimized. If $M = 1$, $|Z|^2 = |z_1|^2$ is always 1 by definition (eq. 11), yielding the optimal solutions
$$W = \frac{\sigma_u}{\sigma_x} V, \qquad A = \frac{\sigma_x}{\sigma_u} \cdot \frac{\gamma^2}{\gamma^2 + 1} V^T, \quad (13)$$
where $V = V(\theta_1)$, $\forall \theta_1 \in [0, 2\pi)$. Eq. 13 means that the orientation of the encoding and decoding vectors is arbitrary, and that the length of those vectors is adjusted exactly as in the 1-D case (eq. 6 with $M = 1$; Fig. 2). The minimum MSE is given by
$$E = \frac{\sigma_x^2}{\gamma^2 + 1} + \sigma_x^2. \quad (14)$$
The first term is the same as in the 1-D case (eq. 7 with $M = 1$), corresponding to the error component along the axis that the encoding/decoding vectors represent, while the second term is the whole data variance along the axis orthogonal to the encoding/decoding vectors, along which no reconstruction is made. If $M \ge 2$, there exists a set of angles $\theta_k$ for which $|Z|^2$ is 0. This can be verified by representing Z in the complex plane (Z-diagram in Fig. 2) and observing that there is always a configuration of connected, unit-length bars that starts from, and ends up at, the origin, thus indicating that $Z = |Z|^2 = 0$. Accordingly, the optimal solution is
$$W = \frac{\sigma_u}{\sigma_x} V, \qquad A = \frac{\sigma_x}{\sigma_u} \cdot \frac{\gamma^2}{\frac{M}{2}\gamma^2 + 1} V^T, \quad (15)$$
where the optimal $V = V(\theta_1, \cdots, \theta_M)$ is given by such $\theta_1, \ldots, \theta_M$ for which $Z = 0$. Specifically, if $M = 2$, then $z_1$ and $z_2$ must be antiparallel but are not otherwise constrained, making the pair of decoding vectors (and that of encoding vectors) orthogonal, yet free to rotate. Note that both the encoding and the decoding vectors are parallel to the rows of V (eq. 15), and the angle of $z_k$ from the real axis is twice as large as that of $a_k$ (or $w_k$).
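The existence claim for $M \ge 2$ is easy to check numerically: evenly spaced angles make the unit-length bars close into a loop, so Z of eq. (11) vanishes and the minimum of eq. (12) is attained. The values below are illustrative.

```python
import numpy as np

# Evenly spaced angles over [0, pi) make Z = sum_k exp(2i*theta_k)
# cancel for any M >= 2 (the 2*theta_k are the M-th roots of unity).
M = 3
theta = np.pi * np.arange(M) / M
Z = np.sum(np.exp(2j * theta))
assert np.isclose(abs(Z), 0.0)
```

This is only one of the many optimal configurations; as the text notes, for $M = 2$ any pair of orthogonal vectors works, and for $M \ge 5$ the solutions need not be evenly spaced at all.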
Likewise, if $M = 3$, the decoding vectors should be evenly distributed yet still free to rotate; if $M = 4$, the four vectors should just be two pairs of orthogonal vectors (not necessarily evenly distributed); if $M \ge 5$, there is no obvious regularity. With $Z = 0$, the MSE is minimized as
$$E = \frac{2\sigma_x^2}{\frac{M}{2}\gamma^2 + 1}. \quad (16)$$
The minimum MSE (eq. 16) depends on the SNR ($\gamma^2$) and the overcompleteness ratio ($M/N$) exactly in the same manner as explained in the 1-D case (eq. 7), considering that in both cases the numerator is the data variance, $\mathrm{tr}(\Sigma_x)$. We present examples in Fig. 2: given $M = 2$, the reconstruction gets worse when lowering the SNR from 10 to 1; however, the reconstruction can be improved by increasing the number of units for a fixed SNR ($\gamma^2 = 1$). Just as in the 1-D case, the norm of the decoding vectors gets smaller when increasing M or decreasing $\gamma^2$, which is explicitly described by eq. 15. Figure 2: The optimal solutions for isotropic data, for $M = 1, \ldots, 5$ units and SNRs $\gamma^2 \in \{1, 10\}$. "Variance" shows the variance ellipses for the data (gray) and the reconstruction (magenta); for perfect reconstruction, the two ellipses should overlap. "Encoding" and "Decoding" show encoding vectors (red) and decoding vectors (blue), respectively. The gray vectors show the principal axes of the data, $e_1$ and $e_2$. "Z-Diagram" represents $Z = \sum_k z_k$ (eq. 11) in the complex plane, where each unit-length bar corresponds to a $z_k$, and the end point indicated by "×" represents the coordinates of Z. The set of green dots in a plot corresponds to optimal values of Z; when this set reduces to a single dot, the optimal Z is unique. In general there can be multiple configurations of bars for a single Z, implying multiple equivalent solutions of A and W for a given Z.
For $M = 2$ and $\gamma^2 = 10$, we drew with gray dotted bars an example of Z that is not optimal (corresponding encoding and decoding vectors not shown). 3.2.2 Anisotropic case In the anisotropic condition $\lambda_1 > \lambda_2$, the MSE (eq. 10) is minimized when $Z = \mathrm{Re}(Z) \ge 0$ for a fixed value of $|Z|^2$. Therefore, the problem is reduced to seeking a real value $Z = y \in [0, M]$ that minimizes
$$E = \frac{(\lambda_1 + \lambda_2)\left(\frac{M}{2}\gamma^2 + 1\right) - \frac{\gamma^2}{2}(\lambda_1 - \lambda_2)\, y}{\left(\frac{M}{2}\gamma^2 + 1\right)^2 - \frac{1}{4}\gamma^4 y^2}. \quad (17)$$
If $M = 1$, then $y = \cos 2\theta_1$ from eq. 11, and therefore E in eq. 17 is minimized iff $\theta_1 = 0$, yielding the optimal solutions
$$W = \frac{\sigma_u}{\sqrt{\lambda_1}} e_1^T, \qquad A = \frac{\sqrt{\lambda_1}}{\sigma_u} \cdot \frac{\gamma^2}{\gamma^2 + 1} e_1. \quad (18)$$
In contrast to the isotropic case with $M = 1$, the encoding and decoding vectors are specified along the principal axis ($e_1$), as illustrated in Fig. 3. The minimum MSE is
$$E = \frac{\lambda_1}{\gamma^2 + 1} + \lambda_2. \quad (19)$$
This has the same form as in the isotropic case (eq. 14), except that the first term is now related to the variance along the principal axis, $\lambda_1$, by which the encoding/decoding vectors can most effectively be utilized for representing the data, while the second term is the data variance along the minor axis, $\lambda_2$, by which the loss of reconstruction is mostly minimized. Note that this is a similar mechanism of dimensionality reduction as in PCA. If $M \ge 2$, we can derive the optimal y from the necessary condition for the minimum, $dE/dy = 0$, which yields
$$\left(\frac{\sqrt{\lambda_1} - \sqrt{\lambda_2}}{\sqrt{\lambda_1} + \sqrt{\lambda_2}}\left(M + \frac{2}{\gamma^2}\right) - y\right)\left(\frac{\sqrt{\lambda_1} + \sqrt{\lambda_2}}{\sqrt{\lambda_1} - \sqrt{\lambda_2}}\left(M + \frac{2}{\gamma^2}\right) - y\right) = 0. \quad (20)$$
Let $\gamma_c^2$ denote the SNR critical point, where
$$\gamma_c^2 = \left(\sqrt{\lambda_1/\lambda_2} - 1\right)/M. \quad (21)$$
If $\gamma^2 \ge \gamma_c^2$, then eq. 20 has a root within its domain $[0, M]$,
$$y = \frac{\sqrt{\lambda_1} - \sqrt{\lambda_2}}{\sqrt{\lambda_1} + \sqrt{\lambda_2}}\left(\frac{2}{\gamma^2} + M\right), \quad (22)$$
with $y = M$ if $\gamma^2 = \gamma_c^2$. Accordingly the optimal solutions are given by
$$W = V \begin{pmatrix} \sigma_u/\sqrt{\lambda_1} & 0 \\ 0 & \sigma_u/\sqrt{\lambda_2} \end{pmatrix} E^T, \qquad A = \frac{\sqrt{\lambda_1} + \sqrt{\lambda_2}}{2\sigma_u} \cdot \frac{\gamma^2}{\frac{M}{2}\gamma^2 + 1} E V^T, \quad (23)$$
where the optimal $V = V(\theta_1, \cdots, \theta_M)$ is given by the Z-diagram as illustrated in Fig. 3, which we will describe shortly. The minimum MSE is given by
$$E = \frac{1}{\frac{M}{2}\gamma^2 + 1} \cdot \frac{(\sqrt{\lambda_1} + \sqrt{\lambda_2})^2}{2}. \quad (24)$$
Note that eqs.
23–24 reduce to eqs. 15–16 if $\lambda_1 = \lambda_2$. If the SNR is smaller than $\gamma_c^2$, then $dE/dy = 0$ does not have a root within the domain. However, $dE/dy$ is always negative, and hence E decreases monotonically on $[0, M]$. The minimum is therefore obtained at $y = M$, yielding the optimal solutions
$$W = \frac{\sigma_u}{\sqrt{\lambda_1}} \mathbf{1}_M e_1^T, \qquad A = \frac{\sqrt{\lambda_1}}{\sigma_u} \cdot \frac{\gamma^2}{M\gamma^2 + 1} e_1 \mathbf{1}_M^T, \quad (25)$$
where $\mathbf{1}_M = (1, \cdots, 1)^T \in \mathbb{R}^M$, and the minimum is given by
$$E = \frac{\lambda_1}{M\gamma^2 + 1} + \lambda_2. \quad (26)$$
Note that E takes the same form as for $M = 1$ (eq. 19), except that we can now decrease the error by increasing the number of units. To summarize, if the representational resource is too limited, either by M or by $\gamma^2$, the best strategy is to represent only the principal axis. Now we describe the optimal solutions using the Z-diagram (Fig. 3). First, the optimal solutions differ depending on the SNR. If $\gamma^2 > \gamma_c^2$, the optimal Z is a certain point between 0 and M on the real axis. Specifically, for $M = 2$ the optimal configuration of the unit-length connected bars is unique (up to flipping about the real axis), meaning that the encoding/decoding vectors are symmetric about the principal axis; for $M \ge 3$, there are infinitely many configurations of the bars starting from the origin and ending at the optimal Z, and nothing can be added about their regularity. If $\gamma^2 \le \gamma_c^2$, the optimal Z is M, and the optimal configuration is obtained only when all the bars align on the real axis. In this case, the encoding/decoding vectors are all parallel to the principal axis ($e_1$), as described by eq. 25. Such a degenerate representation is unique to the anisotropic case and is determined by $\gamma_c^2$ (eq. 21). Figure 3: The optimal solutions for anisotropic data, for $M \in \{1, 2, 3, 8\}$ and $\gamma^2 \in \{1, 2, 10\}$; notations are as in Fig. 2. We set $\lambda_1 = 1.87$ and $\lambda_2 = 0.13$; $\gamma^2 > \gamma_c^2$ holds for all configurations with $M \ge 2$ except the one with $M = 2$ and $\gamma^2 = 1$. We can avoid the degeneration either by increasing the SNR (e.g., Fig.
3, $M = 2$ with different $\gamma^2$) or by increasing the number of units ($\gamma^2 = 1$ with different M). Also, the optimal solutions for the overcomplete representation are, in general, not obtained by simple replication (except in the degenerate case). For example, for $\gamma^2 = 1$ in Fig. 3, the optimal solution for $M = 8$ is not identical to the replication of the optimal solution for $M = 2$, and we can formally prove this using eq. 22. For $M = 1$ and for the degenerate case, where only one axis in the two-dimensional space is represented, the optimal strategy is to preserve information along the principal axis at the cost of losing all information along the minor axis. Such a biased representation is also found in the non-degenerate case. We can see in Fig. 3 that the data along the principal axis is more accurately reconstructed than that along the minor axis; if there were no bias, the ellipse for the reconstruction would be similar to that of the data. More precisely, we can prove that the error along $e_1$ is smaller than that along $e_2$, in the ratio $\sqrt{\lambda_2} : \sqrt{\lambda_1}$ (note the switch of the subscripts), which describes the representation bias toward the main axis. 4 Application to image coding In the case of high-dimensional data we can employ an algorithm similar to the one in [4] to numerically compute optimal solutions that minimize the MSE subject to the channel capacity constraint. Fig. 4 presents the performance of our model when applied to image coding in the presence of channel noise. The data were 8×8 pixel blocks taken from a large image, and for comparison we considered representations with $M = 64$ ("1×") and $M = 512$ ("8×") units, respectively. As for the channel capacity, each unit has 1.0 bit precision, as in the neural representation [1]. The robust coding model shows a dramatic reduction in the reconstruction error when compared to alternatives such as ICA and wavelet codes.
This underscores the importance of taking the channel capacity constraint into account for a better understanding of the neural representation. Figure 4: Reconstructions using one-bit channel capacity representations, with percentage reconstruction errors of 32.5% for ICA, 34.8% for wavelets, 3.8% for robust coding (1×), and 0.6% for robust coding (8×). To ensure that all models had the same precision of 1.0 bit per coefficient, we added Gaussian noise to the coefficients of the ICA and "Daubechies 9/7" wavelet codes as in the robust coding. The results are consistent across other images, block sizes, and wavelet filters. 5 Discussion In this study we measured the accuracy of the reconstruction by the MSE. An alternative measure could be, as in [5, 3], the mutual information $I(x, \hat{x})$ between the data and the reconstruction. However, we can prove that this measure does not yield optimal solutions for the robust coding problem. Assuming the data is Gaussian and the representation is complete, we can prove that the mutual information is upper-bounded,
$$I(x, \hat{x}) = \frac{1}{2} \ln \det(\gamma^2 V V^T + I_N) \le \frac{N}{2} \ln(\gamma^2 + 1), \quad (27)$$
with equality iff $VV^T = I$, i.e., when the representation u is whitened (see eq. 4). This result holds even for anisotropic data, which is different from the optimal MSE code, which can employ a correlated, or even degenerate, representation. As ICA is one form of whitening, the results in Fig. 4 demonstrate the suboptimality of whitening in the MSE sense. The optimal MSE code over noisy channels was examined previously in [6] for N-dimensional data. However, the capacity constraint was defined for a population, and only the case of undercomplete codes was examined. In the model studied here, motivated by the neural representation, the capacity constraint is imposed on individual units. Furthermore, the model allows for an arbitrary number of units, which provides a way to arbitrarily improve the robustness of the code using a population code.
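The bound in eq. (27) is easy to verify numerically for a complete code; $\gamma^2$, N, and the row-unit-norm matrices below are arbitrary illustrative choices, not the paper's settings.

```python
import numpy as np

N, gamma2 = 2, 4.0

def mutual_info(V):
    # Left-hand side of eq. (27) for Gaussian data and a complete code.
    return 0.5 * np.log(np.linalg.det(gamma2 * V @ V.T + np.eye(N)))

bound = (N / 2) * np.log(gamma2 + 1)     # right-hand side of eq. (27)

V_white = np.eye(N)                      # whitened: V V^T = I attains the bound
theta = 0.3
V_corr = np.array([[1.0, 0.0],
                   [np.cos(theta), np.sin(theta)]])  # unit rows, correlated

assert np.isclose(mutual_info(V_white), bound)
assert mutual_info(V_corr) < bound
```

Both matrices satisfy the per-unit capacity constraint (unit-norm rows of V), yet only the whitened one saturates the information bound, illustrating why maximizing $I(x, \hat{x})$ and minimizing the MSE can disagree.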
The theoretical analysis for one- and two-dimensional cases quantifies the amount of error reduction as a function of the SNR and the number of units, along with the data covariance matrix. Finally, our numerical results for higher-dimensional image data demonstrate a dramatic improvement in the robustness of the code over both conventional transforms such as wavelets and representations optimized for statistical efficiency such as ICA. References [1] A. Borst and F. E. Theunissen. Information theory and neural coding. Nature Neuroscience, 2:947–957, 1999. [2] N. K. Dhingra and R. G. Smith. Spike generator limits efficiency of information transfer in a retinal ganglion cell. Journal of Neuroscience, 24:2914–2922, 2004. [3] A. Hyvarinen, J. Karhunen, and E. Oja. Independent Component Analysis. Wiley, 2001. [4] E. Doi and M. S. Lewicki. Sparse coding of natural images using an overcomplete set of limited capacity units. In Advances in NIPS, volume 17, pages 377–384. MIT Press, 2005. [5] J. J. Atick and A. N. Redlich. What does the retina know about natural scenes? Neural Computation, 4:196–210, 1992. [6] K. I. Diamantaras, K. Hornik, and M. G. Strintzis. Optimal linear compression under unreliable representation and robust PCA neural models. IEEE Trans. Neur. Netw., 10(5):1186–1195, 1999.
A Probabilistic Approach for Optimizing Spectral Clustering Rong Jin∗, Chris Ding†, Feng Kang∗ ∗Michigan State University, East Lansing, MI 48824 †Lawrence Berkeley National Laboratory, Berkeley, CA 94720 Abstract Spectral clustering enjoys success in both data clustering and semi-supervised learning. However, most spectral clustering algorithms cannot handle multi-class clustering problems directly; additional strategies are needed to extend them to the multi-class case. Furthermore, most spectral clustering algorithms employ hard cluster membership, which is likely to be trapped in local optima. In this paper, we present a new spectral clustering algorithm, named "Soft Cut". It improves the normalized cut algorithm by introducing soft membership, and it can be efficiently computed using a bound optimization algorithm. Our experiments with a variety of datasets have shown the promising performance of the proposed clustering algorithm. 1 Introduction Data clustering has been an active research area with a long history. Well-known clustering methods include the K-means method (Hartigan & Wong, 1994), the Gaussian Mixture Model (Redner & Walker, 1984), Probabilistic Latent Semantic Indexing (PLSI) (Hofmann, 1999), and Latent Dirichlet Allocation (LDA) (Blei et al., 2003). Recently, spectral clustering methods (Shi & Malik, 2000; Ng et al., 2001; Zha et al., 2002; Ding et al., 2001; Bach & Jordan, 2004) have attracted more and more attention given their promising performance in data clustering and simplicity of implementation. They treat the data clustering problem as a graph partitioning problem. In its simplest form, a minimum cut algorithm is used to minimize the weights (or similarities) assigned to the removed edges. To avoid unbalanced clustering results, different objectives have been proposed, including the ratio cut (Hagen & Kahng, 1991), normalized cut (Shi & Malik, 2000) and min-max cut (Ding et al., 2001).
To reduce the computational complexity, most spectral clustering algorithms use a relaxation approach, which maps discrete cluster memberships into continuous real numbers. As a result, it is difficult to directly apply current spectral clustering algorithms to multi-class clustering problems. Various strategies (Shi & Malik, 2000; Ng et al., 2001; Yu & Shi, 2003) have been used to extend spectral clustering algorithms to multi-class clustering problems. One common approach is to first construct a low-dimensional space for data representation using the smallest eigenvectors of a graph Laplacian built from the pairwise similarity of the data. Then a standard clustering algorithm, such as the K-means method, is applied to cluster the data points in the low-dimensional space. One problem with this approach is how to determine the appropriate number of eigenvectors: too few eigenvectors lead to an insufficient representation of the data, while too many bring a significant amount of noise into the representation. Both cases degrade the quality of clustering. Although it has been shown in (Ng et al., 2001) that the number of required eigenvectors is generally equal to the number of clusters, that analysis is valid only when the data points of different clusters are well separated. As will be shown later, when the data points are not well separated, the optimal number of eigenvectors can differ from the number of clusters. Another problem with the existing spectral clustering algorithms is that they are based on binary cluster membership and are therefore unable to express the uncertainty in data clustering. Compared to hard cluster membership, probabilistic membership is advantageous in that it is less likely to be trapped in local minima.
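The common embedding strategy just described (smallest Laplacian eigenvectors, then clustering in the low-dimensional space) can be sketched as follows; the similarity matrix is a toy example, and the thresholding step is an illustrative stand-in for a real K-means run:

```python
import numpy as np

# Toy similarity matrix: two tight pairs {0,1} and {2,3} linked weakly.
W = np.array([[0.0, 1.0, 0.1, 0.1],
              [1.0, 0.0, 0.1, 0.1],
              [0.1, 0.1, 0.0, 1.0],
              [0.1, 0.1, 1.0, 0.0]])
L = np.diag(W.sum(axis=1)) - W          # graph Laplacian

vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
embedding = vecs[:, 1:3]                # skip the constant zero-eigenvalue vector

# The second-smallest (Fiedler) eigenvector already separates the pairs.
fiedler = vecs[:, 1]
labels = (fiedler > fiedler.mean()).astype(int)
```

On this graph the Fiedler eigenvector is constant within each pair and of opposite sign across pairs, so `labels` recovers the two clusters up to a sign flip.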
One example is the Bayesian clustering method (Redner & Walker, 1984), which is usually more robust than the K-means method because of its soft cluster memberships. Probabilistic memberships are also advantageous when the cluster memberships are intermediate results to be used by other processes, for example selective sampling in active learning (Jin & Si, 2004). In this paper, we present a new spectral clustering algorithm, named "Soft Cut", that explicitly addresses the above two problems. It extends the normalized cut algorithm by introducing probabilistic membership of data points. By encoding membership of multiple clusters into a set of probabilities, the proposed clustering algorithm can be applied directly to multi-class clustering problems. Our empirical studies with a variety of datasets have shown that the soft cut algorithm can substantially outperform the normalized cut algorithm for multi-class clustering. The rest of the paper is organized as follows. Section 2 presents related work. Section 3 describes the soft cut algorithm. Section 4 discusses the experimental results. Section 5 concludes this study and outlines future work. 2 Related Work The key idea of spectral clustering is to convert a clustering problem into a graph partitioning problem. Let n be the number of data points to be clustered, and let $W = [w_{i,j}]_{n \times n}$ be the weight matrix, where each $w_{i,j}$ is the similarity between two data points. For the convenience of discussion, $w_{i,i} = 0$ for all data points. A clustering problem can then be formulated as the minimum cut problem, i.e.,
$$q^* = \arg\min_{q \in \{-1,1\}^n} \sum_{i,j=1}^n w_{i,j}(q_i - q_j)^2 = \arg\min_{q \in \{-1,1\}^n} q^T L q, \quad (1)$$
where $q = (q_1, q_2, \ldots, q_n)$ is a vector of binary memberships, each $q_i$ being either $-1$ or $1$ (the summed objective equals $2\,q^T L q$, so the two minimizers coincide). L is the Laplacian matrix, defined as $L = D - W$, where $D = [d_{i,i}]_{n \times n}$ is a diagonal matrix with elements $d_{i,i} = \sum_{j=1}^n w_{i,j}$. Directly solving the problem in (1) requires combinatorial optimization, which is computationally expensive.
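As a toy illustration of eq. (1) (not from the paper's experiments), one can build $L = D - W$ for a small similarity matrix and check that $q^T L q$ is small for a cut that removes only light edges:

```python
import numpy as np

# Three points: 0 and 1 are very similar, 2 is weakly linked to both.
W = np.array([[0.0, 1.0, 0.1],
              [1.0, 0.0, 0.1],
              [0.1, 0.1, 0.0]])   # symmetric, zero diagonal
D = np.diag(W.sum(axis=1))
L = D - W

q_good = np.array([1.0, 1.0, -1.0])   # cut isolating point 2 (weak edges)
q_bad = np.array([1.0, -1.0, 1.0])    # cut severing the strong edge

# q^T L q penalizes the weight of removed edges, so the good cut scores lower.
print(q_good @ L @ q_good, q_bad @ L @ q_bad)
```

Here `q_good @ L @ q_good` evaluates to 0.8 versus 4.4 for the bad cut, matching the intuition that the minimum cut prefers removing low-similarity edges.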
Usually, a relaxation approach (Chung, 1997) is used to replace the vector $q \in \{-1, 1\}^n$ with a vector $\hat{q} \in \mathbb{R}^n$ under the constraint $\sum_{i=1}^n \hat{q}_i^2 = n$. As a result of the relaxation, the approximate solution to (1) is the second-smallest eigenvector of the Laplacian L. One problem with the minimum cut approach is that it does not take the size of the clusters into account, which can lead to clusters of unbalanced sizes. To resolve this problem, several different criteria have been proposed, including the ratio cut (Hagen & Kahng, 1991), normalized cut (Shi & Malik, 2000) and min-max cut (Ding et al., 2001). For example, the normalized cut algorithm uses the following objective:
$$J_n(q) = \frac{C_{+,-}(q)}{D_+(q)} + \frac{C_{+,-}(q)}{D_-(q)}, \quad (2)$$
where $C_{+,-}(q) = \sum_{i,j=1}^n w_{i,j}\,\delta(q_i, +)\,\delta(q_j, -)$ and $D_\pm = \sum_{i=1}^n \delta(q_i, \pm) \sum_{j=1}^n w_{i,j}$. In the above objective, the cluster sizes, i.e., $D_\pm$, are used as denominators to avoid clusters of too small a size. Similar to the minimum cut approach, a relaxation approach is used to convert the problem in (2) into an eigenvector problem. For multi-class clustering, we can extend the objective in (2) into the following form:
$$J^{norm}_{mc}(q) = \sum_{z=1}^K \sum_{z' \ne z} \frac{C_{z,z'}(q)}{D_z(q)}, \quad (3)$$
where K is the number of clusters, $q \in \{1, 2, \ldots, K\}^n$, $C_{z,z'} = \sum_{i,j=1}^n \delta(q_i, z)\,\delta(q_j, z')\,w_{i,j}$, and $D_z = \sum_{i=1}^n \sum_{j=1}^n \delta(q_i, z)\,w_{i,j}$. However, efficiently finding the solution that minimizes (3) is rather difficult; in particular, a simple relaxation method cannot be applied directly here. In the past, several heuristic approaches (Shi & Malik, 2000; Ng et al., 2001; Yu & Shi, 2003) have been proposed for finding approximate solutions to (3). One common strategy is to first obtain the K smallest eigenvectors of the Laplacian L (excluding the one with zero eigenvalue) and project the data points onto the low-dimensional space spanned by these K eigenvectors. Then a standard clustering algorithm, such as the K-means method, is applied to cluster the data points in this low-dimensional space.
In contrast to these approaches, the proposed spectral clustering algorithm deals with the multi-class clustering problem directly. It estimates the probabilities for each data point to be in the different clusters simultaneously. Through the probabilistic cluster memberships, the proposed algorithm is less likely to be trapped in local minima, and is therefore more robust than the existing spectral clustering algorithms. 3 Spectral Clustering with Soft Membership In this section, we describe a new spectral clustering algorithm, named "Soft Cut", which extends the normalized cut algorithm by introducing probabilistic cluster membership. In the following, we present a formal description of the soft cut algorithm, followed by a procedure that efficiently solves the associated optimization problem. 3.1 Algorithm Description First, notice that $D_z$ in (3) can be expanded as $D_z = \sum_{z'=1}^K C_{z,z'}$. Thus, the objective function for multi-class clustering in (3) can be rewritten as
$$J^n_{mc}(q) = \sum_{z=1}^K \sum_{z' \ne z} \frac{C_{z,z'}(q)}{D_z(q)} = K - \sum_{z=1}^K \frac{C_{z,z}(q)}{D_z(q)}. \quad (4)$$
Let $J'^n_{mc} = \sum_{z=1}^K \frac{C_{z,z}(q)}{D_z(q)}$. Thus, instead of minimizing $J^n_{mc}$, we can maximize $J'^n_{mc}$. To extend the above objective function to a probabilistic framework, we introduce probabilistic cluster memberships. Let $q_{z,i}$ denote the probability for the i-th data point to be in the z-th cluster, and let the matrix $Q = [q_{z,i}]_{K \times n}$ collect all the probabilities $q_{z,i}$.
Using this probabilistic notation, we can rewrite $C_{z,z'}$ and $D_z$ as follows:

$$C_{z,z'}(Q) = \sum_{i,j=1}^{n} q_{z,i}\, q_{z',j}\, w_{i,j}, \qquad D_z(Q) = \sum_{i,j=1}^{n} q_{z,i}\, w_{i,j} \qquad (5)$$

Substituting the probabilistic expressions for $C_{z,z'}$ and $D_z$ into $J'_{\mathrm{mc}}$, we obtain the following optimization problem for probabilistic spectral clustering:

$$Q^* = \arg\max_{Q \in \mathbb{R}^{K \times n}} J_{\mathrm{prob}}(Q) = \arg\max_{Q \in \mathbb{R}^{K \times n}} \sum_{z=1}^{K} \frac{\sum_{i,j=1}^{n} q_{z,i}\, q_{z,j}\, w_{i,j}}{\sum_{i,j=1}^{n} q_{z,i}\, w_{i,j}}$$
$$\text{s.t.} \quad \forall i \in [1..n],\ z \in [1..K]: \quad q_{z,i} \geq 0, \quad \sum_{z=1}^{K} q_{z,i} = 1 \qquad (6)$$

3.2 Optimization Procedure

In this subsection, we present a bound optimization algorithm (Salakhutdinov & Roweis, 2003) for efficiently finding a solution to (6). It maximizes the objective function in (6) iteratively. In each iteration, a concave lower bound is first constructed for the objective function based on the solution obtained in the previous iteration. Then, a new solution for the current iteration is obtained by maximizing the lower bound. The same procedure is repeated until the solution converges to a local maximum. Let $Q' = [q'_{z,i}]_{K \times n}$ be the probabilities obtained in the previous iteration, and $Q = [q_{z,i}]_{K \times n}$ be the probabilities for the current iteration. Define

$$\Delta(Q, Q') = \log \frac{J_{\mathrm{prob}}(Q)}{J_{\mathrm{prob}}(Q')}$$

which is the logarithm of the ratio of the objective functions between two consecutive iterations. Using the concavity of the logarithm, i.e., $\log(\sum_i p_i q_i) \geq \sum_i p_i \log(q_i)$ for a probability distribution $\{p_i\}$, we have $\Delta(Q, Q')$ lower bounded as follows:

$$\Delta(Q, Q') = \log\left( \sum_{z=1}^{K} \frac{C_{z,z}(Q)}{D_z(Q)} \right) - \log\left( \sum_{z=1}^{K} \frac{C_{z,z}(Q')}{D_z(Q')} \right)$$
$$\geq \sum_{z=1}^{K} t_z \left[ \log\frac{C_{z,z}(Q)}{C_{z,z}(Q')} - \log\frac{D_z(Q)}{D_z(Q')} \right] \qquad (7)$$

where $t_z$ is defined as:

$$t_z = \frac{C_{z,z}(Q')/D_z(Q')}{\sum_{z'=1}^{K} C_{z',z'}(Q')/D_{z'}(Q')} \qquad (8)$$

Now, the first term inside the bracket in (7), i.e., $\log\frac{C_{z,z}(Q)}{C_{z,z}(Q')}$, can be further relaxed as:

$$\log\frac{C_{z,z}(Q)}{C_{z,z}(Q')} = \log \sum_{i,j=1}^{n} \frac{q'_{z,i}\, q'_{z,j}\, w_{i,j}}{C_{z,z}(Q')} \cdot \frac{q_{z,i}\, q_{z,j}}{q'_{z,i}\, q'_{z,j}} \geq 2 \sum_{i=1}^{n}\sum_{j=1}^{n} s^{i,j}_z \log(q_{z,i}) - \sum_{i,j=1}^{n} s^{i,j}_z \log(q'_{z,i}\, q'_{z,j}) \qquad (9)$$

where $s^{i,j}_z$ is defined as:

$$s^{i,j}_z = \frac{q'_{z,i}\, q'_{z,j}\, w_{i,j}}{C_{z,z}(Q')} \qquad (10)$$

Meanwhile, using the inequality $\log x \leq x - 1$, we have $\log\frac{D_z(Q)}{D_z(Q')}$ upper bounded by the following expression:

$$\log\frac{D_z(Q)}{D_z(Q')} \leq \frac{D_z(Q)}{D_z(Q')} - 1 = \sum_{i=1}^{n} q_{z,i}\, \frac{\sum_{j=1}^{n} w_{i,j}}{D_z(Q')} - 1 \qquad (11)$$

Putting together (7), (9), and (11), we have a concave lower bound for the objective function in (6), i.e.,

$$\log J_{\mathrm{prob}}(Q) \geq \log J_{\mathrm{prob}}(Q') + \Delta_0(Q') + 2\sum_{z=1}^{K}\sum_{i,j=1}^{n} t_z\, s^{i,j}_z \log q_{z,i} - \sum_{z=1}^{K}\sum_{i,j=1}^{n} \frac{t_z\, q_{z,i}\, w_{i,j}}{D_z(Q')} \qquad (12)$$

where $\Delta_0(Q')$ is defined as:

$$\Delta_0(Q') = -\sum_{z=1}^{K} t_z \sum_{i,j=1}^{n} s^{i,j}_z \log(q'_{z,i}\, q'_{z,j}) + 1$$

The optimal solution that maximizes the lower bound in (12) can be computed by setting its derivative to zero, which leads to the following solution:

$$q_{z,i} = \frac{2\, t_z \sum_{j=1}^{n} s^{i,j}_z}{t_z \sum_{j=1}^{n} w_{i,j} / D_z(Q') + \lambda_i} \qquad (13)$$

where $\lambda_i$ is a Lagrange multiplier that ensures $\sum_{z=1}^{K} q_{z,i} = 1$. It can be obtained by maximizing the following objective function:

$$l(\lambda_i) = -\lambda_i + 2\sum_{z=1}^{K} t_z \left( \sum_{j=1}^{n} s^{i,j}_z \right) \log\left( t_z \sum_{j=1}^{n} \frac{w_{i,j}}{D_z(Q')} + \lambda_i \right) \qquad (14)$$

Since the above objective function is concave in $\lambda_i$, we can apply a standard numerical procedure, such as Newton's method, to efficiently find its maximizer.

4 Experiment

In this section, we focus on examining the effectiveness of the proposed soft cut algorithm for multi-class clustering. In particular, we address the following two research questions:

1. How effective is the proposed algorithm for data clustering? We compare the proposed soft cut algorithm to the normalized cut algorithm with various numbers of eigenvectors.

2. How robust is the proposed algorithm for data clustering?
We evaluate the robustness of the clustering algorithms by examining their variance across multiple trials.

4.1 Experiment Design

Datasets. In order to extensively examine the effectiveness of the proposed soft cut algorithm, a variety of datasets are used in this experiment. They are:

• Text documents extracted from the 20 newsgroups collection to form two five-class datasets, named "M5" and "L5". Each class contains 100 documents, for a total of 500 documents.

Table 1: Datasets Description

Dataset    Description                #Class  #Instance  #Features
M5         Text documents             5       500        1000
L5         Text documents             5       500        1000
Pendigit   Pen-based handwriting      10      2000       16
Ribosome   Ribosome rDNA sequences    8       1907       27617

• Pendigit, which comes from the UCI data repository. It contains 2000 examples that belong to 10 different classes.

• Ribosomal sequences from the RDP project (http://rdp.cme.msu.edu/index.jsp). It contains annotated rRNA ribosome sequences for 2000 different bacteria that belong to 10 different phyla (i.e., classes).

Table 1 provides detailed information about each dataset.

Evaluation metrics. To evaluate the performance of the different clustering algorithms, two different metrics are used:

• Clustering accuracy. For the datasets that have no more than five classes, clustering accuracy is used as the evaluation metric. To compute clustering accuracy, each automatically generated cluster is first aligned with a true class. The classification accuracy based on the alignment is then computed, and the clustering accuracy is defined as the maximum classification accuracy over all possible alignments.

• Normalized mutual information. For the datasets that have more than five classes, due to the expensive computation involved in finding the optimal alignment, we use the normalized mutual information (Banerjee et al., 2003) as the alternative evaluation metric.
If $T_u$ and $T_l$ denote the cluster labels and the true class labels assigned to the data points, the normalized mutual information "nmi" is defined as

$$\mathrm{nmi} = \frac{2\, I(T_u, T_l)}{H(T_u) + H(T_l)}$$

where $I(T_u, T_l)$ stands for the mutual information between the clustering labels $T_u$ and the true class labels $T_l$, and $H(T_u)$ and $H(T_l)$ are the entropies of $T_u$ and $T_l$, respectively. Each experiment was run 10 times with different initializations of the parameters. The averaged results together with their variance are used as the final evaluation metric.

Implementation. We follow Ng et al. (2001) in implementing the normalized cut algorithm. Cosine similarity is used to measure the affinity between any two data points. Both the EM algorithm and the K-means method are used to cluster the data points after they are projected into the low-dimensional space spanned by the smallest eigenvectors of the graph Laplacian.

4.2 Experiment (I): Effectiveness of the Soft Cut Algorithm

The clustering results of both the soft cut algorithm and the normalized cut algorithm are summarized in Table 2. In addition to the K-means algorithm, we also apply the EM clustering algorithm to the normalized cut algorithm. In this experiment, the number of eigenvectors used for the normalized cut algorithms is equal to the number of clusters. First, comparing to both normalized cut variants, we see that the proposed clustering algorithm substantially outperforms the normalized cut algorithm on all datasets. Second,

Table 2: Clustering results for different clustering methods. Clustering accuracy is the evaluation metric for the datasets "L5" and "M5"; normalized mutual information is used for "Pendigit" and "Ribosome".

            Soft Cut     Normalized Cut (K-means)  Normalized Cut (EM)
M5          89.2 ± 1.3   83.2 ± 8.8                62.4 ± 5.6
L5          69.2 ± 2.7   64.2 ± 4.9                45.1 ± 4.8
Pendigit    56.3 ± 3.8   46.0 ± 6.4                52.8 ± 2.0
Ribosome    69.7 ± 2.9   62.2 ± 9.1                63.2 ± 3.8

Table 3: Clustering accuracy for normalized cut with embedding in eigenspace with K eigenvectors.
K-means is used.

#Eigenvectors  M5           L5           Pendigit     Ribosome
K              83.2 ± 8.8   64.1 ± 4.9   46.0 ± 6.4   62.2 ± 9.1
K + 1          77.6 ± 8.6   69.6 ± 6.7   43.3 ± 9.1   65.9 ± 5.8
K + 2          79.7 ± 8.5   64.1 ± 5.7   41.6 ± 9.3   63.4 ± 4.8
K + 3          80.2 ± 6.6   61.4 ± 5.8   42.9 ± 9.6   67.2 ± 7.6
K + 4          74.9 ± 9.2   59.1 ± 4.7   47.5 ± 3.7   60.7 ± 8.4
K + 5          70.5 ± 5.7   66.1 ± 4.7   39.2 ± 9.3   63.9 ± 8.2
K + 6          75.5 ± 8.6   61.9 ± 4.7   43.4 ± 8.3   63.5 ± 10.4
K + 7          75.8 ± 7.5   59.7 ± 5.6   46.8 ± 7.3   56.6 ± 10.7
K + 8          73.5 ± 6.6   61.2 ± 4.7   49.8 ± 8.9   54.3 ± 7.2

comparing to the normalized cut algorithm using the K-means method, we see that the soft cut algorithm has smaller variance in its clustering results. This can be explained by the fact that the K-means algorithm uses binary cluster memberships and is therefore likely to be trapped in local optima. As indicated in Table 2, if we replace the K-means algorithm with the EM algorithm in the normalized cut framework, the variance of the clustering results is generally reduced, but at the price of a degradation in clustering performance. Based on these observations, we conclude that the soft cut algorithm is both effective and robust for multi-class clustering.

4.3 Experiment (II): Normalized Cut using Different Numbers of Eigenvectors

One potential reason why the normalized cut algorithm performs worse than the proposed algorithm is that the number of clusters may not be the optimal number of eigenvectors. To examine this issue, we test the normalized cut algorithm with different numbers of eigenvectors. The K-means method is used for clustering the eigenvectors. The results of the normalized cut algorithm using different numbers of eigenvectors are summarized in Table 3. The best performance is highlighted in bold font. First, we clearly see that the best clustering results do not necessarily occur when the number of eigenvectors is exactly equal to the number of clusters.
In fact, in three out of four cases the best performance is achieved when the number of eigenvectors is larger than the number of clusters. This result indicates that the choice of the number of eigenvectors can have a significant impact on clustering performance. Second, comparing the results in Table 3 to those in Table 2, we see that the soft cut algorithm still outperforms the normalized cut algorithm even when the latter uses the optimal number of eigenvectors. In general, since spectral clustering was originally designed for two-class problems, it requires an extra step when extended to multi-class clustering, and the resulting solutions are therefore usually suboptimal. In contrast, the soft cut algorithm directly targets multi-class clustering problems, and is thus able to achieve better performance than the normalized cut algorithm.

5 Conclusion

In this paper, we proposed a novel probabilistic algorithm for spectral clustering, called the "soft cut" algorithm. It introduces probabilistic memberships into the normalized cut algorithm and directly targets multi-class clustering problems. Our empirical studies on a number of datasets have shown that the proposed algorithm outperforms the normalized cut algorithm considerably. In the future, we plan to extend this work to other applications such as image segmentation.

References

Bach, F. R., & Jordan, M. I. (2004). Learning spectral clustering. Advances in Neural Information Processing Systems 16.

Banerjee, A., Dhillon, I., Ghosh, J., & Sra, S. (2003). Generative model-based clustering of directional data. Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2003).

Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. J. Mach. Learn. Res., 3, 993–1022.

Chung, F. (1997). Spectral graph theory. Amer. Math. Society.

Ding, C., He, X., Zha, H., Gu, M., & Simon, H. (2001).
A min-max cut algorithm for graph partitioning and data clustering. Proc. IEEE Int'l Conf. Data Mining.

Hagen, L., & Kahng, A. (1991). Fast spectral methods for ratio cut partitioning and clustering. Proceedings of IEEE International Conference on Computer-Aided Design (pp. 10–13).

Hartigan, J., & Wong, M. (1994). A k-means clustering algorithm. Appl. Statist., 28, 100–108.

Hofmann, T. (1999). Probabilistic latent semantic indexing. Proceedings of the 22nd Annual ACM Conference on Research and Development in Information Retrieval (pp. 50–57). Berkeley, California.

Jin, R., & Si, L. (2004). A Bayesian approach toward active learning for collaborative filtering. Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence (pp. 278–285). Banff, Canada: AUAI Press.

Ng, A., Jordan, M., & Weiss, Y. (2001). On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems 14.

Redner, R. A., & Walker, H. F. (1984). Mixture densities, maximum likelihood and the EM algorithm. SIAM Review, 26, 195–239.

Salakhutdinov, R., & Roweis, S. T. (2003). Adaptive overrelaxed bound optimization methods. Proceedings of the Twentieth International Conference on Machine Learning (ICML 2003) (pp. 664–671).

Shi, J., & Malik, J. (2000). Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22, 888–905.

Yu, S. X., & Shi, J. (2003). Multiclass spectral clustering. Proceedings of the Ninth IEEE International Conference on Computer Vision. Nice, France.

Zha, H., He, X., Ding, C., Gu, M., & Simon, H. (2002). Spectral relaxation for k-means clustering. Advances in Neural Information Processing Systems 14.
Learning Multiple Related Tasks using Latent Independent Component Analysis

Jian Zhang†, Zoubin Ghahramani†‡, Yiming Yang†
† School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
‡ Gatsby Computational Neuroscience Unit, University College London, London WC1N 3AR, UK
{jian.zhang, zoubin, yiming}@cs.cmu.edu

Abstract

We propose a probabilistic model based on Independent Component Analysis for learning multiple related tasks. In our model the task parameters are assumed to be generated from independent sources which account for the relatedness of the tasks. We use Laplace distributions to model the hidden sources, which makes it possible to identify the hidden, independent components instead of merely modeling correlations. Furthermore, our model enjoys a sparsity property which makes it both parsimonious and robust. We also propose efficient algorithms for both an empirical Bayes method and point estimation. Our experimental results on two multi-label text classification data sets show that the proposed approach is promising.

1 Introduction

An important problem in machine learning is how to generalize between multiple related tasks. This problem has been called "multi-task learning", "learning to learn", or in some cases "predicting multivariate responses". Multi-task learning has many potential practical applications. For example, given a newswire story, predicting its subject categories as well as the regional categories of the reported events based on the same input text is such a problem. Given the mass tandem spectra of a sample protein mixture, identifying the individual proteins as well as the contained peptides is another example. Much attention in machine learning research has been devoted to effectively learning multiple tasks, and many approaches have been proposed [1][2][3][4][5][6][10][11]. Existing approaches share the basic assumption that tasks are related to each other.
Under this general assumption, it would be beneficial to learn all tasks jointly, borrowing information across tasks, rather than to learn each task independently. Previous approaches can be roughly categorized by how the "relatedness" among tasks is modeled, e.g., IID tasks [2], a Bayesian prior over tasks [2][6][11], linear mixing factors [5][10], rotation plus shrinkage [3] and structured regularization in kernel methods [4]. Like previous approaches, the basic assumption in this paper is that the multiple tasks are related to each other. Consider the case where there are K tasks and each task is a binary classification problem over the same input space (e.g., multiple simultaneous classifications of text documents). If we were to separately learn a classifier with parameters θk for each task k, we would be ignoring relevant information from the other classifiers. The assumption that the tasks are related suggests that the θk for different tasks should be related to each other. It is therefore natural to consider different statistical models for how the θk's might be related. We propose a model for multi-task learning based on Independent Component Analysis (ICA) [9]. In this model, the parameters θk of the different classifiers are assumed to have been generated from a sparse linear combination of a small set of basic classifiers. Both the coefficients of the sparse combination (the factors or sources) and the basic classifiers are learned from the data. In the multi-task learning context, the relatedness of multiple tasks can be explained by the fact that they share a certain number of hidden, independent components. By controlling the model complexity in terms of those independent components we are able to achieve better generalization. Furthermore, by using distributions such as the Laplace we enjoy a sparsity property, which makes the model both parsimonious and robust in identifying the connections with the independent sources.
Our model can be combined with many popular classifiers, and as an indispensable part we present scalable algorithms for both an empirical Bayes method and point estimation, with the latter able to handle high-dimensional tasks. Finally, being a probabilistic model, it conveniently yields probabilistic scores and confidence values, which are very helpful in making statistical decisions. Further discussion of related work is given in Section 5.

2 Latent Independent Component Analysis

The model we propose for solving multiple related tasks, namely the Latent Independent Component Analysis (LICA) model, is a hierarchical Bayesian model based on traditional Independent Component Analysis. ICA [9] is a promising technique from signal processing designed to solve the blind source separation problem, whose goal is to extract independent sources given only observed data that are linear combinations of the unknown sources. ICA has been successfully applied to blind source separation and shows great potential in that area. With the help of non-Gaussianity and higher-order statistics it can correctly identify the independent sources, as opposed to techniques like Factor Analysis, which is only able to remove correlations in the data because of the intrinsic Gaussian assumption in the corresponding model. In order to learn multiple related tasks more effectively, we transform the joint learning problem into learning a generative probabilistic model for our tasks (or, more precisely, the task parameters), which explains the relatedness of multiple tasks through latent, independent components. Unlike standard Independent Component Analysis, where observed data are used to estimate the hidden sources, in LICA the "observed data" of the ICA are actually task parameters. Consequently, they are latent and must themselves be learned from the training data of each individual task. Below we give the precise definition of the probabilistic model for LICA.
Suppose we use $\theta_1, \theta_2, \ldots, \theta_K$ to represent the model parameters of the $K$ tasks, where $\theta_k \in \mathbb{R}^{F \times 1}$ can be thought of as the parameter vector of the $k$-th task. Consider the following generative model for the $K$ tasks:

$$\theta_k = \Lambda s_k + e_k, \qquad s_k \sim p(s_k \mid \Phi), \qquad e_k \sim \mathcal{N}(0, \Psi) \qquad (1)$$

where $s_k \in \mathbb{R}^{H \times 1}$ are the hidden sources, with $\Phi$ denoting their distribution parameters; $\Lambda \in \mathbb{R}^{F \times H}$ is a linear transformation matrix; and the noise vector $e_k \in \mathbb{R}^{F \times 1}$ is usually assumed to be multivariate Gaussian with diagonal covariance matrix $\Psi = \mathrm{diag}(\psi_{11}, \ldots, \psi_{FF})$ or even $\Psi = \sigma^2 I$.

[Figure 1: Graphical model for Latent Independent Component Analysis.]

This is essentially assuming that the hidden sources $s$ are responsible for all the dependencies among the $\theta_k$'s, and that conditioned on them all $\theta_k$'s are independent. Generally speaking we can use any member of the exponential family as $p(e_k)$, but in most situations the noise is taken to be multivariate Gaussian, which is convenient. The graphical model for equation (1) is shown as the upper level of Figure 1, whose lower part is described in the following.

2.1 Probabilistic Discriminative Classifiers

One building block of LICA is the probabilistic model for learning each individual task, and in this paper we focus on classification tasks. We use the following notation to describe a probabilistic discriminative classifier for task $k$; for notational simplicity we omit the task index $k$ below. Suppose we have training data $D = \{(x_1, y_1), \ldots, (x_N, y_N)\}$, where $x_i \in \mathbb{R}^{F \times 1}$ is the input vector and $y_i \in \{0, 1\}$ is the binary class label. Our goal is to seek a probabilistic classifier whose prediction is based on the conditional probability $p(y = 1 \mid x) \triangleq f(x) \in [0, 1]$. We further assume the discriminative function to have the linear form $f(x) = \mu(\theta^T x)$, which can be easily generalized to non-linear functions by some feature mapping.
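To make the generative story concrete, here is a small simulation of (1) together with the Bernoulli label model of Section 2.1, using a logistic link; all dimensions and the seed are our own illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
K, H, F, N = 4, 3, 10, 50          # tasks, sources, features, examples per task

Lam = rng.normal(size=(F, H))      # mixing matrix Lambda
Psi = 0.1 * np.eye(F)              # diagonal noise covariance

# task parameters: theta_k = Lambda s_k + e_k, s_k ~ Laplace, e_k ~ N(0, Psi)
S = rng.laplace(loc=0.0, scale=1.0, size=(H, K))
Theta = Lam @ S + rng.multivariate_normal(np.zeros(F), Psi, size=K).T

# labels for task k: y_i ~ Bernoulli(mu(theta_k^T x_i)), logistic link mu
X = rng.normal(size=(N, F))
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
Y = (rng.random((N, K)) < sigmoid(X @ Theta)).astype(int)
```

Because the $K$ parameter vectors are built from only $H$ shared Laplace sources, the columns of `Theta` are related in exactly the sense the model assumes.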
The output class label $y$ can be thought of as randomly generated from a Bernoulli distribution with parameter $\mu(\theta^T x)$, and the overall model can be summarized as follows:

$$y_i \sim \mathcal{B}\!\left(\mu(\theta^T x_i)\right), \qquad \mu(t) = \int_{-\infty}^{t} p(z)\, dz \qquad (2)$$

where $\mathcal{B}(\cdot)$ denotes the Bernoulli distribution and $p(z)$ is the probability density function of some random variable $Z$. By changing the definition of the random variable $Z$ we can specialize the above model into a variety of popular learning methods. For example, when $p(z)$ is the standard logistic distribution we obtain the logistic regression classifier; when $p(z)$ is standard Gaussian we obtain probit regression. In principle any member of the above class of classifiers can be plugged into LICA, or even generative classifiers like naive Bayes. We take logistic regression as the basic classifier; this choice should not affect the main point of this paper. Also note that it is straightforward to extend the framework to regression tasks, whose likelihood function $y_i \sim \mathcal{N}(\theta^T x_i, \sigma^2)$ can be handled by simple and efficient algorithms. Finally, we would like to point out that although the graphical model shows all training instances sharing the same input vector $x$, this is mainly for notational simplicity and there is indeed no such restriction in our model. This is convenient since in reality we may not be able to obtain all the task responses for the same training instance.

3 Learning and Inference for LICA

The basic idea of the inference algorithm for LICA is to iteratively estimate the task parameters $\theta_k$, the hidden sources $s_k$, the mixing matrix $\Lambda$, and the noise covariance $\Psi$. Here we present two algorithms: one for the empirical Bayes method, and the other for point estimation, which is more suitable for high-dimensional tasks.

3.1 Empirical Bayes Method

The graphical model shown in Figure 1 is an example of a hierarchical Bayesian model, where the upper levels of the hierarchy model the relation between the tasks.
We can use an empirical Bayes approach and learn the parameters $\Omega = \{\Phi, \Lambda, \Psi\}$ from the data while treating the variables $Z = \{\theta_k, s_k\}_{k=1}^{K}$ as hidden, random variables. To get around the unidentifiability caused by the interaction between $\Lambda$ and $s$, we assume $\Phi$ is of standard parametric form (e.g., zero mean and unit variance) and thus remove it from $\Omega$. The goal is to learn point estimators $\hat{\Lambda}$ and $\hat{\Psi}$ as well as to obtain posterior distributions over the hidden variables given the training data. The log-likelihood of the incomplete data, $\log p(D \mid \Omega)$,¹ can be calculated by integrating out the hidden variables:

$$\log p(D \mid \Omega) = \sum_{k=1}^{K} \log \left\{ \int \prod_{i=1}^{N} p(y_i^{(k)} \mid x_i, \theta_k) \left[ \int p(\theta_k \mid s_k, \Lambda, \Psi)\, p(s_k \mid \Phi)\, ds_k \right] d\theta_k \right\}$$

for which the maximization over the parameters $\Omega = \{\Lambda, \Psi\}$ involves two complicated integrals, over $\theta_k$ and $s_k$ respectively. Furthermore, for classification tasks the likelihood function $p(y \mid x, \theta)$ is typically non-exponential, so the exact calculation becomes intractable. However, we can approximate the solution by applying the EM algorithm to decouple it into a series of simpler E-steps and M-steps as follows:

1. E-step: Given the parameters $\Omega^{t-1} = \{\Lambda, \Psi\}^{t-1}$ from the $(t-1)$-th step, compute the distribution of the hidden variables given $\Omega^{t-1}$ and $D$: $p(Z \mid \Omega^{t-1}, D)$.

2. M-step: Maximize the expected log-likelihood of the complete data $(Z, D)$, where the expectation is taken over the distribution of the hidden variables obtained in the E-step: $\Omega^t = \arg\max_\Omega \mathbb{E}_{p(Z \mid \Omega^{t-1}, D)}\left[\log p(D, Z \mid \Omega)\right]$.

The log-likelihood of the complete data can be written as

$$\log p(D, Z \mid \Omega) = \sum_{k=1}^{K} \left\{ \sum_{i=1}^{N} \log p(y_i^{(k)} \mid x_i, \theta_k) + \log p(\theta_k \mid s_k, \Lambda, \Psi) + \log p(s_k \mid \Phi) \right\}$$

where the first and third terms do not depend on $\Omega$. After some simplification, the M-step can be summarized as $\{\hat{\Lambda}, \hat{\Psi}\} = \arg\max_{\Lambda, \Psi} \sum_{k=1}^{K} \mathbb{E}[\log p(\theta_k \mid s_k, \Lambda, \Psi)]$, which leads to the following updating equations:

$$\hat{\Lambda} = \left( \sum_{k=1}^{K} \mathbb{E}[\theta_k s_k^T] \right) \left( \sum_{k=1}^{K} \mathbb{E}[s_k s_k^T] \right)^{-1}, \qquad \hat{\Psi} = \frac{1}{K} \left( \sum_{k=1}^{K} \mathbb{E}[\theta_k \theta_k^T] - \left( \sum_{k=1}^{K} \mathbb{E}[\theta_k s_k^T] \right) \hat{\Lambda}^T \right)$$
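The two M-step updates are plain matrix operations once the expected sufficient statistics are summed over tasks; a direct numpy transcription (the moment arguments are hypothetical E-step outputs, and the function name is ours):

```python
import numpy as np

def m_step(sum_E_th_s, sum_E_ss, sum_E_thth, K):
    """M-step of the LICA EM algorithm.
    sum_E_th_s : sum_k E[theta_k s_k^T]       (F x H)
    sum_E_ss   : sum_k E[s_k s_k^T]           (H x H)
    sum_E_thth : sum_k E[theta_k theta_k^T]   (F x F)
    Returns the updated mixing matrix Lambda-hat and noise covariance Psi-hat.
    """
    Lam = sum_E_th_s @ np.linalg.inv(sum_E_ss)
    Psi = (sum_E_thth - sum_E_th_s @ Lam.T) / K
    return Lam, Psi
```

As a sanity check, in the noiseless case ($\theta_k = \Lambda_0 s_k$ exactly) the update recovers $\Lambda_0$ and a zero noise covariance.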
In the E-step we need to calculate the posterior distribution $p(Z \mid D, \Omega)$ given the parameters $\Omega$ from the previous M-step. Essentially only the first and second order moments are needed, namely $\mathbb{E}[\theta_k]$, $\mathbb{E}[s_k]$, $\mathbb{E}[\theta_k \theta_k^T]$, $\mathbb{E}[s_k s_k^T]$ and $\mathbb{E}[\theta_k s_k^T]$. Since the exact calculation is intractable, we approximate $p(Z \mid D, \Omega)$ with a $q(Z)$ belonging to the exponential family such that a certain (possibly asymmetric) distance measure between $p(Z \mid D, \Omega)$ and $q(Z)$ is minimized. In our case we apply the variational Bayes method, which uses $\mathrm{KL}\left( q(Z) \,\|\, p(D, Z \mid \Omega) \right)$ as the distance measure. The central idea is to lower bound the log-likelihood using Jensen's inequality: $\log p(D) = \log \int p(D, Z)\, dZ \geq \int q(Z) \log \frac{p(D, Z)}{q(Z)}\, dZ$. The right-hand side is what we want to maximize, and it is straightforward to show that maximizing this lower bound is equivalent to minimizing the KL-divergence $\mathrm{KL}(q(Z) \,\|\, p(Z \mid D))$. Since, given $\Omega$, the $K$ tasks are decoupled, we can conduct inference for each task separately. We further assume $q(\theta_k, s_k) = q(\theta_k)\, q(s_k)$, which is in general a reasonable simplifying assumption and allows us to carry out the optimization iteratively. The details of the E-step are shown in Algorithm 1.

¹ Here, with a slight abuse of notation, we ignore the difference between discriminative and generative models at the classifier level and use $p(D \mid \theta_k)$ to denote the likelihood in general.

Algorithm 1: Variational Bayes for the E-step (subscript $k$ is removed for simplicity)

1. Initialize $q(s)$ with a standard distribution (a Laplace distribution in our case): $q(s) = \prod_{h=1}^{H} \mathcal{L}(0, 1)$.

2. Solve the following Bayesian logistic regression (or other Bayesian classifier):
$$q(\theta) \leftarrow \arg\max_{q(\theta)} \int q(\theta) \log \frac{\mathcal{N}(\theta;\, \Lambda \mathbb{E}[s],\, \Psi) \prod_{i=1}^{N} p(y_i \mid \theta, x_i)}{q(\theta)}\, d\theta$$

3. Update $q(s)$:
$$q(s) \leftarrow \arg\max_{q(s)} \int q(s) \left[ \log \frac{p(s)}{q(s)} - \frac{1}{2} \mathrm{Tr}\left( \Psi^{-1} \left( \mathbb{E}[\theta \theta^T] + \Lambda s s^T \Lambda^T - 2\, \mathbb{E}[\theta] (\Lambda s)^T \right) \right) \right] ds$$

4. Repeat steps 2–3 until the convergence conditions are satisfied.

We would like to comment on several things in Algorithm 1.
First, we assume the form of $q(\theta)$ to be multivariate Gaussian, which is a reasonable choice especially considering that only the first and second moments are needed in the M-step. Second, the choice of the prior $p(s)$ in step 3 is significant, since for each $s$ we only have one associated "data point" $\theta$. In particular, using the Laplace distribution leads to a sparser solution for $\mathbb{E}[s]$; this will be made clearer in Section 3.2. Finally, we take the parametric form of $q(s)$ to be a product of Laplace distributions with fixed unit variance and free mean, where the fixed variance is intended to remove the unidentifiability caused by the interaction between the scales of $s$ and $\Lambda$. Although a full-covariance Gaussian for $q(s)$ is another choice, again due to the unidentifiability caused by rotations of $s$ and $\Lambda$ we would have to make it a diagonal Gaussian. As a result, we argue that the product of Laplace distributions is better than the product of Gaussians, since it has the same parametric form as the prior $p(s)$.

3.1.1 Variational Method for Bayesian Logistic Regression

We present an efficient algorithm based on the variational method proposed in [7] to solve step 2 of Algorithm 1; it is guaranteed to converge and is known to be efficient for this problem. Given a Gaussian prior $\mathcal{N}(m_0, V_0)$ over the parameter $\theta$ and a training set² $D = \{(x_1, y_1), \ldots, (x_N, y_N)\}$, we want to obtain an approximation $\mathcal{N}(m, V)$ to the true posterior distribution $p(\theta \mid D)$. Taking one data point $(x, y)$ as an example, the basic idea is to use an exponential function to approximate the non-exponential likelihood $p(y \mid x, \theta) = (1 + \exp(-y \theta^T x))^{-1}$, which in turn makes the Bayes formula tractable.

² Again we omit the task index $k$ and use $y \in \{-1, 1\}$ instead of $y \in \{0, 1\}$ to simplify notation.
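Before the derivation, it may help to see the shape of the computation: the following numpy sketch implements the resulting one-shot posterior update for a single observation, iterating the fixed point for the variational parameter ξ (the formulas are derived below; variable names are ours and y ∈ {−1, +1}):

```python
import numpy as np

def lam(xi):
    # lambda(xi) = tanh(xi/2) / (4*xi), with the xi -> 0 limit equal to 1/8
    return np.tanh(xi / 2.0) / (4.0 * xi) if xi > 1e-8 else 0.125

def jj_update(m, V, x, y, n_fix=50):
    """One-shot variational update of a Gaussian posterior N(m, V)
    for a single logistic observation (x, y), y in {-1, +1}."""
    Vx = V @ x
    c = float(x @ Vx)
    xm = float(x @ m)
    xi = 1.0
    for _ in range(n_fix):                       # fixed point for xi
        r = 2.0 * lam(xi) / (1.0 + 2.0 * lam(xi) * c)
        mean_t = xm - r * c * xm + 0.5 * y * c - 0.5 * y * r * c * c
        xi = np.sqrt(c - r * c * c + mean_t ** 2)
    r = 2.0 * lam(xi) / (1.0 + 2.0 * lam(xi) * c)
    V_post = V - r * np.outer(Vx, Vx)
    m_post = m - r * Vx * xm + 0.5 * y * Vx - 0.5 * y * r * c * Vx
    return m_post, V_post
```

A positive example along a coordinate axis pulls the posterior mean in that direction and shrinks the variance along it, while leaving the orthogonal direction untouched.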
By using the inequality

$$p(y \mid x, \theta) \geq g(\xi) \exp\left( \frac{y x^T \theta - \xi}{2} - \lambda(\xi)\left( (x^T \theta)^2 - \xi^2 \right) \right) \triangleq p(y \mid x, \theta, \xi)$$

where $g(z) = 1/(1 + \exp(-z))$ is the logistic function and $\lambda(\xi) = \tanh(\xi/2)/(4\xi)$, we can maximize the lower bound $p(y \mid x) = \int p(\theta)\, p(y \mid x, \theta)\, d\theta \geq \int p(\theta)\, p(y \mid x, \theta, \xi)\, d\theta$. An EM algorithm can be formulated by treating $\xi$ as the parameter and $\theta$ as the hidden variable:

• E-step: $Q(\xi, \xi^t) = \mathbb{E}\left[ \log\{ p(\theta)\, p(y \mid x, \theta, \xi) \} \mid x, y, \xi^t \right]$

• M-step: $\xi^{t+1} = \arg\max_\xi Q(\xi, \xi^t)$

Due to the Gaussianity assumption, the E-step can be thought of as updating the sufficient statistics (mean and covariance) of $q(\theta)$. Finally, by using the Woodbury formula the EM iterations can be unraveled, and we get an efficient one-shot E-step update that involves no matrix inversion (due to space limitations we skip the derivation):

$$V_{\mathrm{post}} = V - \frac{2\lambda(\xi)}{1 + 2\lambda(\xi) c}\, (Vx)(Vx)^T$$

$$m_{\mathrm{post}} = m - \frac{2\lambda(\xi)}{1 + 2\lambda(\xi) c}\, V x\, x^T m + \frac{y}{2}\, V x - \frac{y}{2} \cdot \frac{2\lambda(\xi)}{1 + 2\lambda(\xi) c}\, c\, V x$$

where $c = x^T V x$, and $\xi$ is first obtained from the M-step, which reduces to finding the fixed point of the following one-dimensional problem and can be solved efficiently:

$$\xi^2 = c - \frac{2\lambda(\xi)}{1 + 2\lambda(\xi) c}\, c^2 + \left( x^T m - \frac{2\lambda(\xi)}{1 + 2\lambda(\xi) c}\, c\, x^T m + \frac{y}{2}\, c - \frac{y}{2} \cdot \frac{2\lambda(\xi)}{1 + 2\lambda(\xi) c}\, c^2 \right)^2$$

This process is performed for each data point to obtain the final approximation $q(\theta)$.

3.2 Point Estimation

Although the empirical Bayes method is efficient for medium-sized problems, both its computational cost and memory requirements grow as the number of data instances or features increases. This can easily happen in the text or image domains, where the number of features can exceed ten thousand, so we need faster methods. We can obtain point estimates of $\{\theta_k, s_k\}_{k=1}^{K}$ by treating them as a limiting case of the previous algorithm.
To be more specific, by letting $q(\theta)$ and $q(s)$ converge to Dirac delta functions, step 2 in Algorithm 1 can be thought of as finding the MAP estimate of $\theta$, and step 3 becomes the following lasso-like optimization problem ($m_s$ denotes the point estimate of $s$):

$$\hat{m}_s = \arg\min_{m_s}\ 2\|m_s\|_1 + m_s^T \Lambda^T \Psi^{-1} \Lambda m_s - 2\, m_s^T \Lambda^T \Psi^{-1} \mathbb{E}[\theta]$$

which can be solved numerically. Furthermore, the solution of the above optimization is sparse in $m_s$. This is a particularly nice property, since we would only like to consider hidden sources whose association with the tasks is significantly supported by the evidence.

4 Experimental Results

The LICA model works most effectively if the tasks we want to learn are closely related. In our experiments we apply the LICA model to multi-label text classification problems, which arise in many existing text collections, including the most popular ones like Reuters-21578 and the new RCV1 corpus. Here each individual task is to classify a given document with respect to a particular category, and it is assumed that the multi-label property implies that some of the tasks are related through latent sources (semantic topics). For Reuters-21578 we choose nine categories out of ninety, based on the fact that those categories have often been found to be correlated in previous studies [8]. After some preprocessing³ we get 3,358 unique features/words, and the empirical Bayes method is used to solve this problem.

³ We do stemming, remove stopwords, and remove rare words (words that occur fewer than three times).

[Figure 2: Multi-label text classification results on Reuters-21578 (Macro-F1 vs. training set size) and RCV1 (Micro-F1 vs. training set size), comparing LICA with individually learned classifiers.]

On the other hand, if we include all 116 TOPIC categories of the RCV1 corpus we get a much larger vocabulary: 47,236 unique features.
Bayesian inference is intractable in this high-dimensional case, since the memory requirement alone is O(F²) to store the full covariance matrix V[θ]. As a result, we take the point estimation approach, which reduces the memory requirement to O(F). For both data sets we use the standard training/test split, but since the test part of the RCV1 corpus is huge (around 800k documents), we randomly sample only 10k documents as our test set. Since the effectiveness of learning multiple related tasks jointly should be best demonstrated when we have limited resources, we evaluate LICA while varying the size of the training set. Each setting is repeated ten times and the results are summarized in Figure 2. In Figure 2 the result "individual" is obtained by using regularized logistic regression for each category individually. The number of tasks K is 9 and 116 for Reuters-21578 and RCV1 respectively, and we set H (the dimension of the hidden sources) equal to K in our experiments. We use the F1 measure, which is preferred to error rate in text classification due to the very unbalanced positive/negative document ratio. For the Reuters-21578 collection we report Macro-F1 results, because this corpus is easier and thus Micro-F1 is almost the same for both methods. For the RCV1 collection we report only Micro-F1 due to space limitations; in fact we observed a similar trend in Macro-F1, although the values are much lower due to the large number of rare categories. Furthermore, we achieved a sparse solution with the point estimation method: for most of the tasks on the RCV1 collection we obtained fewer than 5 non-zero sources out of 116.

5 Discussions on Related Work

By viewing multitask learning as predicting multivariate responses, Breiman and Friedman [3] proposed a method called "Curds and Whey" for regression problems. The intuition is to apply shrinkage in a rotated basis instead of the original task basis, so that information can be borrowed among tasks.
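As a side note on the metrics above: macro-F1 averages per-category F1 scores, while micro-F1 pools the counts over all categories before computing F1, so rare categories weigh much more heavily in the macro variant (a small sketch with our own names):

```python
def f1(tp, fp, fn):
    # Harmonic mean of precision and recall, via counts.
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def macro_micro_f1(counts):
    """counts: list of (tp, fp, fn) tuples, one per category."""
    macro = sum(f1(tp, fp, fn) for tp, fp, fn in counts) / len(counts)
    tp, fp, fn = (sum(c[i] for c in counts) for i in range(3))
    micro = f1(tp, fp, fn)
    return macro, micro
```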
By treating tasks as IID draws from some probability space, empirical process theory [2] has been applied to study the bounds and asymptotics of multiple task learning, as in standard learning. On the other hand, from a general Bayesian perspective [2][6] we can treat the problem of learning multiple tasks as learning a Bayesian prior over the task space. Despite the generality of the above two principles, it is often necessary to assume some specific structure or parametric form for the task space, since the functional space is usually of higher or infinite dimension compared to the input space. Our model is related to the recently proposed Semiparametric Latent Factor Model (SLFM) for regression by Teh et al. [10]. It uses Gaussian Processes (GPs) to model regression through a latent factor analysis. Besides the difference between FA and ICA, its advantage is that GPs are non-parametric and work on the instance space; the disadvantage of that model is that training instances need to be shared across all tasks. Furthermore, it is not clear how to explore different task structures in this instance-space viewpoint. As pointed out earlier, exploring different source models is important in learning related tasks, as the prior often plays a more important role than it does in standard learning.

6 Conclusion and Future Work

In this paper we proposed a probabilistic framework for learning multiple related tasks, which tries to identify the shared latent independent components that are responsible for the relatedness among those tasks. We also presented the corresponding empirical Bayes method as well as point estimation algorithms for learning the model. Using non-Gaussian distributions for the hidden sources makes it possible to identify independent components instead of merely decorrelating, and in particular we obtain sparsity by modeling the hidden sources with a Laplace distribution.
The sparsity property makes the model not only parsimonious but also more robust, since the dependence on latent independent sources is shrunk toward zero unless significantly supported by evidence from the data. By learning related tasks jointly, we are able to obtain a better estimate of the latent independent sources and thus achieve better generalization than conventional approaches in which each task is learned independently. Our experimental results on multi-label text classification problems support this claim. Our approach assumes that the underlying structure in the task space is a linear subspace, which can usually capture important information about the independent sources. However, it may be possible to achieve better results by incorporating specific domain knowledge about the relatedness of the tasks into the model and obtaining a reliable estimate of the structure. For future research, we would like to consider more flexible source models as well as incorporate domain-specific knowledge to specify and learn the underlying structure.

References

[1] Ando, R. and Zhang, T. A Framework for Learning Predictive Structures from Multiple Tasks and Unlabeled Data. Technical Report RC23462, IBM T.J. Watson Research Center, 2004.
[2] Baxter, J. A Model for Inductive Bias Learning. J. of Artificial Intelligence Research, 2000.
[3] Breiman, L. and Friedman, J. Predicting Multivariate Responses in Multiple Linear Regression. J. Royal Stat. Society B, 59:3-37, 1997.
[4] Evgeniou, T., Micchelli, C. and Pontil, M. Learning Multiple Tasks with Kernel Methods. J. of Machine Learning Research, 6:615-637, 2005.
[5] Ghosn, J. and Bengio, Y. Bias Learning, Knowledge Sharing. IEEE Transactions on Neural Networks, 14(4):748-765, 2003.
[6] Heskes, T. Empirical Bayes for Learning to Learn. In Proc. of the 17th ICML, 2000.
[7] Jaakkola, T. and Jordan, M.
A Variational Approach to Bayesian Logistic Regression Models and Their Extensions. In Proc. of the Sixth Int. Workshop on AI and Statistics, 1997.
[8] Koller, D. and Sahami, M. Hierarchically Classifying Documents using Very Few Words. In Proc. of the 14th ICML, 1997.
[9] Roberts, S. and Everson, R. (editors). Independent Component Analysis: Principles and Practice. Cambridge University Press, 2001.
[10] Teh, Y.-W., Seeger, M. and Jordan, M. Semiparametric Latent Factor Models. In Z. Ghahramani and R. Cowell, editors, Workshop on Artificial Intelligence and Statistics 10, 2005.
[11] Yu, K., Tresp, V. and Schwaighofer, A. Learning Gaussian Processes from Multiple Tasks. In Proc. of the 22nd ICML, 2005.
Context as Filtering

Daichi Mochihashi, ATR Spoken Language Communication Research Laboratories, Hikaridai 2-2-2, Keihanna Science City, Kyoto, Japan. daichi.mochihashi@atr.jp
Yuji Matsumoto, Graduate School of Information Science, Nara Institute of Science and Technology, Takayama 8916-5, Ikoma City, Nara, Japan. matsu@is.naist.jp

Abstract

Long-distance language modeling is important not only in speech recognition and machine translation, but also in high-dimensional discrete sequence modeling in general. However, the problem of context length has been largely neglected so far, and a naïve bag-of-words history has been employed in natural language processing. In contrast, in this paper we view topic shifts within a text as a latent stochastic process, giving an explicit probabilistic generative model that has partial exchangeability. We propose an online inference algorithm using particle filters to recognize topic shifts and thereby employ the most appropriate context length automatically. Experiments on the BNC corpus showed consistent improvement over previous methods that ignore chronological order.

1 Introduction

Context plays an essential role in the linguistic behavior of humans. We infer the context we are involved in and make adaptive linguistic responses by selecting an appropriate model based on that information. In natural language processing research, such models are called long-distance language models; they incorporate the distant effects of previous words beyond the short-term dependencies between a few words captured by n-gram models. Besides the obvious applications in speech recognition and machine translation, we note that many problems of discrete data processing reduce to language modeling, such as information retrieval [1], Web navigation [2], human-machine interaction, and collaborative filtering and recommendation [3].
From the viewpoint of signal processing or control theory, context modeling is clearly a filtering problem: estimating the states of a system sequentially along time in order to predict its outputs. However, for the problem of long-distance language modeling, natural language processing has so far only provided simple averaging over the set of all words from the beginning of a text, totally dropping chronological order and implicitly assuming that the text comes from a stationary information source [4, 5]. The inherent difficulties that have prevented filtering approaches to language modeling are its discreteness and high dimensionality, which preclude Kalman filters and their extensions, all of which are designed for vector spaces and distributions like Gaussians. As we note in the following, ordinary discrete HMMs are not powerful enough for this purpose because their true state is restricted to a single hidden component [6]. In contrast, this paper proposes to solve the high-dimensional discrete filtering problem directly using a Particle Filter. By combining a multinomial Particle Filter recently proposed in statistics for DNA sequence modeling [7] with the Bayesian text models LDA and DM, we introduce two models that can track the multinomial stochastic processes of natural language and of the similar high-dimensional discrete data domains that we often encounter.

2 Mean Shift Model of Context

2.1 HMM for Multinomial Distributions

The long-distance language models mentioned in Section 1 assume a hidden multinomial distribution, such as a unigram distribution or a mixture distribution over latent topics, and predict the next word by updating its estimate according to the observations. Therefore, to track context shifts, we need a model that describes changes of multinomial distributions. One such model is a multinomial extension of the Mean shift model (MSM), recently proposed in the field of statistics [7].
This is a kind of HMM, but note that it is different from traditional discrete HMMs. In discrete HMMs, the true state is one of M components and we estimate it stochastically as a multinomial over the M components. On the other hand, since the true state here is itself a multinomial over the components, we estimate it stochastically as (possibly a mixture of) a Dirichlet distribution, a distribution over multinomial distributions on the (M−1)-simplex. This HMM has some similarity to the Factorial HMM [6] in that it has combinatorial representational power through a distributed state representation. However, because the true state here is a multinomial over the latent variables, there are dependencies between the states that are assumed independent in the FHMM. Below, we briefly introduce a multinomial Mean shift model following [7] and an associated solution using a Particle Filter.

2.2 Multinomial Mean Shift Model

The MSM is a generative model that describes intermittent changes of hidden states and the outputs generated according to them. Although a corresponding counterpart using the Normal distribution was introduced first [8, 9], here we concentrate on a multinomial extension of the MSM, following [7] for DNA sequence modeling. In a multinomial MSM, we assume time-dependent true multinomials θ_t that may change occasionally, and the following generative model for the discrete outputs y^t = y_1 y_2 . . . y_t (y_t ∈ Σ; Σ is a set of symbols) according to θ_1 θ_2 . . . θ_t:

  θ_t ∼ Dir(α)   with probability ρ,
  θ_t = θ_{t−1}  with probability (1−ρ),
  y_t ∼ Mult(θ_t)   (1)

where Dir(α) and Mult(θ) are a Dirichlet and a multinomial distribution with parameters α and θ, respectively. Here we assume that the hyperparameter α is known and fixed, an assumption we will relax in Section 3. This model first draws a multinomial θ from Dir(α) and samples outputs y according to θ for a certain interval.
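The generative process (1) can be simulated directly, e.g. (a minimal sketch; function and variable names are ours):

```python
import numpy as np

def sample_msm(alpha, rho, T, seed=0):
    """Simulate the multinomial Mean shift model of Eq. (1).

    alpha: Dirichlet hyperparameter (length = alphabet size)
    rho:   probability of a change point at each step
    Returns the outputs y_1..y_T and the change indicators I_1..I_T.
    """
    rng = np.random.default_rng(seed)
    theta = rng.dirichlet(alpha)
    ys, changes = [], []
    for t in range(T):
        change = (t == 0) or (rng.random() < rho)
        if change:
            theta = rng.dirichlet(alpha)     # resample the hidden multinomial
        changes.append(int(change))
        ys.append(int(rng.choice(len(alpha), p=theta)))
    return ys, changes
```

Only `ys` would be observed by the filter; `theta` and `changes` are the hidden quantities to be inferred.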
When a change point occurs with probability ρ, a new θ is sampled again from Dir(α) and subsequent y are sampled from the new θ. This process continues recursively; throughout, neither θ_t nor the change points are known to us: all we know is the output sequence y^t. However, if we know that the last change occurred at time c, y can be predicted exactly. Let I_t be a binary variable that represents whether a change occurred at time t: that is, I_t = 1 means there was a change at t (θ_t ≠ θ_{t−1}), and I_t = 0 means there was no change (θ_t = θ_{t−1}). When the last change occurred at time c,

  p(y_{t+1} = y | y^t, I_c = 1, I_{c+1} = · · · = I_t = 0)
    = ∫ p(y|θ) p(θ|y_c · · · y_t) dθ   (2)
    = (α_y + n_y) / Σ_y (α_y + n_y),   (3)

where α_y is the y'th element of α and n_y is the number of occurrences of y in y_c · · · y_t. Therefore, the essence of this problem lies in how to detect a change point given the data up to time t, a change point problem in discrete space. This problem can be solved by an efficient Particle Filter algorithm [10], shown below.

2.3 Multinomial Particle Filter

The prediction problem above can be solved by the efficient Particle Filter algorithm shown in Figure 1, displayed graphically in Figure 2 (excluding prior updates).

  1. For particles i = 1 . . . N,
     (a) Calculate f(t) and g(t) according to (6).
     (b) Sample I_t^(i) ∼ Bernoulli(f(t)/(f(t) + g(t))), and update I^(i)_{t−1} to I^(i)_t.
     (c) Update the weight w_t^(i) = w_{t−1}^(i) · (f(t) + g(t)).
  2. Find the predictive distribution using w_t^(1) . . . w_t^(N) and I_t^(1) . . . I_t^(N):
       p(y_{t+1}|y^t) = Σ_{i=1}^N w_t^(i) p(y_{t+1}|y^t, I_t^(i))   (4)
     where p(y_{t+1}|y^t, I_t^(i)) is given by (3).

  Figure 1: Algorithm of the Multinomial Particle Filter.

[Figure 2: The Multinomial Particle Filter at work.]

The main intricacy involved is as follows. Let us denote I^t = {I_1 . . . I_t}.
By Bayes' theorem,

  p(I_t | I^{t−1}, y^t) ∝ p(I_t, y_t | I^{t−1}, y^{t−1}) = p(y_t | y^{t−1}, I^{t−1}, I_t) p(I_t | I^{t−1})   (5)

with

  f(t) := p(y_t | y^{t−1}, I^{t−1}, I_t = 1) p(I_t = 1 | I^{t−1}),
  g(t) := p(y_t | y^{t−1}, I^{t−1}, I_t = 0) p(I_t = 0 | I^{t−1}),   (6)

leading to

  p(I_t = 1 | I^{t−1}, y^t) = f(t)/(f(t) + g(t)),
  p(I_t = 0 | I^{t−1}, y^t) = g(t)/(f(t) + g(t)).   (7)

In Expression (5), the first term is the likelihood of observation y_t when I_t has been fixed, which can be obtained through (3). The second term is the prior probability of a change, which can tentatively be set to a constant ρ. However, when we endow ρ with a Beta prior distribution Be(α, β), the posterior estimate of ρ_t given the binary change point history I^{t−1} can be obtained from the number of 1's in I^{t−1}, denoted n_{t−1}(1), following a standard Bayesian method:

  E[ρ_t | I^{t−1}] = (α + n_{t−1}(1)) / (α + β + t − 1).   (8)

This means that we can estimate a "rate of topic shifts" in a Bayesian fashion as time proceeds. Throughout the following experiments, we used this online estimate of ρ_t. The above algorithm runs for each observation y_t (t = 1 . . . T). If we observe a "strange" word that is more predictable from the prior than from the contextual distribution, (6) makes f(t) larger than g(t), which leads to a higher probability that I_t = 1 will be sampled in the Bernoulli trial of step 1(b).

3 Mean Shift Model of Natural Language

Chen and Lai [7] recently proposed the above algorithm to analyze DNA sequences. However, when extending this approach to natural language, i.e. word sequences, we meet two serious problems. The first is that in a natural language the number of words is extremely large. As opposed to DNA, which has only the four letters A/T/G/C, a natural language usually contains a minimum of some tens of thousands of words, and there are strong correlations between them.
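Returning to the filter of Section 2.3, one particle's update (steps 1(a)-(c), using the predictive (3), the decomposition (6), and the online change-rate estimate (8)) might look like this (a sketch; all names are ours, symbols are indexed 0..V−1, and `counts` holds the occurrence counts since the particle's last sampled change point):

```python
import numpy as np

def particle_step(counts, alpha, y, n1, t, a=1.0, b=50.0, rng=None):
    """One step of the multinomial particle filter for a single particle.

    counts: per-symbol counts since this particle's last change point
    alpha:  Dirichlet hyperparameter
    y:      newly observed symbol index
    n1:     number of change points sampled so far by this particle
    t:      current time step
    Returns (sampled I_t, weight factor f + g, updated counts).
    """
    rng = rng or np.random.default_rng(0)
    rho = (a + n1) / (a + b + t - 1)                 # Eq. (8): Beta posterior mean
    p_change = alpha[y] / alpha.sum()                # Eq. (3) with fresh counts
    p_stay = (alpha[y] + counts[y]) / (alpha.sum() + counts.sum())
    f, g = p_change * rho, p_stay * (1.0 - rho)      # Eq. (6)
    I_t = int(rng.random() < f / (f + g))            # step 1(b)
    counts = np.zeros_like(counts) if I_t else counts.copy()
    counts[y] += 1
    return I_t, f + g, counts
```

A word that is well predicted by the current context makes g(t) large, so the particle tends to keep its context; a "strange" word shifts the odds toward sampling a change point, exactly as described above.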
For example, if "nurse" follows "hospital," we believe that there has been no context shift; however, if "university" follows "hospital," the context has probably shifted to a "medical school" subtopic, even though the two words are equally distinct from "hospital." Of course, this is due to the semantic relationship we can assume between these words. However, the original multinomial MSM cannot capture this relationship because it treats the words independently. To incorporate this relationship, we require extensive prior knowledge of words in the form of a probabilistic model. The second problem is that in model equation (1), the hyperparameter α of the prior Dirichlet distribution of the latent multinomials is assumed to be known. In the case of natural language, this means we know beforehand what words or topics will be spoken in all the texts. Clearly, this is not a natural assumption: we need an online estimation of α as well when we want to extend the MSM to natural language. To solve these problems, we extended the multinomial MSM using two probabilistic text models, LDA and DM. Below we introduce MSM-LDA and MSM-DM, in this order.

3.1 MSM-LDA

Latent Dirichlet Allocation (LDA) [3] is a probabilistic text model that assumes a hidden multinomial topic distribution θ over M topics for a document d, and estimates it stochastically as a Dirichlet distribution p(θ|d). Context modeling using LDA [5] regards a history h = w_1 . . . w_h as a pseudo document and estimates a variational approximation q(θ|h) of the topic distribution p(θ|h) through a variational Bayes EM algorithm on the document [3]. After obtaining the topic distribution q(θ|h), we can predict the next word as follows:

  p(y|h) = ∫ p(y|θ) q(θ|h) dθ = Σ_{i=1}^M p(y|θ_i) ⟨θ_i⟩_{q(θ|h)}   (9)

When we use this prediction with the associated VB-EM algorithm in place of the naïve Dirichlet model (3) of the MSM, we get an MSM-LDA that tracks a latent topic distribution θ instead of a word distribution.
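In code, prediction (9) is simply the topic-word matrix applied to the posterior mean topic mixture (a sketch with our own names; `beta[i]` plays the role of p(·|topic i), and `gamma` holds the parameters of the variational Dirichlet posterior q(θ|h)):

```python
import numpy as np

def lda_predict(beta, gamma):
    """Eq. (9): p(y|h) = sum_i p(y|topic i) * E_q[theta_i].

    beta:  (M, V) topic-word probabilities, rows sum to 1
    gamma: (M,) Dirichlet posterior parameters of q(theta|h)
    """
    mean_theta = gamma / gamma.sum()     # Dirichlet expectation of theta
    return mean_theta @ beta             # (V,) next-word distribution
```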
Since each particle computes a Dirichlet posterior over the topic distribution, the final topic distribution of MSM-LDA is a mixture of Dirichlet distributions, used for predicting the next word through (4) and (9), as shown in Figure 3(a). Note that MSM-LDA has an implicit generative model corresponding to (1) in topic space. However, here we use a conditional model in which the LDA parameters are already known, in order to estimate the context online. In MSM-LDA, we can also update the hyperparameter α sequentially from the history. As seen in Figure 2, each particle has a history that has been segmented into pseudo "documents" d_1 . . . d_c by the change points sampled so far. Since each pseudo "document" has a Dirichlet posterior q(θ|d_i) (i = 1 . . . c), a common Dirichlet prior can be inferred by a linear-time Newton-Raphson algorithm [3]. Note that this computation needs to be run only when a change point has been sampled. For this purpose, only the sufficient statistics q(θ|d_i) must be stored for each particle, rendering it an online algorithm. Note in passing that MSM-LDA is a model that only tracks the mixing distribution of a mixture model. Therefore, in principle this model is also applicable to other mixture models, e.g. Gaussian mixtures, whose mixing distribution is not static but evolves according to (1).

[Figure 3: MSM-LDA (a) and MSM-DM (b) at work.]
However, in terms of multinomial estimation, this generality has a drawback: it uses a lower-dimensional topic representation to predict the next word, which may cause a loss of information. In contrast, MSM-DM is a model that works directly on the word space to predict the next word with no loss of information.

3.2 MSM-DM

Dirichlet Mixtures (DM) [11] is a novel Bayesian text model with the lowest perplexity reported so far in context modeling. DM uses no intermediate "topic" variables, but places a mixture of Dirichlet distributions directly on the word simplex to model word correlations. Specifically, DM assumes the following generative model for a document w = w_1 . . . w_N:¹

  1. Draw m ∼ Mult(λ).
  2. Draw p ∼ Dir(α_m).
  3. For n = 1 . . . N,
     a. Draw w_n ∼ Mult(p).

where p is a V-dimensional unigram distribution over words, α_1 . . . α_M (collectively α_1^M) are the parameters of the Dirichlet prior distributions of p, and λ is an M-dimensional prior mixing distribution over them. This model can be considered a Bayesian extension of the Unigram Mixture [12] and has the graphical model shown in Figure 4.

[Figure 4: Graphical models of (a) the Unigram Mixture (UM) and (b) Dirichlet Mixtures (DM).]

Given a set of documents D = {w_1, w_2, . . . , w_D}, the parameters λ and α_1^M can be iteratively estimated by a combination of the EM algorithm and the modified Newton-Raphson method shown in Figure 5, which is a straightforward extension of the estimation of a Polya mixture [13].² Under DM, the predictive probability p(y|h) is (omitting dependencies on λ and α_1^M):

  p(y|h) = Σ_{m=1}^M p(y|m, h) p(m|h)
         = Σ_{m=1}^M [ ∫ p(y|p) p(p|α_m, h) dp ] · p(m|h)
         = Σ_{m=1}^M C_m (α_{my} + n_y) / Σ_y (α_{my} + n_y),   (10)

¹Step 1 of the generative model can in fact be replaced by a Dirichlet process prior. A full Bayesian treatment of DM through Dirichlet processes is currently under development.
²DM is an extension of the model for amino acids [14] to natural language, with a huge number of parameters, which precludes the ordinary Newton-Raphson algorithm originally proposed in [14].

  E-step: p(m|w_i) ∝ λ_m · [Γ(Σ_v α_mv) / Γ(Σ_v α_mv + Σ_v n_iv)] · Π_{v=1}^V [Γ(α_mv + n_iv) / Γ(α_mv)]   (13)
  M-step: λ_m ∝ Σ_{i=1}^D p(m|w_i),   (14)
          α′_mv = α_mv · [Σ_i p(m|w_i) n_iv/(α_mv + n_iv − 1)] / [Σ_i p(m|w_i) Σ_v n_iv/(Σ_v α_mv + Σ_v n_iv − 1)]   (15)

  Figure 5: EM-Newton algorithm of Dirichlet Mixtures.

where

  C_m ∝ λ_m · [Γ(Σ_v α_mv) / Γ(Σ_v α_mv + h)] · Π_{v=1}^V [Γ(α_mv + n_v) / Γ(α_mv)]   (11)

and n_v is the number of occurrences of v in h. This prediction can also be considered an extension of Dirichlet smoothing [15] with multiple hyperparameters α_m, weighted accordingly by C_m.³ When we replace the naïve Dirichlet model (3) by the DM prediction (10), we get a flexible MSM-DM dynamic model that works on the word simplex directly. Since the original multinomial MSM places a Dirichlet prior in model (1), MSM-DM can be considered a natural extension of the MSM obtained by placing a mixture of Dirichlet priors, rather than a single Dirichlet prior, on the multinomial unigram distribution. Because each particle calculates a mixture of Dirichlet posteriors for the current context, the final MSM-DM estimate is a mixture of them, again a mixture of Dirichlet distributions, as shown in Figure 3(b). In this case, we can also update the mixture prior λ sequentially. Because each particle has "pseudo documents" w_1 . . . w_c segmented individually by its change points, the posterior λ_m can be obtained similarly to (14):

  λ_m ∝ Σ_{i=1}^c p(m|w_i)   (12)

where p(m|w_i) is obtained from (13). In this case as well, only the sufficient statistics p(m|w_i) (i = 1 . . . c) must be stored to make MSM-DM a filtering algorithm.

4 Experiments

We conducted experiments using the standard British National Corpus (BNC). We randomly selected 100 files of BNC written texts as an evaluation set, and used the remaining 2,943 files as a training set for parameter estimation of LDA and DM in advance.
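For concreteness, the DM predictive distribution (10) with mixture weights (11) can be computed in log space for numerical stability (an illustrative sketch; `gammaln` is SciPy's log-gamma function, all other names are ours):

```python
import numpy as np
from scipy.special import gammaln

def dm_predict(alphas, lam_, counts):
    """Eqs. (10)-(11): Dirichlet-mixture predictive distribution.

    alphas: (M, V) Dirichlet parameters alpha_m
    lam_:   (M,) mixing weights lambda_m
    counts: (V,) word counts n_v of the history h
    """
    h = counts.sum()
    a0 = alphas.sum(axis=1)
    # log C_m, Eq. (11), up to an additive constant
    log_c = (np.log(lam_) + gammaln(a0) - gammaln(a0 + h)
             + (gammaln(alphas + counts) - gammaln(alphas)).sum(axis=1))
    c = np.exp(log_c - log_c.max())
    c /= c.sum()                       # normalized mixture weights C_m
    # per-component Dirichlet smoothing, Eq. (10)
    comp = (alphas + counts) / (alphas + counts).sum(axis=1, keepdims=True)
    return c @ comp
```

Each component is an ordinary Dirichlet-smoothed unigram estimate; the history reweights the components through C_m, which is what makes this a "dynamic Dirichlet smoothing."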
4.1 Training and evaluation data

Since LDA and DM did not converge on long texts like those in the BNC, we divided the training texts into pseudo documents of at least ten sentences for parameter estimation. Due to the huge size of the BNC, we randomly selected a maximum of 20 pseudo documents from each of the 2,943 files to produce a final corpus of 56,939 pseudo documents comprising 11,032,233 words. We used a lexicon of the 52,846 words with frequency ≥ 5. Note that this segmentation is optional and has only an indirect influence on the experiments: it only affects the clustering of LDA and DM, and in fact we could use another corpus, e.g. a newspaper corpus, to estimate the parameters without any preprocessing. Since the proposed method is an algorithm that simultaneously captures topic shifts and their rate in a text to predict the next word, we need evaluation texts that have different rates of topic shifts. For this purpose, we prepared four different text sets by sampling from the long BNC texts. Specifically, we conducted sentence-based random sampling as follows. (1) Select a first sentence randomly for each text. (2) Sample X contiguous sentences from that sentence. (3) Skip Y sentences. (4) Continue steps (2) and (3) until the desired length of text is obtained. In the procedure above, X and Y are random variables with the uniform distributions given in Table 1. We sampled 100 sentences from each of the 100 files by this procedure to create the four evaluation text sets listed in the table.

³Therefore, MSM-DM can also be considered an ingenious dynamic Dirichlet smoothing as well as a context model.

  Table 2: Contextual unigram perplexities for the evaluation texts.

  Text  | MSM-DM          | DM      | MSM-LDA | LDA
  Raw   | 870.06 (−6.02%) | 925.83  | 1028.04 | 1037.42
  Slow  | 893.06 (−8.31%) | 974.04  | 1047.08 | 1060.56
  Fast  | 898.34 (−9.10%) | 988.26  | 1044.56 | 1061.01
  VFast | 960.26 (−7.57%) | 1038.89 | 1065.15 | 1050.83
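The sampling procedure (1)-(4) above can be written out as follows (a sketch with our own names; X and Y are drawn uniformly from the ranges in Table 1):

```python
import random

def sample_eval_text(sentences, x_range, y_range, target, rng=None):
    """Steps (1)-(4): keep X contiguous sentences, skip Y, repeat."""
    rng = rng or random.Random(0)
    i = rng.randrange(len(sentences))       # (1) random starting sentence
    out = []
    while len(out) < target and i < len(sentences):
        x = rng.randint(*x_range)           # (2) sample X contiguous sentences
        out.extend(sentences[i:i + x])
        i += x + rng.randint(*y_range)      # (3) skip Y sentences
    return out[:target]
```

With a wide skip range the sampled text shifts subtopics quickly, which is what distinguishes the Fast and VeryFast evaluation sets from Raw.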
4.2 Parameter settings

  Table 1: Types of evaluation texts.

  Name     | Property
  Raw      | X = 100, Y = 0
  Slow     | 1 ≤ X ≤ 10, 1 ≤ Y ≤ 3
  Fast     | 1 ≤ X ≤ 10, 1 ≤ Y ≤ 10
  VeryFast | X = 1, 1 ≤ Y ≤ 10

The numbers of latent classes in LDA and DM are set to 200 and 50, respectively.⁴ The number of particles is set to N = 20, a relatively small number, because each particle executes an exact Bayesian prediction once the previous change points have been sampled. The Beta prior distribution of context change can be initialized as a uniform distribution, (α, β) = (1, 1). However, based on a preliminary experiment we set it to (α, β) = (1, 50): this means we initially assume a context change rate of once every 50 words on average, which is then updated adaptively.

4.3 Experimental results

Table 2 shows the unigram perplexity of contextual prediction for each type of evaluation set. Perplexity is the reciprocal of the geometric average of the contextual predictions; thus better predictions yield lower perplexity. While MSM-LDA only slightly improves on LDA, due to the topic space compression explained in Section 3.1, MSM-DM yields consistently better predictions, and its gain is larger for texts whose subtopics change faster. Figure 6 shows a plot of the actual improvements relative to DM, PPL_MSM − PPL_DM. We can see that prediction improves for most documents by automatically selecting appropriate contexts. The maximum improvement was −365 in PPL for one of the evaluation texts. Finally, we show in Figure 7 a sequential plot of the context change probabilities p^(i)(I_t = 1) (i = 1..N, t = 1..T) calculated by each particle for the first 1,000 words of one of the evaluation texts.

5 Conclusion and Future Work

In this paper, we extended the multinomial Particle Filter from a small number of symbols to natural language, with an extremely large number of symbols.
By combining the original filter with the Bayesian text models LDA and DM, we obtained two models, MSM-LDA and MSM-DM, that can incorporate the semantic relationships between words and can update their hyperparameters sequentially. With this model, prediction is made using a mixture of different context lengths sampled by the Monte Carlo particles. Although the proposed method is still at a fundamental stage, we are planning to extend it to larger units of change points beyond words, and to use forward-backward MCMC or Expectation Propagation to model the semantic structure of text more precisely.

⁴We deliberately chose a smaller number of mixtures for DM because it is reported to perform better with few mixtures, since it is essentially a unitopic model, in contrast to LDA.

[Figure 6: Perplexity reductions of MSM relative to DM.]
[Figure 7: Context change probabilities for a 1,000-word text, sampled by the particles.]

References

[1] Jay M. Ponte and W. Bruce Croft. A Language Modeling Approach to Information Retrieval. In Proc. of SIGIR '98, pages 275–281, 1998.
[2] David Cohn and Thomas Hofmann. The Missing Link: a probabilistic model of document content and hypertext connectivity. In NIPS 2001, 2001.
[3] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[4] Daniel Gildea and Thomas Hofmann. Topic-based Language Models Using EM. In Proc. of EUROSPEECH '99, pages 2167–2170, 1999.
[5] Takuya Mishina and Mikio Yamamoto. Context adaptation using variational Bayesian learning for ngram models based on probabilistic LSA. IEICE Trans. on Inf. and Sys., J87-D-II(7):1409–1417, 2004.
[6] Zoubin Ghahramani and Michael I. Jordan. Factorial Hidden Markov Models.
In Advances in Neural Information Processing Systems (NIPS), volume 8, pages 472–478. MIT Press, 1995.
[7] Yuguo Chen and Tze Leung Lai. Sequential Monte Carlo Methods for Filtering and Smoothing in Hidden Markov Models. Discussion Paper 03-19, Institute of Statistics and Decision Sciences, Duke University, 2003.
[8] H. Chernoff and S. Zacks. Estimating the Current Mean of a Normal Distribution Which is Subject to Changes in Time. Annals of Mathematical Statistics, 35:999–1018, 1964.
[9] Yi-Chin Yao. Estimation of a noisy discrete-time step function: Bayes and empirical Bayes approaches. Annals of Statistics, 12:1434–1447, 1984.
[10] Arnaud Doucet, Nando de Freitas, and Neil Gordon. Sequential Monte Carlo Methods in Practice. Statistics for Engineering and Information Science. Springer-Verlag, 2001.
[11] Mikio Yamamoto and Kugatsu Sadamitsu. Dirichlet Mixtures in Text Modeling. CS Technical Report CS-TR-05-1, University of Tsukuba, 2005. http://www.mibel.cs.tsukuba.ac.jp/~myama/pdf/dm.pdf.
[12] Kamal Nigam, Andrew K. McCallum, Sebastian Thrun, and Tom M. Mitchell. Text Classification from Labeled and Unlabeled Documents using EM. Machine Learning, 39(2/3):103–134, 2000.
[13] Thomas P. Minka. Estimating a Dirichlet distribution, 2000. http://research.microsoft.com/~minka/papers/dirichlet/.
[14] K. Sjölander, K. Karplus, M.P. Brown, R. Hughey, R. Krogh, I.S. Mian, and D. Haussler. Dirichlet Mixtures: A Method for Improved Detection of Weak but Significant Protein Sequence Homology. Computing Applications in the Biosciences, 12(4):327–245, 1996.
[15] D. J. C. MacKay and L. Peto. A Hierarchical Dirichlet Language Model. Natural Language Engineering, 1(3):1–19, 1994.
A Matching Pursuit Approach to Sparse Gaussian Process Regression

S. Sathiya Keerthi, Yahoo! Research Labs, 210 S. DeLacey Avenue, Pasadena, CA 91105. selvarak@yahoo-inc.com
Wei Chu, Gatsby Computational Neuroscience Unit, University College London, London, WC1N 3AR, UK. chuwei@gatsby.ucl.ac.uk

Abstract

In this paper we propose a new basis selection criterion for building sparse GP regression models that provides promising gains in accuracy as well as efficiency over previous methods. Our algorithm is much faster than that of Smola and Bartlett, while in generalization it greatly outperforms the information gain approach proposed by Seeger et al., especially on the quality of predictive distributions.

1 Introduction

Bayesian Gaussian processes provide a promising probabilistic kernel approach to supervised learning tasks. The advantage of Gaussian process (GP) models over non-Bayesian kernel methods, such as support vector machines, comes from the explicit probabilistic formulation, which yields predictive distributions for test instances and allows standard Bayesian techniques for model selection. The cost of training GP models is O(n³), where n is the number of training instances, which results in a huge computational cost for large data sets. Furthermore, when predicting a test case, a GP model requires O(n) cost for computing the mean and O(n²) cost for computing the variance. These heavy scaling properties obstruct the use of GPs in large-scale problems. Recently, sparse GP models, which bring down the complexity of training as well as testing, have attracted considerable attention. Williams and Seeger (2001) applied the Nyström method to calculate a reduced-rank approximation of the original n×n kernel matrix. Csató and Opper (2002) developed an on-line algorithm to maintain a sparse representation of GP models. Smola and Bartlett (2001) proposed a forward selection scheme to approximate the log posterior probability.
Candela (2004) suggested a promising alternative criterion by maximizing the approximate model evidence. Seeger et al. (2003) presented a very fast greedy selection method for building sparse GP regression models. All of these methods make efforts to select an informative subset of the training instances for the predictive model. This subset is usually referred to as the set of basis vectors, denoted as I. The maximal size of I is usually limited by a value d_max. As d_max ≪ n, the sparseness greatly alleviates the computational burden in both training and prediction of the GP models. The performance of the resulting sparse GP models crucially depends on the criterion used in the basis vector selection. Motivated by the ideas of Matching Pursuit (Vincent and Bengio, 2002), we propose a new criterion of greedy forward selection for sparse GP models. Our algorithm is closely related to that of Smola and Bartlett (2001), but the criterion we propose is much more efficient. Compared with the information gain method of Seeger et al. (2003), our approach yields clearly better generalization performance, while essentially having the same algorithmic complexity. We focus only on regression in this paper, but the main ideas are applicable to other supervised learning tasks. The paper is organized as follows: in Section 2 we present the probabilistic framework for sparse GP models; in Section 3 we describe our method of greedy forward selection after motivating it via the previous methods; in Section 4 we discuss some issues in model adaptation; in Section 5 we report results of numerical experiments that demonstrate the effectiveness of our new method. 2 Sparse GPs for regression In regression problems, we are given a training data set composed of n samples. Each sample is a pair of an input vector x_i ∈ R^m and its corresponding target y_i ∈ R. The true function value at x_i is represented as an unobservable latent variable f(x_i), and the target y_i is a noisy measurement of f(x_i).
The goal is to construct a predictive model that estimates the relationship x → f(x). Gaussian process regression. In standard GPs for regression, the latent variables {f(x_i)} are random variables in a zero-mean Gaussian process indexed by {x_i}. The prior distribution of {f(x_i)} is a multivariate joint Gaussian, denoted as P(f) = N(f; 0, K), where f = [f(x_1), ..., f(x_n)]^T and K is the n × n covariance matrix whose ij-th element is K(x_i, x_j), K being the kernel function. The likelihood is essentially a model of the measurement noise, which is usually evaluated as a product of independent Gaussian noises, P(y|f) = N(y; f, σ^2 I), where y = [y_1, ..., y_n]^T and σ^2 is the noise variance. The posterior distribution P(f|y) ∝ P(y|f) P(f) is also exactly a Gaussian:

P(f|y) = N(f; K α*, σ^2 K (K + σ^2 I)^{-1})   (1)

where α* = (K + σ^2 I)^{-1} y. For any test instance x, the predictive distribution is N(f(x); μ_x, σ_x^2), where μ_x = k^T (K + σ^2 I)^{-1} y = k^T α*, σ_x^2 = K(x, x) − k^T (K + σ^2 I)^{-1} k, and k = [K(x_1, x), ..., K(x_n, x)]^T. The computational cost of training is O(n^3), which mainly comes from the need to invert the matrix (K + σ^2 I) and obtain the vector α*. For predicting a test instance, the cost is O(n) to compute the mean and O(n^2) to compute the variance. This heavy scaling with respect to n makes the use of standard GPs computationally prohibitive on large datasets. Projected latent variables. Seeger et al. (2003) gave a neat method for working with a reduced number of latent variables, laying the foundation for forming sparse GP models. In this section we review their ideas. Instead of assuming n latent variables for all the training instances, sparse GP models assume only d latent variables placed at some chosen basis vectors {x̃_i}, denoted as a column vector f_I = [f(x̃_1), ..., f(x̃_d)]^T.
The prior distribution of the sparse GP is a joint Gaussian over f_I only, i.e.,

P(f_I) = N(f_I; 0, K_I)   (2)

where K_I is the d × d covariance matrix of the basis vectors whose ij-th element is K(x̃_i, x̃_j). These latent variables are then projected to all the training instances. Under the imposed joint Gaussian prior, the conditional mean at the training instances is K_{I,·}^T K_I^{-1} f_I, where K_{I,·} is the d × n matrix of covariance functions between the basis vectors and all the training instances. The likelihood can be evaluated with these projected latent variables as follows:

P(y|f_I) = N(y; K_{I,·}^T K_I^{-1} f_I, σ^2 I)   (3)

The posterior is P(f_I|y) = N(f_I; K_I α*_I, σ^2 K_I (σ^2 K_I + K_{I,·} K_{I,·}^T)^{-1} K_I), where α*_I = (σ^2 K_I + K_{I,·} K_{I,·}^T)^{-1} K_{I,·} y. The predictive distribution at any test instance x is N(f(x); μ̃_x, σ̃_x^2), where μ̃_x = k̃^T α*_I, σ̃_x^2 = K(x, x) − k̃^T K_I^{-1} k̃ + σ^2 k̃^T (σ^2 K_I + K_{I,·} K_{I,·}^T)^{-1} k̃, and k̃ is the column vector of covariance functions between the basis vectors and the test instance x, i.e. k̃ = [K(x̃_1, x), ..., K(x̃_d, x)]^T. While the cost of training the full GP model is O(n^3), the training complexity of sparse GP models is only O(n d_max^2). This corresponds to the cost of forming K_I^{-1}, (σ^2 K_I + K_{I,·} K_{I,·}^T)^{-1} and α*_I. Thus, if d_max is not big, learning on large datasets is feasible via sparse GP models. Also, for these sparse models, prediction for each test instance costs O(d_max) for the mean and O(d_max^2) for the variance. Generally the basis vectors can be placed anywhere in the input space R^m. Since training instances usually cover the input space of interest quite well, it is quite reasonable to select basis vectors from just the set of training instances. For a given problem, d_max is chosen to be as large as possible subject to constraints on computational time in training and/or testing. Then we use some basis selection method to find I of size d_max. This important step is taken up in Section 3. A useful optimization formulation.
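To make the projected-latent-variable equations concrete, here is a minimal sketch (the kernel is an unnormalized squared-exponential; the helper names `rbf`, `sparse_gp_fit` and `sparse_gp_mean` are mine, not the paper's). It computes α*_I = (σ^2 K_I + K_{I,·} K_{I,·}^T)^{-1} K_{I,·} y and the predictive mean k̃^T α*_I; with I equal to the full training set, this mean coincides exactly with the full-GP mean k^T (K + σ^2 I)^{-1} y.

```python
import numpy as np

def rbf(A, B):
    """Unnormalized squared-exponential kernel between the rows of A and B."""
    return np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

def sparse_gp_fit(X, y, I, noise):
    """alpha*_I = (sigma^2 K_I + K_{I,.} K_{I,.}^T)^{-1} K_{I,.} y  -- O(n d^2) cost."""
    XI = X[I]
    KI = rbf(XI, XI)        # d x d covariance of the basis vectors
    KIn = rbf(XI, X)        # d x n matrix K_{I,.}
    return np.linalg.solve(noise * KI + KIn @ KIn.T, KIn @ y)

def sparse_gp_mean(X, I, alpha_I, Xstar):
    """Predictive mean  k~^T alpha*_I  at the test points."""
    return rbf(Xstar, X[I]) @ alpha_I
```

Prediction then costs only O(d) per test point for the mean, as stated above.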
As pointed out by Smola and Bartlett (2001), it is useful to view the determination of the mean of the posterior as coming from an optimization problem. This viewpoint helps in the selection of basis vectors. The mean of the posterior distribution is exactly the maximum a posteriori (MAP) estimate, and it is possible to give an equivalent parametric representation of the latent variables as f = Kα, where α = [α_1, ..., α_n]^T. The MAP estimate of the full GP is equivalent to minimizing the negative logarithm of the posterior (1):

min_α π(α) := (1/2) α^T (σ^2 K + K^T K) α − y^T K α   (4)

Similarly, using f_I = K_I α_I for sparse GP models, the MAP estimate of the sparse GP is equivalent to minimizing the negative logarithm of the posterior P(f_I|y):

min_{α_I} π̃(α_I) := (1/2) α_I^T (σ^2 K_I + K_{I,·} K_{I,·}^T) α_I − y^T K_{I,·}^T α_I   (5)

Suppose α in (4) is composed of two parts, α = [α_I; α_R], where I denotes the set of basis vectors and R denotes the remaining instances. Interestingly, as pointed out by Seeger et al. (2003), the optimization problem (5) is the same as minimizing π(α) in (4) using α_I only, i.e., with the constraint α_R = 0. In other words, the basis vectors of the sparse GP can be selected to minimize the negative log-posterior of the full GP, π(α), defined as in (4). 3 Selection of basis functions The most crucial element of the sparse GP approach of the previous section is the choice of I, the set of basis vectors, which we take to be a subset of the training vectors. The cheapest method is to select the basis vectors at random from the training data set. But such a choice will not work well when d_max is much smaller than n. A principled approach is to select the I that makes the corresponding sparse GP approximate well the posterior distribution of the full GP. The optimization formulation of the previous section is useful here. It would be ideal to choose, among all subsets I of size d_max, the one that gives the best value of π̃ in (5).
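The optimization view in (4) can be sketched in a few lines (the name `pi_objective` is mine). Since σ^2 K + K^T K is positive semidefinite whenever K is a valid kernel matrix, π is convex, and its minimizer recovers the full-GP weights α* = (K + σ^2 I)^{-1} y.

```python
import numpy as np

def pi_objective(alpha, K, y, noise):
    """Negative log-posterior of the full GP (eq. 4), up to an additive constant:
    (1/2) a^T (s2 K + K^T K) a - y^T K a."""
    return 0.5 * alpha @ (noise * K + K.T @ K) @ alpha - y @ K @ alpha
```

Evaluating π with α_R fixed at zero gives exactly the sparse objective π̃ in (5), which is the property the basis selection methods below exploit.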
But this requires a combinatorial search that is infeasible for large problems. A practical approach is to do greedy forward selection. This is the approach used in previous methods as well as in the method of this paper. Before we go into the details of the methods, let us give a brief discussion of the time complexities associated with forward selection. There are two costs involved. (1) There is a basic cost associated with updating the sparse GP solution, given a sequence of chosen basis functions. Let us refer to this cost as T_basic. This cost is the same for all forward selection methods, and is O(n d_max^2). (2) Then, depending on the basis selection method, there is the cost associated with basis selection. We will refer to the accumulated value of this cost for choosing all d_max basis functions as T_selection. Forward basis selection methods differ in the way they choose effective basis functions while keeping T_selection small. It is useful to note that the total cost associated with the random basis selection method mentioned earlier is just T_basic = O(n d_max^2). This cost forms a baseline for comparison. Smola and Bartlett's method. Consider the typical situation in forward selection where we have a current working set I and we are interested in choosing the next basis vector, x_i. The method of Smola and Bartlett (2001) evaluates each given x_i ∉ I by trying its complete inclusion, i.e., setting I′ = I ∪ {x_i} and optimizing π(α) using α_{I′} = [α_I; α_i]. Thus, their selection criterion for the instance x_i ∉ I is the decrease in π(α) that can be obtained by allowing both α_I and α_i to be non-zero. The minimal value of π(α) can be obtained by solving min_{α_{I′}} π̃(α_{I′}) defined in (5). This costs O(nd) time for each candidate x_i, where d is the size of the current set I. If all x_i ∉ I need to be tried, it leads to O(n^2 d) cost.
Accumulated until d_max basis functions are added, this leads to a T_selection with O(n^2 d_max^2) complexity, which is disproportionately higher than T_basic. Therefore, Smola and Bartlett (2001) resorted to a randomized scheme that considers only κ basis elements randomly chosen from outside I during one basis selection; they used a value of κ = 59. For this randomized method, the complexity of T_selection is O(κ n d_max^2). Although, from a complexity viewpoint, T_basic and T_selection are the same, it should be noted that the overall cost of the method is about 60 times that of T_basic. Seeger et al.'s information gain method. Seeger et al. (2003) proposed a novel and very cheap heuristic criterion for basis selection. The "informativeness" of an input vector x_i ∉ I is scored by the information gain between the true posterior distribution P(f_{I′}|y) and a posterior approximation Q(f_{I′}|y), where I′ denotes the new set of basis vectors after including a new element x_i into the current set I. The posterior approximation Q(f_{I′}|y) ignores the dependencies between the latent variable f(x_i) and the targets other than y_i. Due to this simplification, the information gain is computed in O(1) time, given the current predictive model represented by I. Thus, the scores of all instances outside I can be efficiently evaluated in O(n) time, which makes this algorithm almost as fast as random selection! The potential weakness of this algorithm is its non-use of the correlations among the remaining instances {x_i : x_i ∉ I}. Post-backfitting approach. The two methods presented above are extremes in efficiency: in Smola and Bartlett's method T_selection is disproportionately larger than T_basic, while in Seeger et al.'s method T_selection is very much smaller than T_basic. In this section we introduce a moderate method that is effective and whose complexity is in between the two earlier methods. Our method borrows an idea from kernel matching pursuit.
Kernel Matching Pursuit (Vincent and Bengio, 2002) is a sparse method for ordinary least squares that consists of two general greedy sparse approximation schemes, called pre-backfitting and post-backfitting. It is worth pointing out that the same methods were also considered much earlier in Adler et al. (1996). Both methods can be generalized to select the basis vectors for sparse GPs. The pre-backfitting approach is very similar in spirit to Smola and Bartlett's method. Our method is an efficient selection criterion based on the post-backfitting idea. Recall that, given the current I, the minimal value of π(α) when it is optimized using only α_I as variables is equivalent to min_{α_I} π̃(α_I) as in (5). The minimizer, denoted as α*_I, is given by

α*_I = (σ^2 K_I + K_{I,·} K_{I,·}^T)^{-1} K_{I,·} y   (6)

Our scoring criterion for an instance x_i ∉ I is based on optimizing π(α) by fixing α_I = α*_I and changing α_i only. The one-dimensional minimizer can be easily found as

α*_i = [K_{i,·}^T (y − K_{I,·}^T α*_I) − σ^2 k̃_i^T α*_I] / [σ^2 K(x_i, x_i) + K_{i,·}^T K_{i,·}]   (7)

where K_{i,·} is the n × 1 matrix of covariance functions between x_i and all the training data, and k̃_i is a d-dimensional vector with entries K(x_j, x_i), x_j ∈ I. The selection score of the instance x_i is the decrease in π(α) achieved by the one-dimensional optimization of α_i, which can be written in closed form as

Δ_i = (1/2) (α*_i)^2 [σ^2 K(x_i, x_i) + K_{i,·}^T K_{i,·}]   (8)

where α*_i is defined as in (7). Note that a full kernel column K_{i,·} is required, and so it costs O(n) time to compute (8). In contrast, for scoring one instance, Smola and Bartlett's method requires O(nd) time and Seeger et al.'s method requires O(1) time. Ideally we would like to run over all x_i ∉ I and choose the instance that gives the largest decrease. This needs O(n^2) effort. Summing the cost until d_max basis vectors are selected, we get an overall complexity of O(n^2 d_max), which is much higher than T_basic.
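The score in (7)-(8) can be sketched as follows (the function name and array layout are mine): given the current basis solution α*_I, it computes the one-dimensional minimizer α*_i and the resulting decrease Δ_i in π. Passing empty arrays for the basis quantities covers the very first selection step, where I is empty.

```python
import numpy as np

def selection_score(Ki, ktilde_i, kii, alpha_I, KIn, y, noise):
    """Post-backfitting score of candidate x_i (eqs. 7-8).
    Ki:       kernel column between x_i and all n training points (K_{i,.})
    ktilde_i: kernel values between x_i and the current basis set I
    kii:      K(x_i, x_i)
    KIn:      d x n matrix K_{I,.} for the current basis set
    """
    denom = noise * kii + Ki @ Ki                  # sigma^2 K_ii + K_i^T K_i
    resid = y - KIn.T @ alpha_I                    # current training residual
    alpha_i = (Ki @ resid - noise * (ktilde_i @ alpha_I)) / denom
    return 0.5 * alpha_i ** 2 * denom              # decrease in pi(alpha)
```

As a sanity check, with I empty the score reduces to (1/2)(K_{i,·}^T y)^2 / (σ^2 K_ii + K_{i,·}^T K_{i,·}), which is exactly the drop in π from a one-dimensional line search on α_i.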
To restrict the overall complexity of T_selection to O(n d_max^2), we resort to a randomization scheme that selects a relatively good basis vector rather than the best one. Since it costs only O(n) time to evaluate our selection criterion (8) for one instance, we can choose the next basis vector from a set of d_max instances randomly selected from outside of I. Such a scheme keeps the overall complexity of T_selection at O(n d_max^2). But, from a practical point of view, the scheme is expensive because the selection criterion (8) requires computing a full kernel row K_{i,·} for each instance to be evaluated. As kernel evaluations can be very expensive, we propose a modified scheme to keep the number of such evaluations small. We maintain a matrix cache C of size c × n that contains c rows of the full kernel matrix K. At the beginning of the algorithm (when I is empty) we initialize C by randomly choosing c training instances, computing the full kernel row K_{i,·} for each chosen i, and putting these rows in C. Each step corresponding to a new basis vector selection proceeds as follows. First we compute Δ_i for the c instances corresponding to the rows of C and select the instance with the highest score for inclusion in I. Let x_j denote the chosen basis vector. Then we sort the remaining instances (that define C) according to their Δ_i values. Finally, we randomly select κ fresh instances (from outside of I and the vectors that define C) to replace x_j and the κ − 1 cached instances with the lowest scores. Thus, in each basis selection step, we compute the criterion scores for c instances, but evaluate full kernel rows only for κ fresh instances. An important advantage of this scheme is that basis elements which have very good scores, but are overtaken by a better element in a particular step, continue to remain in C and will probably be selected in future basis selection steps. As in Smola and Bartlett's method, we use κ = 59.
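One step of the cache update just described can be sketched as follows (a simplified illustration with invented names; the actual implementation would also update α*_I after each inclusion): pick the best cached candidate, then replace its slot and the κ − 1 worst-scoring slots with fresh random instances, so that only κ new kernel rows are evaluated per step.

```python
import numpy as np

def cache_step(C_idx, C_rows, scores, candidates, kernel_row, kappa, rng):
    """One basis-selection step of the cache scheme (illustrative sketch).
    C_idx / C_rows: cached training indices and their kernel rows (mutated in place).
    scores: Delta_i for each cached candidate; candidates: indices outside I and C.
    Returns the training index chosen for inclusion in the basis set I."""
    best = int(np.argmax(scores))
    chosen = C_idx[best]
    # evict the chosen slot plus the kappa-1 slots with the lowest scores
    evict = set(np.argsort(scores)[:kappa - 1].tolist()) | {best}
    fresh = rng.choice(candidates, size=len(evict), replace=False)
    for slot, i in zip(sorted(evict), fresh):
        C_idx[slot] = int(i)
        C_rows[slot] = kernel_row(int(i))  # only these rows need kernel evaluations
    return chosen
```

Note how high-scoring but not-yet-best candidates survive the eviction and stay available for later steps, which is the stated advantage of the scheme.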
The value of c can be set to any integer between κ and d_max. For any c in this range, the complexity of T_selection remains at most O(n d_max^2). The above cache scheme is special to our method and cannot be used with Smola and Bartlett's method without unduly increasing its complexity. If available, it is also useful to have an extra cache for storing kernel rows of instances which get discarded in one step but get considered again in a future step; Smola and Bartlett's method can also gain from such a cache. 4 Model adaptation In this section we address the problem of model adaptation for a given number of basis functions, d_max. Seeger (2003) and Seeger et al. (2003) give the details, together with a very good discussion of various issues associated with gradient-based model adaptation. Since the same ideas hold for all basis selection methods, we will not discuss them in detail. The sparse GP model is conditional on the parameters of the kernel function and the Gaussian noise level σ^2, which can all be collected together in θ, the hyperparameter vector. The optimal values of θ can be inferred by minimizing the negative log of the marginal likelihood, φ(θ) = −log P(y|θ), using gradient-based techniques, where P(y|θ) = ∫ P(y|f_I) P(f_I) df_I = N(y; 0, σ^2 I + K_{I,·}^T K_I^{-1} K_{I,·}). One of the problems in doing this is the dependence of I on θ, which makes φ a non-differentiable function. This problem can be handled by repeating the following alternating steps: (1) fix θ and select I by the given basis selection algorithm; and (2) fix I and do a (short) gradient-based adaptation of θ. For the cache-based post-backfitting method of basis selection we also do the following, to add some stability to the model adaptation process: after we do step (2) using some I and obtain a θ, we reset the initial kernel cache C using the rows of K_{I,·} at the new θ.
5 Numerical experiments In this section, we compare our method against other sparse GP methods to verify the usefulness of our algorithm. To evaluate generalization performance, we use the Normalized Mean Square Error (NMSE), given by (1/t) Σ_{i=1}^{t} (y_i − μ_i)^2 / Var(y), and the Negative Logarithm of the Predictive Distribution (NLPD), defined as (1/t) Σ_{i=1}^{t} −log P(y_i|μ_i, σ_i^2), where t is the number of test cases and y_i, μ_i and σ_i^2 are, respectively, the target, the predictive mean and the predictive variance of the i-th test case. NMSE uses only the mean, while NLPD measures the quality of predictive distributions, as it penalizes over-confident predictions as well as under-confident ones. For all experiments, we use the ARD Gaussian kernel defined by

K(x_i, x_j) = υ_0 exp(−Σ_{ℓ=1}^{m} υ_ℓ (x_i^ℓ − x_j^ℓ)^2) + υ_b

where υ_0, υ_ℓ, υ_b > 0 and x_i^ℓ denotes the ℓ-th element of x_i. The ARD parameters {υ_ℓ} give variable weights to input features, which leads to a type of feature selection. Quality of basis selection on the KIN40K data set. We use the KIN40K data set,¹ composed of 40,000 samples, to evaluate and compare the performance of the various basis selection criteria. We first trained a full GPR model with the ARD Gaussian kernel on a subset of 2000 samples randomly selected from the dataset. The optimal values of the hyperparameters that we obtained were fixed and used for all the sparse GP models in this experiment. We compare the following five basis selection methods: 1. the baseline algorithm (RAND) that selects I at random; 2. the information gain algorithm (INFO) proposed by Seeger et al. (2003); 3. our algorithm described in Section 3 with cache size c = κ = 59 (KAPPA), in which we evaluate the selection scores of κ instances at each step; 4. our algorithm described in Section 3 with cache size c = d_max (DMAX); 5. the algorithm (SB) proposed by Smola and Bartlett (2001) with κ = 59. We randomly selected 10,000 samples for training, and kept the remaining 30,000 samples as test cases.
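The two evaluation measures defined above are easy to state in code; this sketch assumes Gaussian predictive distributions, and the function names are mine:

```python
import numpy as np

def nmse(y, mu):
    """Normalized mean square error: MSE divided by the target variance."""
    return np.mean((y - mu) ** 2) / np.var(y)

def nlpd(y, mu, var):
    """Average negative log of the Gaussian predictive density N(y; mu, var);
    penalizes over-confident (too small) and under-confident (too large) variances."""
    return np.mean(0.5 * np.log(2 * np.pi * var) + (y - mu) ** 2 / (2 * var))
```

Unlike NMSE, NLPD changes when the predictive variance changes even if the mean is perfect, which is why it is the measure that separates the methods most clearly below.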
For the purpose of studying variability, the methods were run on ten such random partitions. We varied d_max from 100 to 1200. The test performances of the five methods are presented in Figure 1. From the upper plot of Figure 1 we can see that INFO yields much worse NMSE results than KAPPA, DMAX and SB when d_max is less than 600. When the size is around 100, INFO is even worse than RAND. DMAX is always better than KAPPA. Interestingly, DMAX is even slightly better than SB when d_max is less than 200. This is probably because DMAX has a bigger set of basis functions to choose from than SB. SB generally yields slightly better results than KAPPA. From the middle plot of Figure 1 we can note that INFO always gives poor NLPD results, even worse than RAND. The performances of KAPPA, DMAX and SB are close. The lower plot of Figure 1 gives the CPU time consumed by the five algorithms for training, as a function of d_max, on a log-log scale. The scaling exponents of RAND, INFO and SB are around 2.0 (i.e., cost is proportional to d_max^2), which is consistent with our analysis. INFO is almost as fast as RAND, while SB is about 60 times slower than INFO.

¹ The dataset is available at http://www.igi.tugraz.at/aschwaig/data.html.

[Figure 1: The variations of test set NMSE, test set NLPD and CPU time (in seconds) for training of the five algorithms as a function of d_max. In the NMSE and NLPD plots, at each value of d_max, the results of the five algorithms are presented as a boxplot group; from left to right: RAND (blue), INFO (red), KAPPA (green), DMAX (black), and SB (magenta). Note that the CPU time plot is on a log-log scale.]
The gap between KAPPA and INFO is the O(κ n d_max) time spent computing the score (8) for κ candidates.² As d_max increases, the cost of KAPPA asymptotically gets close to that of INFO. The gap between DMAX and KAPPA is the O(n d_max^2 − κ n d_max) cost of computing the score (8) for the additional (d_max − κ) instances. Thus, as d_max increases, the curve of DMAX asymptotically becomes parallel to the curve of INFO. Asymptotically, the ratio of the computational times of DMAX and INFO is only about 3. Thus, unlike SB, which is about 60 times slower than INFO, DMAX is only about 3 times slower than INFO. DMAX is therefore an excellent method for achieving excellent generalization while also being quite efficient. Model adaptation on benchmark data sets. Next, we compare the model adaptation abilities of the following three algorithms for d_max = 500. 1. The SB algorithm applied to build a sparse GPR model with fixed hyperparameters (FIXED-SB). The values of these hyperparameters were obtained by training a standard full GPR model on a manageable subset of 2000 samples randomly selected from the training data; FIXED-SB serves as a baseline. 2. The model adaptation scheme coupled with the INFO basis selection algorithm (ADAPT-INFO). 3. The model adaptation scheme coupled with our DMAX basis selection algorithm (ADAPT-DMAX).

² If we want to take kernel evaluations also into account, the cost of KAPPA is O(m κ n d_max), where m is the number of input variables. Note that INFO does not require any kernel evaluations for computing its selection criterion.

Table 1: Test results of the three algorithms on the seven benchmark regression datasets. The results are averages over 20 trials, along with the standard deviation. d denotes the number of input features, n_trg the training set size and n_tst the test set size. Bold face (not reproducible here) indicates the lowest average value among the three algorithms; the symbol ⋆ marks results significantly worse than the winning entry (p-value threshold of 0.01 in the Wilcoxon rank sum test).

                                            NMSE                                        NLPD
DATASET    d   n_trg  n_tst  FIXED-SB       ADAPT-INFO     ADAPT-DMAX    FIXED-SB       ADAPT-INFO     ADAPT-DMAX
BANK8FM    8   4500   3692   3.52 ± 0.08    3.54 ± 0.08    3.56 ± 0.09   3.11 ± 0.65⋆   1.37 ± 0.34⋆   0.67 ± 0.53
BANK32NH   32  4500   3692   48.08 ± 2.92   49.04 ± 1.34⋆  47.41 ± 1.35  −1.02 ± 0.21   −0.79 ± 0.06⋆  −0.88 ± 0.03⋆
CPUSMALL   12  4500   3692   2.45 ± 0.16    2.45 ± 0.15    2.46 ± 0.14   5.18 ± 0.61⋆   3.70 ± 0.46⋆   3.04 ± 0.17
CPUACT     21  4500   3692   1.58 ± 0.13    1.61 ± 0.14    1.61 ± 0.11   4.49 ± 0.26⋆   3.68 ± 0.40⋆   3.09 ± 0.20
CALHOUSE   8   10000  10640  22.58 ± 0.34⋆  22.82 ± 0.46⋆  20.02 ± 0.88  31.83 ± 3.35⋆  21.20 ± 1.47⋆  13.03 ± 0.30
HOUSE8L    8   10000  12784  42.27 ± 2.14⋆  37.30 ± 1.29⋆  35.87 ± 0.94  12.06 ± 0.67   12.06 ± 0.07⋆  11.71 ± 0.03
HOUSE16H   16  10000  12784  53.45 ± 7.05⋆  45.72 ± 1.15⋆  44.29 ± 0.76  12.72 ± 1.69   12.48 ± 0.06⋆  12.13 ± 0.04

We selected seven large regression datasets.³ Each of them was randomly partitioned into training/test splits. For the purpose of analyzing statistical significance, the partitioning was repeated 20 times independently. Test set performances (NMSE and NLPD) of the three methods on the seven datasets are presented in Table 1. On the four datasets with 4500 training instances, the NMSE results of the three methods are quite comparable. ADAPT-DMAX yields significantly better NLPD results on three of those four datasets. On the three larger datasets with 10,000 training instances, ADAPT-DMAX is significantly better than ADAPT-INFO on both NMSE and NLPD. We also tested our algorithm on the Outaouais dataset, which consists of 29,000 training samples and 20,000 test cases whose targets are held by the organizers of the "Evaluating Predictive Uncertainty Challenge".⁴ The NMSE and NLPD results we obtained in this blind test are 0.014 and −1.037 respectively, which are much better than the results of the other participants.
References Adler, J., B. D. Rao, and K. Kreutz-Delgado. Comparison of basis selection methods. In Proceedings of the 30th Asilomar Conference on Signals, Systems and Computers, pages 252–257, 1996. Candela, J. Q. Learning with uncertainty - Gaussian processes and relevance vector machines. PhD thesis, Technical University of Denmark, 2004. Csató, L. and M. Opper. Sparse online Gaussian processes. Neural Computation, 14:641–668, 2002. Seeger, M. Bayesian Gaussian process models: PAC-Bayesian generalisation error bounds and sparse approximations. PhD thesis, University of Edinburgh, July 2003. Seeger, M., C. K. I. Williams, and N. Lawrence. Fast forward selection to speed up sparse Gaussian process regression. In Workshop on AI and Statistics 9, 2003. Smola, A. J. and P. Bartlett. Sparse greedy Gaussian process regression. In Leen, T. K., T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13, pages 619–625. MIT Press, 2001. Vincent, P. and Y. Bengio. Kernel matching pursuit. Machine Learning, 48:165–187, 2002. Williams, C. K. I. and M. Seeger. Using the Nyström method to speed up kernel machines. In Leen, T. K., T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13, pages 682–688. MIT Press, 2001. ³ These datasets are available at http://www.liacc.up.pt/∼ltorgo/Regression/DataSets.html. ⁴ The dataset and the results contributed by other participants can be found at the web site of the challenge: http://predict.kyb.tuebingen.mpg.de/.
|
2005
|
26
|
2,840
|
Phase Synchrony Rate for the Recognition of Motor Imagery in Brain-Computer Interface Le Song National ICT Australia School of Information Technologies The University of Sydney NSW 2006, Australia lesong@it.usyd.edu.au Evian Gordon Brain Resource Company Scientific Chair, Brain Dynamics Center Westmead Hospital NSW 2006, Australia eviang@brainresource.com Elly Gysels Swiss Center for Electronics and Microtechnology Neuchâtel, CH-2007 Switzerland elly.gysels@csem.ch Abstract Motor imagery attenuates the EEG µ and β rhythms over sensorimotor cortices. These amplitude changes are most successfully captured by the method of Common Spatial Patterns (CSP) and are widely used in brain-computer interfaces (BCI). BCI methods based on amplitude information, however, have not incorporated the rich phase dynamics in the EEG rhythm. This study reports on a BCI method based on the phase synchrony rate (SR). SR, computed from the binarized phase locking value, describes the number of discrete synchronization events within a window. Statistical nonparametric tests show that SRs contain significant differences between 2 types of motor imagery. Classifiers trained on SRs consistently demonstrate satisfactory results for all 5 subjects. It is further observed that, for 3 subjects, phase is more discriminative than amplitude in the first 1.5-2.0 s, which suggests that phase has the potential to boost the information transfer rate in BCIs. 1 Introduction A brain-computer interface (BCI) is a communication system that relies on the brain rather than the body for control and feedback. Such an interface offers hope not only for the severely paralyzed to control wheelchairs, but also to enhance normal performance. Current BCI research is still in its infancy. Most studies focus on finding useful brain signals and designing algorithms to interpret them [1,2]. The most exploited signal in BCI is the scalp-recorded electroencephalogram (EEG).
EEG is a noninvasive measurement of the brain's electrical activity and has a temporal resolution of milliseconds. It is well known that motor imagery attenuates the EEG µ and β rhythms over sensorimotor cortices. Depending on which part of the body is imagined moving, the amplitude of multichannel EEG recordings exhibits distinctive spatial patterns. Classification of these patterns is used to control computer applications. Currently, the most successful method for BCI is called Common Spatial Patterns (CSP). The CSP method constructs a few new time series whose variances contain the most discriminative information. For the problem of classifying 2 types of motor imagery, the CSP method is able to correctly recognize 90% of the single trials in many studies [3, 4]. Ongoing research on the CSP method mainly focuses on its extension to the multi-class problem [5] and its integration with other forms of EEG amplitude information [4]. EEG signals contain both amplitude and phase information. Phase, however, has been largely ignored in BCI studies. The neuroscience literature suggests, instead, that phase can be more discriminative than amplitude [6, 7]. For example, compared to a stimulus in which no face is present, face perception induces significant changes in γ synchrony, but not in amplitude [6]. Phase synchrony has been proposed as a mechanism for dynamic integration of distributed neural networks in the brain. Decreased synchrony, on the other hand, is associated with active unbinding of neural assemblies and preparation of the brain for the next mental state (see [7] for a review). Accumulating evidence from both micro-electrode recordings [8,9] and EEG measurements [6] supports the notion that phase dynamics subserve all mental processes, including motor planning and imagery. In the BCI community, only a paucity of results has demonstrated the relevance of phase information [10–12].
Fewer studies still have compared the difference between amplitude and phase information for BCI. To address these deficits, this paper focuses on three issues: • Does the binarized phase locking value (PLV) contain relevant information for the classification of motor imagery? • How does the performance of binarized PLV compare to that of non-binarized PLV? • How does the performance of methods based on phase information compare to that of the CSP method? In the remainder of the paper, the experimental paradigm is described first. The details of the method based on binarized PLV are presented in Section 3. Comparisons between PLV, binarized PLV and CSP are made in Section 4. Finally, conclusions are provided in Section 5. 2 Recording paradigm Data set IVa provided by the Berlin BCI group [5] is investigated in this paper (available from the BCI competition III web site). Five healthy subjects (labeled 'aa', 'al', 'av', 'aw' and 'ay' respectively) participated in the EEG recordings. Based on visual cues, they were required to imagine for 3.5 s either right hand (type 1) or right foot (type 2) movements. Each type of motor imagery was carried out 140 times, which results in 280 labeled trials for each subject. Furthermore, the down-sampled data (at 100 Hz) is used. For convenience of explanation, the length of the data is also referred to in time points; the window for the full length of a trial is therefore [1, 350]. 3 Feature from phase 3.1 Phase locking value Two EEG signals x_i(t) and x_j(t) are said to be synchronized if their instantaneous phase difference ψ_ij(t) (complex-valued with unit modulus) stays constant for a period of time Δψ. The phase locking value (PLV) is commonly used to quantify the degree of synchrony, i.e.

PLV_ij(t) = (1/Δψ) | Σ_{τ=t−Δψ}^{t} ψ_ij(τ) | ∈ [0, 1],   (1)

where 1 represents perfect synchrony. The instantaneous phase difference ψ_ij(t) can be computed using either wavelet analysis or the Hilbert transform.
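A minimal numerical sketch of (1), assuming the Hilbert-transform route (the FFT-based `analytic` helper is a stand-in for a library routine such as scipy.signal.hilbert, and the function names are mine):

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT (the standard Hilbert-transform construction)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(np.fft.fft(x) * h)

def plv(x, y, win):
    """Sliding-window phase locking value between two signals (eq. 1):
    magnitude of the windowed mean of the unit phasors exp(j * phase difference)."""
    dphi = np.angle(analytic(x)) - np.angle(analytic(y))
    z = np.exp(1j * dphi)
    return np.abs(np.convolve(z, np.ones(win) / win, mode='valid'))
```

Two signals at the same frequency with a fixed phase offset give PLV ≈ 1; a frequency mismatch makes the phase difference drift, driving the PLV toward 0.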
Studies show that these two approaches are equivalent for the analysis of EEG signals [13]. In this study, the Hilbert transform is employed in a manner similar to [10]. 3.2 Synchrony rate Neuroscientists usually threshold the phase locking value and focus on statistically significant periods of strong synchrony. Only recently have researchers begun to study the transition between high and low levels of synchrony [6,14,15]. Most notably, the author of [15] transformed PLV into discrete values called link rates and showed that link rates can be a sensitive measure of relevant changes in synchrony. To investigate the usefulness of discretization for BCIs, we binarize the time series of PLV and define a synchrony rate based on them. The threshold chosen to binarize PLV minimizes the quantization error. Suppose the distribution of PLV is p(x); then the threshold th0 is determined by

th0 = arg min_th ∫_0^1 (x − g(x − th))^2 p(x) dx,   (2)

where g(·) is the hard-limit transfer function, which takes the value 1 for non-negative arguments and 0 otherwise. In practice, p(x) is computed at discrete locations and the integral is replaced by a summation. For the data set investigated, the values of th0 are similar across the 5 subjects (≃ 0.5) when the EEG signals are filtered between 4 and 40 Hz and ∆ψ is 0.25 s (these parameters are used in the Results section for all 5 subjects). The thresholded sequences are binary and denoted by bij(t). The ones in bij(t) can be viewed as discrete events of strong synchrony, while the zeros correspond to weak synchrony. The resemblance of bij(t) to the spike trains of neurons prompts us to define the synchrony rate (SR)—the number of discrete events of strong synchrony per second. Formally, given a window ∆b, the synchrony rate rij(t) at time t is

rij(t) = (1/∆b) Σ_{τ=t−∆b}^{t} bij(τ).   (3)

SR describes the average level of synchrony between a pair of electrodes in a given window. The size of the window affects the value of the SR.
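The threshold selection of (2) and the synchrony rate of (3) can be sketched as follows; an empirical average over PLV samples stands in for the discretized integral, and all function names are our own:

```python
import numpy as np

def optimal_threshold(plv_samples, n_grid=101):
    """Threshold minimizing the quantization error of Eq. (2), with the
    integral replaced by an empirical average over observed PLV values."""
    x = np.asarray(plv_samples, float)
    best_th, best_err = 0.0, np.inf
    for th in np.linspace(0.0, 1.0, n_grid):
        q = (x >= th).astype(float)       # hard-limit g(x - th)
        err = np.mean((x - q) ** 2)
        if err < best_err:
            best_th, best_err = th, err
    return best_th

def synchrony_rate(plv_series, th, delta_b):
    """Binarize a PLV time series with th and average over a window (Eq. (3))."""
    b = (np.asarray(plv_series, float) >= th).astype(float)
    return np.array([b[t - delta_b + 1:t + 1].mean()
                     for t in range(delta_b - 1, len(b))])
```

For PLV values spread uniformly over [0, 1] the quantization-error criterion selects a threshold near 0.5, consistent with the ≃ 0.5 values the paper reports.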
In the next section, we detail the choice of the windows and the selection of features from SRs. 3.3 Feature extraction Before computing synchrony rates, a circular Laplacian [16] is applied to boost the spatial resolution of the raw EEG. This method first interpolates the scalp EEG, and then re-references the EEG using interpolated values on a circle around each electrode. Varying the radius of the circles achieves different spatial filtering effects, and the best radius is tuned for each subject. The spatially filtered EEG is split into 6 sliding windows of length 100, namely [1, 100], [51, 150], [101, 200], [151, 250], [201, 300] and [251, 350]. Each window is further divided into 76 micro-windows (of size 25, overlapping by 24). PLVs are then computed for each micro-window (according to (1)) and binarized with th0. Averaging the 76 binarized PLVs yields the SR (according to (3)). As a whole, 6 SRs are computed for each electrode pair in a trial. SRs from all electrode pairs are passed to statistical tests and further used as features for classification. The overall scheme of this window division is illustrated in Fig. 1(a).

Figure 1: Overall scheme of window division for (a) the synchrony rate (SR) method and (b) the phase locking value (PLV) method. ∆ψ for the SR method covers the length of a micro-window, while that for the PLV method corresponds to the length of a sliding window. ∆b is equal to 100 − ∆ψ + 1. (Note: the time axis is NOT uniformly scaled.)

In order to compare PLV and SR, PLVs are also computed for the full length of each sliding window (Fig. 1(b)), which results in 6 PLVs for each electrode pair. These PLVs go through the same statistical tests and classification stage. 3.4 Statistical test A key observation is that both PLVs and SRs contain many statistically significant differences between the 2 types of motor imagery in almost every sliding window. Statistical nonparametric tests [17] are employed to locate these differences.
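Before the tests, each trial is reduced to SR features via the window scheme above: 6 sliding windows of length 100 stepped by 50, each holding 76 overlapping micro-windows of size 25 (matching ∆b = 100 − ∆ψ + 1). A small sketch of the index bookkeeping, with 0-based indices and our own helper names:

```python
def sliding_windows(n_samples=350, width=100, step=50):
    """The 6 sliding windows [1,100], [51,150], ..., [251,350] (0-based here)."""
    return [(s, s + width) for s in range(0, n_samples - width + 1, step)]

def micro_windows(window, width=25, step=1):
    """Micro-windows of size 25, overlapping by 24, inside one sliding window."""
    s, e = window
    return [(a, a + width) for a in range(s, e - width + 1, step)]
```

This confirms the counts used in the text: 6 sliding windows per trial, and 100 − 25 + 1 = 76 micro-windows per sliding window.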
For each electrode pair, a null hypothesis—H0: the difference of the mean SR/PLV between the 2 types of motor imagery is zero—is formulated for each sliding window. The distribution of the difference is then obtained by 1000 randomizations. The hypothesis is rejected if the difference for the original data is larger than 99.5% or smaller than 0.5% of those from the randomized data (equivalent to p < 0.01). Fig. 2 illustrates the test results with data from subject ‘av’. For simplicity, only SRs with a significant increase are displayed. Although the exact locations of these increases differ from window to window, some general patterns can be observed. Roughly speaking, windows 2, 3 and 4 can be grouped as similar, while windows 1, 5 and 6 differ from each other. Window 1 reflects changes in the early stage of a motor imagery, consisting of increased couplings mainly within visual cortices and between visual and motor cortices. Then (windows 2, 3 and 4) increased couplings occur between the motor cortices of both hemispheres and between lateral and mesial areas of the motor cortices. During the last stage, these couplings first (window 5) shift to the left hemisphere and then (window 6) reduce to some sparse distant interactions. Similar patterns can also be discovered in the PLVs (not illustrated). Although the exact functional interpretation of these patterns awaits further investigation, they can be treated as potential features for classification.

Figure 2: Significantly increased synchrony rates in right hand motor imagery. Data are from subject ‘av’. (A: anterior; L: left; P: posterior; R: right.)

4 Classification strategy To evaluate the usefulness of the synchrony rate for the classification of motor imagery, 50×2-fold cross-validation is employed to compute the generalization error. This scheme randomizes the order of the trials 50 times. Each randomization further splits the trials into two equal halves (of 70 trials), each serving as training data once.
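The randomization test of Section 3.4 can be sketched as a standard permutation test on the difference of means. This is our own minimal version, not the authors' exact implementation:

```python
import numpy as np

def permutation_test(a, b, n_perm=1000, seed=0):
    """Two-sided nonparametric randomization test [17] for a difference of
    means; the paper rejects H0 at p < 0.01 using 1000 randomizations."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)       # shuffle the class labels
        d = abs(perm[:len(a)].mean() - perm[len(a):].mean())
        if d >= observed:
            count += 1
    return count / n_perm
```

Randomly relabeling the trials destroys any real class difference, so the observed difference is compared against the tail of the resulting null distribution.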
There are four steps in each fold; averaging the prediction errors from all folds gives the generalization error. • Compute SRs for each trial (including both training and test data). As illustrated in Fig. 1(a), this results in a 6-dimensional feature vector (one dimension per window) for each electrode pair (6903 = 118×(118−1)/2 pairs in total). Alternatively, it can be viewed as a 6903-dimensional feature vector for each window. • Filter features using the Fisher ratio. The Fisher ratio (a variant, |µ+ − µ−|/(σ+ + σ−), is used in the actual computation) measures the discriminability of each individual feature for the classification task. It is computed using training data only, and then compared to a threshold (0.3), below which a feature is discarded. The indices of the selected features are further used to filter the test data. The selected features are not necessarily all those located by the statistical tests; generally, they are only a subset of the most significant SRs. • Train a linear SVM for each window and use a meta-training scheme to combine them. The evolving nature of the SRs (illustrated in Fig. 2) suggests that the information in the 6 windows may be complementary. Similar to [4], a second level of linear SVM is trained on the outputs of the SVMs for the individual windows. This meta-training scheme allows us to exploit inter-window relations. (Note that this step is carried out strictly on the training data.) • Predict the label of the test data. Test data are fed into the two-level SVM, and the prediction error is measured as the proportion of misclassified trials. The above four steps are also used to compute the generalization errors for the PLV method. The only modification is in step one, where PLVs are computed instead of SRs (Fig. 1(b)). In the next section, we present the generalization errors for both the SR and PLV methods, and compare them to those of the CSP method.
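The Fisher-ratio filtering step can be sketched as follows, assuming the variant |µ+ − µ−|/(σ+ + σ−) and the 0.3 threshold given above; the function names are our own:

```python
import numpy as np

def fisher_ratio(X, y):
    """Per-feature Fisher ratio |mu+ - mu-| / (sigma+ + sigma-),
    the variant used in the paper's feature-filtering step."""
    Xp, Xm = X[y == 1], X[y == 0]
    num = np.abs(Xp.mean(axis=0) - Xm.mean(axis=0))
    den = Xp.std(axis=0) + Xm.std(axis=0)
    return num / np.maximum(den, 1e-12)   # guard against zero spread

def select_features(X_train, y_train, X_test, threshold=0.3):
    """Keep features whose Fisher ratio (computed on training data only)
    exceeds the threshold; apply the same index set to the test data."""
    keep = fisher_ratio(X_train, y_train) > threshold
    return X_train[:, keep], X_test[:, keep], keep
```

Computing the ratio on training data only, then reusing the selected indices on the test half, is what keeps the cross-validation estimate unbiased.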
5 Results and comparison 5.1 Generalization error Table 1 shows the generalization errors in percentage (with standard deviations) for both the synchrony rate and PLV methods. For comparison, we also computed the generalization errors of the CSP method [3] using a linear SVM and 50×2-fold cross-validation. The parameters of the CSP method (including the filtering frequency, the number of channels used and the number of spatial patterns selected) are individually tuned for each subject according to the competition-winning entry for data set IVa [18]. Note that all errors in Table 1 are computed using the full length (3.5 s) of a trial. Generally, the errors of the SR method are higher than those of the PLV method. This is because SR is, by definition, an approximation of PLV: during the computation of SRs, the PLVs in the micro-windows are first binarized with a threshold th0, chosen so that the approximation stays as close to the original as possible. The approximation works especially well for two of the subjects (‘al’ and ‘ay’), with the difference between the two methods less than 1%. Although the SR method produces higher errors, it may have practical advantages. Especially for hardware-implemented BCI systems, a smaller window for the PLV computation means a smaller buffer, and binarized PLV makes further processing easier and faster. The errors of the CSP method are the lowest for most of the subjects. For subjects ‘aa’ and ‘aw’, it is better than the other two methods by 10-20%, but the gaps narrow for subjects ‘al’ and ‘av’ (less than 2.5%). Most notably, for subject ‘ay’, the SR method even outperforms the CSP method by about 5%. Recall that the CSP method is implemented using individually optimized parameters, while those for the SR and PLV methods are the same across the 5 subjects. Fine tuning the parameters has the potential to further improve the performance of the latter two methods.
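The next section converts these errors into information transfer rates. A minimal sketch of that conversion, assuming the standard Wolpaw definition of ITR [1]:

```python
import math

def wolpaw_itr(error, trial_len_s, n_classes=2):
    """Information transfer rate in bits/second for an n-class BCI with
    accuracy P = 1 - error and trial length T seconds (Wolpaw et al. [1])."""
    P = 1.0 - error
    bits = math.log2(n_classes)
    if 0.0 < P < 1.0:   # the entropy terms vanish at P = 1
        bits += P * math.log2(P) + (1 - P) * math.log2((1 - P) / (n_classes - 1))
    return bits / trial_len_s
```

For a 2-class system, chance-level accuracy (error 0.5) yields 0 bits/s, while a perfect 2 s trial yields 0.5 bits/s: shortening trials can raise ITR even when the error grows, which is the effect explored in Fig. 3.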
The errors computed above, however, reveal only part of the difference between the three methods. In the next subsection, a more thorough investigation is carried out using information transfer rates.

Table 1: Generalization errors (%) of the synchrony rate (SR), PLV and CSP methods

Subject  aa          al         av          aw          ay
SR       29.34±3.97  4.05±1.28  32.67±3.41  22.96±4.39  5.93±1.75
PLV      23.05±3.39  3.59±1.28  29.91±3.23  18.65±3.48  5.41±1.53
CSP      12.58±2.56  2.65±1.35  30.30±3.02  3.16±1.32   11.43±2.34

5.2 Information transfer rate The information transfer rate (ITR) [1] is the amount of information (measured in bits) generated by a BCI system per second. It takes both the error and the length of a trial into account: if two BCI systems produce the same error, the one with the shorter trial has the higher information transfer rate. To investigate the performance of the three methods in this context, we shortened the trials to 5 different lengths, namely 1.0 s, 1.5 s, 2.0 s, 2.5 s and 3.0 s. The generalization errors were computed for these shortened trials and then converted into information transfer rates, as shown in Fig. 3.

Figure 3: Information transfer rates (ITR) for the synchrony rate (SR), PLV and CSP methods. The horizontal axis is time T (in seconds). The vertical axis on the left measures the information transfer rate (in bit/second) and that on the right shows the generalization error (GE) in decimals. The three lines of Greek characters under each subplot code the results of statistical comparisons (Student’s t-test, significance level 0.01) between the methods.

Interesting results emerge from the curves in Fig. 3. Most subjects (except subject ‘aw’) achieve their highest information transfer rates within the first 1.5-2.0 s. Although longer trials usually decrease the errors, they do not necessarily result in increased information transfer rates. Furthermore, for subjects ‘al’, ‘av’ and ‘ay’, the highest information transfer
Line 1 is the comparison between the SR and CSP methods; Line 2 between the SR and PLV methods; and Line 3 between the PLV and CSP methods.

rates are achieved by methods based on phase. Especially for subject ‘ay’, phase generates about 0.2 bits more information per second. The qualitative similarity between the SR and PLV methods suggests that phase can be more discriminative than amplitude within the first 1.5-2.0 s. Common to all three methods, however, the near-zero information transfer rates within the first second pose a practical limit for BCIs. Where real-time operation is a high priority, such as in navigating wheelchairs, this problem is even more pronounced. Incorporating phase information and continuing the search for new features have the potential to overcome this limit.

6 Conclusion EEG phase contains complex dynamics. Changes in phase synchrony provide information complementary to EEG amplitude. Our results show that within the first 1.5-2.0 s of a motor imagery, phase can be more useful for classification and can be exploited by our synchrony rate method. Although methods based on phase have achieved good results for some subjects, the subject-wise differences and the exact functional interpretation of the selected features need further investigation. Solving these problems has the potential to boost information transfer rates in BCIs.

Acknowledgments The author would like to thank Ms. Yingxin Wu and Dr. Julien Epps from NICTA, and Dr. Michael Breakspear from the Brain Dynamics Center for discussions.

References
[1] J.R. Wolpaw et al., “Brain-computer interface technology: a review of the first international meeting,” IEEE Trans. Rehab. Eng., vol. 8, pp. 164-173, 2000.
[2] T.M. Vaughan et al., “Brain-computer interface technology: a review of the second international meeting,” IEEE Trans. Rehab. Eng., vol. 11, pp. 94-109, 2003.
[3] H. Ramoser, J. Müller-Gerking, and G.
Pfurtscheller, “Optimal spatial filtering of single trial EEG during imagined hand movement,” IEEE Trans. Rehab. Eng., vol. 8, pp. 441-446, 2000.
[4] G. Dornhege, B. Blankertz, G. Curio, and K.R. Müller, “Combining features for BCI,” Advances in Neural Inf. Proc. Systems (NIPS 02), vol. 15, pp. 1115-1122, 2003.
[5] G. Dornhege, B. Blankertz, G. Curio, and K.R. Müller, “Boosting bit rates in non-invasive EEG single-trial classifications by feature combination and multi-class paradigms,” IEEE Trans. Biomed. Eng., vol. 51, pp. 993-1002, 2004.
[6] E. Rodriguez et al., “Perception’s shadow: long distance synchronization of human brain activity,” Nature, vol. 397, pp. 430-433, 1999.
[7] F. Varela, J.P. Lachaux, E. Rodriguez, and J. Martinerie, “The brainweb: phase synchronization and large-scale integration,” Nature Reviews Neuroscience, vol. 2, pp. 229-239, 2001.
[8] W. Singer and C.M. Gray, “Visual feature integration and the temporal correlation hypothesis,” Annu. Rev. Neurosci., vol. 18, pp. 555-586, 1995.
[9] P.R. Roelfsema, A.K. Engel, P. König, and W. Singer, “Visuomotor integration is associated with zero time-lag synchronization among cortical areas,” Nature, vol. 385, pp. 157-161, 1997.
[10] E. Gysels and P. Celka, “Phase synchronization for the recognition of mental tasks in brain-computer interface,” IEEE Trans. Neural Syst. Rehab. Eng., vol. 12, pp. 406-415, 2004.
[11] C. Brunner, B. Graimann, J.E. Huggins, S.P. Levine, and G. Pfurtscheller, “Phase relationships between different subdural electrode recordings in man,” Neurosci. Lett., vol. 275, pp. 69-74, 2005.
[12] L. Song, “Desynchronization network analysis for the recognition of imagined movement in BCIs,” Proc. of 27th IEEE EMBS conference, Shanghai, China, September 2005.
[13] M. Le Van Quyen et al., “Comparison of Hilbert transform and wavelet methods for the analysis of neuronal synchrony,” J. Neurosci. Methods, vol. 111, pp. 83-98, 2001.
[14] M. Breakspear, L. Williams, and C.J.
Stam, “A novel method for the topographic analysis of neural activity reveals formation and dissolution of ‘dynamic cell assemblies’,” J. Comput. Neurosci., vol. 16, pp. 49-68, 2004.
[15] M.J.A.M. van Putten, “Proposed link rates in the human brain,” J. Neurosci. Methods, vol. 127, pp. 1-10, 2003.
[16] L. Song and J. Epps, “Improving separability of EEG signals during motor imagery with an efficient circular Laplacian,” in preparation.
[17] T.E. Nichols and A.P. Holmes, “Nonparametric permutation tests for functional neuroimaging: a primer with examples,” Human Brain Mapping, vol. 15, pp. 1-25, 2001.
[18] Y.J. Wang, X.R. Gao, Z.G. Zhang, B. Hong, and S.K. Gao, “BCI competition III—data set IVa: classifying single-trial EEG during motor imagery with a small training set,” IEEE Trans. Neural Syst. Rehab. Eng., submitted.
Inference with Minimal Communication: a Decision-Theoretic Variational Approach O. Patrick Kreidl and Alan S. Willsky Department of Electrical Engineering and Computer Science MIT Laboratory for Information and Decision Systems Cambridge, MA 02139 {opk,willsky}@mit.edu Abstract Given a directed graphical model with binary-valued hidden nodes and real-valued noisy observations, consider deciding upon the maximum a-posteriori (MAP) or the maximum posterior-marginal (MPM) assignment under the restriction that each node broadcasts only to its children exactly one single-bit message. We present a variational formulation, viewing the processing rules local to all nodes as degrees-of-freedom, that minimizes the loss in expected (MAP or MPM) performance subject to such online communication constraints. The approach leads to a novel message-passing algorithm to be executed offline, or before observations are realized, which mitigates the performance loss by iteratively coupling all rules in a manner implicitly driven by global statistics. We also provide (i) illustrative examples, (ii) assumptions that guarantee convergence and efficiency and (iii) connections to active research areas. 1 Introduction Given a probabilistic model with discrete-valued hidden variables, Belief Propagation (BP) and related graph-based algorithms are commonly employed to solve for the Maximum A-Posteriori (MAP) assignment (i.e., the mode of the joint distribution of all hidden variables) and Maximum-Posterior-Marginal (MPM) assignment (i.e., the modes of the marginal distributions of every hidden variable) [1]. The established “message-passing” interpretation of BP extends naturally to a distributed network setting: associating to each node and edge in the graph a distinct processor and communication link, respectively, the algorithm is equivalent to a sequence of purely-local computations interleaved with only nearest-neighbor communications.
Specifically, each computation event corresponds to a node evaluating its local processing rule, or a function by which all messages received in the preceding communication event map to messages sent in the next communication event. Practically, the viability of BP appears to rest upon an implicit assumption that network communication resources are abundant. In a general network, because termination of the algorithm is in question, the required communication resources are a-priori unbounded. Even when termination can be guaranteed, transmission of exact messages presumes communication channels with infinite capacity (in bits per observation), or at least of sufficiently high bandwidth such that the resulting finite message precision is essentially error-free. In some distributed settings (e.g., energy-limited wireless sensor networks), it may be prohibitively costly to justify such idealized online communications. While recent evidence suggests substantial but “small-enough” message errors will not alter the behavior of BP [2], [3], it also suggests BP may perform poorly when communication is very constrained. Assuming communication constraints are severe, we examine the extent to which alternative processing rules can avoid a loss in (MAP or MPM) performance. Specifically, given a directed graphical model with binary-valued hidden variables and real-valued noisy observations, we assume each node may broadcast only to its children a single binary-valued message. We cast the problem within a variational formulation [4], seeking to minimize a decision-theoretic penalty function subject to such online communication constraints. 
The formulation turns out to be an extension of the optimization problem underlying the decentralized detection paradigm [5], [6], which advocates a team-theoretic [7] relaxation of the original problem to both justify a particular finite parameterization for all local processing rules and obtain an iterative algorithm to be executed offline (i.e., before observations are realized). To our knowledge, that this relaxation permits analytical progress given any directed acyclic network is new. Moreover, for MPM assignment in a tree-structured network, we discover an added convenience with respect to the envisioned distributed processor setting: the offline computation itself admits an efficient message-passing interpretation. This paper is organized as follows. Section 2 details the decision-theoretic variational formulation for discrete-variable assignment. Section 3 summarizes the main results derived from its connection to decentralized detection, culminating in the offline message-passing algorithm and the assumptions that guarantee convergence and maximal efficiency. We omit the mathematical proofs [8] here, focusing instead on intuition and illustrative examples. Closing remarks and relations to other active research areas appear in Section 4. 2 Variational Formulation In abstraction, the basic ingredients are (i) a joint distribution p(x, y) for two length-N random vectors X and Y, taking hidden and observable values in the sets {0,1}^N and R^N, respectively; (ii) a decision-theoretic penalty function J : Γ → R, where Γ denotes the set of all candidate strategies γ : R^N → {0,1}^N for posterior assignment; and (iii) the set ΓG ⊂ Γ of strategies that also respect stipulated communication constraints in a given N-node directed acyclic network G. The ensuing optimization problem is expressed by

J(γ∗) = min_{γ∈Γ} J(γ) subject to γ ∈ ΓG,   (1)

where γ∗ then represents an optimal network-constrained strategy for discrete-variable assignment.
The following subsections provide details unseen at this level of abstraction. 2.1 Decision-Theoretic Penalty Function Let U = γ(Y) denote the decision process induced from the observation process Y by any candidate assignment strategy γ ∈ Γ. If we associate a numeric “cost” c(u, x) to every possible joint realization of (U, X), then the expected cost is a well-posed penalty function:

J(γ) = E[c(γ(Y), X)] = E[E[c(γ(Y), X) | Y]].   (2)

Expanding the inner expectation and recognizing p(x|y) to be proportional to p(x)p(y|x) for every y such that p(y) > 0, it follows that ¯γ∗ minimizes (2) over Γ if and only if

¯γ∗(Y) = arg min_{u∈{0,1}^N} Σ_{x∈{0,1}^N} p(x) c(u, x) p(Y|x) with probability one.   (3)

Of note: (i) the likelihood function p(Y|x) is a finite-dimensional sufficient statistic of Y; (ii) real-valued coefficients ¯b(u, x) provide a finite parameterization of the function space Γ; and (iii) the optimal coefficient values ¯b∗(u, x) = p(x)c(u, x) are computable offline. Before introducing communication constraints, we illustrate by examples how the decision-theoretic penalty function relates to familiar discrete-variable assignment problems.

Example 1: Let c(u, x) indicate whether u ≠ x. Then (2) and (3) specialize to, respectively, the word error rate (viewing each x as an N-bit word) and the MAP strategy: ¯γ∗(Y) = arg max_{x∈{0,1}^N} p(x|Y) with probability one.

Example 2: Let c(u, x) = Σ_{n=1}^{N} cn(un, xn), where each cn indicates whether un ≠ xn. Then (2) and (3) specialize to, respectively, the bit error rate and the MPM strategy: ¯γ∗(Y) = (arg max_{x1∈{0,1}} p(x1|Y), . . . , arg max_{xN∈{0,1}} p(xN|Y)) with probability one.

2.2 Network Communication Constraints Let G(V, E) be any directed acyclic graph with vertex set V = {1, . . . , N} and edge set E = {(i, j) ∈ V × V | i ∈ π(j) ⇔ j ∈ χ(i)}, where the index sets π(n) ⊂ V and χ(n) ⊂ V indicate, respectively, the parents and children of each node n ∈ V.
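Before continuing, the unconstrained optimum (3) of Section 2.1 can be evaluated by brute-force enumeration for small N; this is a sketch for illustration only (exponential in N, with our own function names), showing how the word-error cost of Example 1 reduces to MAP:

```python
import itertools
import math

def optimal_assignment(y, prior, cost, likelihood, N):
    """Evaluate Eq. (3) by enumeration: argmin_u sum_x p(x) c(u,x) p(y|x).
    prior, cost and likelihood are caller-supplied callables; x and u
    range over all N-tuples of bits, so this is exponential in N."""
    states = list(itertools.product([0, 1], repeat=N))
    def penalty(u):
        return sum(prior(x) * cost(u, x) * likelihood(y, x) for x in states)
    return min(states, key=penalty)
```

With the 0/1 word-error cost c(u, x) = [u ≠ x] and a uniform prior, minimizing the penalty is the same as maximizing p(u)p(y|u), i.e., the MAP assignment.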
Without loss of generality, we assume the node labels respect the natural partial order implied by the graph G; specifically, we assume every node n has parent nodes π(n) ⊂ {1, . . . , n−1} and child nodes χ(n) ⊂ {n+1, . . . , N}. Local to each node n ∈ V are the respective components Xn and Yn of the joint process (X, Y). Under best-case assumptions on p(x, y) and G, Belief Propagation methods (e.g., max-product in Example 1, sum-product in Example 2) require at least 2|E| real-valued messages per observation Y = y, one per direction along each edge in G. In contrast, we insist upon a single forward pass through G in which each node n broadcasts to its children (if any) a single binary-valued message. This yields a communication overhead of only |E| bits per observation Y = y, but also renders the minimizing strategy of (3) infeasible. Accepting that performance-communication tradeoffs are inherent to distributed algorithms, we proceed with the goal of minimizing the loss in performance relative to J(¯γ∗). Specifically, we now translate the stipulated restrictions on communication into explicit constraints on the function space Γ over which to minimize (2). The simplest such translation assumes the binary-valued message produced by node n also determines the respective component un in the decision vector u = γ(y). Recognizing that every node n receives the messages uπ(n) from its parents (if any) as side information to yn, any function of the form γn : R × {0,1}^{|π(n)|} → {0,1} is a feasible processing rule; we denote the set of all such rules by Γn. Then, every strategy in the set ΓG = Γ1 × · · · × ΓN respects the constraints. 3 Summary of Main Results As stated in Section 1, the variational formulation presented in Section 2 can be viewed as an extension of the optimization problem underlying decentralized Bayesian detection [5], [6].
Even for specialized network structures (e.g., the N-node chain), it is known that exact solution of (1) is NP-hard, stemming from the absence of a guarantee that γ∗ ∈ ΓG possesses a finite parameterization. Also known is that analytical progress can be made for a relaxation of (1), based on the following intuition: if strategy γ∗ = (γ∗1, . . . , γ∗N) is optimal over ΓG, then for each n, assuming all components i ∈ V\n are fixed at rules γ∗i, the component rule γ∗n must be optimal over Γn. Decentralized detection has roots in team decision theory [7], a subset of game theory, in which the relaxation is named person-by-person (pbp) optimality. While global optimality always implies pbp-optimality, the converse is false—in general, there can be multiple pbp-optimal solutions with varying penalty. Nonetheless, pbp-optimality (along with a specialized observation process) justifies a particular finite parameterization for the function space ΓG, leading to a nonlinear fixed-point equation and an iterative algorithm with favorable convergence properties. Before presenting the general algorithm, we illustrate its application in two simple examples.

Example 3: Consider the MPM assignment problem in Example 2, assuming N = 2 and that the distribution p(x, y) is defined by positive-valued parameters α, β1 and β2 as follows: p(x) ∝ 1 if x1 = x2 and p(x) ∝ α if x1 ≠ x2, while p(y|x) = Π_{n=1}^{N} (1/√(2π)) exp(−(yn − βn·xn)²/2). Note that X1 and X2 are marginally uniform and α captures their correlation (positive, zero, or negative when α is less than, equal to, or greater than unity, respectively), while Y captures the presence of additive white Gaussian noise with signal-to-noise ratio at node n equal to βn. The (unconstrained) MPM strategy ¯γ∗ simplifies to a pair of coupled threshold rules: decide u1 = 1 if L1(y1) exceeds ¯η∗1 = (1 + αL2(y2))/(α + L2(y2)) and u1 = 0 otherwise; symmetrically, decide u2 = 1 if L2(y2) exceeds ¯η∗2 = (1 + αL1(y1))/(α + L1(y1)) and u2 = 0 otherwise, where Ln(yn) = exp[βn(yn − βn/2)] denotes the likelihood ratio local to node n.
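The coupled threshold rules of Example 3 can be checked numerically against direct enumeration of the posterior marginals; a sketch with illustrative parameter values and our own helper names:

```python
import math

def L(y, beta):
    """Likelihood ratio L_n(y_n) = exp[beta_n (y_n - beta_n / 2)]."""
    return math.exp(beta * (y - beta / 2.0))

def mpm_via_thresholds(y1, y2, alpha, beta1, beta2):
    """The coupled threshold rules for the unconstrained MPM of Example 3."""
    L1, L2 = L(y1, beta1), L(y2, beta2)
    u1 = int(L1 > (1 + alpha * L2) / (alpha + L2))
    u2 = int(L2 > (1 + alpha * L1) / (alpha + L1))
    return u1, u2

def mpm_direct(y1, y2, alpha, beta1, beta2):
    """Brute-force posterior marginals over the four joint states of x."""
    prior = {(0, 0): 1.0, (0, 1): alpha, (1, 0): alpha, (1, 1): 1.0}
    lik = lambda y, beta, x: math.exp(-(y - beta * x) ** 2 / 2)
    joint = {x: prior[x] * lik(y1, beta1, x[0]) * lik(y2, beta2, x[1])
             for x in prior}
    u1 = int(joint[1, 0] + joint[1, 1] > joint[0, 0] + joint[0, 1])
    u2 = int(joint[0, 1] + joint[1, 1] > joint[0, 0] + joint[1, 0])
    return u1, u2
```

Dividing the marginal of x1 = 1 by that of x1 = 0 and factoring out L1 recovers exactly the threshold ¯η∗1 = (1 + αL2)/(α + L2), so the two computations agree everywhere off the decision boundary.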
Let E = {(1, 2)} and define two network-constrained strategies: the myopic strategy γ0 employs thresholds η0_1 = η0_2 = 1, meaning each node n acts to minimize Pr[Un ≠ Xn] as if in isolation, whereas the heuristic strategy γh employs thresholds ηh_1 = η0_1 and ηh_2 = α^{2u1−1}, meaning node 2 adjusts its threshold as if X1 = u1 (i.e., as if the myopic decision by node 1 were always correct). Figure 1 compares these strategies and a pbp-optimal strategy γk—only γk is both feasible and consistently “hedging” against all uncertainty, i.e., J(γ0) ≥ J(γk) ≥ J(¯γ∗).

Figure 1. Comparison of the four alternative strategies in Example 3: (a) sketch of the decision regions in likelihood-ratio space (shown for α < 1), showing that network-constrained threshold rules cannot exactly reproduce ¯γ∗ (unless α = 1); (b) bit-error-rate versus β1 with α = 0.1 and β2 = 1 fixed, showing γh performs comparably to γk when Y1 is accurate relative to Y2 but otherwise performs worse than even γ0 (which requires no communication); (c) bit-error-rate versus α with β1 = β2 = 1 fixed, showing γk uses the allotted bit of communication such that roughly 35% of the loss J(γ0) − J(¯γ∗) is recovered.

Example 4: Extend Example 3 to N > 2 nodes, but assume X is equally likely to be all zeros or all ones (i.e., the extreme case of positive correlation) and Y has identically-accurate components with βn = 1 for all n. The MPM strategy employs thresholds ¯η∗n = Π_{i∈V\n} 1/Li(yi) for all n, leading to U = ¯γ∗(Y) also being all zeros or all ones; thus, its cost distribution, or the probability mass function of c(¯γ∗(Y), X), has mass only on the values 0 and N.
The myopic strategy employs thresholds η0_n = 1 for all n, leading to independent and identically-distributed (binary-valued) random variables cn(γ0_n(Yn), Xn); thus, its cost distribution, approaching a normal shape as N gets large, has mass on all values 0, 1, . . . , N. Figure 2 considers a particular directed network G and, initializing to γ0, shows the sequence of cost distributions resulting from the iterative offline algorithm—note the shape progression towards the cost distribution of the (infeasible) MPM strategy and the successive reduction in bit-error-rate J(γk): J(γ0) = 3.7, J(γ1) = 2.9, J(γ2) = 2.8, J(γ3) = 2.8. Also noteworthy are the rapid convergence and the successive reduction in word-error-rate Pr[c(γk(Y), X) ≠ 0].

Figure 2. Illustration of the iterative offline computation (panels: cost distribution per iteration k = 0, 1, 2, 3) given p(x, y) as described in Example 4 and the directed network shown (N = 12). A Monte-Carlo analysis of ¯γ∗ yields an estimate for its bit-error-rate of J(¯γ∗) ≈ 0.49 (with standard deviation 0.05)—thus, with a total of just |E| = 11 bits of communication, the pbp-optimal strategy γ3 recovers roughly 28% of the loss J(γ0) − J(¯γ∗).

3.1 Necessary Optimality Conditions We start by providing an explicit probabilistic interpretation of the general problem in (1).

Lemma 1 The minimum penalty J(γ∗) defined in (1) is, firstly, achievable by a deterministic¹ strategy and, secondly, equivalently defined by

J(γ∗) = min_{p(u|y)} Σ_{x∈{0,1}^N} p(x) Σ_{u∈{0,1}^N} c(u, x) ∫_{y∈R^N} p(u|y) p(y|x) dy subject to p(u|y) = Π_{n∈V} p(un | yn, uπ(n)).

Lemma 1 is primarily of conceptual value, establishing a correspondence between fixing a component rule γn ∈ Γn and inducing a decision process Un from the information (Yn, Uπ(n)) local to node n.
The following assumption permits analytical progress towards a finite parameterization of each function space Γn and forms the basis of an offline algorithm.

Assumption 1 The observation process Y satisfies p(y|x) = Π_{n∈V} p(yn|x).

Lemma 2 Let Assumption 1 hold. Upon fixing a deterministic rule γn ∈ Γn local to node n (in correspondence with p(un|yn, uπ(n)) by virtue of Lemma 1), we have the identity

p(un|x, uπ(n)) = ∫_{yn∈R} p(un|yn, uπ(n)) p(yn|x) dyn.   (4)

Moreover, upon fixing a deterministic strategy γ ∈ ΓG, we have the identity

p(u|x) = Π_{n∈V} p(un|x, uπ(n)).   (5)

Lemma 2 implies that fixing a component rule γn ∈ Γn is in correspondence with inducing the conditional distribution p(un|x, uπ(n)), a probabilistic description that persists local to node n no matter the rule γi at any other node i ∈ V\n. Lemma 2 also introduces further structure into the constrained optimization expressed by Lemma 1: recognizing the integral over R^N to equal p(u|x), (4) and (5) together imply that it can be expressed as a product of component integrals, each over R. We now argue that, despite these simplifications, the component rules of γ∗ remain globally coupled.

¹ A randomized (or mixed) strategy, modeled as a probabilistic selection from a finite collection of deterministic strategies, takes more inputs than just the observation process Y. That deterministic strategies suffice, however, justifies “post-hoc” our initial abuse of notation for elements of the set Γ.

Starting with any deterministic strategy γ ∈ ΓG, consider optimizing the n-th component rule γn over Γn assuming all other components stay fixed. With γn a degree-of-freedom, the decision process Un is no longer well-defined, so each un ∈ {0, 1} merely represents a candidate decision local to node n. Online, each local decision will be made only upon receiving both the local observation Yn = yn and all parents’ local decisions Uπ(n) = uπ(n).
It follows that node n, upon deciding a particular u_n, may assert that random vector U is restricted to values in the subset U[u_{π(n)}, u_n] = {u′ ∈ {0,1}^N | u′_{π(n)} = u_{π(n)}, u′_n = u_n}. Then, viewing (Y_n, U_{π(n)}) as a composite local observation and proceeding in the manner by which (3) is derived, the pbp-optimal relaxation of (1) reduces to the following form.

Proposition 1 Let Assumption 1 hold. In an optimal network-constrained strategy γ* ∈ Γ_G, for each n and assuming all components i ∈ V\n are fixed at rules γ*_i (each in correspondence with p*(u_i | x, u_{π(i)}) by virtue of Lemma 2), the rule γ*_n satisfies

γ*_n(Y_n, U_{π(n)}) = arg min_{u_n ∈ {0,1}} Σ_{x ∈ {0,1}^N} b*_n(u_n, x; U_{π(n)}) p(Y_n|x)  with probability one,  (6)

where, for each u_{π(n)} ∈ {0,1}^{|π(n)|},

b*_n(u_n, x; u_{π(n)}) = p(x) Σ_{u ∈ U[u_{π(n)}, u_n]} c(u, x) Π_{i ∈ V\n} p*(u_i | x, u_{π(i)}).  (7)

Of note are (i) the likelihood function p(Y_n|x) is a finite-dimensional sufficient statistic of Y_n, (ii) real-valued coefficients b_n provide a finite parameterization of the function space Γ_n and (iii) the pbp-optimal coefficient values b*_n, while still computable offline, also depend on the distributions p*(u_i | x, u_{π(i)}) in correspondence with all fixed rules γ*_i.

3.2 Offline Message-Passing Algorithm

Let f_n map from coefficients {b_i; i ∈ V\n} to coefficients b_n by the following operations:

1. for each i ∈ V\n, compute p(u_i | x, u_{π(i)}) via (4) and (6) given b_i and p(y_i|x);
2. compute b_n via (7) given p(x), c(u, x) and {p(u_i | x, u_{π(i)}); i ∈ V\n}.

Then, the simultaneous satisfaction of Proposition 1 at all N nodes can be viewed as a system of 2^{N+1} Σ_{n ∈ V} 2^{|π(n)|} nonlinear equations in as many unknowns,

b_n = f_n(b_1, ..., b_{n−1}, b_{n+1}, ..., b_N),  n = 1, ..., N,  (8)

or, more concisely, b = f(b).
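The online rule (6) is just a lookup-and-argmin: given the offline-computed coefficients b_n and the local likelihood vector p(Y_n = y_n | x) over all 2^N hidden configurations, the node picks the candidate decision with smaller expected cost. A minimal sketch; the array shapes and toy numbers below are our own illustrative assumptions, not from the paper:

```python
import numpy as np

def local_rule(b_n, lik_n, parent_cfg):
    """Evaluate the pbp-optimal rule (6) at one node.

    b_n        : array, shape (2, 2**N, n_parent_cfgs) -- coefficients b_n(u_n, x; u_pi)
    lik_n      : array, shape (2**N,)                  -- likelihood p(y_n | x) for each x
    parent_cfg : int index encoding the received parent decisions u_pi(n)
    Returns the local decision u_n in {0, 1}.
    """
    # sum over x of b_n(u_n, x; u_pi) * p(y_n | x), for each candidate u_n
    scores = b_n[:, :, parent_cfg] @ lik_n
    return int(np.argmin(scores))

# Toy example with N = 2 hidden bits and a single parent configuration.
b = np.zeros((2, 4, 1))
b[0, :, 0] = [0.1, 0.9, 0.9, 0.1]     # cost coefficients for deciding u_n = 0
b[1, :, 0] = [0.9, 0.1, 0.1, 0.9]     # cost coefficients for deciding u_n = 1
lik = np.array([0.7, 0.1, 0.1, 0.1])  # observation favors x = (0, 0)
u = local_rule(b, lik, 0)
```

Since the likelihood mass sits on a configuration where deciding u_n = 0 is cheap, the rule returns 0 here.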
The connection between each f_n and Proposition 1 affords an equivalence between solving the fixed-point equation b = f(b) via a Gauss-Seidel iteration and minimizing J(γ) via a coordinate-descent iteration [9], implying an algorithm guaranteed to terminate and achieve penalty no greater than that of an arbitrary initial strategy γ^0 ↔ b^0.

Proposition 2 Initialize to any coefficients b^0 = (b^0_1, ..., b^0_N) and generate the sequence {b^k} using a component-wise iterative application of f in (8), i.e., for k = 1, 2, ...,

b^k_n := f_n(b^{k−1}_1, ..., b^{k−1}_{n−1}, b^k_{n+1}, ..., b^k_N),  n = N, N−1, ..., 1.  (9)

If Assumption 1 holds, the associated sequence {J(γ^k)} is non-increasing and converges:

J(γ^0) ≥ J(γ^1) ≥ ··· ≥ J(γ^k) → J* ≥ J(γ*) ≥ J(γ̄*).

Direct implementation of (9) is clearly imprudent from a computational perspective, because the transformation from fixed coefficients b^k_n to the corresponding distribution p^k(u_n | x, u_{π(n)}) need not be repeated within every component evaluation of f. In fact, assuming every node n stores in memory its own likelihood function p(y_n|x), this transformation can be accomplished locally (cf. (4) and (6)) and, also assuming the resulting distribution is broadcast to all other nodes before they proceed with their subsequent component evaluation of f, the termination guarantee of Proposition 2 is retained. Requiring every node to perform a network-wide broadcast within every iteration k makes (9) a decidedly global algorithm, not to mention that each node n must also store in memory p(x, y_n) and c(u, x) to carry forth the supporting local computations.

Assumption 2 The cost function satisfies c(u, x) = Σ_{n ∈ V} c_n(u_n, x) for some collection of functions {c_n : {0,1}^{N+1} → R} and the directed graph G is tree-structured.
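The structure of (9) — sweep over components n = N, ..., 1, update one while holding the rest fixed, and watch the penalty fall monotonically — can be illustrated with a generic coordinate-descent loop. The quadratic objective below is a stand-in of our own choosing, not the paper's J(γ); only the sweep pattern and the monotonicity guarantee of Proposition 2 are being illustrated:

```python
import numpy as np

def coordinate_descent_sweeps(Q, c, b0, n_sweeps=60):
    """Minimize J(b) = 0.5 b'Qb - c'b by exact coordinate updates,
    sweeping components n = N, N-1, ..., 1 as in (9).
    Returns the final iterate and the penalty after each sweep."""
    b = b0.astype(float).copy()
    J = lambda v: 0.5 * v @ Q @ v - c @ v
    history = [J(b)]
    N = len(b)
    for _ in range(n_sweeps):
        for n in reversed(range(N)):               # n = N, N-1, ..., 1
            # exact minimizer of J in coordinate n, all others held fixed
            b[n] = (c[n] - Q[n] @ b + Q[n, n] * b[n]) / Q[n, n]
        history.append(J(b))
    return b, history

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6))
Q = M @ M.T + 6 * np.eye(6)                        # positive definite, well-conditioned
c = rng.normal(size=6)
b, hist = coordinate_descent_sweeps(Q, c, np.zeros(6))
```

Because each coordinate update is an exact minimization, the penalty sequence is non-increasing, mirroring J(γ^0) ≥ J(γ^1) ≥ ··· in Proposition 2; here the iteration also converges to the global minimizer Qb = c since the stand-in objective is convex.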
Proposition 3 Under Assumption 2, the following two-pass procedure is identical to (9):

• Forward pass at node n: upon receiving messages from all parents i ∈ π(n), store them for use in the next reverse pass and send to each child j ∈ χ(n) the message

P^k_{n→j}(u_n|x) := Σ_{u_{π(n)} ∈ {0,1}^{|π(n)|}} p^{k−1}(u_n | x, u_{π(n)}) Π_{i ∈ π(n)} P^k_{i→n}(u_i|x).  (10)

• Reverse pass at node n: upon receiving messages from all children j ∈ χ(n), update

b^k_n(u_n, x; u_{π(n)}) := p(x) Π_{i ∈ π(n)} P^k_{i→n}(u_i|x) [ c_n(u_n, x) + Σ_{j ∈ χ(n)} C^k_{j→n}(u_n, x) ]  (11)

and the corresponding distribution p^k(u_n | x, u_{π(n)}) via (4) and (6), store the distribution for use in the next forward pass and send to each parent i ∈ π(n) the message

C^k_{n→i}(u_i, x) := Σ_{u_n ∈ {0,1}} p(u_n | x, u_i) [ c_n(u_n, x) + Σ_{j ∈ χ(n)} C^k_{j→n}(u_n, x) ],  (12)

where p(u_n | x, u_i) = Σ_{u_{π(n)} ∈ {u′ ∈ {0,1}^{|π(n)|} | u′_i = u_i}} p^k(u_n | x, u_{π(n)}) Π_{ℓ ∈ π(n)\i} P^k_{ℓ→n}(u_ℓ|x).

An intuitive interpretation of Proposition 3, from the perspective of node n, is as follows. From (10) in the forward pass, the messages received from each parent define what, during subsequent online operation, that parent's local decision means (in a likelihood sense) about its ancestors' outputs and the hidden process. From (12) in the reverse pass, the messages received from each child define what the local decision will mean (in an expected cost sense) to that child and its descendants. From (11), both types of incoming messages impact the local rule update and, in turn, the outgoing messages to both types of neighbors.

While Proposition 3 alleviates the need for the iterative global broadcast of distributions p^k(u_n | x, u_{π(n)}), the explicit dependence of (10)–(12) on the full vector x implies the memory and computation requirements local to each node can still be exponential in N.

Assumption 3 The hidden process X is Markov on G, i.e., p(x) = Π_{n ∈ V} p(x_n | x_{π(n)}), and all component likelihoods/costs satisfy p(y_n|x) = p(y_n|x_n) and c_n(u_n, x) = c_n(u_n, x_n).
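The message schedule of Proposition 3 — a forward sweep that visits every node only after its parents, then a reverse sweep that visits every node only after its children — can be sketched independently of the message contents. The `forward_msg`/`reverse_msg` callbacks below are placeholders of our own; in the actual algorithm they would compute (10)–(11) and (12):

```python
from collections import defaultdict, deque

def topological_order(parents, nodes):
    """Order nodes so every node appears after all of its parents (Kahn's algorithm)."""
    children = defaultdict(list)
    indeg = {n: len(parents.get(n, [])) for n in nodes}
    for n, ps in parents.items():
        for p in ps:
            children[p].append(n)
    q = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while q:
        n = q.popleft()
        order.append(n)
        for ch in children[n]:
            indeg[ch] -= 1
            if indeg[ch] == 0:
                q.append(ch)
    return order

def two_pass_sweep(parents, nodes, forward_msg, reverse_msg):
    """One iteration k of Proposition 3: forward pass (parents before children),
    then reverse pass (children before parents)."""
    order = topological_order(parents, nodes)
    for n in order:                # forward pass: messages (10) flow toward children
        forward_msg(n)
    for n in reversed(order):      # reverse pass: updates (11) and messages (12)
        reverse_msg(n)
    return order

# Toy tree: 1 -> 2, 1 -> 3, 3 -> 4
parents = {2: [1], 3: [1], 4: [3]}
visited = []
order = two_pass_sweep(parents, [1, 2, 3, 4], visited.append, lambda n: None)
```

The returned order respects the parent-before-child constraint, so reversing it automatically respects child-before-parent for the reverse pass.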
Proposition 4 Under Assumption 3, the iterates in Proposition 3 specialize to the form of b^k_n(u_n, x_n; u_{π(n)}), P^k_{n→j}(u_n|x_n) and C^k_{n→i}(u_i, x_i), k = 0, 1, ..., and each node n need only store in memory p(x_{π(n)}, x_n, y_n) and c_n(u_n, x_n) to carry forth the supporting local computations. (The actual equations can be found in [8].)

Proposition 4 implies the convergence properties of Proposition 2 are upheld with maximal efficiency (linear in N) when G is tree-structured and the global distribution and costs satisfy p(x, y) = Π_{n ∈ V} p(x_n | x_{π(n)}) p(y_n | x_n) and c(u, x) = Σ_{n ∈ V} c_n(u_n, x_n), respectively. Note that these conditions hold for the MPM assignment problems in Examples 3 & 4.

4 Discussion

Our decision-theoretic variational approach reflects several departures from existing methods for communication-constrained inference. Firstly, instead of imposing the constraints on an algorithm derived from an ideal model, we explicitly model the constraints and derive a different algorithm. Secondly, our penalty function drives the approximation by the desired application of inference (e.g., posterior assignment) as opposed to a generic error measure on the result of inference (e.g., divergence in true and approximate marginals). Thirdly, the necessary offline computation gives rise to a downside, namely less flexibility against time-varying statistical environments, decision objectives or network conditions.

Our development also evokes principles in common with other research areas. Similar to the sum-product version of Belief Propagation (BP), our message-passing algorithm originates assuming a tree structure, an additive cost and a synchronous message schedule. It is thus enticing to claim that the maturation of BP (e.g., max-product, asynchronous schedules, cyclic graphs) also applies, but unique aspects of our development (e.g., directed graph, weak convergence, asymmetric messages) merit caution.
That we solve for correlated equilibria and depend on probabilistic structure commensurate with cost structure for efficiency is in common with graphical games [10], which, in contrast, are formulated on undirected graphs and without hidden variables. Finally, our offline computation resembles learning a conditional random field [11], in the sense that factors of p(u|x) are iteratively modified to reduce the penalty J(γ); online computation via strategy u = γ(y), repeated per realization Y = y, is then viewed as sampling from this distribution. Along the learning thread, a special case of our formulation appears in [12], but assuming p(x, y) is unknown.

Acknowledgments

This work was supported by the Air Force Office of Scientific Research under contract FA955004-1 and by the Army Research Office under contract DAAD19-00-1-0466. We are grateful to Professor John Tsitsiklis for taking time to discuss the correctness of Proposition 1.

References

[1] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[2] L. Chen, et al. Data association based on optimization in graphical models with application to sensor networks. Mathematical and Computer Modeling, 2005. To appear.
[3] A. T. Ihler, et al. Message errors in belief propagation. Advances in NIPS 17, MIT Press, 2005.
[4] M. I. Jordan, et al. An introduction to variational methods for graphical models. Learning in Graphical Models, pp. 105–161, MIT Press, 1999.
[5] J. N. Tsitsiklis. Decentralized detection. Adv. in Stat. Sig. Proc., pp. 297–344, JAI Press, 1993.
[6] P. K. Varshney. Distributed Detection and Data Fusion. Springer-Verlag, 1997.
[7] J. Marschak and R. Radner. The Economic Theory of Teams. Yale University Press, 1972.
[8] O. P. Kreidl and A. S. Willsky. Posterior assignment in directed graphical models with minimal online communication. Available: http://web.mit.edu/opk/www/res.html
[9] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, 1995.
[10] S. Kakade, et al.
Correlated equilibria in graphical games. ACM-CEC, pp. 42–47, 2003.
[11] J. Lafferty, et al. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. ICML, 2001.
[12] X. Nguyen, et al. Decentralized detection and classification using kernel methods. ICML, 2004.
Variational EM Algorithms for Non-Gaussian Latent Variable Models

J. A. Palmer, D. P. Wipf, K. Kreutz-Delgado, and B. D. Rao
Department of Electrical and Computer Engineering
University of California San Diego, La Jolla, CA 92093
{japalmer,dwipf,kreutz,brao}@ece.ucsd.edu

Abstract

We consider criteria for variational representations of non-Gaussian latent variables, and derive variational EM algorithms in general form. We establish a general equivalence among convex bounding methods, evidence based methods, and ensemble learning/Variational Bayes methods, which has previously been demonstrated only for particular cases.

1 Introduction

Probabilistic methods have become well-established in the analysis of learning algorithms over the past decade, drawing largely on classical Gaussian statistical theory [21, 2, 28]. More recently, variational Bayes and ensemble learning methods [22, 13] have been proposed. In addition to the evidence and VB methods, variational methods based on convex bounding have been proposed for dealing with non-Gaussian latent variables [18, 14]. We concentrate here on the theory of the linear model, with direct application to ICA [14], factor analysis [2], mixture models [13], kernel regression [30, 11, 32], and linearization approaches to nonlinear models [15]. The methods can likely be applied in other contexts.

In MacKay's evidence framework, "hierarchical priors" are employed on the latent variables, using Gamma priors on the inverse variances, which has the effect of making the marginal prior distribution of the latent variable the non-Gaussian Student's t [30]. Based on MacKay's framework, Tipping proposed the Relevance Vector Machine (RVM) [30] for estimation of sparse solutions in the kernel regression problem. A relationship between the evidence framework and ensemble/VB methods has been noted in [22, 6] for the particular case of the RVM with t hyperprior.
Figueiredo [11] proposed EM algorithms based on hyperprior representations of the Laplacian and Jeffreys priors. In [14], Girolami employed the convex variational framework of [16] to derive a different type of variational EM algorithm using a convex variational representation of the Laplacian prior. Wipf et al. [32] demonstrated the equivalence between the variational approach of [16, 14] and the evidence based RVM for the case of t priors, and thus, via [6], the equivalence of the convex variational method and the ensemble/VB methods for the particular case of the t prior.

In this paper we consider these methods from a unifying viewpoint, deriving algorithms in more general form and establishing a more general relationship among the methods than has previously been shown. In §2, we define the model and estimation problems we shall be concerned with, and in §3 we discuss criteria for variational representations. In §4 we consider the relationships among these methods.

2 The Bayesian linear model

Throughout we shall consider the following model,

y = Ax + ν,  (1)

where A ∈ R^{m×n}, x ∼ p(x) = Π_i p(x_i), and ν ∼ N(0, Σ_ν), with x and ν independent. The important thing to note for our purposes is that the x_i are non-Gaussian. We consider two types of variational representation of the non-Gaussian priors p(x_i), which we shall call convex type and integral type. In the convex type of variational representation, the density is represented as a supremum over Gaussian functions of varying scale,

p(x) = sup_{ξ>0} N(x; 0, ξ^{−1}) φ(ξ).  (2)

The essential property of "concavity in x²" leading to this representation was used in [29, 17, 16, 18, 6] to represent the Logistic link function. A convex type representation of the Laplace density was applied to learning overcomplete representations in [14].
In the integral type of representation, the density p(x) is represented as an integral over the scale parameter of the density, with respect to some positive measure µ,

p(x) = ∫_0^∞ N(x; 0, ξ^{−1}) dµ(ξ).  (3)

Such representations with a general kernel are referred to as scale mixtures [19]. Gaussian scale mixtures were discussed in the examples of Dempster, Laird, and Rubin's original EM paper [9], and treated more extensively in [10]. The integral representation has been used, sometimes implicitly, for kernel-based estimation [30, 11] and ICA [20]. The distinction between MAP estimation of components and estimation of hyperparameters has been discussed in [23] and [30] for the case of Gamma distributed inverse variance.

We shall be interested in variational EM algorithms for solving two basic problems, corresponding essentially to the two methods of handling hyperparameters discussed in [23]: the MAP estimate of the latent variables,

x̂ = arg max_x p(x|y),  (4)

and the MAP estimate of the hyperparameters,

ξ̂ = arg max_ξ p(ξ|y).  (5)

The following section discusses the criteria for and relationship between the two types of variational representation. In §4, we discuss algorithms for each problem based on the two types of variational representations, and determine when these are equivalent. We also discuss the approximation of the likelihood p(y; A) using the ensemble learning or VB method, which approximates the posterior p(x, ξ|y) by a factorial density q(x|y)q(ξ|y). We show that the ensemble method is equivalent to the hyperparameter MAP method.

3 Variational representations of super-Gaussian densities

In this section we discuss the criteria for the convex and integral type representations.

3.1 Convex variational bounds

We wish to determine when a symmetric, unimodal density p(x) can be represented in the form (2) for some function φ(ξ). Equivalently, when

−log p(x) = −sup_{ξ>0} log [N(x; 0, ξ^{−1}) φ(ξ)] = inf_{ξ>0} [ ½ x²ξ − log (ξ^{½} φ(ξ)) ]

for all x > 0.
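As a concrete instance of (3), the unit Laplace density ½e^{−|x|} is a Gaussian scale mixture: taking the variance v = ξ^{−1} to be exponentially distributed with rate ½ recovers it exactly. This is a standard identity rather than an example worked in the paper; the sketch below just checks it by numerical quadrature:

```python
import numpy as np

def laplace_from_scale_mixture(x, v_max=80.0, n=200_000):
    """Quadrature for int_0^inf N(x; 0, v) * (1/2) exp(-v/2) dv,
    a Gaussian scale mixture whose variance v is Exp(1/2)-distributed."""
    v = np.linspace(1e-8, v_max, n)
    f = np.exp(-x * x / (2 * v)) / np.sqrt(2 * np.pi * v) * 0.5 * np.exp(-v / 2)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(v)))   # trapezoid rule

# The mixture should reproduce the Laplace density (1/2) exp(-|x|).
vals = [laplace_from_scale_mixture(x) for x in (0.5, 1.0, 2.0)]
targets = [0.5 * np.exp(-abs(x)) for x in (0.5, 1.0, 2.0)]
```

The agreement is to quadrature accuracy, illustrating how a heavy-tailed prior arises from Gaussians of random scale.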
The last formula says that −log p(√x) is the concave conjugate of (the closure of the convex hull of) the function log (ξ^{½} φ(ξ)) [27, §12]. This is possible if and only if −log p(√x) is closed, increasing and concave on (0, ∞). Thus we have the following.

Theorem 1. A symmetric probability density p(x) ≡ exp(−g(x²)) can be represented in the convex variational form

p(x) = sup_{ξ>0} N(x; 0, ξ^{−1}) φ(ξ)

if and only if g(x) ≡ −log p(√x) is increasing and concave on (0, ∞). In this case we can use the function φ(ξ) = √(2π/ξ) exp(g*(ξ/2)), where g* is the concave conjugate of g.

Examples of densities satisfying this criterion include: (i) Generalized Gaussian ∝ exp(−|x|^β), 0 < β ≤ 2; (ii) Logistic ∝ 1/cosh²(x/2); (iii) Student's t ∝ (1 + x²/ν)^{−(ν+1)/2}, ν > 0; and (iv) symmetric α-stable densities (having characteristic function exp(−|ω|^α), 0 < α ≤ 2).

The convex variational representation motivates the following definition.

Definition 1. A symmetric probability density p(x) is strongly super-Gaussian if p(√x) is log-convex on (0, ∞), and strongly sub-Gaussian if p(√x) is log-concave on (0, ∞).

An equivalent definition is given in [5, pp. 60–61], which defines p(x) = exp(−f(x)) to be sub-Gaussian (super-Gaussian) if f′(x)/x is increasing (decreasing) on (0, ∞). This condition is equivalent to f(x) = g(x²) with g concave, i.e., g′ decreasing. The property of being strongly sub- or super-Gaussian is independent of scale.

3.2 Scale mixtures

We now wish to determine when a probability density p(x) can be represented in the form (3) for some µ(ξ) non-decreasing on (0, ∞). A fundamental result dealing with integral representations was given by Bernstein and Widder (see [31]). It uses the following definition.

Definition 2. A function f(x) is completely monotonic on (a, b) if

(−1)^n f^{(n)}(x) ≥ 0,  n = 0, 1, ...

for every x ∈ (a, b). That is, f(x) is completely monotonic if it is positive, decreasing, convex, and so on. Bernstein's theorem [31, Thm. 12b] states:

Theorem 2.
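For the (unnormalized) Laplacian p(x) = e^{−|x|}, g(u) = √u is increasing and concave, its concave conjugate is g*(s) = inf_{u>0}(su − √u) = −1/(4s), and Theorem 1 gives φ(ξ) = √(2π/ξ) e^{−1/(2ξ)}. The supremum in (2) is then attained at ξ = 1/|x| and recovers e^{−|x|} exactly. The sketch below is our own worked example, not one from the paper; it checks the identity on a log-spaced grid:

```python
import math

def convex_bound(x, num=4000):
    """Evaluate sup_xi N(x; 0, 1/xi) * phi(xi) for the Laplacian e^{-|x|},
    with phi(xi) = sqrt(2*pi/xi) * exp(g*(xi/2)) and g*(s) = -1/(4s).
    The product simplifies to exp(-xi*x**2/2 - 1/(2*xi))."""
    best = 0.0
    for i in range(num):
        xi = 10 ** (-3 + 6 * i / (num - 1))       # log grid over [1e-3, 1e3]
        gauss = math.sqrt(xi / (2 * math.pi)) * math.exp(-xi * x * x / 2)
        phi = math.sqrt(2 * math.pi / xi) * math.exp(-1 / (2 * xi))
        best = max(best, gauss * phi)
    return best

vals = [convex_bound(x) for x in (0.5, 1.0, 2.0)]
targets = [math.exp(-abs(x)) for x in (0.5, 1.0, 2.0)]
```

Every grid value lies below the true supremum e^{−|x|} (the Gaussian functions bound the density from below), and the grid maximum matches it to grid accuracy.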
A necessary and sufficient condition that p(x) should be completely monotonic on (0, ∞) is that

p(x) = ∫_0^∞ e^{−tx} dα(t),

where α(t) is non-decreasing on (0, ∞). Thus for p(x) to be a Gaussian scale mixture,

p(x) = e^{−f(x)} = e^{−g(x²)} = ∫_0^∞ e^{−½tx²} dα(t),

a necessary and sufficient condition is that p(√x) = e^{−g(x)} be completely monotonic for 0 < x < ∞, and we have the following (see also [19, 1]).

Theorem 3. A function p(x) can be represented as a Gaussian scale mixture if and only if p(√x) is completely monotonic on (0, ∞).

3.3 Relationship between convex and integral type representations

We now consider the relationship between the convex and integral types of variational representation. Let p(x) = exp(−g(x²)). We have seen that p(x) can be represented in the form (2) if and only if g(x) is increasing and concave on (0, ∞). And we have seen that p(x) can be represented in the form (3) if and only if p(√x) = exp(−g(x)) is completely monotonic. We shall consider now whether or not complete monotonicity of p(√x) implies the concavity of g(x) = −log p(√x), that is, whether representability in the integral form implies representability in the convex form.

Complete monotonicity of a function q(x) implies that q ≥ 0, q′ ≤ 0, q″ ≥ 0, etc. For example, if p(√x) is completely monotonic, then

d²/dx² p(√x) = d²/dx² e^{−g(x)} = e^{−g(x)} (g′(x)² − g″(x)) ≥ 0.

Thus if g″ ≤ 0, then p(√x) is convex, but the converse does not necessarily hold. That is, concavity of g does not follow from convexity of p(√x), as the latter only requires that g″ ≤ (g′)². Concavity of g does follow, however, from the complete monotonicity of p(√x). For example, we can use the following result [8, §3.5.2].

Theorem 4. If the functions f_t(x), t ∈ D, are convex, then ∫_D e^{f_t(x)} dt is log-convex.

Thus completely monotonic functions, being scale mixtures of the log-convex function e^{−x} by Theorem 2, are also log-convex.
We thus see that any function representable in the integral variational form (3) is also representable in the convex variational form (2). In fact, a stronger result holds. The following theorem [7, Thm. 4.1.5] establishes the equivalence between q(x) and g′(x) = −(d/dx) log q(x) in terms of complete monotonicity.

Theorem 5. If g(x) > 0, then e^{−ug(x)} is completely monotonic for every u > 0 if and only if g′(x) is completely monotonic.

In particular, it holds that q(x) ≡ p(√x) = exp(−g(x)) is completely monotonic only if g″(x) ≤ 0.

To summarize, let p(x) = e^{−g(x²)}. If g is increasing and concave for x > 0, then p(x) admits the convex type of variational representation (2). If, in addition, the higher derivatives satisfy g^{(3)}(x) ≥ 0, g^{(4)}(x) ≤ 0, g^{(5)}(x) ≥ 0, etc., then p(x) also admits the Gaussian scale mixture representation (3).

4 General equivalences among variational methods

4.1 MAP estimation of components

Consider first the MAP estimate of the latent variables (4).

4.1.1 Component MAP – Integral case

Following [10],¹ consider an EM algorithm to estimate x when the p(x_i) are independent Gaussian scale mixtures as in (3). Differentiating inside the integral gives

p′(x) = d/dx ∫_0^∞ p(x|ξ) p(ξ) dξ = −∫_0^∞ ξ x p(x, ξ) dξ = −x p(x) ∫_0^∞ ξ p(ξ|x) dξ.

Thus, with p(x) ≡ exp(−f(x)), we see that

E(ξ_i|x_i) = ∫_0^∞ ξ_i p(ξ_i|x_i) dξ_i = −p′(x_i)/(x_i p(x_i)) = f′(x_i)/x_i.  (6)

The EM algorithm alternates setting ξ̂_i to the posterior mean, E(ξ_i|x_i) = f′(x_i)/x_i, and setting x to minimize

−log p(y|x) p(x|ξ̂) = ½ xᵀAᵀΣ_ν^{−1}Ax − yᵀΣ_ν^{−1}Ax + ½ xᵀΛ^{−1}x + const.,  (7)

where Λ = diag(ξ̂)^{−1}. At iteration k, we put ξ^k_i = f′(x^k_i)/x^k_i and Λ^k = diag(ξ^k)^{−1}, and

x^{k+1} = Λ^k Aᵀ(AΛ^k Aᵀ + Σ_ν)^{−1} y.

4.1.2 Component MAP – Convex case

Again consider the MAP estimate of x.

¹In [10], the x_i in (1) are actually estimated as non-random parameters, with the noise ν being non-Gaussian, but the underlying theory is essentially the same.
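For the Laplacian prior p(x_i) ∝ e^{−|x_i|} we have f′(x_i)/x_i = 1/|x_i|, and the alternation above becomes a reweighted least-squares loop (the FOCUSS-style iteration mentioned in §4.1.2). A minimal sketch, with our own toy problem and a small floor on |x_i| to keep ξ_i finite; the exact MAP objective −log p(y|x)p(x) should be non-increasing across EM iterations:

```python
import numpy as np

def component_map_laplacian(A, y, noise_var=0.01, n_iter=50, eps=1e-10):
    """EM / IRLS iteration of Sec. 4.1.1 for a Laplacian prior p(x_i) ~ exp(-|x_i|):
    xi_i = f'(x_i)/x_i = 1/|x_i|,  Lambda = diag(xi)^(-1) = diag(|x|),
    x <- Lambda A' (A Lambda A' + Sigma_nu)^(-1) y."""
    m, n = A.shape
    Sigma_nu = noise_var * np.eye(m)

    def objective(x):
        # exact MAP objective -log p(y|x)p(x), up to additive constants
        r = y - A @ x
        return 0.5 * (r @ r) / noise_var + np.abs(x).sum()

    x = A.T @ np.linalg.solve(A @ A.T + Sigma_nu, y)   # simple initialization
    hist = [objective(x)]
    for _ in range(n_iter):
        Lam = np.diag(np.maximum(np.abs(x), eps))      # prior covariance diag(1/xi)
        x = Lam @ A.T @ np.linalg.solve(A @ Lam @ A.T + Sigma_nu, y)
        hist.append(objective(x))
    return x, hist

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 16))
x_true = np.zeros(16); x_true[[2, 9]] = [3.0, -2.0]
y = A @ x_true + 0.01 * rng.normal(size=8)
x_hat, hist = component_map_laplacian(A, y)
```

Each x-update exactly minimizes the quadratic surrogate ½x_i²ξ_i ≥ |x_i| − const (a majorize-minimize step), which is what guarantees the monotone descent.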
For strongly super-Gaussian priors p(x_i), we have

arg max_x p(x|y) = arg max_x p(y|x) p(x) = arg max_x max_ξ p(y|x) p(x; ξ) φ(ξ).

Now since

−log p(y|x) p(x; ξ) φ(ξ) = ½ xᵀAᵀΣ_ν^{−1}Ax − yᵀΣ_ν^{−1}Ax + Σ_{i=1}^n [ ½ x_i² ξ_i − g*(ξ_i/2) ],

the MAP estimate can be improved iteratively by alternately maximizing over x and ξ,

ξ^k_i = 2 (g*′)^{−1}((x^k_i)²) = 2 g′((x^k_i)²) = f′(x^k_i)/x^k_i,  (8)

with x updated as in §4.1.1. We thus see that this algorithm is equivalent to the MAP algorithm derived in §4.1.1 for Gaussian scale mixtures. That is, for direct MAP estimation of the latent variable x, the EM Gaussian scale mixture method and the variational bounding method yield the same algorithm.

This algorithm has also been derived in the image restoration literature [12] as the "half-quadratic" algorithm, and it is the basis for the FOCUSS algorithms derived in [26, 25]. The regression algorithm given in [11] for the particular cases of Laplacian and Jeffreys priors is based on the theory in §4.1.1, and is in fact equivalent to the FOCUSS algorithm derived in [26].

4.2 MAP estimate of variational parameters

Now consider MAP estimation of the (random) variational hyperparameters ξ.

4.2.1 Hyperparameter MAP – Integral case

Consider an EM algorithm to find the MAP estimate of the hyperparameters ξ in the integral representation (Gaussian scale mixture) case, where the latent variables x are hidden. For the complete likelihood, we have

p(ξ, x|y) ∝ p(y|x, ξ) p(x|ξ) p(ξ) = p(y|x) p(x|ξ) p(ξ).

The function to be minimized over ξ is then

⟨−log p(x|ξ) p(ξ)⟩_x = Σ_i [ ½ ⟨x_i²⟩ ξ_i − log (√ξ_i p(ξ_i)) ] + const.  (9)

If we define h(ξ_i) ≡ log (√ξ_i p(ξ_i)), and assume that this function is concave, then the optimal value of ξ is given by

ξ_i = h*′(½ ⟨x_i²⟩).

This algorithm converges to a local maximum ξ̂ of p(ξ|y), which then yields an estimate of x by taking x̂ = E(x|y, ξ̂). Alternative algorithms result from using this method to find the MAP estimate of different functions of the scale random variable ξ.
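Equation (8) can be sanity-checked for the example priors of §3.1: for the Laplacian, f(x) = |x| gives f′(x)/x = 1/|x| = 2g′(x²) with g(u) = √u, while for the Student's t, f(x) = ((ν+1)/2) log(1 + x²/ν) gives f′(x)/x = (ν+1)/(ν+x²) = 2g′(x²). A small numerical check of these identities (our own worked examples, not from the paper):

```python
import math

def num_deriv(f, x, h=1e-6):
    """Central finite difference for smooth f."""
    return (f(x + h) - f(x - h)) / (2 * h)

def max_identity_error(nu=3.0, points=(0.5, 1.3, 2.7)):
    """Largest deviation in the identities f'(x)/x = 2 g'(x^2) for the
    Laplacian (f(x)=|x|, g(u)=sqrt(u)) and Student's t priors."""
    g_lap = lambda u: math.sqrt(u)
    f_t = lambda x: 0.5 * (nu + 1) * math.log(1 + x * x / nu)
    g_t = lambda u: 0.5 * (nu + 1) * math.log(1 + u / nu)
    err = 0.0
    for x in points:
        # Laplacian: 2 g'(x^2) should equal 1/|x|
        err = max(err, abs(2 * num_deriv(g_lap, x * x) - 1 / abs(x)))
        # Student's t: f'(x)/x should equal 2 g'(x^2) and (nu+1)/(nu+x^2)
        lhs = num_deriv(f_t, x) / x
        err = max(err, abs(lhs - 2 * num_deriv(g_t, x * x)))
        err = max(err, abs(lhs - (nu + 1) / (nu + x * x)))
    return err

err = max_identity_error()
```

Both routes to the ξ update agree, which is exactly the equivalence between the convex-bound and scale-mixture algorithms claimed above.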
4.2.2 Hyperparameter MAP – Convex case

In the convex representation, the ξ parameters do not actually represent a probabilistic quantity, but rather arise as parameters in a variational inequality. Specifically, we write

p(y) = ∫ p(y, x) dx = ∫ max_ξ p(y|x) p(x|ξ) φ(ξ) dx ≥ max_ξ ∫ p(y|x) p(x|ξ) φ(ξ) dx = max_ξ N(y; 0, AΛAᵀ + Σ_ν) φ(ξ).

Now we define the function

p̃(y; ξ) ≡ N(y; 0, AΛAᵀ + Σ_ν) φ(ξ)

and try to find ξ̂ = arg max_ξ p̃(y; ξ). We maximize p̃ by EM, marginalizing over x,

p̃(y; ξ) = ∫ p(y|x) p(x|ξ) φ(ξ) dx.

The algorithm is then equivalent to that in §4.1.2 except that the expectation of x² is taken as the E step, and the diagonal weighting becomes

ξ_i = f′(σ_i)/σ_i,  where σ_i = √(E(x_i²|y; ξ_i)).

Although p̃ is not a true probability density function, the proof of convergence for EM does not assume unit normalization. This theory is the basis for the algorithm presented in [14] for the particular case of a Laplacian prior (where in addition A in the model (1) is updated according to the standard EM update).

4.3 Ensemble learning

In the ensemble learning approach (also Variational Bayes [4, 3, 6]), the idea is to find the approximate separable posterior that minimizes the KL divergence from the true posterior, using the following decomposition of the log likelihood:

log p(y) = ∫ q(z|y) log [p(z, y)/q(z|y)] dz + D(q(z|y) ‖ p(z|y)) ≡ −F(q) + D(q‖p).

The term F(q) is commonly called the variational free energy [29, 24]. Minimizing F over q is equivalent to minimizing D over q. The posterior approximating distribution is taken to be factorial,

q(z|y) = q(x, ξ|y) = q(x|y) q(ξ|y).

For fixed q(ξ|y), the free energy F is given by

−∬ q(x|y) q(ξ|y) log [p(x, ξ|y) / (q(x|y) q(ξ|y))] dξ dx = D(q(x|y) ‖ e^{⟨log p(x,ξ|y)⟩_ξ}) + const.,  (10)

where ⟨·⟩_ξ denotes expectation with respect to q(ξ|y), and the constant is the entropy, H(q(ξ|y)).
The minimum of the KL divergence in (10) is attained if and only if

q(x|y) ∝ exp⟨log p(x, ξ|y)⟩_ξ ∝ p(y|x) exp⟨log p(x|ξ)⟩_ξ

almost surely. An identical derivation yields the optimal

q(ξ|y) ∝ exp⟨log p(x, ξ|y)⟩_x ∝ p(ξ) exp⟨log p(x|ξ)⟩_x

when q(x|y) is fixed. The ensemble (or VB) algorithm consists of alternately updating the parameters of these approximating marginal distributions. In the linear model with Gaussian scale mixture latent variables, the complete likelihood is again

p(y, x, ξ) = p(y|x) p(x|ξ) p(ξ).

The optimal approximate posteriors are given by

q(x|y) = N(x; µ_{x|y}, Σ_{x|y}),  q(ξ_i|y) = p(ξ_i | x_i = ⟨x_i²⟩^{1/2}),

where, letting Λ = diag(⟨ξ⟩)^{−1}, the posterior moments are given by

µ_{x|y} ≡ ΛAᵀ(AΛAᵀ + Σ_ν)^{−1} y,
Σ_{x|y} ≡ (AᵀΣ_ν^{−1}A + Λ^{−1})^{−1} = Λ − ΛAᵀ(AΛAᵀ + Σ_ν)^{−1}AΛ.

The only relevant fact about q(ξ|y) that we need is ⟨ξ⟩, for which we have, using (6),

⟨ξ_i⟩ = ∫ ξ_i q(ξ_i|y) dξ_i = ∫ ξ_i p(ξ_i | x_i = ⟨x_i²⟩^{1/2}) dξ_i = f′(σ_i)/σ_i,

where σ_i = √(E(x_i²|y; ξ_i)). We thus see that the ensemble learning algorithm is equivalent to the approximate hyperparameter MAP algorithm of §4.2.2. Note also that this shows that the VB methods can be applied to any Gaussian scale mixture density, using only the form of the latent variable prior p(x), without needing the marginal hyperprior p(ξ) in closed form. This is particularly important in the case of the Generalized Gaussian and Logistic densities, whose scale parameter densities are α-stable and Kolmogorov [1], respectively.

5 Conclusion

In this paper, we have discussed criteria for variational representations of non-Gaussian latent variables, and derived general variational EM algorithms based on these representations. We have shown a general equivalence between the two representations in MAP estimation taking hyperparameters as hidden, and we have shown the general equivalence between the variational convex approximate MAP estimate of hyperparameters and the ensemble learning or VB method.

References

[1] D. F. Andrews and C. L.
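The two expressions for Σ_{x|y} are related by the matrix inversion (Woodbury) lemma, and the posterior mean satisfies µ_{x|y} = Σ_{x|y}AᵀΣ_ν^{−1}y = ΛAᵀ(AΛAᵀ + Σ_ν)^{−1}y by the push-through identity. A quick numerical check on a random instance of our own:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 8
A = rng.normal(size=(m, n))
Lam = np.diag(rng.uniform(0.5, 2.0, size=n))   # prior covariance diag(<xi>)^(-1)
Sigma_nu = 0.3 * np.eye(m)
y = rng.normal(size=m)

S = A @ Lam @ A.T + Sigma_nu
# Woodbury: (A' Sn^-1 A + Lam^-1)^-1 == Lam - Lam A' S^-1 A Lam
Sigma1 = np.linalg.inv(A.T @ np.linalg.inv(Sigma_nu) @ A + np.linalg.inv(Lam))
Sigma2 = Lam - Lam @ A.T @ np.linalg.solve(S, A @ Lam)
# push-through: Sigma A' Sn^-1 y == Lam A' S^-1 y
mu1 = Sigma1 @ A.T @ np.linalg.solve(Sigma_nu, y)
mu2 = Lam @ A.T @ np.linalg.solve(S, y)
```

The m × m form is the one worth using when m « n, since it inverts the smaller matrix.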
Mallows. Scale mixtures of normal distributions. J. Roy. Statist. Soc. Ser. B, 36:99–102, 1974.
[2] H. Attias. Independent factor analysis. Neural Computation, 11:803–851, 1999.
[3] H. Attias. A variational Bayesian framework for graphical models. In Advances in Neural Information Processing Systems 12. MIT Press, 2000.
[4] M. J. Beal and Z. Ghahramani. The variational Bayesian EM algorithm for incomplete data: with application to scoring graphical model structures. In Bayesian Statistics 7, pages 453–464. Oxford University Press, 2002.
[5] A. Benveniste, M. Métivier, and P. Priouret. Adaptive Algorithms and Stochastic Approximations. Springer-Verlag, 1990.
[6] C. M. Bishop and M. E. Tipping. Variational relevance vector machines. In C. Boutilier and M. Goldszmidt, editors, Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, pages 46–53. Morgan Kaufmann, 2000.
[7] S. Bochner. Harmonic Analysis and the Theory of Probability. University of California Press, Berkeley and Los Angeles, 1960.
[8] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[9] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39:1–38, 1977.
[10] A. P. Dempster, N. M. Laird, and D. B. Rubin. Iteratively reweighted least squares for linear regression when errors are Normal/Independent distributed. In P. R. Krishnaiah, editor, Multivariate Analysis V, pages 35–57. North Holland Publishing Company, 1980.
[11] M. Figueiredo. Adaptive sparseness using Jeffreys prior. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press.
[12] D. Geman and G. Reynolds. Constrained restoration and the recovery of discontinuities. IEEE Trans. Pattern Analysis and Machine Intelligence, 14(3):367–383, 1992.
[13] Z. Ghahramani and M. J. Beal.
Variational inference for Bayesian mixtures of factor analysers. In Advances in Neural Information Processing Systems 12. MIT Press, 2000.
[14] M. Girolami. A variational method for learning sparse and overcomplete representations. Neural Computation, 13:2517–2532, 2001.
[15] A. Honkela and H. Valpola. Unsupervised variational Bayesian learning of nonlinear models. In Advances in Neural Information Processing Systems 17. MIT Press, 2005.
[16] T. S. Jaakkola. Variational Methods for Inference and Estimation in Graphical Models. PhD thesis, Massachusetts Institute of Technology, 1997.
[17] T. S. Jaakkola and M. I. Jordan. A variational approach to Bayesian logistic regression models and their extensions. In Proceedings of the 1997 Conference on Artificial Intelligence and Statistics, 1997.
[18] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. In M. I. Jordan, editor, Learning in Graphical Models. Kluwer Academic Publishers, 1998.
[19] J. Keilson and F. W. Steutel. Mixtures of distributions, moment inequalities, and measures of exponentiality and normality. The Annals of Probability, 2:112–130, 1974.
[20] H. Lappalainen. Ensemble learning for independent component analysis. In Proceedings of the First International Workshop on Independent Component Analysis, 1999.
[21] D. J. C. MacKay. Bayesian interpolation. Neural Computation, 4(3):415–447, 1992.
[22] D. J. C. MacKay. Ensemble learning and evidence maximization. Unpublished manuscript, 1995.
[23] D. J. C. MacKay. Comparison of approximate methods for handling hyperparameters. Neural Computation, 11(5):1035–1068, 1999.
[24] R. M. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan, editor, Learning in Graphical Models, pages 355–368. Kluwer, 1998.
[25] B. D. Rao, K. Engan, S. F. Cotter, J. Palmer, and K. Kreutz-Delgado.
Subset selection in noise based on diversity measure minimization. IEEE Trans. Signal Processing, 51(3), 2003.
[26] B. D. Rao and I. F. Gorodnitsky. Sparse signal reconstruction from limited data using FOCUSS: a re-weighted minimum norm algorithm. IEEE Trans. Signal Processing, 45:600–616, 1997.
[27] R. T. Rockafellar. Convex Analysis. Princeton University Press, 1970.
[28] S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models. Neural Computation, 11(5):305–345, 1999.
[29] L. K. Saul, T. S. Jaakkola, and M. I. Jordan. Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research, 4:61–76, 1996.
[30] M. E. Tipping. Sparse Bayesian learning and the Relevance Vector Machine. Journal of Machine Learning Research, 1:211–244, 2001.
[31] D. V. Widder. The Laplace Transform. Princeton University Press, 1946.
[32] D. Wipf, J. Palmer, and B. Rao. Perspectives on sparse Bayesian learning. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems 16, Cambridge, MA, 2003. MIT Press.
| 2005 | 29 | 2,843 |
Learning Shared Latent Structure for Image Synthesis and Robotic Imitation Aaron P. Shon† Keith Grochow† Aaron Hertzmann‡ Rajesh P. N. Rao† †Department of Computer Science and Engineering, University of Washington, Seattle, WA 98195 USA ‡Department of Computer Science, University of Toronto, Toronto, ON M5S 3G4 Canada {aaron,keithg,rao}@cs.washington.edu, hertzman@dgp.toronto.edu

Abstract

We propose an algorithm that uses Gaussian process regression to learn common hidden structure shared between corresponding sets of heterogeneous observations. The observation spaces are linked via a single, reduced-dimensionality latent variable space. We present results from two datasets demonstrating the algorithm's ability to synthesize novel data from learned correspondences. We first show that the method can learn the nonlinear mapping between corresponding views of objects, filling in missing data as needed to synthesize novel views. We then show that the method can learn a mapping between human degrees of freedom and robotic degrees of freedom for a humanoid robot, allowing robotic imitation of human poses from motion capture data.

1 Introduction

Finding common structure between two or more concepts lies at the heart of analogical reasoning. Structural commonalities can often be used to interpolate novel data in one space given observations in another space. For example, predicting a 3D object's appearance given corresponding poses of another, related object relies on learning a parameterization common to both objects. Another domain where finding common structure is crucial is imitation learning, also called "learning by watching" [11, 12, 6]. In imitation learning, one agent, such as a robot, learns to perform a task by observing another agent, for example, a human instructor. In this paper, we propose an efficient framework for discovering parameterizations shared between multiple observation spaces using Gaussian processes.
Gaussian processes (GPs) are powerful models for classification and regression that subsume numerous classes of function approximators, such as single hidden-layer neural networks and RBF networks [8, 15, 9]. Recently, Lawrence proposed the Gaussian process latent variable model (GPLVM) [4] as a new technique for nonlinear dimensionality reduction and data visualization [13, 10]. An extension of this model, the scaled GPLVM (SGPLVM), has been used successfully for dimensionality reduction on human motion capture data for motion synthesis and visualization [1]. In this paper, we propose a generalization of the GPLVM model that can handle multiple observation spaces, where each set of observations is parameterized by a different set of kernel parameters. Observations are linked via a single, reduced-dimensionality latent variable space. Our framework can be viewed as a nonlinear extension of canonical correlation analysis (CCA), a method for learning correspondences between sets of observations. Our goal is to find correspondences on testing data, given a limited set of corresponding training data from two observation spaces. Such an algorithm can be used in a variety of applications, such as inferring a novel view of an object given a corresponding view of a different object, or estimating the kinematic parameters of a humanoid robot given a human pose. Several properties motivate our use of GPs. First, finding latent representations for correlated, high-dimensional sets of observations requires nonlinear mappings, so linear CCA is not viable. Second, GPs reduce the number of free parameters in the regression model, such as the number of basis units needed, relative to alternative regression models such as neural networks. Third, the probabilistic nature of GPs facilitates learning from multiple sources with potentially different variances.
Fourth, probabilistic models provide an estimate of uncertainty in classification or interpolating between data; this is especially useful in applications such as robotic imitation where estimates of uncertainty can be used to decide whether a robot should attempt a particular pose or not. GPs can also generate samples of novel data, unlike many nonlinear dimensionality reduction methods [10, 13]. Fig. 1(a) shows the graphical model for learning shared structure using Gaussian processes. A latent space X maps to two (or more) observation spaces Y, Z using nonlinear kernels, and “inverse” Gaussian processes map back from observations to latent coordinates. Synthesis employs a map from latent coordinates to observations, while recognition employs an inverse mapping. We demonstrate our approach on two datasets. The first is an image dataset containing corresponding views of two different objects. The challenge is to predict corresponding views of the second object given novel views of the first based on a limited training set of corresponding object views. The second dataset consists of human poses derived from motion capture data and corresponding kinematic poses from a humanoid robot. The challenge is to estimate the kinematic parameters for robot pose, given a potentially novel pose from human motion capture, thereby allowing robotic imitation of human poses. Our results indicate that the model generalizes well when only limited training correspondences are available, and that the model remains robust when testing data is noisy. 2 Latent Structure Model The goal of our model is to find a shared latent variable parameterization in a space X that relates corresponding pairs of observations from two (or more) different spaces Y, Z. The observation spaces might be very dissimilar, despite the observations sharing a common structure or parameterization. 
For example, a robot's joint space may have very different degrees of freedom than a human's joint space, although both may be made to assume similar poses. The latent variable space then characterizes the common pose space. Let Y, Z be matrices of observations (training data) drawn from spaces of dimensionality $D_Y$, $D_Z$ respectively. Each row represents one data point. These observations are drawn so that the first observation $y_1$ corresponds to the observation $z_1$, observation $y_2$ corresponds to observation $z_2$, etc., up to the number of observations N. Let X be a "latent space" of dimensionality $D_X \ll D_Y, D_Z$. We initialize a matrix of latent points X by averaging the top $D_X$ principal components of Y, Z. As with the original GPLVM, we optimize over a limited subset of training points (the active set) to accelerate training, determined by the informative vector machine (IVM) [5]. The SGPLVM assumes that a diagonal "scaling matrix" W scales the variance of each dimension k of the Y matrix (a similar matrix V scales each dimension m of Z). The scaling matrix helps in domains where different output dimensions (such as the degrees of freedom of a robot) can have vastly different variances. We assume that each latent point $x_i$ generates a pair of observations $y_i$, $z_i$ via a nonlinear function parameterized by a kernel matrix. GPs parameterize the functions $f_Y : X \mapsto Y$ and $f_Z : X \mapsto Z$. The SGPLVM model uses an exponential (RBF) kernel, defining the similarity between two data points x, x' as:

$$k(x, x') = \alpha_Y \exp\left(-\frac{\gamma_Y}{2}\|x - x'\|^2\right) + \delta_{x,x'}\,\beta_Y^{-1} \quad (1)$$

given hyperparameters $\theta_Y = \{\alpha_Y, \beta_Y, \gamma_Y\}$ for the Y space; $\delta$ denotes the delta function. Following standard notation for GPs [8, 15, 9], the priors $P(\theta_Y)$, $P(\theta_Z)$, $P(X)$, the likelihoods $P(Y)$, $P(Z)$ for the Y, Z observation spaces, and the joint likelihood $P_{GP}(X, Y, Z, \theta_Y, \theta_Z)$ are given by:

$$P(Y \mid \theta_Y, X) = \frac{|W|^N}{\sqrt{(2\pi)^{N D_Y} |K_Y|^{D_Y}}} \exp\left(-\frac{1}{2} \sum_{k=1}^{D_Y} w_k^2\, Y_k^T K_Y^{-1} Y_k\right) \quad (2)$$

$$P(Z \mid \theta_Z, X) = \frac{|V|^N}{\sqrt{(2\pi)^{N D_Z} |K_Z|^{D_Z}}} \exp\left(-\frac{1}{2} \sum_{m=1}^{D_Z} v_m^2\, Z_m^T K_Z^{-1} Z_m\right) \quad (3)$$

$$P(\theta_Y) \propto \frac{1}{\alpha_Y \beta_Y \gamma_Y}, \qquad P(\theta_Z) \propto \frac{1}{\alpha_Z \beta_Z \gamma_Z} \quad (4)$$

$$P(X) = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2} \sum_i \|x_i\|^2\right) \quad (5)$$

$$P_{GP}(X, Y, Z, \theta_Y, \theta_Z) = P(Y \mid \theta_Y, X)\, P(Z \mid \theta_Z, X)\, P(\theta_Y)\, P(\theta_Z)\, P(X) \quad (6)$$

where $\alpha_Z, \beta_Z, \gamma_Z$ are hyperparameters for the Z space, and $w_k$, $v_m$ respectively denote the diagonal entries of the matrices W, V. Let $\bar{Y}$, $\bar{K}_Y^{-1}$ respectively denote the Y observations from the active set (with mean $\mu_Y$ subtracted out) and the inverse kernel matrix for the active set. The joint negative log likelihood of a latent point x and observations y, z is:

$$L_{y|x}(x, y) = \frac{\|W(y - f_Y(x))\|^2}{2\sigma_Y^2(x)} + \frac{D_Y}{2} \ln \sigma_Y^2(x) \quad (7)$$

$$f_Y(x) = \mu_Y + \bar{Y}^T \bar{K}_Y^{-1} k(x) \quad (8)$$

$$\sigma_Y^2(x) = k(x, x) - k(x)^T \bar{K}_Y^{-1} k(x) \quad (9)$$

$$L_{z|x}(x, z) = \frac{\|V(z - f_Z(x))\|^2}{2\sigma_Z^2(x)} + \frac{D_Z}{2} \ln \sigma_Z^2(x) \quad (10)$$

$$f_Z(x) = \mu_Z + \bar{Z}^T \bar{K}_Z^{-1} k(x) \quad (11)$$

$$\sigma_Z^2(x) = k(x, x) - k(x)^T \bar{K}_Z^{-1} k(x) \quad (12)$$

$$L_{x,y,z} = L_{y|x} + L_{z|x} + \frac{1}{2}\|x\|^2 \quad (13)$$

The model learns a separate kernel for each observation space, but a single set of common latent points. A conjugate gradient solver adjusts model parameters and latent coordinates to maximize Eq. 6. Given a trained SGPLVM, we would like to infer the parameters in one observation space given parameters in the other (e.g., infer robot pose z given human pose y). We solve this problem in two steps. First, we determine the most likely latent coordinate x given the observation y by minimizing $L_{y|x}(x, y)$. In principle, one could find x where $\partial L_{y|x} / \partial x = 0$ using gradient descent. However, to speed up recognition, we instead learn a separate "inverse" Gaussian process $f_Y^{-1} : y \mapsto x$ that maps back from the space Y to the space X. Once the correct latent coordinate x has been inferred for a given y, the model uses the trained SGPLVM to predict the corresponding observation z.

3 Results

We first demonstrate how our model can be used to synthesize new views of an object, character, or scene from known views of another object, character, or scene, given a common latent variable model.
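As a concrete illustration of the kernel in Eq. 1 and the predictive equations (Eqs. 8-9), here is a minimal numpy sketch; the function names and toy data are our own, and the active-set machinery and scaling matrices are omitted:

```python
import numpy as np

def rbf_kernel(X1, X2, alpha, gamma, beta_inv=0.0):
    """RBF kernel of Eq. 1: k(x, x') = alpha * exp(-gamma/2 ||x - x'||^2) + delta_{x,x'} / beta."""
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2.0 * X1 @ X2.T
    K = alpha * np.exp(-0.5 * gamma * d2)
    if X1 is X2:  # the delta term contributes only on the diagonal of a self-kernel
        K = K + beta_inv * np.eye(len(X1))
    return K

def gp_predict(x_star, X_active, Y_active, mu_Y, alpha, gamma, beta_inv):
    """Predictive mean f_Y(x) (Eq. 8) and variance sigma_Y^2(x) (Eq. 9)."""
    K = rbf_kernel(X_active, X_active, alpha, gamma, beta_inv)
    K_inv = np.linalg.inv(K)
    k_star = rbf_kernel(x_star[None, :], X_active, alpha, gamma)[0]   # k(x)
    f = mu_Y + (Y_active - mu_Y).T @ K_inv @ k_star                   # Eq. 8
    var = (alpha + beta_inv) - k_star @ K_inv @ k_star                # Eq. 9, with k(x,x) = alpha + 1/beta
    return f, var
```

At a training point the predictive variance collapses toward the noise level, as expected for a GP interpolant.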
For ease of visualization, we used 2D latent spaces for all results shown here. The model was applied to image pairs depicting corresponding views of 3D objects. Different views show the objects¹ rotated at varying degrees out of the camera plane. We downsampled the images to 32 × 32 grayscale pixels. For fitting images, the scaling matrices W, V are of minimal importance (since we expect all pixels should a priori have the same variance). We also found empirically that using $f_Y(x) = \bar{Y}^T \bar{K}_Y^{-1} k(x)$ instead of Eqn. 8 produced better renderings. We rescaled each $f_Y$ to use the full range of pixel values [0...255], creating the images shown in the figures. Fig. 1(b) shows how the model extrapolates to novel datasets given a limited set of training correspondences. We trained the model using 72 corresponding views of two different objects, a coffee cup and a toy truck. Fixing the latent coordinates learned during training, we then selected 8 views of a third object (a toy car). We selected latent points corresponding to those views, and learned kernel parameters for the 8 images. Empirically, priors on kernel parameters are critical for acceptable performance, particularly when only limited data are available, such as the 8 different poses for the toy car. In this case, we used the kernel parameters learned for the cup and toy truck (based on 72 different poses) to impose a Gaussian prior on the kernel parameters for the car (replacing P(θ) in Eqn. 4):

$$-\log P(\theta_{car}) = -\log P_{GP} + (\theta_{car} - \theta_\mu)^T \Gamma_\theta^{-1} (\theta_{car} - \theta_\mu) \quad (14)$$

where $\theta_{car}$, $\theta_\mu$, $\Gamma_\theta^{-1}$ are respectively the kernel parameters for the car, the mean of the previously learned kernel parameters (for the cup and truck), and the inverse covariance matrix of the learned kernel parameters. Here $\theta_\mu$ and $\Gamma_\theta^{-1}$ are derived from only two samples, but nonetheless successfully constrain the kernel parameters for the car so that the model functions on the limited set of 8 example poses.
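The Gaussian prior of Eq. 14 amounts to a quadratic penalty on new kernel parameters around the mean of previously learned ones. A minimal sketch; the small ridge added to the covariance is our own assumption, needed because the paper estimates $\Gamma_\theta$ from only two samples:

```python
import numpy as np

def kernel_param_prior_penalty(theta_new, learned_thetas):
    """Quadratic (Gaussian) prior penalty of Eq. 14 on kernel parameters,
    using mean/covariance estimated from previously learned kernels."""
    T = np.asarray(learned_thetas, float)            # one row per kernel already fit
    theta_mu = T.mean(axis=0)
    # Few samples: regularize the covariance before inverting (our assumption)
    Gamma = np.cov(T, rowvar=False) + 1e-3 * np.eye(T.shape[1])
    d = np.asarray(theta_new, float) - theta_mu
    return d @ np.linalg.inv(Gamma) @ d
```

The penalty is zero at the mean of the learned parameters and grows quadratically away from it.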
To test the model's robustness to noise and missing data, we randomly selected 10 latent coordinates corresponding to a subset of learned cup and truck image pairs. We then added varying displacements to the latent coordinates and synthesized the corresponding novel views for all 3 observation spaces. Displacements varied from 0 to 0.45 (all 72 latent coordinates lie in the box spanned by [−0.70, −0.87] and [0.72, 0.56]). The synthesized views are shown in Fig. 1(b), with images for the cup and truck in the first two rows. Latent coordinates in regions of low model likelihood generate images that appear blurry or noisy. More interestingly, despite the small number of images used for the car, the model correctly matches the orientation of the car to the synthesized images of the cup and truck. Thus, the model can synthesize reasonable correspondences (given a latent point) even if the number of training examples used to learn kernel parameters is small. Fig. 2 illustrates the recognition performance of the "inverse" Gaussian process model as a function of the amount of noise added to the inputs. Using the latent space and kernel parameters learned for Fig. 1, we present 72 views of the coffee cup with varying amounts of additive, zero-mean white noise, and determine the fraction of the 72 poses correctly classified by the model. The model estimates the pose using 1-nearest-neighbor classification against the latent coordinates x′ learned during training:

$$\arg\max_{x'} k(x, x') \quad (15)$$

The recognition performance degrades gracefully with increasing noise power. Fig. 2 also plots sample images from one pose of the cup at several different noise levels. For two of the noise levels, we show the "denoised" cup image selected using the nearest-neighbor classification, and the corresponding reconstructed truck. This illustrates how even noisy observations in one space can predict corresponding observations in the companion space.

¹ http://www1.cs.columbia.edu/CAVE/research/softlib/coil-100.html

Figure 1: Pose synthesis for multiple objects using shared structure: (a) Graphical model for our shared structure latent variable model. The latent space X maps to two (or more) observation spaces Y, Z using a nonlinear kernel. "Inverse" Gaussian process kernels map back from observations to latent coordinates. (b) The model learns pose correspondences for images of the coffee cup and toy truck (Y and Z) by fitting kernel parameters and a 2-dimensional latent variable space. After learning the latent coordinates for the cup and truck, we fit kernel parameters for a novel object (the toy car). Unlike the cup and truck, where 72 pairs of views were used to fit kernel parameters and latent coordinates, only 8 views were used to fit kernel parameters for the car. The model is robust to noise in the latent coordinates; numbers above each column represent the amount of noise added to the latent coordinates used to synthesize the images. Even at points where the model is uncertain (indicated by the rightmost results in the Y and Z rows), the learned kernel extrapolates the correct view of the toy car (the "novel" row).

Fig. 3 illustrates the ability of the model to synthesize novel views of one object given a novel view of a different object. A limited set of corresponding poses (24 of 72 total) of a cat figurine and a mug were used to train the GP model. The remaining 48 poses of the mug were then used as testing data. For each snapshot of the mug, we inferred a latent point using the "inverse" Gaussian process model and used the learned model to synthesize what the cat figurine should look like in the same pose. A subset of these results is presented in the rows on the left in Fig.
3: the "Test" rows show novel images of the mug, the "Inferred" rows show the model's best estimate for the cat figurine, and the "Actual" rows show the ground truth. Although the images for some poses are blurry and the model fails to synthesize the correct image for pose 44, the model nevertheless manages to capture fine detail in most of the images. The grayscale plot at upper right in Fig. 3 shows model certainty $1/(\sigma_Y^2(x) + \sigma_Z^2(x))$, with white where the model is highly certain and black where the model is highly uncertain. Arrows indicate the path in latent space formed by the training images. The dashed line indicates latent points inferred from testing images of the mug. Numbered latent coordinates correspond to the synthesized images at left. The latent space shows structure: latent points for similar poses are grouped together, and tend to move along a smooth curve in latent space, with coordinates for the final pose lying close to coordinates for the first pose (as desired for a cyclic image sequence). The bar graph at lower right compares model certainty for the numbered latent coordinates; higher bars indicate greater model certainty. The model appears particularly uncertain for blurry inferred images, such as 8, 14, and 26.

Figure 2: Recognition using a Learned Latent Variable Space: After learning from 72 paired correspondences between poses of a coffee cup and of a toy truck, the model is able to recognize different poses of the coffee cup in the presence of additive white noise. The fraction of images recognized is plotted on the y-axis and the standard deviation of the white noise on the x-axis. One pose of the cup (of 72 total) is plotted for various noise levels (see text for details). "Denoised" images obtained from nearest-neighbor classification and the corresponding images for the Z space (the toy truck) are also shown.

Fig. 4 shows an application of our framework to the problem of robotic imitation of human actions. We trained our model on a dataset containing human poses (acquired with a Vicon motion capture system) and corresponding poses of a Fujitsu HOAP-2 humanoid robot. Note that the robot has 25 degrees-of-freedom, which differ significantly from the degrees-of-freedom of the human skeleton used in motion capture. After training on 43 roughly matching poses (only linear time scaling applied to align training poses), we tested the model by presenting a set of 123 human motion capture poses (which includes the original training set). Because the recognition model $f_Y^{-1} : y \mapsto x$ is not trained from samples from the prior distribution of the data, P(x, y), we found it necessary to approximate k(x) for the recognition model by rescaling k(x) for the testing points to lie on the same interval as the k(x) values of the training points. We suspect that providing proper samples from the prior will improve recognition performance. As illustrated in Fig. 4 (inset panels, human and robot skeletons), the model was able to correctly infer appropriate robot kinematic parameters given a range of novel human poses. These inferred parameters were used in conjunction with a simple controller to instantiate the pose in the humanoid robot (see photos in the inset panels).

4 Discussion

Our Gaussian process model provides a novel method for learning nonlinear relationships between corresponding sets of data. Our results demonstrate the model's utility for diverse tasks such as image synthesis and robotic programming by demonstration. The GP model is closely related to other kernel methods for solving CCA [3] and similar problems [2]. The problems addressed by our model can also be framed as a type of nonlinear CCA. Our method differs from the latent variable method proposed in [14] by using Gaussian process regression.
Figure 3: Synthesis of novel views using a shared latent variable model: After training on 24 paired images of a mug with a cat figurine (out of 72 total paired images), we ask the model to infer what the remaining 48 poses of the cat would look like given 48 novel views of the mug. The system uses an inverse Gaussian process model to infer a 2D latent point for each of the 48 novel mug views, then synthesizes a corresponding view of the cat figurine. At left we plot the novel testing mug images given to the system ("test"), the synthesized cat images ("inferred"), and the actual views of the cat figurine from the database ("actual"). At upper right we plot the model uncertainty in the latent space. The 24 latent coordinates from the training data are plotted as arrows, while the 48 novel latent points are plotted as crosses on a dashed line. At lower right we show model certainty for the cat figurine data ($1/\sigma_Z^2(x)$) for each testing latent point x. Note the low certainty for the blurry inferred images labeled 8, 14, and 26.

Disadvantages of our method with respect to [14] include lack of global optimality for the latent embedding; advantages include fewer independent parameters and the ability to easily impose priors on the latent variable space (since GPLVM regression uses conjugate gradient optimization instead of eigendecomposition). Empirically, we found the flexibility of the GPLVM approach desirable for modeling a diversity of data sources. Our framework learns mappings between each observation space and a latent space, rather than mapping directly between the observation spaces. This makes visualization and interaction much easier. An intermediate mapping to a latent space is also more economical in
the limit of many correlated observation spaces. Rather than learning all pairwise relations between observation spaces (requiring a number of parameters quadratic in the number of observation spaces), our method learns one generative and one inverse mapping between each observation space and the latent space (so the number of parameters grows linearly). From a cognitive science perspective, such an approach is similar to the Active Intermodal Mapping (AIM) hypothesis of imitation [6]. In AIM, an imitating agent maps its own actions and its perceptions of others' actions into a single, modality-independent space. This modality-independent space is analogous to the latent variable space in our model. Our model does not directly address the "correspondence problem" in imitation [7], where correspondences between an agent and a teacher are established through some form of unsupervised feature matching. However, it is reasonable to assume that imitation by a robot of human activity could involve some initial, explicit correspondence matching based on simultaneity. Turn-taking behavior is an integral part of human-human interaction. Thus, to bootstrap its database of corresponding data points, a robot could invite a human to take turns playing out motor sequences. Initially, the human would imitate the robot's actions and the robot could use this data to learn correspondences using our GP model; later, the robot could check and, if necessary, refine its learned model by attempting to imitate the human's actions.

Acknowledgements: This work was supported by NSF AICS grant no. 130705 and an ONR YIP award/NSF Career award to RPNR. We thank the anonymous reviewers for their comments.

References

[1] K. Grochow, S. L. Martin, A. Hertzmann, and Z. Popović. Style-based inverse kinematics. In Proc. SIGGRAPH, 2004.
[2] J. Ham, D. Lee, and L. Saul. Semisupervised alignment of manifolds. In AISTATS, 2004.
[3] P. L. Lai and C. Fyfe.
Kernel and nonlinear canonical correlation analysis. Int. J. Neural Sys., 10(5):365–377, 2000.

Figure 4: Learning shared latent structure for robotic imitation of human actions: The plot in the center shows the latent training points (red circles) and model precision $1/\sigma_Z^2$ for the robot model (grayscale plot), with examples of recovered latent points for testing data (blue diamonds). Model precision is qualitatively similar for the human model. Inset panels show the pose of the human motion capture skeleton, the simulated robot skeleton, and the humanoid robot for each example latent point. The model correctly infers robot poses from the human walking data (inset panels).

[4] N. D. Lawrence. Gaussian process models for visualization of high dimensional data. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in NIPS 16.
[5] N. D. Lawrence, M. Seeger, and R. Herbrich. Fast sparse Gaussian process methods: the informative vector machine. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in NIPS 15, 2003.
[6] A. N. Meltzoff. Elements of a developmental theory of imitation. In A. N. Meltzoff and W. Prinz, editors, The imitative mind: Development, evolution, and brain bases, pages 19–41. Cambridge: Cambridge University Press, 2002.
[7] C. Nehaniv and K. Dautenhahn. The correspondence problem. In Imitation in Animals and Artifacts. MIT Press, 2002.
[8] A. O'Hagan. On curve fitting and optimal design for regression. Journal of the Royal Statistical Society B, 40:1–42, 1978.
[9] C. E. Rasmussen. Evaluation of Gaussian Processes and other Methods for Non-Linear Regression. PhD thesis, University of Toronto, 1996.
[10] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
[11] S. Schaal, A. Ijspeert, and A. Billard. Computational approaches to motor learning by imitation. Phil. Trans. Royal Soc. London: Series B, 358:537–547, 2003.
[12] A. P. Shon, D. B. Grimes, C. L. Baker, and R. P. N.
Rao. A probabilistic framework for model-based imitation learning. In Proc. 26th Ann. Mtg. Cog. Sci. Soc., 2004.
[13] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
[14] J. J. Verbeek, S. T. Roweis, and N. Vlassis. Non-linear CCA and PCA by alignment of local models. In Advances in NIPS 16, pages 297–304, 2003.
[15] C. K. I. Williams. Computing with infinite networks. In M. C. Mozer, M. I. Jordan, and T. Petsche, editors, Advances in NIPS 9. Cambridge, MA: MIT Press, 1996.
| 2005 | 3 | 2,844 |
Fast Information Value for Graphical Models Brigham S. Anderson School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213 brigham@cmu.edu Andrew W. Moore School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213 awm@cs.cmu.edu

Abstract

Calculations that quantify the dependencies between variables are vital to many operations with graphical models, e.g., active learning and sensitivity analysis. Previously, pairwise information gain calculation has involved a cost quadratic in network size. In this work, we show how to perform a similar computation with cost linear in network size. The loss function that allows this is of a form amenable to computation by dynamic programming. The message-passing algorithm that results is described, and empirical results demonstrate large speedups without decrease in accuracy. In the cost-sensitive domains examined, superior accuracy is achieved.

1 Introduction

In a diagnosis problem, one wishes to select the best test (or observation) to make in order to learn the most about a system of interest. Medical settings and disease diagnosis immediately come to mind, but sensor management (Krishnamurthy, 2002), sensitivity analysis (Kjærulff & van der Gaag, 2000), and active learning (Anderson & Moore, 2005) all make use of similar computations. These generally boil down to an all-pairs analysis between observable variables (queries) and the variables of interest (targets). A common technique in the field of diagnosis is to compute the mutual information between each query and target, then select the query that is expected to provide the most information (Agostak & Weiss, 1999). Likewise, a sensitivity analysis between the query variable and the target variables can be performed (Laskey, 1995; Kjærulff & van der Gaag, 2000). However, both suffer from a quadratic blowup with respect to the number of queries and targets.
In the current paper we present a loss function which can be used in a message-passing framework to perform the all-pairs computation with cost linear in network size. We describe the loss function in Section 2, derive a polynomial expression for network-wide expected loss in Section 3, and in Section 4 present a message-passing scheme to perform this computation efficiently for each node in the network. Section 5 shows the empirical speedups and accuracy gains achieved by the algorithm.

1.1 Graphical Models

To simplify presentation, we will consider only Bayesian networks, but the results generalize to any graphical model. We also restrict the class of networks to those without undirected loops, i.e., polytrees, of which junction trees are a member. We have a Bayesian network B, which is composed of an independence graph G and parameters for the CPT tables. The independence graph G = (X, E) is a directed acyclic graph (DAG) in which X is a set of N discrete random variables $\{x_1, x_2, \ldots, x_N\}$, and the edges define the independence relations. We denote the marginal distribution P(x | B) of a single node by $\pi_x$, where $(\pi_x)_i = P(x = i)$. We will omit conditioning on B for the remainder of the paper. We write the number of states a node x can assume as |x|. Additionally, each node x is assigned a cost matrix $C_x$, in which $(C_x)_{ij}$ is the cost of believing x = j when in fact the true value is $x^* = i$. A cost matrix of all zeros indicates that one is not interested in the node's value. The cost matrix C is useful because inhomogeneous costs are a common feature of most realistic domains. This ubiquity results from the fact that information almost always has a purpose, so that some variables are more relevant than others, some states of a variable are more relevant than others, and confusion between some pairs of states is more relevant than between other pairs.
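For instance, cost matrices for a hypothetical 3-state node might look as follows (the states and costs are illustrative, not from the paper):

```python
import numpy as np

# 0-1 loss for a 3-state node: any misclassification costs 1
C_01 = np.ones((3, 3)) - np.eye(3)

# Inhomogeneous costs: believing state 0 when the truth is state 2
# (say, "healthy" when actually "severe") is ten times worse
C = C_01.copy()
C[2, 0] = 10.0   # C[i, j] = cost of believing j when the true value is i

# An all-zeros cost matrix marks a node whose value we do not care about
C_ignore = np.zeros((3, 3))
```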
For our task, we are given B, and wish to estimate P(X) accurately by iteratively selecting the next node to observe. Although typically only a subset of the nodes are queryable, we will for the purposes of this paper assume that any node can be queried. How do we select the most informative node to query? We must first define our objective function, which is determined by our definition of error.

2 Risk Due to Uncertainty

The underlying error function for the information gain computation will be denoted Error(P(X) ∥ X*), which quantifies the loss associated with the current belief state P(X) given the true values X*. There are several common candidates for this role: a log-loss function, a log-loss function over marginals, and an expected 0-1 misclassification rate (Kohavi & Wolpert, 1996). Constant factors have been omitted.

$$\mathrm{Error}_{log}(P(X) \,\|\, X^*) = -\log P(X^*) \quad (1)$$

$$\mathrm{Error}_{mlog}(P(X) \,\|\, X^*) = -\sum_{u \in X} \log P(u^*) \quad (2)$$

$$\mathrm{Error}_{01}(P(X) \,\|\, X^*) = -\sum_{u \in X} P(u^*) \quad (3)$$

where X is the set of nodes and $u^*$ is the true value of node u. The error function of Equation 1 will prove insufficient for our needs, as it cannot target individual node errors, while the error function of Equation 2 results in an objective function that is quadratic in cost to compute. We will explore a more general form of Equation 3 that allows arbitrary weights to be placed on different types of misclassification. For instance, we would like to specify that misclassifying a node's state as 0 when it is actually 1 is different from misclassifying it as 0 when it is actually in state 2. Different costs for each node can be specified with cost matrices $C_u$ for u ∈ X. The final error function is

$$\mathrm{Error}(P(X) \,\|\, X^*) = \sum_{u \in X} \sum_i^{|u|} P(u = i)\, C_u[u^*, i] \quad (4)$$

where C[i, j] is the ij-th element of the matrix C, and |u| is the number of states that node u can assume. The presence of the cost matrix $C_u$ in Equation 4 constitutes a significant advantage in real applications, which often need to specify inhomogeneous costs.
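Equation 4 can be evaluated directly from the node marginals, the cost matrices, and the true values; a minimal sketch with illustrative names:

```python
import numpy as np

def error_fn(marginals, costs, true_values):
    """Error of Eq. 4: sum over nodes u of sum_i P(u=i) * C_u[u*, i]."""
    return sum(pi @ C[u_star]                    # row u* of C_u, dotted with the marginal
               for pi, C, u_star in zip(marginals, costs, true_values))
```

For a binary node with 0-1 costs and belief [0.8, 0.2], the error when the true state is 0 is simply the 0.2 probability mass placed on the wrong state.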
There is a separate consideration, that of query cost, cost(x): the cost incurred by the action of observing x (e.g., the cost of a medical test). If both the query cost and the misclassification cost C are formulated in the same units, e.g., dollars, then they form a coherent decision framework. The query costs will be omitted from this presentation for clarity. In general, one does not actually know the true values X* of the nodes, so one cannot directly minimize the error function as described. Instead, the expected error, or risk, is used:

$$\mathrm{Risk}(P(X)) = \sum_{x} P(x)\,\mathrm{Error}(P(X) \,\|\, x) \quad (5)$$

which for the error function of Equation 4 reduces to

$$\mathrm{Risk}(P(X)) = \sum_{u \in X} \sum_j \sum_k P(u = j)\,P(u = k)\,C_u[j, k] \quad (6)$$

$$= \sum_{u \in X} \pi_u^T C_u \pi_u \quad (7)$$

where $(\pi_u)_i = P(u = i)$. This is the objective we will minimize. It quantifies: "On average, how much is our current ignorance going to cost us?" For comparison, note that the log-loss function $\mathrm{Error}_{log}$ results in an entropy risk function $\mathrm{Risk}_{log}(P(X)) = H(X)$, and the log-loss function over the marginals, $\mathrm{Error}_{mlog}$, results in the risk function $\mathrm{Risk}_{mlog}(P(X)) = \sum_{u \in X} H(u)$. Ultimately, we want to find the nodes that have the greatest effect on Risk(P(X)), so we must condition Risk(P(X)) on the beliefs at each node. In other words, if we learned that the true marginal probabilities of node x were $\pi_x$, what effect would that have on our current risk, or rather, what is Risk(P(X) | P(x) = $\pi_x$)? Discouragingly, however, any change in $\pi_x$ propagates to all the other beliefs in the network. It seems as if we must perform several network evaluations for each node, a prohibitive cost for networks of any appreciable size. However, we will show that dynamic programming can in fact perform this computation for all nodes in only two passes through the network.

3 Risk Calculation

To clarify our objective, we wish to construct a function $R_a(\pi)$ for each node a, where $R_a(\pi) = \mathrm{Risk}(P(X) \mid P(a) = \pi)$. Suppose, for instance, that we learn that the value of node a is equal to 3.
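The risk of Eq. 7 is then just a sum of quadratic forms over the node marginals; a minimal sketch:

```python
import numpy as np

def network_risk(marginals, costs):
    """Risk(P(X)) = sum_u pi_u^T C_u pi_u (Eq. 7)."""
    return sum(pi @ C @ pi for pi, C in zip(marginals, costs))
```

Under 0-1 costs, a maximally uncertain binary belief [0.5, 0.5] carries risk 0.5, while a certain belief carries risk 0, matching the intuition that risk measures what our ignorance is expected to cost.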
Our P(X) is now constrained to have the marginal P(a) = π'_a, where (π'_a)_3 = 1 and all other entries equal zero. If we had the function R_a in hand, we could simply evaluate R_a(π'_a) to immediately compute our new network-wide risk, which would account for all the changes in beliefs at all the other nodes due to learning that a = 3. This is exactly our objective; we would like to precompute R_a for all a ∈ X. Define

R_a(π) = Risk(P(X) | P(a) = π)   (8)
       = Σ_{u∈X} π_u^T C_u π_u |_{P(a)=π}   (9)

This simply restates the risk definition of Equation 7 under the condition that P(a) = π. As shown in the next theorem, the function R_a has a surprisingly simple form.

Theorem 3.1. For any node x, the function R_x(π) is a second-degree polynomial function of the elements of π.

Proof. Define the matrix P_{u|v} for every pair of nodes (u, v), such that (P_{u|v})_{ij} = P(u = i | v = j). Recall that the beliefs at node u have a strictly linear relationship to the beliefs at node x, since

(π_u)_i = Σ_k P(u = i | x = k) P(x = k)   (10)

is equivalent to π_u = P_{u|x} π_x. Substituting P_{u|x} π_x for π_u in Equation 9 obtains

R_x(π) = Σ_{u∈X} π_x^T P_{u|x}^T C_u P_{u|x} π_x |_{π_x=π}   (11)
       = π^T ( Σ_{u∈X} P_{u|x}^T C_u P_{u|x} ) π   (12)
       = π^T Θ_x π   (13)

where Θ_x is an |x| × |x| matrix. Note that the matrix Θ_x is sufficient to completely describe R_x, so we only need to consider the computation of Θ_x for x ∈ X. From Equation 12, we see a simple equation for computing these Θ_x directly (though expensively):

Θ_x = Σ_{u∈X} P_{u|x}^T C_u P_{u|x}   (14)

Example #1. Given the 2-node network a → b, how do we calculate R_a(π), the total risk associated with our beliefs about the value of node a? Our objective is to determine

R_a(π) = Risk(P(a, b) | P(a) = π)   (15)
       = π^T Θ_a π   (16)

Equation 14 gives Θ_a as

Θ_a = P_{a|a}^T C_a P_{a|a} + P_{b|a}^T C_b P_{b|a}   (17)
    = C_a + P_{b|a}^T C_b P_{b|a}   (18)

with P_{a|a} = I by definition.
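Equation 18 can be checked numerically on a randomly generated 2-node network (an illustrative sketch, not the authors' code; distributions below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-node network a -> b, 3 states per node.
pi_a = rng.dirichlet(np.ones(3))              # marginal of a
P_b_given_a = rng.dirichlet(np.ones(3), 3).T  # (P_{b|a})_{ij} = P(b=i | a=j)
C_a = np.ones((3, 3)) - np.eye(3)             # 0-1 cost at a
C_b = 2 * (np.ones((3, 3)) - np.eye(3))       # confusions at b cost twice as much

# Equation 18: Theta_a = C_a + P_{b|a}^T C_b P_{b|a}
Theta_a = C_a + P_b_given_a.T @ C_b @ P_b_given_a

# R_a(pi) = pi^T Theta_a pi must match summing pi_u^T C_u pi_u directly (Eq. 7).
pi_b = P_b_given_a @ pi_a
direct = pi_a @ C_a @ pi_a + pi_b @ C_b @ pi_b
assert np.isclose(pi_a @ Theta_a @ pi_a, direct)
```

The quadratic form in Θ_a thus summarizes, in one matrix, how beliefs about a alone determine the risk at both nodes.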
The individual coefficients of Θ_a are thus

θ_{a,ij} = C_{a,ij} + Σ_k Σ_l P(b = k | a = i) P(b = l | a = j) C_{b,kl}   (19)

Now we can compute the relation between any marginal π at node a and our total network-wide risk via R_a(π). However, using Equation 14 to compute all the Θ would require evaluating the entire network once per node. The function can, however, be decomposed further, which will enable much more efficient computation of Θ_x for x ∈ X.

3.1 Recursion

To create an efficient message-passing algorithm for computing Θ_x for all x ∈ X, we will introduce Θ_x^W, where W is a subset of the network over which Risk(P(X)) is summed:

Θ_x^W = Σ_{u∈W} P_{u|x}^T C_u P_{u|x}   (20)

This is otherwise identical to Equation 14. It implies, for instance, that Θ_x^x = C_x. More importantly, these matrices can be usefully decomposed as follows.

Theorem 3.2. Θ_x^W = P_{y|x}^T Θ_y^W P_{y|x} if x and W are conditionally independent given y.

Proof. Note that P_{u|x} = P_{u|y} P_{y|x} for u ∈ X, since

(P_{u|y} P_{y|x})_{ij} = Σ_{k=1}^{|y|} P(u = i | y = k) P(y = k | x = j)   (21)
                       = P(u = i | x = j)   (22)
                       = (P_{u|x})_{ij}   (23)

Step (21) is only true if x and u are conditionally independent given y. Substituting this result into Equation 20, we conclude

Θ_x^W = Σ_{u∈W} P_{u|x}^T C_u P_{u|x}   (24)
      = Σ_{u∈W} P_{y|x}^T P_{u|y}^T Θ_u^u P_{u|y} P_{y|x}   (25)
      = P_{y|x}^T ( Σ_{u∈W} P_{u|y}^T Θ_u^u P_{u|y} ) P_{y|x}   (26)
      = P_{y|x}^T Θ_y^W P_{y|x}   (27)

Example #2. Suppose we now have a 3-node network, a → b → c, and we are only interested in the effect that node a has on the network-wide risk.
Our objective is to compute

R_a(π) = Risk(P(a, b, c) | P(a) = π)   (28)
       = π^T Θ_a π   (29)

where Θ_a is by definition

Θ_a = Θ_a^{abc}   (30)
    = C_a + P_{b|a}^T C_b P_{b|a} + P_{c|a}^T C_c P_{c|a}   (31)

Using Theorem 3.2 and the fact that a is conditionally independent of c given b, we know

Θ_a^{abc} = Θ_a^a + P_{b|a}^T Θ_b^{bc} P_{b|a}   (32)
Θ_b^{bc} = Θ_b^b + P_{c|b}^T Θ_c^c P_{c|b}   (33)

Substituting Equation 33 into Equation 32,

Θ_a = Θ_a^a + P_{b|a}^T ( Θ_b^b + P_{c|b}^T Θ_c^c P_{c|b} ) P_{b|a}   (34)
    = C_a + P_{b|a}^T ( C_b + P_{c|b}^T C_c P_{c|b} ) P_{b|a}   (35)

Note that the coefficient Θ_a is obtained from probabilities between neighboring nodes only, without having to explicitly compute P_{c|a}.

4 Message Passing

We are now ready to define message passing. Messages are of two types: in-messages and out-messages, denoted by λ and µ, respectively. Out-messages µ are passed from parent to child, and in-messages λ are passed from child to parent. The messages from x to y will be denoted as µ_{xy} and λ_{xy}. In the discrete case, µ_{xy} and λ_{xy} will both be matrices of size |y| × |y|. The messages summarize the effect that y has on the part of the network that y is d-separated from by x. Messages relate to the Θ coefficients by the following definition:

λ_{yx} = Θ_x^{\y}   (36)
µ_{yx} = Θ_x^{\y}   (37)

where the (nonstandard) notation Θ_x^{\y} indicates the matrix Θ_x^V for which V is the set of all the nodes in X that are reachable by x if y were removed from the graph. In other words, Θ_x^{\y} summarizes the effect that x has on the entire network except for the part of the network that x can only reach through y.

Propagation: The message-passing scheme is organized to recursively compute the Θ matrices using Theorem 3.2. As can be seen from Equations 36 and 37, the two types of messages are very similar in meaning.
They differ only in that passing a message from a parent to a child automatically separates the child from the rest of the network the parent is connected to, while a child-to-parent message does not necessarily separate the parent from the rest of the network that the child is connected to (due to the "explaining away" effect). The construction of the µ-message involves a short sequence of basic linear algebra. The µ-message from x to child c is created from all other messages entering x except those from c. The definition is

µ_{xc} = P_{x|c}^T ( C_x + Σ_{u∈pa(x)} µ_{ux} + Σ_{v∈ch(x)\c} λ_{vx} ) P_{x|c}   (38)

The λ-messages from x to parent u are only slightly more involved. To account for the "explaining away" effect, we must construct λ_{xu} directly from the parents of x:

λ_{xu} = P_{x|u}^T ( C_x + Σ_{c∈ch(x)} λ_{cx} ) P_{x|u}
       + Σ_{w∈pa(x)\u} P_{w|u}^T ( C_w + Σ_{v∈pa(w)} µ_{vw} + Σ_{c∈ch(w)\x} λ_{cw} ) P_{w|u}   (39)

Messages are constructed (or "sent") whenever all of the required incoming messages are present and that particular message has not already been sent. For example, the out-message µ_{xc} can be sent only when messages from all the parents of x and all the children of x (save c) are present. The overall effect of this constraint is a single leaves-inward propagation followed by a single root-outward propagation.

Initialization and Termination: Initialization occurs naturally at any singly-connected (leaf) node x, where the message is by definition C_x. Termination occurs when no more messages meet the criteria for sending. Once all message propagation is finished, for each node x the coefficients Θ_x can be computed by a simple summation:

Θ_x = Σ_{c∈ch(x)} λ_{cx} + Σ_{u∈pa(x)} µ_{ux} + C_x   (40)

Figure 1: Comparison of execution times with synthetic polytrees (x-axis: number of nodes, 0 to 1000; y-axis: seconds; curves: MI and CostProp). Propagation runs in time linear in the number of nodes once the initial local probabilities are calculated.
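As a minimal end-to-end check (a sketch with made-up distributions, not the authors' Matlab code), the two-pass scheme on a three-node chain a → b → c reproduces the direct but expensive Equation 14 computation; the Bayes inversion below supplies the child-to-parent conditionals:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

# Hypothetical chain a -> b -> c with a 0-1 cost matrix at every node.
pi_a = rng.dirichlet(np.ones(n))
P_ba = rng.dirichlet(np.ones(n), n).T   # (P_{b|a})_{ij} = P(b = i | a = j)
P_cb = rng.dirichlet(np.ones(n), n).T
C = np.ones((n, n)) - np.eye(n)

def reverse(P, prior):
    """Bayes-invert a conditional: from P(y|x) and P(x), obtain P(x|y)."""
    joint = P * prior                                    # joint[i, j] = P(y=i | x=j) P(x=j)
    return (joint / joint.sum(axis=1, keepdims=True)).T  # column j holds P(x | y=j)

pi_b = P_ba @ pi_a
P_ab, P_bc = reverse(P_ba, pi_a), reverse(P_cb, pi_b)

# Leaves-inward pass (lambda messages, Eq. 39), then root-outward pass (mu, Eq. 38).
lam_cb = P_cb.T @ C @ P_cb
lam_ba = P_ba.T @ (C + lam_cb) @ P_ba
mu_ab = P_ab.T @ C @ P_ab
mu_bc = P_bc.T @ (C + mu_ab) @ P_bc

# Equation 40: Theta at each node from its incoming messages.
Theta = {"a": C + lam_ba, "b": C + lam_cb + mu_ab, "c": C + mu_bc}

# Cross-check node c against the direct Equation 14 computation.
P_ac = P_ab @ P_bc
assert np.allclose(Theta["c"], C + P_bc.T @ C @ P_bc + P_ac.T @ C @ P_ac)
```

Every Θ is assembled from neighboring-node conditionals only; the non-local conditional P_{a|c} appears solely in the brute-force cross-check.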
The local probabilities required are the matrices P for each parent-child probability, e.g., P(child = j | parent = i), and for each pair (not set) of parents that share a child, P(parent = j | coparent = i). These are all immediately available from a junction tree, or they can be obtained with a run of belief propagation. It is worth noting that the apparent complexity of the λ, µ message propagation equations is due to the Bayes net representation. The equivalent factor graph equations (not shown) are markedly more succinct.

5 Experiments

The performance of the message-passing algorithm (hereafter CostProp) was compared with a standard information gain algorithm which uses mutual information (hereafter MI). The error function used by MI is from Equation 2, where Error_mlog(P(X) ‖ X*) = -Σ_{x∈X} log P(x*), with a corresponding risk function Risk(P(X)) = Σ_{x∈X} H(x). This corresponds to selecting the node x that has the highest summed mutual information with each of the target nodes (in this case, the set of target nodes is X and the set of query nodes is also X). The computational cost of MI grows quadratically, as the product of the number of queries and the number of targets. In order to test the speed and relative accuracy of CostProp, we generated random polytrees with varying numbers of trinary nodes. The CPT tables were randomly generated with a slight bias towards lower-entropy probabilities. The code was written in Matlab using the Bayes Net Toolbox (Murphy, 2005). Speed: We generated polytrees of sizes ranging from 2 to 1000 nodes and ran the MI algorithm, the CostProp algorithm, and a random-query algorithm on each. The two nonrandom algorithms were run using a junction tree, the build time of which was not included in the reported run times of either algorithm. Even with the relatively slow Matlab code, the speedup shown in Figure 1 is obvious.
As expected, CostProp is many orders of magnitude faster than the MI algorithm, and shows a qualitative difference in scaling properties. Accuracy: Due to the slow running time of MI, the accuracy comparison was performed on polytrees of size 20. For each run, a true assignment X* was generated from the tree, but was initially hidden from the algorithms. Each algorithm would then determine for itself the best node to observe, receive the true value of that node, then select the next node to observe, et cetera. The true error at each step was computed as the 0-1 error of Equation 3. The reduction in error plotted against number of queries is shown in Figure 2.

Figure 2: Performance on synthetic polytrees with symmetric costs (x-axis: number of queries; y-axis: true cost; curves: Random, MI, CostProp).

Figure 3: Performance on synthetic polytrees with asymmetric costs (x-axis: number of queries; y-axis: true cost; curves: Random, MI, CostProp).

With uniform cost matrices, the performance of MI and CostProp is approximately equal on this task, but both are better than random. We next made the cost matrices asymmetric by initializing them such that confusing one pair of states was 100 times more costly than confusing the other two pairs. The results of Figure 3 show that CostProp reduces error faster than MI, presumably because it can accommodate the cost matrix information.

6 Discussion

We have described an all-pairs information gain calculation that scales linearly with network size. The objective function used has a polynomial form that allows for an efficient message-passing algorithm. Empirical results demonstrate large speedups and even improved accuracy in cost-sensitive domains. Future work will explore other applications of this method, including sensitivity analysis and active learning. Further research into other uses for the belief polynomials will also be explored.

References

Agostak, J. M., & Weiss, J. (1999).
Active Fusion for Diagnosis Guided by Mutual Information. Proceedings of the 2nd International Conference on Information Fusion.

Anderson, B. S., & Moore, A. W. (2005). Active learning for hidden Markov models: Objective functions and algorithms. Proceedings of the 22nd International Conference on Machine Learning.

Kjærulff, U., & van der Gaag, L. (2000). Making sensitivity analysis computationally efficient.

Kohavi, R., & Wolpert, D. H. (1996). Bias Plus Variance Decomposition for Zero-One Loss Functions. Machine Learning: Proceedings of the Thirteenth International Conference. Morgan Kaufmann.

Krishnamurthy, V. (2002). Algorithms for optimal scheduling and management of hidden Markov model sensors. IEEE Transactions on Signal Processing, 50, 1382-1397.

Laskey, K. B. (1995). Sensitivity Analysis for Probability Assessments in Bayesian Networks. IEEE Transactions on Systems, Man, and Cybernetics.

Murphy, K. (2005). Bayes Net Toolbox for Matlab. U.C. Berkeley. http://www.ai.mit.edu/~murphyk/Software/BNT/bnt.html
Value Function Approximation with Diffusion Wavelets and Laplacian Eigenfunctions Sridhar Mahadevan Department of Computer Science University of Massachusetts Amherst, MA 01003 mahadeva@cs.umass.edu Mauro Maggioni Program in Applied Mathematics Department of Mathematics Yale University New Haven, CT 06511 mauro.maggioni@yale.edu Abstract We investigate the problem of automatically constructing efficient representations or basis functions for approximating value functions based on analyzing the structure and topology of the state space. In particular, two novel approaches to value function approximation are explored based on automatically constructing basis functions on state spaces that can be represented as graphs or manifolds: one approach uses the eigenfunctions of the Laplacian, in effect performing a global Fourier analysis on the graph; the second approach is based on diffusion wavelets, which generalize classical wavelets to graphs using multiscale dilations induced by powers of a diffusion operator or random walk on the graph. Together, these approaches form the foundation of a new generation of methods for solving large Markov decision processes, in which the underlying representation and policies are simultaneously learned. 1 Introduction Value function approximation (VFA) is a well-studied problem: a variety of linear and nonlinear architectures have been studied, which are not automatically derived from the geometry of the underlying state space, but rather handcoded in an ad hoc trial-and-error process by a human designer [1]. A new framework for VFA called proto-reinforcement learning (PRL) was recently proposed in [7, 8, 9]. 
Instead of learning task-specific value functions using a handcoded parametric architecture, agents learn proto-value functions, or global basis functions that reflect intrinsic large-scale geometric constraints that all value functions on a manifold [11] or graph [3] adhere to, using spectral analysis of the self-adjoint Laplace operator. This approach also yields new control learning algorithms called representation policy iteration (RPI), where both the underlying representations (basis functions) and policies are simultaneously learned. Laplacian eigenfunctions also provide ways of automatically decomposing state spaces since they reflect bottlenecks and other global geometric invariants. In this paper, we extend the earlier Laplacian approach in a new direction using the recently proposed diffusion wavelet transform (DWT), which is a compact multi-level representation of Markov diffusion processes on manifolds and graphs [4, 2]. Diffusion wavelets provide an interesting alternative to global Fourier eigenfunctions for value function approximation, since they encapsulate all the traditional advantages of wavelets: basis functions have compact support, and the representation is inherently hierarchical since it is based on multi-resolution modeling of processes at different spatial and temporal scales.

2 Technical Background

This paper uses the framework of spectral graph theory [3] to build basis representations for smooth (value) functions on graphs induced by Markov decision processes. Given any graph G, an obvious but poor choice of representation is the "table-lookup" orthonormal encoding, where φ(i) = [0 ... 1 ... 0] (with the 1 in position i) is the encoding of the ith node in the graph. This representation does not reflect the topology of the specific graph under consideration. Polynomials are another popular choice of orthonormal basis functions [5], where φ(s) = [1, s, ..., s^k] for some fixed k.
This encoding has two disadvantages: it is numerically unstable for large graphs, and it is dependent on the ordering of vertices. In this paper, we outline a new approach to the problem of building basis functions on graphs using Laplacian eigenfunctions and diffusion wavelets. A finite Markov decision process (MDP) M = (S, A, P^a_{ss'}, R^a_{ss'}) is defined as a finite set of states S, a finite set of actions A, a transition model P^a_{ss'} specifying the distribution over future states s' when an action a is performed in state s, and a corresponding reward model R^a_{ss'} specifying a scalar cost or reward [10]. A state value function is a mapping S → R, or equivalently a vector in R^{|S|}. Given a policy π : S → A mapping states to actions, its corresponding value function V^π specifies the expected long-term discounted sum of rewards received by the agent in any given state s when actions are chosen using the policy. Any optimal policy π* defines the same unique optimal value function V*, which satisfies the nonlinear constraints

V*(s) = max_a Σ_{s'} P^a_{ss'} ( R^a_{ss'} + γ V*(s') )

For any MDP, any policy induces a Markov chain that partitions the states into classes: transient states are visited initially but not after a finite time, and recurrent states are visited infinitely often. In ergodic MDPs, the set of transient states is empty. The construction of basis functions below assumes that the Markov chain induced by a policy is a reversible random walk on the state space. While some policies may not induce such Markov chains, the set of basis functions learned from a reversible random walk can still be useful in approximating value functions for (reversible or non-reversible) policies.
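For reference, the Bellman optimality equation above can be solved on a toy problem by simple value iteration (an illustrative sketch; the 5-state chain MDP below is made up for the example, not taken from the paper):

```python
import numpy as np

# Toy 5-state chain MDP with deterministic left/right actions (illustrative).
nS, nA, gamma = 5, 2, 0.9
P = np.zeros((nA, nS, nS))   # P[a, s, s'] = P^a_{ss'}
for s in range(nS):
    P[0, s, max(s - 1, 0)] = 1.0       # action 0: move left
    P[1, s, min(s + 1, nS - 1)] = 1.0  # action 1: move right
R = np.zeros((nA, nS, nS))
R[:, :, nS - 1] = 1.0                  # reward 1 for entering the last state

# Iterate V(s) <- max_a sum_{s'} P^a_{ss'} (R^a_{ss'} + gamma V(s'))
V = np.zeros(nS)
for _ in range(500):
    V = np.max(np.sum(P * (R + gamma * V), axis=2), axis=0)
print(V)
```

The fixed point of this update is the unique optimal value function V*; the value function is what the basis constructions in the rest of the paper are meant to represent compactly.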
In other words, the construction of the basis functions can be considered an off-policy method: just as in Q-learning, where the exploration policy differs from the optimal learned policy, in the proposed approach the actual MDP dynamics may induce a different Markov chain than the one analyzed to build representations. Reversible random walks greatly simplify spectral analysis since such random walks are similar to a symmetric operator on the state space.

2.1 Smooth Functions on Graphs and Value Function Representation

We assume the state space can be modeled as a finite undirected weighted graph (G, E, W), but the approach generalizes to Riemannian manifolds. We define x ~ y to mean an edge between x and y, and the degree of x to be d(x) = Σ_{y: x~y} w(x, y). D will denote the diagonal matrix defined by D_{xx} = d(x), and W the matrix defined by W_{xy} = w(x, y) = w(y, x). The L_2 norm of a function on G is ||f||_2^2 = Σ_{x∈G} |f(x)|^2 d(x). The gradient of a function is ∇f(i, j) = w(i, j)(f(i) - f(j)) if there is an edge e connecting i to j, and 0 otherwise. The smoothness of a function on a graph can be measured by the Sobolev norm

||f||_{H^2}^2 = ||f||_2^2 + ||∇f||_2^2 = Σ_x |f(x)|^2 d(x) + Σ_{x~y} |f(x) - f(y)|^2 w(x, y) .   (1)

The first term in this norm controls the size (in terms of the L_2 norm) of the function f, and the second term controls the size of the gradient. The smaller ||f||_{H^2}, the smoother f is. We will assume that the value functions we consider have small H^2 norms, except at a few points, where the gradient may be large. Important variations exist, corresponding to different measures on the vertices and edges of G. Classical techniques, such as value iteration and policy iteration [10], represent value functions using an orthonormal basis (e_1, ..., e_{|S|}) for the space R^{|S|} [1].
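To make Equation 1 concrete, here is a small sketch (illustrative, not from the paper) computing the Sobolev norm on a toy weighted path graph; the edge sum counts each undirected edge once, which is one reasonable reading of the notation:

```python
import numpy as np

# Toy 4-node unweighted path graph (illustrative).
n = 4
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
d = W.sum(axis=1)

def sobolev_norm_sq(f, W, d):
    """||f||_{H^2}^2 = sum_x |f(x)|^2 d(x) + sum_{x~y} |f(x)-f(y)|^2 w(x,y) (Eq. 1)."""
    l2 = np.sum(np.abs(f) ** 2 * d)
    # Sum over ordered pairs, halved so each undirected edge is counted once.
    grad = 0.5 * np.sum(W * (f[:, None] - f[None, :]) ** 2)
    return l2 + grad

smooth = np.array([1.0, 1.0, 1.0, 1.0])   # constant: gradient term vanishes
rough = np.array([1.0, -1.0, 1.0, -1.0])  # oscillating: large gradient term
print(sobolev_norm_sq(smooth, W, d), sobolev_norm_sq(rough, W, d))
```

The constant function and the oscillating function have the same L_2 size here, so the difference in their H^2 norms is entirely due to the gradient term.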
For a fixed precision ε, a value function V^π can be approximated as

||V^π - Σ_{i∈S(ε)} α_i^π e_i|| ≤ ε

with α_i = ⟨V^π, e_i⟩ since the e_i's are orthonormal, and the approximation is measured in some norm, such as L_2 or H^2. The goal is to obtain representations in which the index set S(ε) in the summation is as small as possible, for a given approximation error ε. This hope is well founded at least when V^π is smooth or piecewise smooth, since in this case it should be compressible in some well-chosen basis {e_i}.

3 Function Approximation using Laplacian Eigenfunctions

The combinatorial Laplacian L [3] is defined as

Lf(x) = Σ_{y~x} w(x, y)(f(x) - f(y)) = (D - W)f .

Often one considers the normalized Laplacian 𝓛 = D^{-1/2}(D - W)D^{-1/2}, which has spectrum in [0, 2]. This Laplacian is related to the notion of smoothness above, since ⟨f, Lf⟩ = Σ_x f(x) Lf(x) = Σ_{x,y} w(x, y)(f(x) - f(y))^2 = ||∇f||_2^2, which should be compared with (1). Functions that satisfy the equation Lf = 0 are called harmonic. The Spectral Theorem can be applied to L (or 𝓛), yielding a discrete set of eigenvalues 0 ≤ λ_0 ≤ λ_1 ≤ ... ≤ λ_i ≤ ... and a corresponding orthonormal basis of eigenfunctions {ξ_i}_{i≥0}, solutions to the eigenvalue problem Lξ_i = λ_i ξ_i. The eigenfunctions of the Laplacian can be viewed as an orthonormal basis of global Fourier smooth functions that can be used for approximating any value function on a graph. These basis functions capture large-scale features of the state space, and are particularly sensitive to "bottlenecks", a phenomenon widely studied in Riemannian geometry and spectral graph theory [3]. Observe that ξ_i satisfies ||∇ξ_i||_2^2 = λ_i. In fact, the variational characterization of eigenvectors shows that ξ_i is the normalized function orthogonal to ξ_0, ..., ξ_{i-1} with minimal ||∇ξ_i||_2. Hence the projection of a function f on S onto the top k eigenvectors of the Laplacian is the smoothest approximation to f, in the sense of the norm in H^2.
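The projection onto the lowest-order eigenfunctions can be sketched in a few lines (illustrative code, not the authors'; the path graph and test function below are made up):

```python
import numpy as np

# Combinatorial Laplacian L = D - W for a 20-node path graph (illustrative).
n = 20
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

# eigh returns eigenvalues in ascending order: the smoothest functions first.
eigvals, eigvecs = np.linalg.eigh(L)

# Project a smooth function onto the k lowest-order eigenfunctions.
f = np.cos(np.linspace(0, np.pi, n))
k = 5
coeffs = eigvecs[:, :k].T @ f
f_hat = eigvecs[:, :k] @ coeffs
print(np.linalg.norm(f - f_hat))  # small residual for a globally smooth f
```

A globally smooth function is captured almost entirely by a handful of low-order eigenfunctions; a function with sharp local features would need many more, which is the drawback discussed next.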
A potential drawback of Laplacian approximation is that it detects only global smoothness, and may poorly approximate a function which is not globally smooth but only piecewise smooth, or with different smoothness in different regions. These drawbacks are addressed in the context of analysis with diffusion wavelets, and in fact partly motivated their construction.

4 Function Approximation using Diffusion Wavelets

Diffusion wavelets were introduced in [4, 2] in order to perform a fast multiscale analysis of functions on a manifold or graph, generalizing wavelet analysis and associated signal processing techniques (such as compression or denoising) to functions on manifolds and graphs. They allow the fast and accurate computation of high powers of a Markov chain P on the manifold or graph, including direct computation of the Green's function (or fundamental matrix) of the Markov chain, (I - P)^{-1}, which can be used to solve Bellman's equation. Here, "fast" means that the number of operations required is O(|S|), up to logarithmic factors.

DiffusionWaveletTree(H_0, Φ_0, J, ε):
  // H_0: symmetric conjugate to random walk matrix, represented on the basis Φ_0
  // Φ_0: initial basis (usually Dirac's δ-function basis), one function per column
  // J: number of levels to compute
  // ε: precision
  for j from 0 to J do
    1. Compute sparse factorization H_j ≈_ε Q_j R_j, with Q_j orthogonal.
    2. Φ_{j+1} ← Q_j = H_j R_j^{-1}, and H_{j+1} ← R_j R_j^* ≈_{jε} [H_0^{2^{j+1}}]_{Φ_{j+1}}^{Φ_{j+1}}.
    3. Compute sparse factorization I - Φ_{j+1} Φ_{j+1}^* = Q'_j R'_j, with Q'_j orthogonal.
    4. Ψ_{j+1} ← Q'_j.
  end

Figure 1: Pseudo-code for constructing a Diffusion Wavelet Tree.

Space constraints permit only a brief description of the construction of diffusion wavelet trees. More details are provided in [4, 2]. The input to the algorithm is a "precision" parameter ε > 0 and a weighted graph (G, E, W). We can assume that G is connected; otherwise, we can consider each connected component separately.
The construction is based on using the natural random walk P = D^{-1}W on a graph and its powers to "dilate", or "diffuse", functions on the graph, and then defining an associated coarse-graining of the graph. We symmetrize P by conjugation and take powers to obtain

H^t = D^{1/2} P^t D^{-1/2} = (D^{-1/2} W D^{-1/2})^t = (I - 𝓛)^t = Σ_{i≥0} (1 - λ_i)^t ξ_i(·) ξ_i(·)   (2)

where {λ_i} and {ξ_i} are the eigenvalues and eigenfunctions of the Laplacian as above. Hence the eigenfunctions of H^t are again ξ_i, and the ith eigenvalue is (1 - λ_i)^t. We assume that H^1 is a sparse matrix and that the spectrum of H^1 has rapid decay. A diffusion wavelet tree consists of orthogonal diffusion scaling functions Φ_j that are smooth bump functions, with some oscillations, at scale roughly 2^j (measured with respect to geodesic distance, for small j), and orthogonal wavelets Ψ_j that are smooth localized oscillatory functions at the same scale. The scaling functions Φ_j span a subspace V_j, with the property that V_{j+1} ⊆ V_j, and the span of Ψ_j, denoted W_j, is the orthogonal complement of V_{j+1} into V_j. This is achieved by using the dyadic powers H^{2^j} as "dilations", to create smoother and wider (always in a geodesic sense) "bump" functions (which represent densities for the symmetrized random walk after 2^j steps), and orthogonalizing and downsampling appropriately to transform sets of "bumps" into orthonormal scaling functions. Computationally (Figure 1), we start with the basis Φ_0 = I and the matrix H_0 := H^1, sparse by assumption, and construct an orthonormal basis of well-localized functions for its range (the space spanned by the columns), up to precision ε, through a variation of the Gram-Schmidt orthonormalization scheme described in [4]. In matrix form, this is a sparse factorization H_0 ≈_ε Q_0 R_0, with Q_0 orthonormal. Notice that H_0 is |G| × |G|, but in general Q_0 is |G| × |G^(1)| and R_0 is |G^(1)| × |G|, with |G^(1)| ≤ |G|. In fact, |G^(1)| is approximately equal to the number of singular values of H_0 larger than ε.
The columns of Q_0 are an orthonormal basis of scaling functions Φ_1 for the range of H_0, written as a linear combination of the initial basis Φ_0. We can now write H_0^2 on the basis Φ_1: H_1 := [H_0^2]_{Φ_1}^{Φ_1} = Q_0^* H_0 H_0 Q_0 = R_0 R_0^*, where we used H_0 = H_0^*. This is a compressed representation of H_0^2 acting on the range of H_0, and it is a |G^(1)| × |G^(1)| matrix. We proceed by induction: at scale j we have an orthonormal basis Φ_j for the range of H_0^{2^{j-1}}, up to precision jε, represented as a linear combination of elements in Φ_{j-1}. This basis contains |G^(j)| functions, where |G^(j)| is comparable with the number of eigenvalues λ_j of H_0 such that λ_j^{2^{j-1}} ≥ ε. We have the operator H_0^{2^j} represented on Φ_j by a |G^(j)| × |G^(j)| matrix H_j, up to precision jε. We compute a sparse decomposition H_j ≈_ε Q_j R_j, obtain the next basis Φ_{j+1} = Q_j = H_j R_j^{-1}, and represent H_0^{2^{j+1}} on this basis by the matrix H_{j+1} := [H_0^{2^{j+1}}]_{Φ_{j+1}}^{Φ_{j+1}} = Q_j^* H_j H_j Q_j = R_j R_j^*. Wavelet bases for the spaces W_j can be built analogously by factorizing I_{V_j} - Q_{j+1} Q_{j+1}^*, which is the orthogonal projection on the complement of V_{j+1} into V_j. The spaces can be further split to obtain wavelet packets [2]. A Fast Diffusion Wavelet Transform allows expanding any function in the wavelet, or wavelet packet, basis in O(n) computations (where n is the number of vertices), and efficiently searching for the most suitable basis set. Diffusion wavelets and wavelet packets are a very efficient tool for representation and approximation of functions on manifolds and graphs [4, 2], generalizing to these general spaces the nice properties of wavelets that have been so successfully applied to similar tasks in Euclidean spaces. Diffusion wavelets allow computing H^{2^k} f for any fixed f in order O(kn) computations. This is nontrivial because while the matrix H is sparse, large powers of it are not, and the naive computation H · (H · ... · (H(Hf)) ... ) involves 2^k matrix-vector products.
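The repeated-squaring idea can be illustrated densely (a sketch with a made-up symmetric operator, not the paper's compressed representation): the partial products Π_{k<K}(I + H^{2^k}) telescope into the Neumann series Σ_{j<2^K} H^j, which converges to (I - H)^{-1} when the spectral radius of H is below 1.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative symmetric operator scaled to spectral radius 0.9.
A = rng.random((50, 50))
S = A + A.T
H = 0.9 * S / np.abs(np.linalg.eigvalsh(S)).max()
f = rng.random(50)

# (I - H)^{-1} f = prod_{k>=0} (I + H^{2^k}) f, via repeated squaring of H.
g, Hp = f.copy(), H.copy()
for _ in range(20):        # accumulates 2^20 terms of the Neumann series
    g = g + Hp @ g
    Hp = Hp @ Hp           # Hp is now the next dyadic power of H

exact = np.linalg.solve(np.eye(50) - H, f)
assert np.allclose(g, exact)
```

Twenty squarings stand in for over a million matrix-vector products of the naive expansion; the diffusion wavelet tree additionally keeps each dyadic power compressed, which dense NumPy does not.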
As a notable consequence, this yields a fast algorithm for computing the Green's function, or fundamental matrix, associated with the Markov process H, via

(I - H^1)^{-1} f = Σ_{k≥0} H^k f = Π_{k≥0} (I + H^{2^k}) f .

In a similar way one can compute (I - P)^{-1}. For large classes of Markov chains we can perform this computation in time O(n), in a direct (as opposed to iterative) fashion. This is remarkable since in general the matrix (I - H^1)^{-1} is full, and merely writing down its entries would take time O(n^2). It is the multiscale compression scheme that allows us to efficiently represent (I - H^1)^{-1} in compressed form, taking advantage of the smoothness of the entries of the matrix. This is discussed in general in [4]. We use this approach to develop a faster policy evaluation step for solving MDPs, described in [6].

5 Experiments

Figure 2 contrasts Laplacian eigenfunctions and diffusion wavelet basis functions in a three-room grid world environment. Laplacian eigenfunctions were produced by solving Lf = λf, where L is the combinatorial Laplacian, whereas diffusion wavelet basis functions were produced using the algorithm described in Figure 1. The input to both methods is an undirected graph, where edges connect states reachable through a single (reversible) action. Such graphs can be easily learned from a sample of transitions, such as that generated by RL agents while exploring the environment in early phases of policy learning. Note how the intrinsic multi-room structure of the environment is reflected in the Laplacian eigenfunctions. The Laplacian eigenfunctions are globally defined over the entire state space, whereas diffusion wavelet basis functions are progressively more compact at higher levels, beginning at the lowest level with the table-lookup representation, and converging at the highest level to basis functions similar to Laplacian eigenfunctions. Figure 3 compares the approximations produced in a two-room grid world MDP with 630 states.
These experiments illustrate the superiority of diffusion wavelets: in the first experiment (top row), diffusion wavelets handily outperform Laplacian eigenfunctions because the function is highly nonlinear near the goal, but mostly linear elsewhere. The eigenfunctions contain a lot of ripples in the flat region, causing a large residual error. In the second experiment (bottom row), Laplacian eigenfunctions work significantly better because the value function is globally smooth. Even here, the superiority of diffusion wavelets is clear.

Figure 2: Examples of Laplacian eigenfunctions (left) and diffusion wavelet basis functions (right) computed using the graph Laplacian on a complete undirected graph of a deterministic grid world environment with reversible actions.

Figure 3: Left column: value functions in a two-room grid world MDP, where each room has 21 × 15 states connected by a door in the middle of the common wall. Middle two columns: approximations produced by 5 diffusion wavelet bases and Laplacian eigenfunctions. Right column: least-squares approximation error (log scale) using up to 200 basis functions (bottom curve: diffusion wavelets; top curve: Laplacian eigenfunctions). In the top row, the value function corresponds to a random walk. In the bottom row, the value function corresponds to the optimal policy.

5.1 Control Learning using Representation Policy Iteration

This section describes results of using the automatically generated basis functions inside a control learning algorithm, in particular the Representation Policy Iteration (RPI) algorithm [8].
RPI is an approximate policy iteration algorithm in which the basis functions φ(s, a), handcoded in other methods such as LSPI [5], are learned from a random walk of transitions by computing the graph Laplacian and then computing the eigenfunctions or the diffusion wavelet bases as described above. One striking property of the eigenfunction and diffusion wavelet basis functions is their ability to reflect nonlinearities arising from "bottlenecks" in the state space. Figure 4 contrasts the value function approximation produced by RPI using Laplacian eigenfunctions with that produced by a polynomial approximator. The polynomial approximator yields a value function that is "blind" to the nonlinearities produced by the walls in the two-room grid world MDP.

Figure 4: This figure compares the value functions produced by RPI using Laplacian eigenfunctions with that produced by LSPI using a polynomial approximator in a two-room grid world MDP with a "bottleneck" region representing the door connecting the two rooms. The Laplacian basis functions on the left clearly capture the nonlinearity arising from the bottleneck, whereas the polynomial approximator on the right smooths the value function across the walls, as it is "blind" to the large-scale geometry of the environment.

Table 1 compares the performance of diffusion wavelets and Laplacian eigenfunctions using RPI on the classic chain MDP from [5]. Here, an initial random walk of 5000 steps was carried out to generate the basis functions in a 50-state chain. The chain MDP is a sequential open (or closed) chain of varying number of states, where there are two actions for moving left or right along the chain. In the experiments shown, a reward of 1 was provided in states 10 and 41. Given a fixed k, the encoding φ(s) of a state s for Laplacian eigenfunctions is the vector comprised of the values of the k lowest-order eigenfunctions at state s.
For diffusion wavelets, all the basis functions at level k were evaluated at state s to produce the encoding.

Method         #Trials   Error          Method            #Trials   Error
RPI DF (5)     4.4       2.4            LSPI RBF (6)      3.8       20.8
RPI DF (14)    6.8       4.8            LSPI RBF (14)     4.4       2.8
RPI DF (19)    8.2       0.6            LSPI RBF (26)     6.4       2.8
RPI Lap (5)    4.2       3.8            LSPI Poly (5)     4.2       4
RPI Lap (15)   7.2       3              LSPI Poly (15)    1         34.4
RPI Lap (25)   9.4       2              LSPI Poly (25)    1         36

Table 1: This table compares the performance of RPI using diffusion wavelets and Laplacian eigenfunctions with LSPI using handcoded polynomial and radial basis functions on a 50 state chain graph MDP. Each row reflects the performance of either RPI using learned basis functions or LSPI with a handcoded basis function (values in parentheses indicate the number of basis functions used for each architecture). The two numbers reported are steps to convergence and the error in the learned policy (number of incorrect actions), averaged over 5 runs.

Laplacian and diffusion wavelet basis functions provide more stable performance at both the low end and the high end, as compared to the handcoded basis functions. As the number of basis functions is increased, RPI with Laplacian basis functions takes longer to converge, but learns a more accurate policy. Diffusion wavelets converge more slowly as the number of basis functions is increased, giving the best results overall with 19 basis functions. Unlike Laplacian eigenfunctions, the policy error is not monotonically decreasing as the number of basis functions is increased; this result is being investigated. LSPI with RBFs is unstable at the low end, converging to a very poor policy for 6 basis functions. LSPI with a degree-5 polynomial approximator works reasonably well, but its performance noticeably degrades at higher degrees, converging to a very poor policy in one step for k = 15 and k = 25.
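As a concrete illustration of the encoding φ(s) for the chain MDP, the sketch below builds the combinatorial graph Laplacian L = D − W of an open chain and takes its k lowest-order eigenvectors as basis functions. This is our own minimal reconstruction under stated assumptions (combinatorial rather than normalized Laplacian, exact graph rather than one estimated from random-walk samples), not the authors' code:

```python
import numpy as np

def chain_laplacian_basis(n_states, k):
    """Return phi, an (n_states, k) matrix whose columns are the k
    lowest-order eigenvectors of the combinatorial graph Laplacian
    L = D - W of an open chain graph. Row s is the encoding phi(s)."""
    W = np.zeros((n_states, n_states))
    for s in range(n_states - 1):          # undirected edges between neighbors
        W[s, s + 1] = W[s + 1, s] = 1.0
    L = np.diag(W.sum(axis=1)) - W         # combinatorial Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    return eigvecs[:, :k]                  # smoothest k eigenfunctions

phi = chain_laplacian_basis(50, 5)         # 50-state chain, 5 basis functions
```

The first column is the constant eigenfunction (eigenvalue 0); higher columns oscillate at increasing spatial frequencies along the chain.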
6 Future Work

We are exploring many extensions of this framework, including extensions to factored MDPs, approximation of action value functions, and handling of large state spaces by exploiting symmetries defined by a group of automorphisms of the graph. These enhancements will facilitate efficient construction of eigenfunctions and diffusion wavelets. For large state spaces, one can randomly subsample the graph, construct the eigenfunctions of the Laplacian or the diffusion wavelets on the subgraph, and then interpolate these functions using the Nyström approximation and related low-rank linear algebraic methods. In experiments on the classic inverted pendulum control task, the Nyström approximation yielded excellent results compared to radial basis functions, learning a more stable policy with a smaller number of samples.

Acknowledgements

This research was supported in part by a grant from the National Science Foundation, IIS-0534999.

References

[1] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, Massachusetts, 1996.
[2] J. Bremer, R. Coifman, M. Maggioni, and A. Szlam. Diffusion wavelet packets. Technical Report YALE/DCS/TR-1304, Yale University, 2004. To appear in Appl. Comp. Harm. Anal.
[3] F. Chung. Spectral Graph Theory. American Mathematical Society, 1997.
[4] R. Coifman and M. Maggioni. Diffusion wavelets. Technical Report YALE/DCS/TR-1303, Yale University, 2004. To appear in Appl. Comp. Harm. Anal.
[5] M. Lagoudakis and R. Parr. Least-squares policy iteration. Journal of Machine Learning Research, 4:1107-1149, 2003.
[6] M. Maggioni and S. Mahadevan. Fast direct policy evaluation using multiscale Markov diffusion processes. Technical Report TR-2005-39, University of Massachusetts, 2005.
[7] S. Mahadevan. Proto-value functions: Developmental reinforcement learning. In Proceedings of the 22nd International Conference on Machine Learning, 2005.
[8] S. Mahadevan.
Representation policy iteration. In Proceedings of the 21st International Conference on Uncertainty in Artificial Intelligence, 2005.
[9] S. Mahadevan. Samuel meets Amarel: Automating value function approximation using global state space analysis. In National Conference on Artificial Intelligence (AAAI), 2005.
[10] M. L. Puterman. Markov Decision Processes. Wiley Interscience, New York, USA, 1994.
[11] S. Rosenberg. The Laplacian on a Riemannian Manifold. Cambridge University Press, 1997.
| 2005 | 31 | 2,846 |
A Bayesian Spatial Scan Statistic

Daniel B. Neill, Andrew W. Moore
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
{neill,awm}@cs.cmu.edu

Gregory F. Cooper
Center for Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA 15213
gfc@cbmi.pitt.edu

Abstract

We propose a new Bayesian method for spatial cluster detection, the “Bayesian spatial scan statistic,” and compare this method to the standard (frequentist) scan statistic approach. We demonstrate that the Bayesian statistic has several advantages over the frequentist approach, including increased power to detect clusters and (since randomization testing is unnecessary) much faster runtime. We evaluate the Bayesian and frequentist methods on the task of prospective disease surveillance: detecting spatial clusters of disease cases resulting from emerging disease outbreaks. We demonstrate that our Bayesian methods are successful in rapidly detecting outbreaks while keeping the number of false positives low.

1 Introduction

Here we focus on the task of spatial cluster detection: finding spatial regions where some quantity is significantly higher than expected. For example, our goal may be to detect clusters of disease cases, which may be indicative of a naturally occurring epidemic (e.g. influenza), a bioterrorist attack (e.g. anthrax release), or an environmental hazard (e.g. radiation leak). [1] discusses many other applications of cluster detection, including mining astronomical data, medical imaging, and military surveillance. In all of these applications, we have two main goals: to identify the locations, shapes, and sizes of potential clusters, and to determine whether each potential cluster is more likely to be a “true” cluster or simply a chance occurrence. Thus we compare the null hypothesis H_0 of no clusters against some set of alternative hypotheses H_1(S), each representing a cluster in some region or regions S.
In the standard frequentist setting, we do this by significance testing, computing the p-values of potential clusters by randomization; here we propose a Bayesian framework, in which we compute posterior probabilities of each potential cluster. Our primary motivating application is prospective disease surveillance: detecting spatial clusters of disease cases resulting from a disease outbreak. In this application, we perform surveillance on a daily basis, with the goal of finding emerging epidemics as quickly as possible. For this task, we are given the number of cases of some given syndrome type (e.g. respiratory) in each spatial location (e.g. zip code) on each day. More precisely, we typically cannot measure the actual number of cases, and instead rely on related observable quantities such as the number of Emergency Department visits or over-the-counter drug sales. We must then detect those increases which are indicative of emerging outbreaks, as close to the start of the outbreak as possible, while keeping the number of false positives low. In biosurveillance of disease, every hour of earlier detection can translate into thousands of lives saved by more timely administration of antibiotics, and this has led to widespread interest in systems for the rapid and automatic detection of outbreaks. In this spatial surveillance setting, each day we have data collected for a set of discrete spatial locations s_i. For each location s_i, we have a count c_i (e.g. number of disease cases), and an underlying baseline b_i. The baseline may correspond to the underlying population at risk, or may be an estimate of the expected value of the count (e.g. derived from the time series of previous count data). Our goal, then, is to find if there is any spatial region S (set of locations s_i) for which the counts are significantly higher than expected, given the baselines.
For simplicity, we assume here (as in [2]) that the locations s_i are aggregated to a uniform, two-dimensional N × N grid G, and we search over the set of rectangular regions S ⊆ G. This allows us to search both compact and elongated regions, allowing detection of elongated disease clusters resulting from dispersal of pathogens by wind or water.

1.1 The frequentist scan statistic

One of the most important statistical tools for cluster detection is Kulldorff's spatial scan statistic [3-4]. This method searches over a given set of spatial regions, finding those regions which maximize a likelihood ratio statistic and thus are most likely to be generated under the alternative hypothesis of clustering rather than the null hypothesis of no clustering. Randomization testing is used to compute the p-value of each detected region, correctly adjusting for multiple hypothesis testing, and thus we can both identify potential clusters and determine whether they are significant. Kulldorff's framework assumes that counts c_i are Poisson distributed, c_i ∼ Po(q b_i), where b_i represents the (known) census population of cell s_i and q is the (unknown) underlying disease rate. The goal of the scan statistic is then to find regions where the disease rate is higher inside the region than outside. The statistic used for this is the likelihood ratio

F(S) = \frac{P(Data \mid H_1(S))}{P(Data \mid H_0)},

where the null hypothesis H_0 assumes a uniform disease rate q = q_all. Under H_1(S), we assume that q = q_in for all s_i ∈ S, and q = q_out for all s_i ∈ G − S, for some constants q_in > q_out. From this, we can derive an expression for F(S) using maximum likelihood estimates of q_in, q_out, and q_all:

F(S) = \left(\frac{C_{in}}{B_{in}}\right)^{C_{in}} \left(\frac{C_{out}}{B_{out}}\right)^{C_{out}} \left(\frac{C_{all}}{B_{all}}\right)^{-C_{all}} \quad \text{if } \frac{C_{in}}{B_{in}} > \frac{C_{out}}{B_{out}},

and F(S) = 1 otherwise. In this expression, we have C_in = Σ_S c_i, C_out = Σ_{G−S} c_i, and C_all = Σ_G c_i, and similarly for the baselines B_in = Σ_S b_i, B_out = Σ_{G−S} b_i, and B_all = Σ_G b_i.
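Kulldorff's likelihood ratio can be evaluated directly from the four aggregates defined above. The sketch below is a minimal illustration of the scoring formula only, not the full region search; the helper name and the log-space formulation (for numerical stability with large counts) are our own:

```python
from math import log

def log_F(C_in, B_in, C_all, B_all):
    """Log of Kulldorff's likelihood ratio F(S) for a region with
    aggregate count C_in and baseline B_in, inside a grid with totals
    C_all and B_all. Returns 0.0 (i.e. F = 1) unless the rate inside
    the region exceeds the rate outside."""
    C_out, B_out = C_all - C_in, B_all - B_in
    if C_in * B_out <= C_out * B_in:        # C_in/B_in <= C_out/B_out
        return 0.0
    def xlogy(x, y):                        # convention: 0 * log(0) = 0
        return x * log(y) if x > 0 else 0.0
    return (xlogy(C_in, C_in / B_in) + xlogy(C_out, C_out / B_out)
            - xlogy(C_all, C_all / B_all))
```

A full scan would call this for every rectangular region S and keep the maximizer S* = argmax_S F(S).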
Once we have found the highest scoring region S* = argmax_S F(S) of grid G, and its score F* = F(S*), we must still determine the statistical significance of this region by randomization testing. To do so, we randomly create a large number R of replica grids by sampling under the null hypothesis c_i ∼ Po(q_all b_i), and find the highest scoring region and its score for each replica grid. Then the p-value of S* is (R_beat + 1)/(R + 1), where R_beat is the number of replica grids G′ whose maximum score exceeds the F* of the original grid. If this p-value is less than some threshold (e.g. 0.05), we can conclude that the discovered region is unlikely to have occurred by chance, and is thus a significant spatial cluster; otherwise, no significant clusters exist. The frequentist scan statistic is a useful tool for cluster detection, and is commonly used in the public health community for detection of disease outbreaks. However, there are three main disadvantages to this approach. First, it is difficult to make use of any prior information that we may have, for example, our prior beliefs about the size of a potential outbreak and its impact on disease rate. Second, the accuracy of this technique is highly dependent on the correctness of our maximum likelihood parameter estimates. As a result, the model is prone to parameter overfitting, and may lose detection power in practice because of model misspecification. Finally, the frequentist scan statistic is very time consuming, and may be computationally infeasible for large datasets. A naive approach requires searching over all rectangular regions, both for the original grid and for each replica grid. Since there are O(N^4) rectangles to search for an N × N grid, the total computation time is O(R N^4), where R = 1000 is a typical number of replications.
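The randomization test can be sketched as follows. Everything here is illustrative rather than the authors' implementation: `score_fn` stands in for the maximum of F(S) over all regions, and a naive Knuth-style Poisson sampler replaces whatever sampler the real system uses:

```python
import math
import random

def randomization_pvalue(score_fn, counts, baselines, q_all, R=1000, seed=0):
    """Estimate the p-value of the observed top score F* by sampling R
    replica grids under H0 (c_i ~ Poisson(q_all * b_i)) and counting how
    many replicas beat the original: p = (R_beat + 1) / (R + 1)."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's simple Poisson sampler; adequate for small lam.
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    f_star = score_fn(counts, baselines)
    r_beat = sum(
        score_fn([poisson(q_all * b) for b in baselines], baselines) > f_star
        for _ in range(R))
    return (r_beat + 1) / (R + 1)
```

The R factor in the runtime is visible here directly: every replica repeats the full region search that produced `f_star`.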
In past work [5, 2, 6], we have shown how to reduce this computation time by a factor of 20-2000x through use of the “fast spatial scan” algorithm; nevertheless, we must still perform this faster search both for the original grid and for each replica. We propose to remedy these problems through the use of a Bayesian spatial scan statistic. First, our Bayesian model makes use of prior information about the likelihood, size, and impact of an outbreak. If these priors are chosen well, we should achieve better detection power than the frequentist approach. Second, the Bayesian method uses a marginal likelihood approach, averaging over possible values of the model parameters q_in, q_out, and q_all, rather than relying on maximum likelihood estimates of these parameters. This makes the model more flexible and less prone to overfitting, and reduces the potential impact of model misspecification. Finally, under the Bayesian model there is no need for randomization testing, and (since we need only to search the original grid) even a naive search can be performed relatively quickly. We now present the Bayesian spatial scan statistic, and then compare it to the frequentist approach on the task of detecting simulated disease epidemics.

2 The Bayesian scan statistic

Here we consider the natural Bayesian extension of Kulldorff’s scan statistic, moving from a Poisson to a conjugate Gamma-Poisson model. Bayesian Gamma-Poisson models are a common representation for count data in epidemiology, and have been used in disease mapping by Clayton and Kaldor [7], Mollié [8], and others. In disease mapping, the effect of the Gamma prior is to produce a spatially smoothed map of disease rates; here we instead focus on computing the posterior probabilities, allowing us to determine the likelihood that an outbreak has occurred, and to estimate the location and size of potential outbreaks.
For the Bayesian spatial scan, as in the frequentist approach, we wish to compare the null hypothesis H_0 of no clusters to the set of alternative hypotheses H_1(S), each representing a cluster in some region S. As before, we assume Poisson likelihoods, c_i ∼ Po(q b_i). The difference is that we assume a hierarchical Bayesian model in which the disease rates q_in, q_out, and q_all are themselves drawn from Gamma distributions. Thus, under the null hypothesis H_0, we have q = q_all for all s_i ∈ G, where q_all ∼ Ga(α_all, β_all). Under the alternative hypothesis H_1(S), we have q = q_in for all s_i ∈ S and q = q_out for all s_i ∈ G − S, where we independently draw q_in ∼ Ga(α_in, β_in) and q_out ∼ Ga(α_out, β_out). We discuss how the α and β priors are chosen below. From this model, we can compute the posterior probability P(H_1(S) | D) of an outbreak in each region S, and the probability P(H_0 | D) that no outbreak has occurred, given dataset D:

P(H_0 \mid D) = \frac{P(D \mid H_0)\, P(H_0)}{P(D)}, \qquad P(H_1(S) \mid D) = \frac{P(D \mid H_1(S))\, P(H_1(S))}{P(D)},

where P(D) = P(D \mid H_0)\, P(H_0) + \sum_S P(D \mid H_1(S))\, P(H_1(S)). We discuss the choice of prior probabilities P(H_0) and P(H_1(S)) below. To compute the marginal likelihood of the data given each hypothesis, we must integrate over all possible values of the parameters (q_in, q_out, q_all) weighted by their respective probabilities. Since we have chosen a conjugate prior, we can easily obtain a closed-form solution for these likelihoods:

P(D \mid H_0) = \int P(q_{all} \sim Ga(\alpha_{all}, \beta_{all})) \prod_{s_i \in G} P(c_i \sim Po(q_{all} b_i)) \, dq_{all}

P(D \mid H_1(S)) = \int P(q_{in} \sim Ga(\alpha_{in}, \beta_{in})) \prod_{s_i \in S} P(c_i \sim Po(q_{in} b_i)) \, dq_{in} \times \int P(q_{out} \sim Ga(\alpha_{out}, \beta_{out})) \prod_{s_i \in G-S} P(c_i \sim Po(q_{out} b_i)) \, dq_{out}

Now, computing the integral, and letting C = \sum c_i and B = \sum b_i, we obtain:

\int P(q \sim Ga(\alpha, \beta)) \prod_{s_i} P(c_i \sim Po(q b_i)) \, dq = \int \frac{\beta^{\alpha}}{\Gamma(\alpha)} q^{\alpha-1} e^{-\beta q} \prod_{s_i} \frac{(q b_i)^{c_i} e^{-q b_i}}{c_i!} \, dq \propto \frac{\beta^{\alpha}}{\Gamma(\alpha)} \int q^{\alpha-1} e^{-\beta q} \, q^{\sum c_i} e^{-q \sum b_i} \, dq = \frac{\beta^{\alpha}}{\Gamma(\alpha)} \int q^{\alpha+C-1} e^{-(\beta+B) q} \, dq = \frac{\beta^{\alpha} \, \Gamma(\alpha+C)}{(\beta+B)^{\alpha+C} \, \Gamma(\alpha)}

Thus we have the following expressions for the marginal likelihoods:

P(D \mid H_0) \propto \frac{(\beta_{all})^{\alpha_{all}} \, \Gamma(\alpha_{all}+C_{all})}{(\beta_{all}+B_{all})^{\alpha_{all}+C_{all}} \, \Gamma(\alpha_{all})}, \qquad P(D \mid H_1(S)) \propto \frac{(\beta_{in})^{\alpha_{in}} \, \Gamma(\alpha_{in}+C_{in})}{(\beta_{in}+B_{in})^{\alpha_{in}+C_{in}} \, \Gamma(\alpha_{in})} \times \frac{(\beta_{out})^{\alpha_{out}} \, \Gamma(\alpha_{out}+C_{out})}{(\beta_{out}+B_{out})^{\alpha_{out}+C_{out}} \, \Gamma(\alpha_{out})}.

The Bayesian spatial scan statistic can be computed simply by first calculating the score P(D | H_1(S)) P(H_1(S)) for each spatial region S, maintaining a list of regions ordered by score. We then calculate P(D | H_0) P(H_0), and add this to the sum of all region scores, obtaining the probability of the data P(D). Finally, we can compute the posterior probability P(H_1(S) | D) = P(D | H_1(S)) P(H_1(S)) / P(D) for each region, as well as P(H_0 | D) = P(D | H_0) P(H_0) / P(D). Then we can return all regions with non-negligible posterior probabilities, the posterior probability of each, and the overall probability of an outbreak. Note that no randomization testing is necessary, and thus the overall complexity is proportional to the number of regions searched, e.g. O(N^4) for searching over axis-aligned rectangles in an N × N grid.

2.1 Choosing priors

One of the most challenging tasks in any Bayesian analysis is the choice of priors. For any region S that we examine, we must have values of the parameter priors α_in(S), β_in(S), α_out(S), and β_out(S), as well as the region prior probability P(H_1(S)). We must also choose the global parameter priors α_all and β_all, as well as the “no outbreak” prior P(H_0). Here we consider the simple case of a uniform region prior, with a known prior probability of an outbreak P_1. In other words, if there is an outbreak, it is assumed to be equally likely to occur in any spatial region. Thus we have P(H_0) = 1 − P_1 and P(H_1(S)) = P_1 / N_reg, where N_reg is the total number of regions searched.
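Since the marginal likelihoods are closed-form, they are most safely evaluated in log space with the log-Gamma function. A minimal sketch (the helper names are ours, and constants in the data are dropped, as in the proportionality above):

```python
from math import lgamma, log

def log_marginal(alpha, beta, C, B):
    """log of beta^alpha * Gamma(alpha + C) / ((beta + B)^(alpha + C) * Gamma(alpha)),
    the Gamma-Poisson marginal likelihood up to a constant in the data."""
    return (alpha * log(beta) + lgamma(alpha + C)
            - (alpha + C) * log(beta + B) - lgamma(alpha))

def log_score_H1(priors_in, priors_out, C_in, B_in, C_out, B_out):
    """log P(D | H1(S)) up to the same data-dependent constant:
    product of the 'in' and 'out' Gamma-Poisson marginals.
    priors_in and priors_out are (alpha, beta) tuples."""
    return (log_marginal(*priors_in, C_in, B_in)
            + log_marginal(*priors_out, C_out, B_out))
```

In a full scan, these log scores would be exponentiated (after subtracting their maximum, for stability) and normalized by P(D) to yield the region posteriors.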
The parameter P_1 can be obtained from historical data, estimated by human experts, or simply used to tune the sensitivity and specificity of the algorithm. The model can also be easily adapted to a non-uniform region prior, taking into account our prior beliefs about the size and shape of outbreaks. For the parameter priors, we assume that we have access to a large number of days of past data, during which no outbreaks are known to have occurred. We can then obtain estimated values of the parameter priors under the null hypothesis by matching the moments of each Gamma distribution to their historical values. In other words, we set the expectation and variance of the Gamma distribution Ga(α_all, β_all) to the sample expectation and variance of C_all/B_all observed in past data:

\frac{\alpha_{all}}{\beta_{all}} = E_{sample}\!\left[\frac{C_{all}}{B_{all}}\right], \qquad \frac{\alpha_{all}}{\beta_{all}^2} = Var_{sample}\!\left[\frac{C_{all}}{B_{all}}\right].

Solving for α_all and β_all, we obtain:

\alpha_{all} = \frac{E_{sample}\!\left[\frac{C_{all}}{B_{all}}\right]^2}{Var_{sample}\!\left[\frac{C_{all}}{B_{all}}\right]}, \qquad \beta_{all} = \frac{E_{sample}\!\left[\frac{C_{all}}{B_{all}}\right]}{Var_{sample}\!\left[\frac{C_{all}}{B_{all}}\right]}.

The calculation of priors α_in(S), β_in(S), α_out(S), and β_out(S) is identical except for two differences: first, we must condition on the region S, and second, we must assume the alternative hypothesis H_1(S) rather than the null hypothesis H_0. Repeating the above derivation for the “out” parameters, we obtain:

\alpha_{out}(S) = \frac{E_{sample}\!\left[\frac{C_{out}(S)}{B_{out}(S)}\right]^2}{Var_{sample}\!\left[\frac{C_{out}(S)}{B_{out}(S)}\right]}, \qquad \beta_{out}(S) = \frac{E_{sample}\!\left[\frac{C_{out}(S)}{B_{out}(S)}\right]}{Var_{sample}\!\left[\frac{C_{out}(S)}{B_{out}(S)}\right]},

where C_out(S) and B_out(S) are respectively the total count Σ_{G−S} c_i and total baseline Σ_{G−S} b_i outside the region. Note that an outbreak in some region S does not affect the disease rate outside region S. Thus we can use the same values of α_out(S) and β_out(S) whether we are assuming the null hypothesis H_0 or the alternative hypothesis H_1(S).
On the other hand, the effect of an outbreak inside region S must be taken into account when computing α_in(S) and β_in(S); since we assume that no outbreak has occurred in the past data, we cannot just use the sample mean and variance, but must consider what we expect these quantities to be in the event of an outbreak. We assume that the outbreak will increase q_in by a multiplicative factor m, thus multiplying the mean and variance of C_in/B_in by m. To account for this in the Gamma distribution Ga(α_in, β_in), we multiply α_in by m while leaving β_in unchanged. Thus we have:

\alpha_{in}(S) = m \, \frac{E_{sample}\!\left[\frac{C_{in}(S)}{B_{in}(S)}\right]^2}{Var_{sample}\!\left[\frac{C_{in}(S)}{B_{in}(S)}\right]}, \qquad \beta_{in}(S) = \frac{E_{sample}\!\left[\frac{C_{in}(S)}{B_{in}(S)}\right]}{Var_{sample}\!\left[\frac{C_{in}(S)}{B_{in}(S)}\right]},

where C_in(S) = Σ_S c_i and B_in(S) = Σ_S b_i. Since we typically do not know the exact value of m, here we use a discretized uniform distribution for m, ranging from m = 1 to m = 3 at intervals of 0.2. Scores can then be calculated by averaging likelihoods over the distribution of m. Finally, we consider how to deal with the case where the past values of the counts and baselines are not given. In this “blind Bayesian” (BBayes) case, we assume that counts are randomly generated under the null hypothesis c_i ∼ Po(q_0 b_i), where q_0 is the expected ratio of count to baseline under the null (for example, q_0 = 1 if baselines are obtained by estimating the expected value of the count). Under this simple assumption, we can easily compute the expectation and variance of the ratio of count to baseline under the null hypothesis:

E\!\left[\frac{C}{B}\right] = \frac{E[Po(q_0 B)]}{B} = \frac{q_0 B}{B} = q_0, \qquad Var\!\left[\frac{C}{B}\right] = \frac{Var[Po(q_0 B)]}{B^2} = \frac{q_0 B}{B^2} = \frac{q_0}{B}.

Thus we have α = q_0 B and β = B under the null hypothesis. This gives us α_all = q_0 B_all, β_all = B_all, α_out(S) = q_0 B_out(S), β_out(S) = B_out(S), α_in(S) = m q_0 B_in(S), and β_in(S) = B_in(S). We can use a uniform distribution for m as before. In our empirical evaluation below, we consider both the Bayes and BBayes methods of generating parameter priors.
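Both moment-matching recipes above reduce to two lines each: given a target mean and variance of C/B, set α = mean²/var and β = mean/var, with the BBayes case plugging in mean q_0 and variance q_0/B. A sketch under those assumptions (the function names are our own):

```python
def gamma_moment_match(mean, var):
    """Return (alpha, beta) of the Gamma distribution with the given
    mean and variance: mean = alpha/beta, var = alpha/beta^2."""
    return mean * mean / var, mean / var

def bbayes_priors(q0, B, m=1.0):
    """'Blind Bayes' priors: under H0 the ratio C/B has mean q0 and
    variance q0/B, giving alpha = q0*B and beta = B; an outbreak
    multiplies alpha by the assumed multiplicative factor m."""
    alpha, beta = gamma_moment_match(q0, q0 / B)
    return m * alpha, beta
```

In the Bayes (non-blind) case, the same `gamma_moment_match` would be called with the sample mean and variance of C/B from past data instead.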
3 Results: detection power

We evaluated the Bayesian and frequentist methods on two types of simulated respiratory outbreaks, injected into real Emergency Department and over-the-counter drug sales data for Allegheny County, Pennsylvania. All data were aggregated to the zip code level to ensure anonymity, giving the daily counts of respiratory ED cases and sales of OTC cough and cold medication in each of 88 zip codes for one year. The baseline (expected count) for each zip code was estimated using the mean count of the previous 28 days. Zip code centroids were mapped to a 16 × 16 grid, and all rectangles up to 8 × 8 were examined. We first considered simulated aerosol releases of inhalational anthrax (e.g. from a bioterrorist attack), generated by the Bayesian Aerosol Release Detector, or BARD [9]. The BARD simulator uses a Bayesian network model to determine the number of spores inhaled by individuals in affected areas, the resulting number and severity of anthrax cases, and the resulting number of respiratory ED cases on each day of the outbreak in each affected zip code. Our second type of outbreak was a simulated “Fictional Linear Onset Outbreak” (or “FLOO”), as in [10]. A FLOO(∆, T) outbreak is a simple simulated outbreak with duration T, which generates t∆ cases in each affected zip code on day t of the outbreak (0 < t ≤ T/2), then generates T∆/2 cases per day for the remainder of the outbreak. Thus we have an outbreak where the number of cases ramps up linearly and then levels off. While this is clearly a less realistic outbreak than the BARD-simulated anthrax attack, it does have several advantages: most importantly, it allows us to precisely control the slope of the outbreak curve and examine how this affects our methods' detection ability.
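The FLOO outbreak curve just described is simple enough to state directly in code. The sketch below is our own helper (not part of the BARD or FLOO simulators), assuming integer ∆ and even T:

```python
def floo_cases(delta, T, t):
    """Cases injected into each affected zip code on day t (1-based) of a
    FLOO(delta, T) outbreak: t*delta during the linear ramp (t <= T/2),
    then a plateau of T*delta/2 for the rest of the outbreak; 0 outside
    the outbreak window [1, T]."""
    if t < 1 or t > T:
        return 0
    return t * delta if t <= T // 2 else T * delta // 2
```

For example, FLOO(2, 20) ramps from 2 cases on day 1 up to 20 cases on day 10, then stays at 20 cases per day through day 20.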
To test detection power, a semi-synthetic testing framework similar to [10] was used: we first run our spatial scan statistic for each day of the last nine months of the year (the first three months are used only to estimate baselines and priors), and obtain the score F∗ for each day. Then for each outbreak we wish to test, we inject that outbreak into the data, and obtain the score F∗(t) for each day t of the outbreak. By finding the proportion of baseline days with scores higher than F∗(t), we can determine the proportion of false positives we would have to accept to detect the outbreak on day t. This allows us to compute, for any given level of false positives, what proportion of outbreaks can be detected, and the mean number of days to detection. We compare three methods of computing the score F∗: the frequentist method (F∗ is the maximum likelihood ratio F(S) over all regions S), the Bayesian maximum method (F∗ is the maximum posterior probability P(H_1(S) | D) over all regions S), and the Bayesian total method (F∗ is the sum of posterior probabilities P(H_1(S) | D) over all regions S, i.e. the total posterior probability of an outbreak). For the two Bayesian methods, we consider both Bayes and BBayes methods for calculating priors, giving us a total of five methods to compare (frequentist, Bayes max, BBayes max, Bayes tot, BBayes tot).
In Table 1, we compare these methods with respect to the proportion of outbreaks detected and the mean number of days to detect, at a false positive rate of 1/month.

Table 1: Days to detect and proportion of outbreaks detected, 1 false positive/month

method        FLOO ED (4,14)  FLOO ED (2,20)  FLOO ED (1,20)  BARD ED (.125)  BARD ED (.016)  FLOO OTC (40,14)  FLOO OTC (25,20)
frequentist   1.859 (100%)    3.324 (100%)    6.122 (96%)     1.733 (100%)    3.925 (88%)     3.582 (100%)      5.393 (100%)
Bayes max     1.740 (100%)    2.875 (100%)    5.043 (100%)    1.600 (100%)    3.755 (88%)     5.455 (63%)       7.588 (79%)
BBayes max    1.683 (100%)    2.848 (100%)    4.984 (100%)    1.600 (100%)    3.698 (88%)     5.164 (65%)       7.035 (77%)
Bayes tot     1.882 (100%)    3.195 (100%)    5.777 (100%)    1.633 (100%)    3.811 (88%)     3.475 (100%)      5.195 (100%)
BBayes tot    1.840 (100%)    3.180 (100%)    5.672 (100%)    1.617 (100%)    3.792 (88%)     4.380 (100%)      6.929 (99%)

Methods were evaluated on seven types of simulated outbreaks: three FLOO outbreaks on ED data, two FLOO outbreaks on OTC data, and two BARD outbreaks (with different amounts of anthrax release) on ED data. For each outbreak type, each method's performance was averaged over 100 simulated outbreaks for BARD, or 250 for FLOO. In Table 1, we observe very different results for the ED and OTC datasets. For the five runs on ED data, all four Bayesian methods consistently detected outbreaks faster than the frequentist method. This difference was most evident for the more slowly growing (harder to detect) outbreaks, especially FLOO(1,20). Across all ED outbreaks, the Bayesian methods showed an average improvement of between 0.13 days (Bayes tot) and 0.43 days (BBayes max) as compared to the frequentist approach; “max” methods performed substantially better than “tot” methods, and “BBayes” methods performed slightly better than “Bayes” methods. For the two runs on OTC data, on the other hand, most of the Bayesian methods performed much worse (over 1 day slower) than the frequentist method.
The exception was the Bayes tot method, which again outperformed the frequentist method by an average of 0.15 days. We believe that the main reason for these differing results is that the OTC data is much noisier than the ED data, and exhibits much stronger seasonal trends. As a result, our baseline estimates (using mean of the previous 28 days) are reasonably accurate for ED, but for OTC the baseline estimates will lag behind the seasonal trends (and thus, underestimate the expected counts for increasing trends and overestimate for decreasing trends). The BBayes methods, which assume E[C/B] = 1 and thus rely heavily on the accuracy of baseline estimates, are not reasonable for OTC. On the other hand, the Bayes methods (which instead learn the priors from previous counts and baselines) can adjust for consistent misestimation of baselines and thus more accurately account for these seasonal trends. The “max” methods perform badly on the OTC data because a large number of baseline days have posterior probabilities close to 1; in this case, the maximum region posterior varies wildly from day to day, depending on how much of the total probability is assigned to a single region, and is not a reliable measure of whether an outbreak has occurred. The total posterior probability of an outbreak, on the other hand, will still be higher for outbreak than non-outbreak days, so the “tot” methods can perform well on OTC as well as ED data. Thus, our main result is that the Bayes tot method, which infers baselines from past counts and uses total posterior probability of an outbreak to decide when to sound the alarm, consistently outperforms the frequentist method for both ED and OTC datasets. 
4 Results: computation time

As noted above, the Bayesian spatial scan must search over all rectangular regions for the original grid only, while the frequentist scan (in order to calculate statistical significance by randomization) must also search over all rectangular regions for a large number (typically R = 1000) of replica grids. Thus, as long as the search time per region is comparable for the Bayesian and frequentist methods, we expect the Bayesian approach to be approximately 1000x faster. In Table 2, we compare the run times of the Bayes, BBayes, and frequentist methods for searching a single grid and calculating significance (p-values or posterior probabilities for the frequentist and Bayesian methods respectively), as a function of the grid size N.

Table 2: Comparison of run times for varying grid size N

method               N = 16   N = 32    N = 64    N = 128   N = 256
Bayes (naive)        0.7 sec  10.8 sec  2.8 min   44 min    12 hrs
BBayes (naive)       0.6 sec  9.3 sec   2.4 min   37 min    10 hrs
frequentist (naive)  12 min   2.9 hrs   49 hrs    ∼31 days  ∼500 days
frequentist (fast)   20 sec   1.8 min   10.7 min  77 min    10 hrs

All rectangles up to size N/2 were searched, and for the frequentist method R = 1000 replications were performed. The results confirm our intuition: the Bayesian methods are 900-1200x faster than the frequentist approach, for all values of N tested. However, the frequentist approach can be accelerated dramatically using our “fast spatial scan” algorithm [2], a multiresolution search method which can find the highest scoring region of a grid while searching only a small subset of regions. Comparing the fast spatial scan to the Bayesian approach, we see that the fast spatial scan is slower than the Bayesian method for grid sizes up to N = 128, but slightly faster for N = 256.
Thus we now have two options for making the spatial scan statistic computationally feasible for large grid sizes: to use the fast spatial scan to speed up the frequentist scan statistic, or to use the Bayesian scan statistics framework (in which case the naive algorithm is typically fast enough). For even larger grid sizes, it may be possible to extend the fast spatial scan to the Bayesian approach: this would give us the best of both worlds, searching only one grid, and using a fast algorithm to do so. We are currently investigating this potentially useful synthesis. 5 Discussion We have presented a Bayesian spatial scan statistic, and demonstrated several ways in which this method is preferable to the standard (frequentist) scan statistics approach. In Section 3, we demonstrated that the Bayesian method, with a relatively non-informative prior distribution, consistently outperforms the frequentist method with respect to detection power. Since the Bayesian framework allows us to easily incorporate prior information about size, shape, and impact of an outbreak, it is likely that we can achieve even better detection performance using more informative priors, e.g. obtained from experts in the domain. In Section 4, we demonstrated that the Bayesian spatial scan can be computed in much less time than the frequentist method, since randomization testing is unnecessary. This allows us to search large grid sizes using a naive search algorithm, and even larger grids might be searched by extending the fast spatial scan to the Bayesian framework. We now consider three other arguments for use of the Bayesian spatial scan. First, the Bayesian method has easily interpretable results: it outputs the posterior probability that an outbreak has occurred, and the distribution of this probability over possible outbreak regions. This makes it easy for a user (e.g. 
public health official) to decide whether to investigate each potential outbreak based on the costs of false positives and false negatives; this type of decision analysis cannot be done easily in the frequentist framework. Another useful result of the Bayesian method is that we can compute a “map” of the posterior probabilities of an outbreak in each grid cell, by summing the posterior probabilities P(H_1(S) | D) of all regions containing that cell. This technique allows us to deal with the case where the posterior probability mass is spread among many regions, by observing cells which are common to most or all of these regions. We give an example of such a map below.

Figure 1: Output of the Bayesian spatial scan on baseline OTC data, 1/30/05. Cell shading is based on the posterior probability of an outbreak in that cell, ranging from white (0%) to black (100%). The bold rectangle represents the most likely region (posterior probability 12.27%) and the darkest cell is the most likely cell (total posterior probability 86.57%). The total posterior probability of an outbreak is 86.61%.

Second, calibration of the Bayesian statistic is easier than calibration of the frequentist statistic. As noted above, it is simple to adjust the sensitivity and specificity of the Bayesian method by setting the prior probability of an outbreak P_1, and then we can “sound the alarm” whenever the posterior probability of an outbreak exceeds some threshold. In the frequentist method, on the other hand, many regions in the baseline data have sufficiently high likelihood ratios that no replicas beat the original grid; thus we cannot distinguish the p-values of outbreak and non-outbreak days.
While one alternative is to “sound the alarm” when the likelihood ratio is above some threshold (rather than when p-value is below some threshold), this is technically incorrect: because the baselines for each day of data are different, the distribution of region scores under the null hypothesis will also differ from day to day, and thus days with higher likelihood ratios do not necessarily have lower p-values. Third, we argue that it is easier to combine evidence from multiple detectors within the Bayesian framework, i.e. by modeling the joint probability distribution. We are in the process of examining Bayesian detectors which look simultaneously at the day’s Emergency Department records and over-the-counter drug sales in order to detect emerging clusters, and we believe that combination of detectors is an important area for future research. In conclusion, we note that, though both Bayesian modeling [7-8] and (frequentist) spatial scanning [3-4] are common in the spatial statistics literature, this is (to the best of our knowledge) the first model which combines the two techniques into a single framework. In fact, very little work exists on Bayesian methods for spatial cluster detection. One notable exception is the literature on spatial cluster modeling [11-12], which attempts to infer the location of cluster centers by inferring parameters of a Bayesian process model. Our work differs from these methods both in its computational tractability (their models typically have no closed form solution, so computationally expensive MCMC approximations are used) and its easy interpretability (their models give no indication as to statistical significance or posterior probability of clusters found). Thus we believe that this is the first Bayesian spatial cluster detection method which is powerful and useful, yet computationally tractable. 
We are currently running the Bayesian and frequentist scan statistics on daily OTC sales data from over 10000 stores, searching for emerging disease outbreaks on a daily basis nationwide. Additionally, we are working to extend the Bayesian statistic to fMRI data, with the goal of discovering regions of brain activity corresponding to given cognitive tasks [13, 6]. We believe that the Bayesian approach has the potential to improve both speed and detection power of the spatial scan in this domain as well.

References

[1] M. Kulldorff. 1999. Spatial scan statistics: models, calculations, and applications. In J. Glaz and M. Balakrishnan, eds., Scan Statistics and Applications, Birkhauser, 303-322.
[2] D. B. Neill and A. W. Moore. 2004. Rapid detection of significant spatial clusters. In Proc. 10th ACM SIGKDD Intl. Conf. on Knowledge Discovery and Data Mining, 256-265.
[3] M. Kulldorff and N. Nagarwalla. 1995. Spatial disease clusters: detection and inference. Statistics in Medicine 14, 799-810.
[4] M. Kulldorff. 1997. A spatial scan statistic. Communications in Statistics: Theory and Methods 26(6), 1481-1496.
[5] D. B. Neill and A. W. Moore. 2004. A fast multi-resolution method for detection of significant spatial disease clusters. In Advances in Neural Information Processing Systems 16, 651-658.
[6] D. B. Neill, A. W. Moore, F. Pereira, and T. Mitchell. 2005. Detecting significant multidimensional spatial clusters. In Advances in Neural Information Processing Systems 17, 969-976.
[7] D. G. Clayton and J. Kaldor. 1987. Empirical Bayes estimates of age-standardized relative risks for use in disease mapping. Biometrics 43, 671-681.
[8] A. Mollié. 1999. Bayesian and empirical Bayes approaches to disease mapping. In A. B. Lawson, et al., eds., Disease Mapping and Risk Assessment for Public Health. Wiley, Chichester.
[9] W. Hogan, G. Cooper, M. Wagner, and G. Wallstrom. 2004. A Bayesian anthrax aerosol release detector. Technical Report, RODS Laboratory, University of Pittsburgh.
[10] D. B. Neill, A. W. Moore, M. Sabhnani, and K. Daniel. 2005. Detection of emerging space-time clusters. In Proc. 11th ACM SIGKDD Intl. Conf. on Knowledge Discovery and Data Mining.
[11] R. E. Gangnon and M. K. Clayton. 2000. Bayesian detection and modeling of spatial disease clustering. Biometrics 56, 922-935.
[12] A. B. Lawson and D. G. T. Denison, eds. 2002. Spatial Cluster Modelling. Chapman & Hall/CRC, Boca Raton, FL.
[13] X. Wang, R. Hutchinson, and T. Mitchell. 2004. Training fMRI classifiers to detect cognitive states across multiple human subjects. In Advances in Neural Information Processing Systems 16, 709-716.
| 2005 | 32 | 2,847 |
Diffusion Maps, Spectral Clustering and Eigenfunctions of Fokker-Planck Operators

Boaz Nadler∗, Stéphane Lafon, Ronald R. Coifman, Department of Mathematics, Yale University, New Haven, CT 06520. {boaz.nadler,stephane.lafon,ronald.coifman}@yale.edu
Ioannis G. Kevrekidis, Department of Chemical Engineering and Program in Applied Mathematics, Princeton University, Princeton, NJ 08544. yannis@princeton.edu

Abstract

This paper presents a diffusion based probabilistic interpretation of spectral clustering and dimensionality reduction algorithms that use the eigenvectors of the normalized graph Laplacian. Given the pairwise adjacency matrix of all points, we define a diffusion distance between any two data points and show that the low dimensional representation of the data by the first few eigenvectors of the corresponding Markov matrix is optimal under a certain mean squared error criterion. Furthermore, assuming that data points are random samples from a density $p(x) = e^{-U(x)}$ we identify these eigenvectors as discrete approximations of eigenfunctions of a Fokker-Planck operator in a potential $2U(x)$ with reflecting boundary conditions. Finally, applying known results regarding the eigenvalues and eigenfunctions of the continuous Fokker-Planck operator, we provide a mathematical justification for the success of spectral clustering and dimensional reduction algorithms based on these first few eigenvectors. This analysis elucidates, in terms of the characteristics of diffusion processes, many empirical findings regarding spectral clustering algorithms. Keywords: Algorithms and architectures, learning theory.

1 Introduction

Clustering and low dimensional representation of high dimensional data are important problems in many diverse fields. In recent years various spectral methods to perform these tasks, based on the eigenvectors of adjacency matrices of graphs on the data, have been developed; see for example [1]-[10] and references therein.
[∗ Corresponding author. Currently at Weizmann Institute of Science, Rehovot, Israel. http://www.wisdom.weizmann.ac.il/~nadler]

In the simplest version, known as the normalized graph Laplacian, given $n$ data points $\{x_i\}_{i=1}^{n}$ where each $x_i \in \mathbb{R}^p$, we define a pairwise similarity matrix between points, for example using a Gaussian kernel with width $\varepsilon$,
$$L_{i,j} = k(x_i, x_j) = \exp\left(-\frac{\|x_i - x_j\|^2}{2\varepsilon}\right) \qquad (1)$$
and a diagonal normalization matrix $D_{i,i} = \sum_j L_{i,j}$. Many works propose to use the first few eigenvectors of the normalized eigenvalue problem $L\varphi = \lambda D\varphi$, or equivalently of the matrix $M = D^{-1}L$, either as a low dimensional representation of data or as good coordinates for clustering purposes. Although eq. (1) is based on a Gaussian kernel, other kernels are possible. While for actual datasets the choice of a kernel $k(x_i, x_j)$ is crucial, it does not qualitatively change our asymptotic analysis [11]. The use of the first few eigenvectors of $M$ as good coordinates is typically justified with heuristic arguments or as a relaxation of a discrete clustering problem [3]. In [4, 5] Belkin and Niyogi showed that when data is uniformly sampled from a low dimensional manifold of $\mathbb{R}^p$ the first few eigenvectors of $M$ are discrete approximations of the eigenfunctions of the Laplace-Beltrami operator on the manifold, thus providing a mathematical justification for their use in this case. A different theoretical analysis of the eigenvectors of the matrix $M$, based on the fact that $M$ is a stochastic matrix representing a random walk on the graph, was described by Meilă and Shi [12], who considered the case of piecewise constant eigenvectors for specific lumpable matrix structures. Additional notable works that considered the random walk aspects of spectral clustering are [8, 13], where the authors suggest clustering based on the average commute time between points, and [14] which considered the relaxation process of this random walk.
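As a minimal sketch (not the authors' code), the matrices L, D, and M of eq. (1) can be built directly; the toy data and kernel width here are arbitrary choices for illustration:

```python
import numpy as np

def markov_matrix(X, eps):
    """Build the row-stochastic matrix M = D^{-1} L of eq. (1), using the
    Gaussian kernel L_ij = exp(-||x_i - x_j||^2 / (2 eps))."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    L = np.exp(-sq / (2.0 * eps))
    D = L.sum(axis=1)                                    # diagonal normalization D_ii
    return L / D[:, None]

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))          # toy data: 50 points in R^2
M = markov_matrix(X, eps=0.5)
assert np.allclose(M.sum(axis=1), 1.0)  # every row sums to one
```

The first few eigenvectors of M (equivalently, solutions of Lφ = λDφ) then serve as low-dimensional coordinates or clustering coordinates.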
In this paper we provide a unified probabilistic framework which combines these results and extends them in two different directions. First, in section 2 we define a distance function between any two points based on the random walk on the graph, which we naturally denote the diffusion distance. We then show that the low dimensional description of the data by the first few eigenvectors, denoted as the diffusion map, is optimal under a mean squared error criterion based on this distance. In section 3 we consider a statistical model, in which data points are i.i.d. random samples from a probability density $p(x)$ in a smooth bounded domain $\Omega \subset \mathbb{R}^p$, and analyze the asymptotics of the eigenvectors as the number of data points tends to infinity. This analysis shows that the eigenvectors of the finite matrix $M$ are discrete approximations of the eigenfunctions of a Fokker-Planck (FP) operator with reflecting boundary conditions. This observation, coupled with known results regarding the eigenvalues and eigenfunctions of the FP operator, provides new insights into the properties of these eigenvectors and into the performance of spectral clustering algorithms, as described in section 4.

2 Diffusion Distances and Diffusion Maps

The starting point of our analysis, as also noted in other works, is the observation that the matrix $M$ is adjoint to a symmetric matrix
$$M_s = D^{1/2} M D^{-1/2}. \qquad (2)$$
Thus, $M$ and $M_s$ share the same eigenvalues. Moreover, since $M_s$ is symmetric it is diagonalizable and has a set of $n$ real eigenvalues $\{\lambda_j\}_{j=0}^{n-1}$ whose corresponding eigenvectors $\{v_j\}$ form an orthonormal basis of $\mathbb{R}^n$. The left and right eigenvectors of $M$, denoted $\varphi_j$ and $\psi_j$, are related to those of $M_s$ according to
$$\varphi_j = v_j D^{1/2}, \qquad \psi_j = v_j D^{-1/2} \qquad (3)$$
Since the eigenvectors $v_j$ are orthonormal under the standard dot product in $\mathbb{R}^n$, it follows that the vectors $\varphi_j$ and $\psi_k$ are bi-orthonormal
$$\langle \varphi_i, \psi_j \rangle = \delta_{i,j} \qquad (4)$$
where $\langle u, v \rangle$ is the standard dot product between two vectors in $\mathbb{R}^n$.
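A quick numerical check of eqs. (2)-(4), under the same Gaussian-kernel construction (a sketch with toy data, not the authors' code):

```python
import numpy as np

# Eigen-decompose the symmetric conjugate Ms = D^{1/2} M D^{-1/2} and
# recover bi-orthonormal left/right eigenvectors of M; notation follows
# the text (phi_j = v_j D^{1/2}, psi_j = v_j D^{-1/2}).
rng = np.random.default_rng(2)
X = rng.normal(size=(30, 2))
sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
L = np.exp(-sq / (2 * 0.5))
D = L.sum(1)
Ms = L / np.sqrt(np.outer(D, D))      # equals D^{-1/2} L D^{-1/2}
lam, V = np.linalg.eigh(Ms)           # real eigenvalues, orthonormal v_j
phi = V * np.sqrt(D)[:, None]         # left eigenvectors of M (columns)
psi = V / np.sqrt(D)[:, None]         # right eigenvectors of M (columns)

# Bi-orthonormality, eq. (4): <phi_i, psi_j> = delta_ij
assert np.allclose(phi.T @ psi, np.eye(30), atol=1e-8)
```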
We now utilize the fact that by construction $M$ is a stochastic matrix with all row sums equal to one, and can thus be interpreted as defining a random walk on the graph. Under this view, $M_{i,j}$ denotes the transition probability from the point $x_i$ to the point $x_j$ in one time step. Furthermore, based on the similarity of the Gaussian kernel (1) to the fundamental solution of the heat equation, we define our time step as $\Delta t = \varepsilon$. Therefore,
$$\Pr\{x(t+\varepsilon) = x_j \mid x(t) = x_i\} = M_{i,j} \qquad (5)$$
Note that $\varepsilon$ has therefore a dual interpretation in this framework. The first is that $\varepsilon$ is the (squared) radius of the neighborhood used to infer local geometric and density information for the construction of the adjacency matrix, while the second is that $\varepsilon$ is the discrete time step at which the random walk jumps from point to point. We denote by $p(t, y|x)$ the probability distribution of a random walk landing at location $y$ at time $t$, given a starting location $x$ at time $t = 0$. For $t = k\varepsilon$, $p(t, y|x_i) = e_i M^k$, where $e_i$ is a row vector of zeros with a single one at the $i$-th coordinate. For $\varepsilon$ large enough, all points in the graph are connected, so that $M$ has a unique eigenvalue equal to 1. The other eigenvalues form a non-increasing sequence of non-negative numbers: $\lambda_0 = 1 > \lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_{n-1} \geq 0$. Then, regardless of the initial starting point $x$,
$$\lim_{t\to\infty} p(t, y|x) = \varphi_0(y) \qquad (6)$$
where $\varphi_0$ is the left eigenvector of $M$ with eigenvalue $\lambda_0 = 1$, explicitly given by
$$\varphi_0(x_i) = \frac{D_{i,i}}{\sum_j D_{j,j}} \qquad (7)$$
This eigenvector also has a dual interpretation. The first is that $\varphi_0$ is the stationary probability distribution on the graph, while the second is that $\varphi_0(x)$ is a density estimate at the point $x$. Note that for a general shift invariant kernel $K(x-y)$, and for the Gaussian kernel in particular, $\varphi_0$ is simply the well known Parzen window density estimator.
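Eqs. (6)-(7) are easy to verify numerically: φ0 proportional to the diagonal of D is exactly the left eigenvector of M with eigenvalue 1, and the top eigenvalue is unique for a connected graph (again a sketch with toy data):

```python
import numpy as np

# Check that phi_0(x_i) = D_ii / sum_j D_jj satisfies phi_0 M = phi_0,
# i.e. it is the stationary distribution of the random walk.
rng = np.random.default_rng(4)
X = rng.normal(size=(20, 2))
sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
L = np.exp(-sq / (2 * 1.0))
D = L.sum(1)
M = L / D[:, None]
phi0 = D / D.sum()                       # Eq. (7)

assert np.allclose(phi0 @ M, phi0)       # left eigenvector, eigenvalue 1

# Spectrum of M via its symmetric conjugate M_s: lambda_0 = 1 is unique
# when all kernel entries are positive (connected graph).
lam = np.linalg.eigvalsh(L / np.sqrt(np.outer(D, D)))
assert np.isclose(lam[-1], 1.0) and lam[-2] < 1.0
```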
For any finite time $t$, we decompose the probability distribution in the eigenbasis $\{\varphi_j\}$:
$$p(t, y|x) = \varphi_0(y) + \sum_{j\geq 1} a_j(x)\,\lambda_j^t\,\varphi_j(y) \qquad (8)$$
where the coefficients $a_j$ depend on the initial location $x$. Using the bi-orthonormality condition (4) gives $a_j(x) = \psi_j(x)$, with $a_0(x) = \psi_0(x) = 1$ already implicit in (8). Given the definition of the random walk on the graph it is only natural to quantify the similarity between any two points according to the evolution of their probability distributions. Specifically, we consider the following distance measure at time $t$,
$$D_t^2(x_0, x_1) = \|p(t, y|x_0) - p(t, y|x_1)\|_w^2 = \sum_y \left(p(t, y|x_0) - p(t, y|x_1)\right)^2 w(y) \qquad (9)$$
with the specific choice $w(y) = 1/\varphi_0(y)$ for the weight function, which takes into account the (empirical) local density of the points. Since this distance depends on the random walk on the graph, we quite naturally denote it as the diffusion distance at time $t$. We also denote the mapping between the original space and the first $k$ eigenvectors as the diffusion map
$$\Psi_t(x) = \left(\lambda_1^t \psi_1(x), \lambda_2^t \psi_2(x), \ldots, \lambda_k^t \psi_k(x)\right) \qquad (10)$$
The following theorem relates the diffusion distance and the diffusion map.

Theorem: The diffusion distance (9) is equal to Euclidean distance in the diffusion map space with all $(n-1)$ eigenvectors.
$$D_t^2(x_0, x_1) = \sum_{j\geq 1} \lambda_j^{2t} \left(\psi_j(x_0) - \psi_j(x_1)\right)^2 = \|\Psi_t(x_0) - \Psi_t(x_1)\|^2 \qquad (11)$$
Proof: Combining (8) and (9) gives
$$D_t^2(x_0, x_1) = \sum_y \Big(\sum_j \lambda_j^t \left(\psi_j(x_0) - \psi_j(x_1)\right) \varphi_j(y)\Big)^2 \frac{1}{\varphi_0(y)} \qquad (12)$$
Expanding the brackets, exchanging the order of summation and using relations (3) and (4) between $\varphi_j$ and $\psi_j$ yields the required result. Note that the weight factor $1/\varphi_0$ is essential for the theorem to hold. □

This theorem provides a justification for using Euclidean distance in the diffusion map space for spectral clustering purposes. Therefore, geometry in diffusion space is meaningful and can be interpreted in terms of the Markov chain. In particular, as shown in [18], quantizing this diffusion space is equivalent to lumping the random walk.
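The theorem can be confirmed numerically: the diffusion distance (9), computed directly from the t-step transition probabilities with weight 1/φ0, matches the Euclidean distance (11) in the diffusion map coordinates (a sketch; eigenvectors are rescaled so that ψ0 is the constant function 1, matching the text's normalization):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(25, 2))
sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
L = np.exp(-sq / (2 * 0.5))
D = L.sum(1)
M = L / D[:, None]
phi0 = D / D.sum()                               # stationary distribution, eq. (7)

t = 3
Pt = np.linalg.matrix_power(M, t)                # row i holds p(t, y | x_i)
d2_direct = ((Pt[0] - Pt[1]) ** 2 / phi0).sum()  # eq. (9), weight w = 1/phi_0

# Eigenpairs of M via its symmetric conjugate M_s; rescale so psi_0 = 1.
lam, V = np.linalg.eigh(L / np.sqrt(np.outer(D, D)))
psi = V / np.sqrt(D)[:, None] * np.sqrt(D.sum())

# Eq. (11); the j = 0 term contributes zero since psi_0 is constant.
d2_map = ((lam ** (2 * t)) * (psi[0] - psi[1]) ** 2).sum()
assert np.isclose(d2_direct, d2_map)
```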
Moreover, since in many practical applications the spectrum of the matrix $M$ has a spectral gap with only a few eigenvalues close to one and all additional eigenvalues much smaller than one, the diffusion distance at a large enough time $t$ can be well approximated by only the first few $k$ eigenvectors $\psi_1(x), \ldots, \psi_k(x)$, with a negligible error of the order of $O((\lambda_{k+1}/\lambda_k)^t)$. This observation provides a theoretical justification for dimensional reduction with these eigenvectors. In addition, the following theorem shows that this $k$-dimensional approximation is optimal under a certain mean squared error criterion.

Theorem: Out of all $k$-dimensional approximations of the form
$$\hat{p}(t, y|x) = \varphi_0(y) + \sum_{j=1}^{k} a_j(t, x)\, w_j(y)$$
for the probability distribution at time $t$, the one that minimizes the mean squared error
$$\mathbb{E}_x\left\{\|p(t, y|x) - \hat{p}(t, y|x)\|_w^2\right\}$$
where averaging over initial points $x$ is with respect to the stationary density $\varphi_0(x)$, is given by $w_j(y) = \varphi_j(y)$ and $a_j(t, x) = \lambda_j^t \psi_j(x)$. Therefore, the optimal $k$-dimensional approximation is given by the truncated sum
$$\hat{p}(y, t|x) = \varphi_0(y) + \sum_{j=1}^{k} \lambda_j^t \psi_j(x)\varphi_j(y) \qquad (13)$$
Proof: The proof is a consequence of a weighted principal component analysis applied to the matrix $M$, taking into account the biorthogonality of the left and right eigenvectors. We note that the first few eigenvectors are also optimal under other criteria, for example for data sampled from a manifold as in [4], or for multiclass spectral clustering [15].

3 The Asymptotics of the Diffusion Map

The analysis of the previous section provides a mathematical explanation for the success of the diffusion maps for dimensionality reduction and spectral clustering. However, it does not provide any information regarding the structure of the computed eigenvectors. To this end, and similar to the framework of [16], we introduce a statistical model and assume that the data points $\{x_i\}$ are i.i.d.
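The weighted mean squared error of the truncation (13) has a closed form, the sum of λ_j^{2t} over the discarded eigenvalues, which is easy to confirm numerically (a sketch; eigenvectors are normalized so that φ0 is the stationary density and ψ0 = 1):

```python
import numpy as np

# Verify that the truncation (13) of p(t, y|x) to the top k nontrivial
# eigenpairs has weighted mean squared error exactly sum_{j>k} lambda_j^{2t}.
rng = np.random.default_rng(7)
X = rng.normal(size=(30, 2))
sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
L = np.exp(-sq / (2 * 0.5))
D = L.sum(1)
M = L / D[:, None]
phi0 = D / D.sum()

lam, V = np.linalg.eigh(L / np.sqrt(np.outer(D, D)))
lam, V = lam[::-1], V[:, ::-1]                      # sort descending
phi = (V * np.sqrt(D)[:, None]) / np.sqrt(D.sum())  # phi_0 = stationary density
psi = (V / np.sqrt(D)[:, None]) * np.sqrt(D.sum())  # psi_0 = 1

t, k = 4, 5
Pt = np.linalg.matrix_power(M, t)
approx = (psi[:, :k + 1] * lam[:k + 1] ** t) @ phi[:, :k + 1].T  # eq. (13)

# Error weighted by 1/phi_0(y), averaged over x with respect to phi_0(x):
err = (((Pt - approx) ** 2 / phi0[None, :]) * phi0[:, None]).sum()
assert np.isclose(err, (lam[k + 1:] ** (2 * t)).sum())
```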
random samples from a probability density $p(x)$ confined to a compact connected subset $\Omega \subset \mathbb{R}^p$ with smooth boundary $\partial\Omega$. Following the statistical physics notation, we write the density in Boltzmann form, $p(x) = e^{-U(x)}$, where $U(x)$ is the (dimensionless) potential or energy of the configuration $x$. As shown in [11], in the limit $n \to \infty$ the random walk on the discrete graph converges to a random walk on the continuous space $\Omega$. Then, it is possible to define forward and backward operators $T_f$ and $T_b$ as follows,
$$T_f[\varphi](x) = \int_\Omega M(x|y)\varphi(y)p(y)\,dy, \qquad T_b[\psi](x) = \int_\Omega M(y|x)\psi(y)p(y)\,dy \qquad (14)$$
where $M(x|y) = \exp(-\|x-y\|^2/2\varepsilon)/D(y)$ is the transition probability from $y$ to $x$ in time $\varepsilon$, and $D(y) = \int \exp(-\|x-y\|^2/2\varepsilon)\,p(x)\,dx$. The two operators $T_f$ and $T_b$ have probabilistic interpretations. If $\varphi(x)$ is a probability distribution on the graph at time $t = 0$, then $T_f[\varphi]$ is the probability distribution at time $t = \varepsilon$. Similarly, $T_b[\psi](x)$ is the mean of the function $\psi$ at time $t = \varepsilon$, for a random walk that started at location $x$ at time $t = 0$. The operators $T_f$ and $T_b$ are thus the continuous analogues of the left and right multiplication by the finite matrix $M$. We now take this analysis one step further and consider the limit $\varepsilon \to 0$. This is possible, since when $n = \infty$ each data point contains an infinite number of nearby neighbors. In this limit, since $\varepsilon$ also has the interpretation of a time step, the random walk converges to a diffusion process, whose probability density evolves continuously in time, according to
$$\frac{\partial p(x,t)}{\partial t} = \lim_{\varepsilon\to 0} \frac{p(x, t+\varepsilon) - p(x,t)}{\varepsilon} = \lim_{\varepsilon\to 0} \frac{T_f - I}{\varepsilon}\, p(x,t) \qquad (15)$$
in which case it is customary to study the infinitesimal generators (propagators)
$$H_f = \lim_{\varepsilon\to 0} \frac{T_f - I}{\varepsilon}, \qquad H_b = \lim_{\varepsilon\to 0} \frac{T_b - I}{\varepsilon} \qquad (16)$$
Clearly, the eigenfunctions of $T_f$ and $T_b$ converge to those of $H_f$ and $H_b$, respectively.
As shown in [11], the backward generator is given by the following Fokker-Planck operator
$$H_b\psi = \Delta\psi - 2\nabla\psi \cdot \nabla U \qquad (17)$$
which corresponds to a diffusion process in a potential field of $2U(x)$,
$$\dot{x}(t) = -\nabla(2U) + \sqrt{2D}\,\dot{w}(t) \qquad (18)$$
where $w(t)$ is standard Brownian motion in $p$ dimensions and $D$ is the diffusion coefficient, equal to one in equation (17). The Langevin equation (18) is a common model to describe stochastic dynamical systems in physics, chemistry and biology [19, 20]. As such, its characteristics as well as those of the corresponding FP equation have been extensively studied, see [19]-[22] and many others. The term $\nabla\psi \cdot \nabla U$ in (17) is interpreted as a drift term towards low energy (high-density) regions, and as discussed in the next section, may play a crucial part in the definition of clusters. Note that when data is uniformly sampled from $\Omega$, $\nabla U = 0$ so the drift term vanishes and we recover the Laplace-Beltrami operator on $\Omega$. The connection between the discrete matrix $M$ and the (weighted) Laplace-Beltrami or Fokker-Planck operator, as well as rigorous convergence proofs of the eigenvalues and eigenvectors of $M$ to those of the integral operator $T_b$ or infinitesimal generator $H_b$, were considered in many recent works [4, 23, 17, 9, 24]. However, it seems that the important issue of boundary conditions was not considered. Since (17) is defined in the bounded domain $\Omega$, the eigenvalues and eigenfunctions of $H_b$ depend on the boundary conditions imposed on $\partial\Omega$. As shown in [9], in the limit $\varepsilon \to 0$, the random walk satisfies reflecting boundary conditions on $\partial\Omega$, which translate into
$$\left.\frac{\partial\psi(x)}{\partial n}\right|_{\partial\Omega} = 0 \qquad (19)$$
where $n$ is a unit normal vector at the point $x \in \partial\Omega$.

Table 1: Random Walks and Diffusion Processes

| Case | Operator | Stochastic Process |
| ε > 0, n < ∞ | finite n × n matrix M | R.W., discrete in space and time |
| ε > 0, n → ∞ | operators T_f, T_b | R.W. in continuous space, discrete in time |
| ε → 0, n = ∞ | infinitesimal generator H_f | diffusion process, continuous in time & space |

To conclude, the left and right eigenvectors of the finite matrix $M$ can be viewed as discrete approximations to those of the operators $T_f$ and $T_b$, which in turn can be viewed as approximations to those of $H_f$ and $H_b$. Therefore, if there are enough data points for accurate statistical sampling, the structure and characteristics of the eigenvalues and eigenfunctions of $H_b$ are similar to the corresponding eigenvalues and discrete eigenvectors of $M$. For convenience, the three different stochastic processes are shown in table 1.

4 Fokker-Planck eigenfunctions and spectral clustering

According to (16), if $\lambda^\varepsilon$ is an eigenvalue of the matrix $M$ or of the integral operator $T_b$ based on a kernel with parameter $\varepsilon$, then the corresponding eigenvalue of $H_b$ is $\mu \approx (\lambda^\varepsilon - 1)/\varepsilon$. Therefore the largest eigenvalues of $M$ correspond to the smallest eigenvalues of $H_b$. These eigenvalues and their corresponding eigenfunctions have been extensively studied in the literature under various settings. In general, the eigenvalues and eigenfunctions depend both on the geometry of the domain $\Omega$ and on the profile of the potential $U(x)$. For clarity and due to lack of space we briefly analyze here two extreme cases. In the first case $\Omega = \mathbb{R}^p$, so geometry plays no role, while in the second $U(x) = \mathrm{const}$, so density plays no role. Yet we show that in both cases there can still be well defined clusters, with the unifying probabilistic concept being that the mean exit time from one cluster to another is much larger than the characteristic equilibration time inside each cluster.

Case I: Consider diffusion in a smooth potential $U(x)$ in $\Omega = \mathbb{R}^p$, where $U$ has a few local minima, and $U(x) \to \infty$ as $\|x\| \to \infty$ fast enough so that $\int e^{-U}\,dx = 1 < \infty$. Each such local minimum thus defines a metastable state, with transitions between metastable states being relatively rare events, depending on the barrier heights separating them.
As shown in [21, 22] (and in many other works) there is an intimate connection between the smallest eigenvalues of $H_b$ and mean exit times out of these metastable states. Specifically, in the asymptotic limit of small noise $D \ll 1$, exit times are exponentially distributed and the first non-trivial eigenvalue (after $\mu_0 = 0$) is given by $\mu_1 = 1/\bar{\tau}$, where $\bar{\tau}$ is the mean exit time to overcome the highest potential barrier on the way to the deepest potential well. For the case of two potential wells, for example, the corresponding eigenfunction is roughly constant in each well with a sharp transition near the saddle point between the wells. In general, in the case of $k$ local minima there are asymptotically only $k$ eigenvalues very close to zero. Apart from $\mu_0 = 0$, each of the other $k-1$ eigenvalues corresponds to the mean exit time from one of the wells into the deepest one, with the corresponding eigenfunctions being almost constant in each well. Therefore, for a finite dataset the presence of only $k$ eigenvalues close to 1 with a spectral gap, e.g. a large difference between $\lambda_k$ and $\lambda_{k+1}$, is indicative of $k$ well defined global clusters. In figure 1 (left) an example of this case is shown, where $p(x)$ is the sum of two well separated Gaussian clouds leading to a double well potential. Indeed there are only two eigenvalues close or equal to 1 with a distinct spectral gap and the first eigenfunction being almost piecewise constant in each well.

Figure 1: Diffusion map results on different datasets (panels: "2 Gaussians", "3 Gaussians", "Uniform density"). Top - the datasets. Middle - the eigenvalues. Bottom - the first eigenvector vs. x1, or the first and second eigenvectors for the case of three Gaussians.
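The two-Gaussians behaviour is easy to reproduce (a toy sketch, not the paper's exact data): two well-separated clouds give two eigenvalues near 1 with a clear gap to the third, and the sign of the first nontrivial eigenvector recovers the two clusters. The cluster centers, widths, and kernel scale below are arbitrary choices.

```python
import numpy as np

# Two well-separated Gaussian clouds: expect lambda_1 near 1, a spectral
# gap to lambda_2, and a nearly piecewise-constant eigenvector psi_1
# whose sign labels the clusters.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(-1.5, 0.3, size=(40, 2)),
               rng.normal(+1.5, 0.3, size=(40, 2))])
sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
L = np.exp(-sq / (2 * 0.5))
D = L.sum(1)
lam, V = np.linalg.eigh(L / np.sqrt(np.outer(D, D)))
lam, V = lam[::-1], V[:, ::-1]          # sort descending: lam[0] = 1
psi1 = V[:, 1] / np.sqrt(D)             # first nontrivial right eigenvector
labels = psi1 > 0

assert lam[1] > 0.99 and lam[2] < 0.9   # two eigenvalues near 1, then a gap
assert (labels[:40] == labels[0]).all()   # constant sign on cloud 1
assert (labels[40:] == labels[40]).all()  # constant sign on cloud 2
assert labels[0] != labels[40]            # opposite signs: clusters recovered
```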
In stochastic dynamical systems a spectral gap corresponds to a separation of time scales between long transition times from one well or metastable state to another as compared to short equilibration times inside each well. Therefore, clustering and identification of metastable states are very similar tasks, and not surprisingly algorithms similar to the normalized graph Laplacian have been independently developed in the literature [25]. The above mentioned results are asymptotic in the small noise limit. In practical datasets, there can be clusters of different scales, where a global analysis with a single $\varepsilon$ is not suitable. As an example consider the second dataset in figure 1, with three clusters. While the first eigenvector distinguishes between the large cluster and the two smaller ones, the second eigenvector captures the equilibration inside the large cluster instead of further distinguishing the two small clusters. While a theoretical explanation is beyond the scope of this paper, a possible solution is to choose a location dependent $\varepsilon$, as proposed in [26].

Case II: Consider a uniform density in a region $\Omega \subset \mathbb{R}^3$ composed of two large containers connected by a narrow circular tube, as in the top right frame in figure 1. In this case $U(x) = \mathrm{const}$, so the second term in (17) vanishes. As shown in [27], the second eigenvalue of the FP operator is extremely small, of the order of $a/V$, where $a$ is the radius of the connecting tube and $V$ is the volume of the containers, thus showing an interesting connection to the Cheeger constant on graphs. The corresponding eigenfunction is almost piecewise constant in each container with a sharp transition in the connecting tube. Even though in this case the density is uniform, there still is a spectral gap with two well defined clusters (the two containers), defined entirely by the geometry of $\Omega$. An example of such a case and the results of the diffusion map are shown in figure 1 (right).
In summary, the eigenfunctions and eigenvalues of the FP operator, and thus of the corresponding finite Markov matrix, depend on both geometry and density. The diffusion distance and its close relation to mean exit times between different clusters is the quantity that incorporates these two features. This provides novel insight into spectral clustering algorithms, as well as a theoretical justification for the algorithm in [13], which defines clusters according to mean travel times between points on the graph. A similar analysis could also be applied to semi-supervised learning based on spectral methods [28]. Finally, these eigenvectors may be used to design better search and data collection protocols [29].

Acknowledgments: The authors thank Mikhail Belkin and Partha Niyogi for interesting discussions. This work was partially supported by DARPA through AFOSR.

References

[1] B. Schölkopf, A. Smola and K.R. Müller. Nonlinear component analysis as a kernel eigenvalue problem, Neural Computation 10, 1998.
[2] Y. Weiss. Segmentation using eigenvectors: a unifying view. ICCV 1999.
[3] J. Shi and J. Malik. Normalized cuts and image segmentation, PAMI, Vol. 22, 2000.
[4] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering, NIPS Vol. 14, 2002.
[5] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation, Neural Computation 15:1373-1396, 2003.
[6] A.Y. Ng, M. Jordan and Y. Weiss. On spectral clustering, analysis and an algorithm, NIPS Vol. 14, 2002.
[7] X. Zhu, Z. Ghahramani, J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions, Proceedings of the 20th International Conference on Machine Learning, 2003.
[8] M. Saerens, F. Fouss, L. Yen and P. Dupont. The principal component analysis of a graph and its relationships to spectral clustering. ECML 2004.
[9] R.R. Coifman, S. Lafon. Diffusion maps, to appear in Appl. Comp. Harm. Anal.
[10] R.R. Coifman et al. Geometric diffusion as a tool for harmonic analysis and structure definition of data, parts I and II, Proc. Nat. Acad. Sci., 102(21):7426-37 (2005).
[11] B. Nadler, S. Lafon, R.R. Coifman, I.G. Kevrekidis. Diffusion maps, spectral clustering, and the reaction coordinates of dynamical systems, to appear in Appl. Comp. Harm. Anal., available at http://arxiv.org/abs/math.NA/0503445.
[12] M. Meila, J. Shi. A random walks view of spectral segmentation, AI and Statistics, 2001.
[13] L. Yen, D. Vanvyve, F. Wouters, F. Fouss, M. Verleysen and M. Saerens. Clustering using a random-walk based distance measure. ESANN 2005, pp 317-324.
[14] N. Tishby, N. Slonim. Data clustering by Markovian relaxation and the information bottleneck method, NIPS, 2000.
[15] S. Yu and J. Shi. Multiclass spectral clustering. ICCV 2003.
[16] Y. Bengio et al. Learning eigenfunctions links spectral embedding and kernel PCA, Neural Computation, 16:2197-2219 (2004).
[17] U. von Luxburg, O. Bousquet, M. Belkin. On the convergence of spectral clustering on random samples: the normalized case, NIPS, 2004.
[18] S. Lafon, A.B. Lee. Diffusion maps: a unified framework for dimension reduction, data partitioning and graph subsampling, submitted.
[19] C.W. Gardiner. Handbook of Stochastic Methods, third edition, Springer NY, 2004.
[20] H. Risken. The Fokker-Planck Equation, 2nd edition, Springer NY, 1999.
[21] B.J. Matkowsky and Z. Schuss. Eigenvalues of the Fokker-Planck operator and the approach to equilibrium for diffusions in potential fields, SIAM J. App. Math. 40(2):242-254 (1981).
[22] M. Eckhoff. Precise asymptotics of small eigenvalues of reversible diffusions in the metastable regime, Annals of Prob. 33:244-299, 2005.
[23] M. Belkin and P. Niyogi. Towards a theoretical foundation for Laplacian-based manifold methods, COLT 2005 (to appear).
[24] M. Hein, J. Audibert, U. von Luxburg. From graphs to manifolds - weak and strong pointwise consistency of graph Laplacians, COLT 2005 (to appear).
[25] W. Huisinga, C. Best, R. Roitzsch, C. Schütte, F. Cordes. From simulation data to conformational ensembles, structure and dynamics based methods, J. Comp. Chem. 20:1760-74, 1999.
[26] L. Zelnik-Manor, P. Perona. Self-tuning spectral clustering, NIPS, 2004.
[27] A. Singer, Z. Schuss, D. Holcman and R.S. Eisenberg. Narrow escape, part I, submitted.
[28] D. Zhou et al. Learning with local and global consistency, NIPS Vol. 16, 2004.
[29] I.G. Kevrekidis, C.W. Gear, G. Hummer. Equation-free: the computer-aided analysis of complex multiscale systems, AIChE J. 50:1346-1355, 2004.
| 2005 | 33 | 2,848 |
Scaling Laws in Natural Scenes and the Inference of 3D Shape

Brian Potetz, Department of Computer Science, Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213. bpotetz@cs.cmu.edu
Tai Sing Lee, Department of Computer Science, Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA 15213. tai@cnbc.cmu.edu

Abstract

This paper explores the statistical relationship between natural images and their underlying range (depth) images. We look at how this relationship changes over scale, and how this information can be used to enhance low resolution range data using a full resolution intensity image. Based on our findings, we propose an extension to an existing technique known as shape recipes [3], and the success of the two methods is compared using images and laser scans of real scenes. Our extension is shown to provide a two-fold improvement over the current method. Furthermore, we demonstrate that ideal linear shape-from-shading filters, when learned from natural scenes, may derive even more strength from shadow cues than from the traditional linear-Lambertian shading cues.

1 Introduction

The inference of depth information from single images is typically performed by devising models of image formation based on the physics of light interaction and then inverting these models to solve for depth. Once inverted, these models are highly underconstrained, requiring many assumptions such as Lambertian surface reflectance, smoothness of surfaces, uniform albedo, or lack of cast shadows. Little is known about the relative merits of these assumptions in real scenes. A statistical understanding of the joint distribution of real images and their underlying 3D structure would allow us to replace these assumptions and simplifications with probabilistic priors based on real scenes. Furthermore, statistical studies may uncover entirely new sources of information that are not obvious from physical models.
Real scenes are affected by many regularities in the environment, such as the natural geometry of objects, the arrangements of objects in space, natural distributions of light, and regularities in the position of the observer. Few current shape inference algorithms make use of these trends. Despite the potential usefulness of statistical models and the growing success of statistical methods in vision, few studies have been made into the statistical relationship between images and range (depth) images. Those studies that have examined this relationship in nature have uncovered meaningful and exploitable statistical trends in real scenes which may be useful for designing new algorithms in surface inference, and also for understanding how humans perceive depth in real scenes [6, 4, 8]. In this paper, we explore some of the properties of the statistical relationship between images and their underlying range (depth) images in real scenes, using images acquired by laser scanner in natural environments. Specifically, we will examine the cross-covariance between images and range images, and how this structure changes over scale. We then illustrate how our statistical findings can be applied to inference problems by analyzing and extending the shape recipe depth inference algorithm. 2 Shape recipes We will motivate our statistical study with an application. Often, we may have a high-resolution color image of a scene, but only a low spatial resolution range image (range images record the 3D distance between the scene and the camera for each pixel). This often happens if our range image was acquired by applying a stereo depth inference algorithm. Stereo algorithms rely on smoothness constraints, either explicitly or implicitly, and so the high-frequency components of the resulting range image are not reliable [1, 7].
Low-resolution range data may also be the output of a laser range scanner, if the range scanner is inexpensive, or if the scan must be acquired quickly (range scanners typically acquire each pixel sequentially, taking up to several minutes for a high-resolution scan). It should be possible to improve our estimate of the high spatial frequencies of the range image by using monocular cues from the high-resolution intensity (or color) image. Shape recipes [3, 9] provide one way of doing this. The basic principle of shape recipes is that a relationship between shape and light intensity could be learned from the low resolution image pair, and then extrapolated and applied to the high resolution intensity image to infer the high spatial frequencies of the range image. One advantage of this approach is that hidden variables important to inference from monocular cues, such as illumination direction and material reflectance properties, might be implicitly learned from the low-resolution range and intensity images. However, for this approach to work, we require some model of how the relationship between shape and intensity changes over scale, which we discuss below. For shape recipes, both the high resolution intensity image and the low resolution range image are decomposed into steerable wavelet filter pyramids, linearly breaking the image down according to scale and orientation [2]. Linear regression is then used between the highest frequency band of the available low-resolution range image and the corresponding band of the intensity image, to learn a linear filter that best predicts the range band from the image band. The hypothesis of the model is that this filter can then be used to predict high frequency range bands from the high frequency image bands. We describe the implementation in more detail below. Let im,φ and zm,φ be steerable filter pyramid subbands of the intensity and range image respectively, at spatial resolution m and orientation φ (both are integers).
Number the band levels so that m=0 is the highest frequency subband of the intensity image, and m=n is the highest available frequency subband of the low-resolution range image. Thus, higher level numbers correspond to lower spatial frequencies. Shape recipes work by learning a linear filter kn,φ at level n by minimizing the sum-squared error Σ(zn,φ − kn,φ ⋆ in,φ)², where ⋆ denotes convolution. Higher resolution subbands of the range image are inferred by:

ẑm,φ = (1/c^(n−m)) (kn,φ ⋆ im,φ)    (1)

where c = 2. The choice of c = 2 in the shape recipe model is motivated by the linear Lambertian shading model [9]. We will discuss this choice of constant in section 3. The underlying assumption of shape recipes is that the convolution kernel km,φ should be roughly constant over the four highest resolution bands of the steerable filter pyramid. This is based on the idea that shape recipe kernels should vary slowly over scale. In this section, we show mathematically that this model is internally inconsistent. To do this, we first re-express the shape recipe process in the Fourier domain. The operations of shape recipes (pyramid decomposition, convolution, and image reconstruction) are all linear operations, and so they can be combined into a single linear convolution. In other words, we can think of shape recipes as inferring the high resolution range data zhigh via a single convolution

Zhigh(u, v) = I(u, v) · Krecipe(u, v)    (2)

where I is the Fourier transform of the intensity image i. (In general, we will use capital letters to denote functions in the Fourier domain.) Krecipe is a filter in the Fourier domain, of the same size as the image, whose construction is discussed below. Note that Krecipe is zero in the low frequency bands where Zlow is available. Once zhigh (the inverse Fourier transform of Zhigh) is estimated, it can be combined with the known low-resolution range data simply by adding them together: zrecipe(x, y) = zlow(x, y) + zhigh(x, y).
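The single-convolution view of equation 2 is easy to sketch numerically. Below is a minimal numpy illustration, not the paper's implementation: the intensity image and the recipe filter are synthetic stand-ins, with Krecipe simply zeroed at the low frequencies where Zlow is assumed known.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: a "high-resolution intensity image" and a recipe
# filter K defined directly in the Fourier domain (both synthetic here).
i_img = rng.standard_normal((64, 64))

# Frequency grid (cycles per pixel) for a 64x64 image.
u = np.fft.fftfreq(64)[None, :]
v = np.fft.fftfreq(64)[:, None]
r = np.hypot(u, v)

# A toy K_recipe: zero at the low frequencies where Z_low is known
# (r < 0.1), and 1/r-shaped in the band to be inferred.
K_recipe = np.where(r >= 0.1, 1.0 / np.maximum(r, 1e-6), 0.0)

# Eq. (2): infer the high-frequency range content by a single
# Fourier-domain product.
I = np.fft.fft2(i_img)
Z_high = I * K_recipe
z_high = np.real(np.fft.ifft2(Z_high))

# Combine with the known low-resolution range data by simple addition:
# z_recipe(x, y) = z_low(x, y) + z_high(x, y).
z_low = rng.standard_normal((64, 64))
z_recipe = z_low + z_high
```

Because K_recipe vanishes at low frequencies, the inferred z_high contributes nothing in the band where Zlow is already known.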
For shorthand, we will write I(u, v) I*(u, v) as II(u, v) and Z(u, v) I*(u, v) as ZI(u, v). II is also known as the power spectrum, and it is the Fourier transform of the autocorrelation of the intensity image. ZI is the Fourier transform of the cross-correlation between the intensity and range images, and it has both real and imaginary parts. Let K = ZI/II. Observe that I · K is a perfect reconstruction of the original high resolution range image (as long as II(u, v) ≠ 0). Because we do not have the full-resolution range image, we can only compute the low spatial frequencies of ZI(u, v). Let Klow = ZIlow/II, where ZIlow is the Fourier transform of the cross-correlation between the low-resolution range image and a low-resolution version of the intensity image. Klow is zero in the high frequency bands. We can then think of Krecipe as an approximation of K = ZI/II formed by extrapolating Klow into the higher spatial frequencies. In the appendix, we show that shape recipes implicitly perform this extrapolation by learning the highest available frequency octave of Klow, and duplicating this octave into all successive octaves of Krecipe, multiplied by a scale factor. However, there is a problem with this approach. First, there is no reason to expect that features in the range/intensity relationship should repeat once every octave. Figure 1a shows a plot of ZI from a scene in our database of ground-truth range data (to be described in section 3). The fine structures in real[K] do not duplicate themselves every octave. Second and more importantly, octave duplication violates Freeman and Torralba’s assumption that shape recipe kernels should change slowly over scale, which we take to mean over all scales, not just over successive octaves. Even if octave 2 of K is made identical to octave 1, it is mathematically impossible for fractional octaves of K, like 1.5, to also be identical unless ZI/II is completely smooth and devoid of fine structure.
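The quantities II, ZI, and K = ZI/II can be computed directly with FFTs. A small sketch on synthetic data (the image pair and the injected correlation are illustrative assumptions), confirming that I · K reconstructs Z wherever II ≠ 0:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic coregistered pair standing in for an intensity/range image pair;
# a fraction of the intensity image is mixed in so the two are correlated.
i_img = rng.standard_normal((64, 64))
z_img = rng.standard_normal((64, 64)) + 0.5 * i_img

I = np.fft.fft2(i_img)
Z = np.fft.fft2(z_img)

II = I * np.conj(I)   # power spectrum of the intensity image (real-valued)
ZI = Z * np.conj(I)   # cross-spectrum: has real and imaginary parts

# K = ZI / II; guard against (practically impossible) zero denominators.
K = ZI / np.where(np.abs(II) > 1e-12, II, 1.0)

# I * K is a perfect reconstruction of Z wherever II != 0.
Z_rec = I * K
```

In practice only the low-frequency part of ZI is available, so only Klow can be formed this way; the extrapolation into high frequencies is the step the text analyzes.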
The fine structures in K therefore cannot possibly generalize over all scales. In the next section, we use laser scans of real scenes to study the joint statistics of range and intensity images in greater detail, and use our results to form a statistically-motivated model of ZI. We believe that a greater understanding of the joint distribution of natural images and their underlying 3D structure will have a broad impact on the development of robust depth inference algorithms, and also on understanding human depth perception. More immediately, our statistical observations lead to a more accurate way to extrapolate Klow, which in turn results in a more accurate shape recipe method. 3 Scaling laws in natural scene statistics To study the correlational structures between depth and intensity in natural scenes, we have collected a database of coregistered intensity and high-resolution range images (corresponding pixels of the two images correspond to the same point in space). Scans were collected using the Riegl LMS-Z360 laser range scanner with integrated color photosensor. Figure 1: a) A log-log polar plot of |real[ZI]| from a scene in our database. ZI contains extensive fine structures that do not repeat at each octave. However, along all orientations, the general form of |real[ZI]| is a power-law. |imag[ZI]| similarly obeys a power-law. b) A plot of BK(θ) against orientation (degrees counter-clockwise from horizontal) for the scene in figure 2. real[BK(θ)] is drawn in black and imag[BK(θ)] in grey. This plot is typical of most scenes in our database. As predicted by equation 4, imag[BK(θ)] reaches its minima at the illumination direction (in this case, to the extreme left, almost 180°). Also typical is that real[BK(θ)] is uniformly negative, most likely caused by cast shadows in object concavities [6]. Scans were taken of a variety of rural and urban scenes.
All images were taken outdoors, under sunny conditions, while the scanner was level with the ground. The shape recipe model was intended for scenes with homogeneous albedo and surface material. To test this algorithm in real scenes of this type, we selected 28 single-texture image sections from our database. These textures include statue surfaces and faceted building exteriors, such as archways and church facades (12 scenes), rocky terrain and rock piles (8), and leafy foliage (8). No logarithm or other transformation was applied to the intensity or range data (measured in meters), as this would interfere with the Lambertian model that motivates the shape recipe technique. The average size of these textures was 172,669 pixels per image. We show a log-log polar plot of |real[ZI(r, θ)]| from one image in our database in figure 1a. As can be seen in the figure, this structure appears to closely follow a power law. We claim that ZI can be reasonably modeled by B(θ)/r^α, where r is spatial frequency in polar coordinates, and B(θ) is a parameter of the model (with both real and imaginary parts) that depends only on polar angle θ. We test this claim by dividing the Fourier plane into four 45° octants (vertical, forward diagonal, horizontal, and backward diagonal), and measuring the drop-off rate in each octant separately. For each octant, we average over the octant’s included orientations and fit the result to a power-law.
The resulting values of α (averaged over all 28 images) are listed in the table below:

orientation         II           real[ZI]     imag[ZI]     ZZ
horizontal          2.47 ±0.10   3.61 ±0.18   3.84 ±0.19   2.84 ±0.11
forward diagonal    2.61 ±0.11   3.67 ±0.17   3.95 ±0.17   2.92 ±0.11
vertical            2.76 ±0.11   3.62 ±0.15   3.61 ±0.24   2.89 ±0.11
backward diagonal   2.56 ±0.09   3.69 ±0.17   3.84 ±0.23   2.86 ±0.10
mean                2.60 ±0.10   3.65 ±0.14   3.87 ±0.16   2.88 ±0.10

For each octant, the correlation coefficient between the power-law fit and the actual spectrum ranged from 0.91 to 0.99, demonstrating that each octant is well-fit by a power-law (note that averaging over orientation smooths out some fine structures in each spectrum). Furthermore, α varies little across orientations, showing that our model fits ZI closely. The above findings predict that K = ZI/II also obeys a power-law. Subtracting αII from αreal[ZI] and αimag[ZI], we find that real[K] drops off at 1/r^1.1 and imag[K] drops off at 1/r^1.2. Thus, we have that K(r, θ) ≈ BK(θ)/r. Now that we know that K can be fit (roughly) by a 1/r power-law, we can offer some insight into why K tends to approximate this general form. The 1/r drop-off in the imaginary part of K can be explained by the linear Lambertian model of shading, with oblique lighting conditions. This argument was used by Freeman and Torralba [9] in their theoretical motivation for choosing c = 2. The linear Lambertian model is obtained by taking only the linear terms of the Taylor series of the Lambertian equation. Under this model, if constant albedo is assumed, and no occlusion is present, then with lighting from above, i(x, y) = a ∂z/∂y, where a is some constant. In the Fourier domain, I(u, v) = 2πajv Z(u, v), where j = √−1. Thus, we have that

ZI(r, θ) = −j II(r, θ) / (2πa r sin(θ))    (3)

K(r, θ) = −j (1/r) · 1/(2πa sin(θ))    (4)

In other words, under this model, K obeys a 1/r power-law. This means that each octave of K is half of the octave before it.
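The octant analysis above amounts to radially averaging a spectrum and fitting a line in log-log coordinates. The following is a hedged sketch on a synthetic spectrum with a known exponent; the logarithmic binning is an illustrative choice rather than the paper's exact procedure, and the fit here is over all angles rather than one 45° octant.

```python
import numpy as np

# Build a synthetic power spectrum that follows B / r^alpha exactly,
# then recover alpha with a log-log least-squares fit.
n = 128
u = np.fft.fftfreq(n)[None, :]
v = np.fft.fftfreq(n)[:, None]
r = np.hypot(u, v)

alpha_true = 2.5
spec = np.zeros_like(r)
mask = r > 0
spec[mask] = 1.0 / r[mask] ** alpha_true

# Average the spectrum in logarithmically spaced radial bins.
bins = np.logspace(np.log10(1.0 / n), np.log10(0.5), 20)
idx = np.digitize(r[mask], bins)
log_r, log_s = [], []
for b in range(1, len(bins)):
    sel = idx == b
    if sel.any():
        log_r.append(np.log(r[mask][sel].mean()))
        log_s.append(np.log(spec[mask][sel].mean()))

# The slope of log S versus log r is -alpha.
slope, intercept = np.polyfit(log_r, log_s, 1)
alpha_hat = -slope
```

On real spectra the same fit would be applied per octant, with the differences αreal[ZI] − αII and αimag[ZI] − αII giving the drop-off rates of K.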
Our empirical finding that the imaginary part of K obeys a 1/r power-law confirms Freeman and Torralba’s reasoning behind choosing c = 2 for shape recipes. However, the linear Lambertian shading model predicts that only the imaginary part of ZI should obey a power-law. In fact, according to equation 3, this model predicts that the real part of ZI should be zero. Yet, in our database, the real part of ZI was typically stronger than the imaginary part. The real part of ZI is the Fourier transform of the even-symmetric part of the cross-correlation function, and it includes the direct correlation cov[i, z]. In a previous study of the statistics of natural range images [6], we have found that darker pixels in the image tend to be farther away, resulting in significantly negative cov[i, z]. We attributed this phenomenon to cast shadows in complex scenes: object interiors and concavities are farther away than object exteriors, and these regions are the most likely to be in shadow. This effect can be observed wherever shadows are found, such as the crevices of figure 2a. However, the effect appears strongest in complex objects with many shadows and concavities, like folds of cloth, or foliage. We found that the real part of ZI is especially likely to be strongly negative in images of foliage. Such correlation between depth and darkness has been predicted theoretically for diffuse lighting conditions, such as cloudy days, when viewed from directly above [5]. The fact that all of our images were taken under cloudless, sunny conditions and with oblique lighting from above suggests that this cue may be more important than at first realized. Psychophysical experiments have demonstrated that in the absence of all other cues, darker image regions appear farther, suggesting that the human visual system makes use of this cue for depth inference (see [6] for a review, also [10]). 
We believe that the 1/r drop-off rate observed in real[K] is due to the fact that concavities with smaller apertures but equal depths tend to be darker. In other words, for a given level of darkness, a smaller aperture corresponds to a more shallow hole. 4 Inference using power-law models Armed with a better understanding of the statistics of real scenes, we are better prepared to develop successful depth inference algorithms. We now know that fine details in ZI/II do not generalize across scales, but that its coarse structure roughly follows a 1/r power-law. We can exploit this statistical trend directly. We can simply fit our BK(θ)/r power law to ZIlow/II, and then use this estimate of K to reconstruct the high frequency range data. Specifically, from the low-resolution range and intensity image, we compute low resolution spectra of ZI and II. From the highest frequency octave of the low-resolution images, we estimate BII(θ) and BZI(θ). Any standard interpolation method will work to estimate these functions. We chose a cos³(θ + πφ/4) basis function based on steerable filters [2]. Figure 2: a) An example intensity image from our database. b) A Lambertian rendering of the corresponding low resolution range image. c) Power-law method output. Shape recipe reconstructions show a similar amount of texture, but tests show that texture generated by the power-law method is more highly correlated with the true texture. d) The imaginary parts of Krecipe and e) Kpowerlaw for the same scene. Dark regions are negative, light regions are positive. The grey center region in each estimate of K corresponds to the low spatial frequencies, where range data is not inferred because it is already known. Notice that Krecipe oscillates over scale. We can now estimate the high spatial frequencies of the range image, z.
Define

Kpowerlaw(r, θ) = Fhigh(r) · (BZI(θ)/BII(θ)) / r    (5)

Zpowerlaw = Zlow + I · Kpowerlaw    (6)

where Fhigh is the high-pass filter associated with the two highest resolution bands of the steerable filter pyramid of the full-resolution image. 5 Empirical evaluation In this section, we compare the performance of shape recipes with our new approach, using our ground-truth database of high-resolution range and intensity image pairs described in section 3. For each range image in our database, a low-resolution (but still full-sized) range image, zlow, was generated by setting to zero the top two steerable filter pyramid layers. Both algorithms accepted as input the low-resolution range image and the high-resolution intensity image, and the output was compared with the original high-resolution range image. The high resolution output corresponds to a 4-fold increase in spatial resolution (or a 16-fold increase in total size). Although the authors reported encouraging enhancements of stereo output, shape recipes have not previously been evaluated against real, ground-truth high resolution range data. To maximize its performance, we implemented shape recipes using ridge regression, with the ridge coefficient obtained using cross-validation. Linear kernels were learned (and the output evaluated) over a region of the image at least 21 pixels from the image border. For each high-resolution output, we measured the sum-squared error between the reconstruction (zrecipe or zpowerlaw) and the original range image (z). We compared this with the sum-squared error of the low-resolution range image zlow to get the percent reduction in sum-squared error:

error reductionrecipe = (errlow − errrecipe) / errlow

This measure of error reflects the performance of the method independently of the variance or absolute depth of the range image. On average, shape recipe reconstructions had 1.3% less mean-squared error than zlow. Shape recipes improved 21 of the 28 images.
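Equations 5 and 6 can be assembled on a discrete frequency grid along the following lines. This is a sketch only: the BII, BZI, and high-pass Fhigh below are illustrative stand-ins for the quantities the paper estimates from the low-resolution spectra and the steerable-pyramid band filters.

```python
import numpy as np

n = 64
u = np.fft.fftfreq(n)[None, :]
v = np.fft.fftfreq(n)[:, None]
r = np.hypot(u, v)
theta = np.arctan2(v, u)

# Illustrative stand-ins (assumptions, not the paper's fitted values):
# B_II and B_ZI would be interpolated from the highest available octave of
# the low-resolution spectra; F_high is a crude high-pass indicator in
# place of the steerable-pyramid band filter.
B_II = 1.0 + 0.2 * np.cos(theta) ** 2
B_ZI = -0.5j * np.sin(theta)          # Lambertian-like imaginary part
F_high = (r >= 0.125).astype(float)   # pass only the top frequency bands

# Eq. (5): K_powerlaw(r, theta) = F_high(r) * (B_ZI / B_II) / r.
K_powerlaw = np.zeros_like(B_ZI)
nz = r > 0
K_powerlaw[nz] = F_high[nz] * (B_ZI[nz] / B_II[nz]) / r[nz]

# Eq. (6): Z_powerlaw = Z_low + I * K_powerlaw.
rng = np.random.default_rng(2)
I = np.fft.fft2(rng.standard_normal((n, n)))
Z_low = np.zeros((n, n), dtype=complex)
Z_powerlaw = Z_low + I * K_powerlaw
```

Because F_high vanishes at low frequencies, the power-law filter leaves the known low-resolution range content untouched, just as Krecipe does.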
Our new approach had 2.2% less mean-squared error than zlow, and improved 26 of the 28 images. We cannot expect the error reduction values to be very high, partly because our images are highly complex natural scenes, and also because some noise was present in both the range and intensity images. Therefore, it is difficult to assess how much of the remaining error could be recovered by a superior algorithm, and how much is simply due to sensor noise. As a comparison, we generated an optimal linear reconstruction, zoptlin, by learning 11 × 11 shape recipe kernels for the two high resolution pyramid bands directly from the ground-truth high resolution range image. This reconstruction provides a loose upper bound on the degree of improvement possible by linear shape methods. We then measured the percentage of linearly achievable improvement for each image:

improvementrecipe = (errlow − errrecipe) / (errlow − erroptlin)

Shape recipes yielded an average improvement of 23%. Our approach achieved an improvement of 44%, nearly a two-fold enhancement over shape recipes. 6 The relative strengths of shading and shadow cues Earlier we showed that Lambertian shading alone predicts that the real part of ZI in natural scenes is empty of useful correlations between images and range images. Yet in our database, the real part of ZI, which we believe is related to shadow cues, was often stronger than the imaginary component. Our depth-inference algorithm offers an opportunity to compare the performance of shading cues versus shadow cues. We ran our algorithm again, except that we set the real part of Kpowerlaw to zero. This yielded only a 12% improvement. However, when we ran the algorithm after setting imag[K] to zero, 32% improvement was achieved. Thus, 72% of the algorithm’s total improvement was due to shadow cues.
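The two evaluation measures are straightforward to compute; a small sketch with placeholder error values (the numbers below are illustrative, not the paper's results):

```python
import numpy as np

def sum_squared_error(z_hat, z):
    """Sum-squared error between a reconstruction and the true range image."""
    return float(np.sum((z_hat - z) ** 2))

# Illustrative values standing in for one image's errors.
err_low, err_recipe, err_optlin = 100.0, 97.8, 90.0

# Percent reduction in sum-squared error relative to the low-res input.
error_reduction = (err_low - err_recipe) / err_low

# Fraction of the linearly achievable improvement actually obtained,
# using the optimal linear reconstruction as a loose upper bound.
improvement = (err_low - err_recipe) / (err_low - err_optlin)
```

Normalizing by err_low makes the first measure independent of each image's variance and absolute depth; normalizing by err_low − err_optlin makes the second comparable across images of differing difficulty.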
When the database is broken down into categories, the real part of ZI is responsible for 96% of total improvement in foliage scenes, 76% in rocky terrain scenes, and 35% in urban scenes (statue surfaces and building facades). As expected, the algorithm relies more heavily on the real part of ZI in environments rich in cast shadows. These results show that shadow cues are far more useful than was previously expected, and also that they can be exploited more easily than was previously thought possible, using only simple linear relationships that might easily be incorporated into linear shape-from-shading techniques. We feel that these insights into natural scene statistics are the most important contributions of this paper. 7 Discussion The power-law extension to shape recipes not only offers a substantial improvement in performance, but it also greatly reduces the number of parameters that must be learned. The original shape recipes required one 11×11 kernel, or 121 parameters, for each orientation of the steerable filters. The new algorithm requires only two parameters for each orientation (the real and the imaginary parts of BK(θ)). This suggests that the new approach has captured only those components of K that generalize across scales, disregarding all others. While it is encouraging that the power-law algorithm is highly parsimonious, it also means that fewer scene properties are encoded in the shape recipe kernels than was previously hoped [3]. For example, complex properties of the material and surface reflectance cannot be encoded. We believe that the B(θ) parameter of the power-law model can be determined almost entirely by the direction of illumination and the prominence of cast shadows (see figure 1b). This suggests that the power-law algorithm of this paper would work equally well for scenes with multiple materials. To capture more complex material properties, nonlinear methods and probabilistic methods may achieve greater success. 
However, when designing these more sophisticated methods, care must be taken to avoid the same pitfall encountered by shape recipes: not all properties of a scene can be scale-invariant simultaneously. 8 Appendix Shape recipes infer each high resolution band of the range using equation 1. Let σ = 2^(n−m). If we take the Fourier transform of equation 1, we get

Zhigh · Fm,φ = (1/c^(n−m)) Kn,φ(u/σ, v/σ) · (I · Fm,φ)    (7)

where Fm,φ is the Fourier transform of the steerable filter at level m and orientation φ, and Zhigh is the inferred high spatial frequency components of the range image. If we take the steerable pyramid decomposition of Zhigh and then transform it back, we get Zhigh again, and so:

I · Krecipe = Zhigh = Σ_{m<n, φ} Zhigh Fm,φ F*m,φ    (8)
            = I Σ_{m<n, φ} (1/c^(n−m)) Kn,φ(u/σ, v/σ) · Fm,φ · F*m,φ    (9)

The steerable filters at each level are simply a dilation of the steerable filters of preceding levels: Fm,φ(u, v) = Fn,φ(u/σ, v/σ). Thus, recalling that σ = 2^(n−m), we have

Krecipe = Σ_{m<n, φ} (1/c^(n−m)) Kn,φ(u/σ, v/σ) · Fn,φ(u/σ, v/σ) · F*n,φ(u/σ, v/σ)    (10)

The steerable filters Fn,φ are band-pass filters, and they are essentially zero outside of octave n. Thus, each octave of Krecipe is identical to the octave before it, except reduced by a constant scale factor c. In other words, shape recipes extrapolate Klow by copying the highest available octave of Klow (or some estimation of it) into each successive octave. An example of Krecipe can be seen in figure 2d. This research was funded in part by NSF IIS-0413211, Penn Dept of Health-MPC 05-06-2. Brian Potetz is supported by an NSF Graduate Research Fellowship. References [1] J. E. Cryer, P. S. Tsai and M. Shah, “Integration of shape from shading and stereo,” Pattern Recognition, 28(7):1033–1043, 1995. [2] W. T. Freeman, E. H. Adelson, “The design and use of steerable filters,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 13:891–906, 1991. [3] W. T. Freeman and A.
Torralba, “Shape Recipes: Scene representations that refer to the image,” Advances in Neural Information Processing Systems 15 (NIPS), MIT Press, 2003. [4] C. Q. Howe and D. Purves, “Range image statistics can explain the anomalous perception of length,” Proc. Nat. Acad. Sci. U.S.A., 99:13184–13188, 2002. [5] M. S. Langer and S. W. Zucker, “Shape-from-shading on a cloudy day,” J. Opt. Soc. Am. A, 11:467–478, 1994. [6] B. Potetz, T. S. Lee, “Statistical correlations between two-dimensional images and three-dimensional structures in natural scenes,” J. Opt. Soc. Am. A, 20:1292–1303, 2003. [7] D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” IJCV, 47(1/2/3):7–42, 2002. [8] A. Torralba, A. Oliva, “Depth estimation from image structure,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(9):1226–1238, 2002. [9] A. Torralba and W. T. Freeman, “Properties and applications of shape recipes,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003. [10] C. W. Tyler, “Diffuse illumination as a default assumption for shape-from-shading in the absence of shadows,” J. Imaging Sci. Technol., 42:319–325, 1998.
|
2005
|
34
|
2,849
|
On the Convergence of Eigenspaces in Kernel Principal Component Analysis Laurent Zwald Département de Mathématiques, Université Paris-Sud, Bât. 425, F-91405 Orsay, France Laurent.Zwald@math.u-psud.fr Gilles Blanchard Fraunhofer First (IDA), Kékuléstr. 7, D-12489 Berlin, Germany blanchar@first.fhg.de Abstract This paper presents a non-asymptotic statistical analysis of Kernel-PCA with a focus different from the one proposed in previous work on this topic. Here, instead of considering the reconstruction error of KPCA, we are interested in approximation error bounds for the eigenspaces themselves. We prove an upper bound depending on the spacing between eigenvalues but not on the dimensionality of the eigenspace. As a consequence, this allows us to infer stability results for these estimated spaces. 1 Introduction. Principal Component Analysis (PCA for short in the sequel) is a widely used tool for data dimensionality reduction. It consists in finding the most relevant lower-dimension projection of some data, in the sense that the projection should keep as much of the variance of the original data as possible. If the target dimensionality of the projected data is fixed in advance, say D – an assumption that we will make throughout the present paper – the solution of this problem is obtained by considering the projection onto the span SD of the first D eigenvectors of the covariance matrix. Here by ‘first D eigenvectors’ we mean eigenvectors associated to the D largest eigenvalues counted with multiplicity; hereafter, with some abuse, the span of the first D eigenvectors will be called the “D-eigenspace” for short when there is no risk of confusion. The introduction of the ‘kernel trick’ has made it possible to extend this methodology to data mapped into a kernel feature space, then called KPCA [8].
The interest of this extension is that, while still linear in feature space, it gives rise to a nonlinear interpretation in the original space – vectors in the kernel feature space can be interpreted as nonlinear functions on the original space. For PCA as well as KPCA, the true covariance matrix (resp. covariance operator) is not known and has to be estimated from the available data, a procedure which in the case of kernel spaces is linked to the so-called Nyström approximation [13]. The subspace given as an output is then obtained as the D-eigenspace bSD of the empirical covariance matrix or operator. An interesting question from a statistical or learning theoretical point of view is then how reliable this estimate is. This question has already been studied [10, 2] from the point of view of the reconstruction error of the estimated subspace. What this means is that (assuming the data is centered in kernel space for simplicity) the average reconstruction error (squared norm of the distance to the projection) of bSD converges to the (optimal) reconstruction error of SD, and that bounds are known on the rate of convergence. However, this does not tell us much about the convergence of bSD to SD itself – since two very different subspaces can have a very similar reconstruction error, in particular when some eigenvalues are very close to each other (the gap between the eigenvalues will actually appear as a central point of the analysis to come). In the present work, we set out to study the behavior of these D-eigenspaces themselves: we provide finite sample bounds describing the closeness of the D-eigenspaces of the empirical covariance operator to the true one. There are several broad motivations for this analysis. First, the reconstruction error alone is a valid criterion only if one really plans to perform dimensionality reduction of the data and stop there.
However, PCA is often used merely as a preprocessing step and the projected data is then submitted to further processing (which could be classification, regression or something else). In particular for KPCA, the projection subspace in the kernel space can be interpreted as a subspace of functions on the original space; one then expects these functions to be relevant for the data at hand and for some further task (see e.g. [3]). In these cases, if we want to analyze the full procedure (in a learning theoretical sense), it is desirable to have more precise information on the selected subspace than just its reconstruction error. In particular, from a learning complexity point of view, it is important to ensure that the functions used for learning stay in a set of limited complexity, which is ensured if the selected subspace is stable (which is a consequence of its convergence). The approach we use here is based on perturbation bounds, and we essentially follow the path pioneered by Koltchinskii and Giné [7] (see also [4]) using tools of operator perturbation theory [5]. Similar methods have been used to prove the consistency of spectral clustering [12, 11]. An important difference here is that we want to study directly the convergence of the whole subspace spanned by the first D eigenvectors instead of the separate convergence of the individual eigenvectors; in particular, we are interested in how D acts as a complexity parameter. The important point in our main result is that it does not: only the gap between the D-th and the (D + 1)-th eigenvalue comes into account. This means that there is no increase in complexity (as far as this bound is concerned: of course we cannot exclude that better bounds may be obtained in the future) between estimating the D-th eigenvector alone and estimating the span of the first D eigenvectors. Our contribution in the present work is thus • to adapt the operator perturbation result of [7] to D-eigenspaces.
• to get non-asymptotic bounds on the approximation error of Kernel-PCA eigenspaces thanks to the previous tool. In section 2 we briefly introduce the notation, explain the main ingredients used, and obtain a first bound based on controlling the first D eigenvectors separately, which depends on the dimension D. In section 3 we explain why this first bound is actually suboptimal and derive an improved bound as a consequence of an operator perturbation result that is more adapted to our needs and deals directly with the D-eigenspace as a whole. Section 4 concludes and discusses the obtained results. Mathematical proofs are found in the appendix. 2 First result. Notation. The variable of interest X takes its values in some measurable space X, following the distribution P. We consider KPCA and are therefore primarily interested in the mapping of X into a reproducing kernel Hilbert space H with kernel function k through the feature mapping ϕ(x) = k(x, ·). The objective of the kernel PCA procedure is to recover a D-dimensional subspace SD of H such that the projection of ϕ(X) on SD has maximum averaged squared norm. All operators considered in what follows are Hilbert-Schmidt, and the norm considered for these operators will be the Hilbert-Schmidt norm unless specified otherwise. Furthermore, we only consider symmetric nonnegative operators, so that they can be diagonalized and have a discrete spectrum. Let C denote the covariance operator of the variable ϕ(X). To simplify notation we assume that the nonzero eigenvalues λ1 > λ2 > . . . of C are all simple (this is for convenience only; in the conclusion we discuss what changes have to be made if this is not the case). Let φ1, φ2, . . . be the associated eigenvectors. It is well known that the optimal D-dimensional reconstruction space is SD = span{φ1, . . . , φD}. The KPCA procedure approximates this objective by considering the empirical covariance operator, denoted Cn, and the subspace bSD spanned by its first D eigenvectors.
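In practice, the empirical D-eigenspace is computed from the eigendecomposition of the centered kernel matrix. A minimal numpy sketch (the kernel choice, bandwidth, and data below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 2))          # n samples in the original space
D = 3                                     # target eigenspace dimension

# Gaussian kernel matrix (kernel and bandwidth are illustrative choices).
sq = np.sum(X ** 2, axis=1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / 2.0)

# Center in feature space: Kc = H K H with H = I - (1/n) 11^T.
n = K.shape[0]
H = np.eye(n) - np.ones((n, n)) / n
Kc = H @ K @ H

# The first D eigenvectors of the empirical covariance operator correspond
# to the top-D eigenvectors of Kc (eigh returns eigenvalues in ascending
# order, so we select from the end).
evals, evecs = np.linalg.eigh(Kc)
top = np.argsort(evals)[::-1][:D]
lam, alpha = evals[top], evecs[:, top]

# Normalize the coefficients so the corresponding feature-space
# eigenvectors have unit norm.
alpha = alpha / np.sqrt(lam)
```

The columns of alpha are the expansion coefficients of the D feature-space eigenvectors over the mapped sample points; their span is the empirical D-eigenspace bSD.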
We denote P_{S_D}, P_{Ŝ_D} the orthogonal projectors on these spaces.

A first bound. Broadly speaking, the main steps required to obtain the type of result we are interested in are: 1. A non-asymptotic bound on the (Hilbert-Schmidt) norm of the difference between the empirical and the true covariance operators; 2. An operator perturbation result bounding the difference between spectral projectors of two operators by the norm of their difference. The combination of these two steps leads to our goal. The first step consists in the following Lemma coming from [9]:

Lemma 1 (Corollary 5 of [9]) Supposing that sup_{x∈X} k(x, x) ≤ M, with probability greater than 1 − e^{−ξ},
$$\|C_n - C\| \le \frac{2M}{\sqrt{n}}\left(1 + \sqrt{\frac{\xi}{2}}\right).$$
As for the second step, [7] provides the following perturbation bound (see also e.g. [12]):

Theorem 2 (Simplified version of [7], Theorem 5.2) Let A be a symmetric positive Hilbert-Schmidt operator of the Hilbert space H with simple positive eigenvalues λ1 > λ2 > . . . For an integer r such that λ_r > 0, let δ̃_r = δ_r ∧ δ_{r−1}, where δ_r = ½(λ_r − λ_{r+1}). Let B ∈ HS(H) be another symmetric operator such that ∥B∥ < δ̃_r/2 and (A + B) is still a positive operator with simple nonzero eigenvalues. Let P_r(A) (resp. P_r(A + B)) denote the orthogonal projector onto the subspace spanned by the r-th eigenvector of A (resp. (A + B)). Then these projectors satisfy
$$\|P_r(A) - P_r(A+B)\| \le \frac{2\|B\|}{\tilde\delta_r}.$$
Remark about the approximation error of the eigenvectors: let us recall that a control over the Hilbert-Schmidt norm of the projections onto eigenspaces implies a control on the approximation errors of the eigenvectors themselves. Indeed, let φ_r, ψ_r denote the (normalized) r-th eigenvectors of the operators above, with signs chosen so that ⟨φ_r, ψ_r⟩ > 0. Then
$$\|P_{\varphi_r} - P_{\psi_r}\|^2 = 2\left(1 - \langle \varphi_r, \psi_r\rangle^2\right) \ge 2\left(1 - \langle \varphi_r, \psi_r\rangle\right) = \|\varphi_r - \psi_r\|^2.$$
Now, the orthogonal projector on the direct sum of the first D eigenspaces is the sum $\sum_{r=1}^{D} P_r$. 
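A finite-dimensional sanity check of Theorem 2 (not from the paper; the Hilbert-Schmidt norm is instantiated as the Frobenius norm, and all matrices are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
lams = np.array([5.0, 3.0, 2.0, 1.0, 0.5])      # simple positive eigenvalues
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))
A = Q @ np.diag(lams) @ Q.T                     # symmetric positive operator

r = 2                                           # study the 2nd eigenvector
delta_r = 0.5 * (lams[r - 1] - lams[r])         # delta_r
delta_prev = 0.5 * (lams[r - 2] - lams[r - 1])  # delta_{r-1}
tdelta = min(delta_r, delta_prev)               # tilde(delta)_r

B = rng.normal(size=(5, 5))
B = B + B.T                                     # symmetric perturbation
B *= 0.4 * tdelta / np.linalg.norm(B)           # enforce ||B|| < tdelta / 2

def proj_r(M, r):
    """Orthogonal projector onto the r-th eigenvector (eigenvalues in
    non-increasing order)."""
    w, V = np.linalg.eigh(M)
    v = V[:, np.argsort(w)[::-1][r - 1]]
    return np.outer(v, v)

lhs = np.linalg.norm(proj_r(A, r) - proj_r(A + B, r))
assert lhs <= 2 * np.linalg.norm(B) / tdelta    # bound of Theorem 2
```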
Using the triangle inequality, and combining Lemma 1 and Theorem 2, we conclude that with probability at least 1 −e−ξ the following holds:
$$\|P_{S_D} - P_{\widehat S_D}\| \le \left(\sum_{r=1}^{D} \tilde\delta_r^{-1}\right) \frac{4M}{\sqrt{n}}\left(1 + \sqrt{\frac{\xi}{2}}\right),$$
provided that $n \ge 16 M^2 \left(1 + \sqrt{\xi/2}\right)^2 \sup_{1\le r\le D} \tilde\delta_r^{-2}$. The disadvantage of this bound is that we are penalized on the one hand by the (inverse) gaps between the eigenvalues, and on the other by the dimension D (because we have to sum the inverse gaps from 1 to D). In the next section we improve the operator perturbation bound to obtain a result where only the gap δ_D enters into account.

3 Improved Result.

We first prove the following variant of the operator perturbation property, which better corresponds to our needs by taking directly into account the projection on the first D eigenvectors at once. The proof uses the same kind of techniques as in [7].

Theorem 3 Let A be a symmetric positive Hilbert-Schmidt operator of the Hilbert space H with simple nonzero eigenvalues λ1 > λ2 > . . . Let D > 0 be an integer such that λ_D > 0, and δ_D = ½(λ_D − λ_{D+1}). Let B ∈ HS(H) be another symmetric operator such that ∥B∥ < δ_D/2 and (A + B) is still a positive operator. Let P^D(A) (resp. P^D(A + B)) denote the orthogonal projector onto the subspace spanned by the first D eigenvectors of A (resp. (A + B)). Then these satisfy:
$$\|P^D(A) - P^D(A+B)\| \le \frac{\|B\|}{\delta_D}. \quad (1)$$
This then gives rise to our main result on KPCA:

Theorem 4 Assume that sup_{x∈X} k(x, x) ≤ M. Let S_D, Ŝ_D be the subspaces spanned by the first D eigenvectors of C, resp. C_n, defined earlier. Denoting λ1 > λ2 > . . . the eigenvalues of C, if D > 0 is such that λ_D > 0, put δ_D = ½(λ_D − λ_{D+1}) and
$$B_D = \frac{2M}{\delta_D}\left(1 + \sqrt{\frac{\xi}{2}}\right).$$
Then provided that n ≥ B_D², the following bound holds with probability at least 1 − e^{−ξ}:
$$\|P_{S_D} - P_{\widehat S_D}\| \le \frac{B_D}{\sqrt{n}}. \quad (2)$$
This entails in particular
$$\widehat S_D \subset \left\{\, g + h,\; g \in S_D,\; h \in S_D^{\perp},\; \|h\|_{\mathcal H_k} \le 2 B_D\, n^{-\frac12}\, \|g\|_{\mathcal H_k} \right\}. \quad (3)$$
The important point here is that the approximation error now only depends on D through the (inverse) gap between the D-th and (D + 1)-th eigenvalues. Note that using the results of section 2, we would have obtained exactly the same bound for estimating the D-th eigenvector only — or even a worse bound, since δ̃_D = δ_D ∧ δ_{D−1} appears in that case. Thus, at least from the point of view of this technique (which could still yield suboptimal bounds), there is no increase of complexity between estimating the D-th eigenvector alone and estimating the span of the first D eigenvectors. Note that the inclusion (3) can be interpreted geometrically by saying that for any vector in Ŝ_D, the tangent of the angle between this vector and its projection on S_D is upper bounded by 2B_D/√n, which we can interpret as a stability property.

Comment about the centered case. In the actual (K)PCA procedure, the data is first empirically recentered, so that one has to consider the centered covariance operator and its empirical counterpart. A result similar to Theorem 4 also holds in this case (up to some additional constant factors). Indeed, a result similar to Lemma 1 holds for the recentered operators [2]. Combined again with Theorem 3, this allows us to reach similar conclusions for the "true" centered KPCA.

4 Conclusion and Discussion

In this paper, finite sample size confidence bounds for the eigenspaces of Kernel-PCA (the D-eigenspaces of the empirical covariance operator) are provided using tools of operator perturbation theory. This provides a first step towards an in-depth complexity analysis of algorithms using KPCA as pre-processing, and towards taking into account the randomness of the obtained models (e.g. [3]). 
We proved a bound in which the complexity factor for estimating the eigenspace S_D by its empirical counterpart depends only on the inverse gap between the D-th and (D + 1)-th eigenvalues. In addition to the previously cited works, we take into account the centering of the data and obtain comparable rates. In this work we assumed, for simplicity of notation, the eigenvalues to be simple. In the case where the covariance operator C has nonzero eigenvalues with multiplicities m1, m2, . . . possibly larger than one, the analysis remains the same except for one point: we have to assume that the dimension D of the subspaces considered is of the form m1 + · · · + mr for a certain r. This could seem restrictive in comparison with the results obtained for estimating the sum of the first D eigenvalues themselves [2] (which is linked to the reconstruction error in KPCA), where no such restriction appears. However, it should be clear that we need this restriction when considering D-eigenspaces themselves, since the target space has to be unequivocally defined, otherwise convergence cannot occur. Thus, it can happen in this special case that the reconstruction error converges while the projection space itself does not. A common point of the two analyses (over the spectrum and over the eigenspaces) lies in the fact that the bounds involve an inverse gap in the eigenvalues of the true covariance operator. Finally, how tight are these bounds, and do they at least carry some correct qualitative information about the behavior of the eigenspaces? Asymptotic results (central limit theorems) in [6, 4] provide the correct goal to shoot for, since they actually give the limit distributions of these quantities. They imply that there is still important ground to cover before bridging the gap between asymptotic and non-asymptotic. This of course opens directions for future work.

Acknowledgements: This work was supported in part by the PASCAL Network of Excellence (EU # 506778). 
A Appendix: proofs.

Proof of Lemma 1. This lemma is proved in [9]; we give a short proof for the sake of completeness. We have ∥C_n − C∥ = ∥(1/n) ∑_{i=1}^{n} C_{X_i} − E[C_X]∥ with ∥C_X∥ = ∥ϕ(X) ⊗ ϕ(X)^*∥ = k(X, X) ≤ M. We can apply the bounded difference inequality to the variable ∥C_n − C∥, so that with probability greater than 1 − e^{−ξ},
$$\|C_n - C\| \le \mathbb{E}\left[\|C_n - C\|\right] + 2M\sqrt{\frac{\xi}{2n}}.$$
Moreover, by Jensen's inequality, $\mathbb{E}\left[\|C_n - C\|\right] \le \mathbb{E}\big[\|\tfrac1n \sum_{i=1}^{n} C_{X_i} - \mathbb{E}[C_X]\|^2\big]^{1/2}$, and a simple calculation leads to
$$\mathbb{E}\Big[\big\|\tfrac1n \textstyle\sum_{i=1}^{n} C_{X_i} - \mathbb{E}[C_X]\big\|^2\Big] = \frac1n\, \mathbb{E}\left[\|C_X - \mathbb{E}[C_X]\|^2\right] \le \frac{4M^2}{n}.$$
This concludes the proof of Lemma 1. □

Proof of Theorem 3. The variation of this proof with respect to Theorem 5.2 in [7] is (a) to work directly in an (infinite-dimensional) Hilbert space, requiring extra caution for some details, and (b) to obtain an improved bound by considering D-eigenspaces at once. The key property of Hilbert-Schmidt operators allowing to work directly in an infinite-dimensional setting is that HS(H) is both a right and a left ideal of L_c(H, H), the Banach space of all continuous linear operators of H endowed with the operator norm ∥·∥_op. Indeed, for all T ∈ HS(H) and S ∈ L_c(H, H), TS and ST belong to HS(H) with
$$\|TS\| \le \|T\|\,\|S\|_{op} \quad \text{and} \quad \|ST\| \le \|T\|\,\|S\|_{op}. \quad (4)$$
The spectrum of a Hilbert-Schmidt operator T is denoted Λ(T), and the sequence of its eigenvalues in non-increasing order is denoted λ(T) = (λ1(T) ≥ λ2(T) ≥ . . .). In the following, P^D(T) denotes the orthogonal projector onto the D-eigenspace of T. The Hoffmann-Wielandt inequality in the infinite-dimensional setting [1] yields that
$$\|\lambda(A) - \lambda(A+B)\|_{\ell_2} \le \|B\| \le \frac{\delta_D}{2}, \quad (5)$$
implying in particular that
$$\forall i > 0, \quad |\lambda_i(A) - \lambda_i(A+B)| \le \frac{\delta_D}{2}. \quad (6)$$
Results found in [5], p. 39, yield the formula
$$P^D(A) - P^D(A+B) = -\frac{1}{2i\pi} \int_{\gamma} \left( R_A(z) - R_{A+B}(z) \right) dz \;\in L_c(H, H), \quad (7)$$
where R_A(z) = (A − z Id)^{−1} is the resolvent of A, provided that γ is a simple closed curve in ℂ enclosing exactly the first D eigenvalues of A and of (A + B). Moreover, the same reference (p. 60) states that for ξ in the complement of Λ(A),
$$\|R_A(\xi)\|_{op} = \mathrm{dist}(\xi, \Lambda(A))^{-1}. \quad (8)$$
The proof of the theorem now relies on a simple choice for the closed curve γ in (7), consisting of three straight lines and a semi-circle of radius L (the contour, depicted in the original paper's figure, encloses λ1, . . . , λD; figure omitted). For all L > δ_D/2, γ intersects neither the spectrum of A (by equation (6)) nor the spectrum of A + B. Moreover, the eigenvalues of A (resp. A + B) enclosed by γ are exactly λ1(A), . . . , λD(A) (resp. λ1(A + B), . . . , λD(A + B)). Moreover, for z ∈ γ, T(z) = R_A(z) − R_{A+B}(z) = −R_{A+B}(z) B R_A(z) belongs to HS(H) and depends continuously on z by (4). Consequently,
$$\|P^D(A) - P^D(A+B)\| \le \frac{1}{2\pi}\int_a^b \|(R_A - R_{A+B})(\gamma(t))\|\,|\gamma'(t)|\,dt .$$
Let $S_N = \sum_{n=0}^{N} (-1)^n (R_A(z)B)^n R_A(z)$. The identity $R_{A+B}(z) = (\mathrm{Id} + R_A(z)B)^{-1} R_A(z)$ and, for z ∈ γ and L > δ_D, the estimate
$$\|R_A(z)B\|_{op} \le \|R_A(z)\|_{op}\,\|B\| \le \frac{\delta_D}{2\,\mathrm{dist}(z, \Lambda(A))} \le \frac{1}{2}$$
imply that S_N converges to R_{A+B}(z) in operator norm (uniformly for z ∈ γ). Using property (4), since B ∈ HS(H), $S_N B R_A(z)$ converges in Hilbert-Schmidt norm to $R_{A+B}(z) B R_A(z) = R_{A+B}(z) - R_A(z)$. Finally,
$$R_A(z) - R_{A+B}(z) = \sum_{n\ge 1} (-1)^n (R_A(z)B)^n R_A(z),$$
where the series converges in HS(H), uniformly in z ∈ γ. Using again property (4) and (8) implies
$$\|(R_A - R_{A+B})(\gamma(t))\| \le \sum_{n\ge 1} \|R_A(\gamma(t))\|_{op}^{n+1}\,\|B\|^n \le \sum_{n\ge 1} \frac{\|B\|^n}{\mathrm{dist}^{n+1}(\gamma(t), \Lambda(A))} .$$
Finally, since for L > δ_D we have ∥B∥ ≤ δ_D/2 ≤ dist(γ(t), Λ(A))/2,
$$\|P^D(A) - P^D(A+B)\| \le \frac{\|B\|}{\pi}\int_a^b \frac{|\gamma'(t)|}{\mathrm{dist}^2(\gamma(t), \Lambda(A))}\,dt .$$
Splitting the last integral into four parts according to the definition of the contour γ, we obtain
$$\int_a^b \frac{|\gamma'(t)|}{\mathrm{dist}^2(\gamma(t), \Lambda(A))}\,dt \le \frac{2\arctan(L/\delta_D)}{\delta_D} + \frac{\pi}{L} + \frac{2\mu_1(A) - (\mu_D(A) - \delta_D)}{L^2},$$
and letting L go to infinity leads to the result. □

Proof of Theorem 4. Lemma 1 and Theorem 3 yield inequality (2). Together with the assumption n ≥ B_D², this implies ∥P_{S_D} − P_{Ŝ_D}∥ ≤ 1/2. Let f ∈ Ŝ_D: f = P_{S_D}(f) + P_{S_D^⊥}(f). Lemma 5 below with F = S_D and G = Ŝ_D, and the fact that the operator norm is bounded by the Hilbert-Schmidt norm, imply that
$$\|P_{S_D^\perp}(f)\|^2_{\mathcal H_k} \le \frac{4}{3}\,\|P_{S_D} - P_{\widehat S_D}\|^2\,\|P_{S_D}(f)\|^2_{\mathcal H_k} .$$
Gathering the different inequalities, Theorem 4 is proved. 
□

Lemma 5 Let F and G be two vector subspaces of H such that ∥P_F − P_G∥_op ≤ 1/2. Then the following bound holds:
$$\forall f \in G, \quad \|P_{F^\perp}(f)\|^2_{H} \le \frac{4}{3}\,\|P_F - P_G\|^2_{op}\,\|P_F(f)\|^2_{H} .$$
Proof of Lemma 5. For f ∈ G, we have P_G(f) = f, hence
$$\|P_{F^\perp}(f)\|^2 = \|f - P_F(f)\|^2 = \|(P_G - P_F)(f)\|^2 \le \|P_F - P_G\|^2_{op}\,\|f\|^2 = \|P_F - P_G\|^2_{op}\left( \|P_F(f)\|^2 + \|P_{F^\perp}(f)\|^2 \right);$$
gathering the terms containing ∥P_{F^⊥}(f)∥² on the left-hand side and using ∥P_F − P_G∥²_op ≤ 1/4 leads to the conclusion. □

References
[1] R. Bhatia and L. Elsner. The Hoffman-Wielandt inequality in infinite dimensions. Proc. Indian Acad. Sci. (Math. Sci.), 104(3), pp. 483-494, 1994.
[2] G. Blanchard, O. Bousquet, and L. Zwald. Statistical properties of kernel principal component analysis. Proceedings of the 17th Conference on Learning Theory (COLT 2004), pp. 594-608. Springer, 2004.
[3] G. Blanchard, P. Massart, R. Vert, and L. Zwald. Kernel projection machine: a new tool for pattern recognition. Proceedings of the 18th Neural Information Processing Systems Conference (NIPS 2004), pp. 1649-1656. MIT Press, 2004.
[4] J. Dauxois, A. Pousse, and Y. Romain. Asymptotic theory for the principal component analysis of a vector random function: some applications to statistical inference. Journal of Multivariate Analysis, 12, pp. 136-154, 1982.
[5] T. Kato. Perturbation Theory for Linear Operators. New York: Springer-Verlag, 1966.
[6] V. Koltchinskii. Asymptotics of spectral projections of some random matrices approximating integral operators. Progress in Probability, 43:191-227, 1998.
[7] V. Koltchinskii and E. Giné. Random matrix approximation of spectra of integral operators. Bernoulli, 6(1):113-167, 2000.
[8] B. Schölkopf, A. J. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998.
[9] J. Shawe-Taylor and N. Cristianini. Estimating the moments of a random vector with applications. Proceedings of the GRETSI 2003 Conference, pp. 47-52, 2003.
[10] J. Shawe-Taylor, C. Williams, N. Cristianini, and J. Kandola. 
On the eigenspectrum of the Gram matrix and the generalisation error of Kernel PCA. IEEE Transactions on Information Theory 51 (7), p. 2510-2522, 2005. [11] U. von Luxburg, M. Belkin, and O. Bousquet. Consistency of spectral clustering. Technical Report 134, Max Planck Institute for Biological Cybernetics, 2004. [12] U. von Luxburg, O. Bousquet, and M. Belkin. On the convergence of spectral clustering on random samples: the normalized case. Proceedings of the 17th Annual Conference on Learning Theory (COLT 2004), p. 457–471. Springer, 2004. [13] C. K. I. Williams and M. Seeger. The effect of the input density distribution on kernel-based classifiers. Proceedings of the 17th International Conference on Machine Learning (ICML), p. 1159–1166. Morgan Kaufmann, 2000.
Layered Dynamic Textures Antoni B. Chan Nuno Vasconcelos Department of Electrical and Computer Engineering University of California, San Diego abchan@ucsd.edu, nuno@ece.ucsd.edu Abstract A dynamic texture is a video model that treats a video as a sample from a spatio-temporal stochastic process, specifically a linear dynamical system. One problem associated with the dynamic texture is that it cannot model video where there are multiple regions of distinct motion. In this work, we introduce the layered dynamic texture model, which addresses this problem. We also introduce a variant of the model, and present the EM algorithm for learning each of the models. Finally, we demonstrate the efficacy of the proposed model for the tasks of segmentation and synthesis of video. 1 Introduction Traditional motion representations, based on optical flow, are inherently local and have significant difficulties when faced with aperture problems and noise. The classical solution to this problem is to regularize the optical flow field [1, 2, 3, 4], but this introduces undesirable smoothing across motion edges or regions where the motion is, by definition, not smooth (e.g. vegetation in outdoor scenes). More recently, there have been various attempts to model video as a superposition of layers subject to homogeneous motion. While layered representations exhibited significant promise in terms of combining the advantages of regularization (use of global cues to determine local motion) with the flexibility of local representations (little undue smoothing), this potential has so far not fully materialized. One of the main limitations is their dependence on parametric motion models, such as affine transforms, which assume a piece-wise planar world that rarely holds in practice [5, 6]. In fact, layers are usually formulated as "cardboard" models of the world that are warped by such transformations and then stitched to form the frames in a video stream [5]. 
This severely limits the types of video that can be synthesized: while layers showed most promise as models for scenes composed of ensembles of objects subject to homogeneous motion (e.g. leaves blowing in the wind, a flock of birds, a picket fence, or highway traffic), very little progress has so far been demonstrated in actually modeling such scenes. Recently, there has been more success in modeling complex scenes as dynamic textures or, more precisely, samples from stochastic processes defined over space and time [7, 8, 9, 10]. This work has demonstrated that modeling both the dynamics and appearance of video as stochastic quantities leads to a much more powerful generative model for video than that of a “cardboard” figure subject to parametric motion. In fact, the dynamic texture model has shown a surprising ability to abstract a wide variety of complex patterns of motion and appearance into a simple spatio-temporal model. One major current limitation of the dynamic texture framework, however, is its inability to account for visual processes consisting of multiple, co-occurring, dynamic textures. For example, a flock of birds flying in front of a water fountain, highway traffic moving at different speeds, video containing both trees in the background and people in the foreground, and so forth. In such cases, the existing dynamic texture model is inherently incorrect, since it must represent multiple motion fields with a single dynamic process. In this work, we address this limitation by introducing a new generative model for video, which we denote by the layered dynamic texture (LDT). This consists of augmenting the dynamic texture with a discrete hidden variable, that enables the assignment of different dynamics to different regions of the video. Conditioned on the state of this hidden variable, the video is then modeled as a simple dynamic texture. 
By introducing a shared dynamic representation for all the pixels in the same region, the new model is a layered representation. When compared with traditional layered models, it replaces the process of layer formation based on “warping of cardboard figures” with one based on sampling from the generative model (for both dynamics and appearance) provided by the dynamic texture. This enables a much richer video representation. Since each layer is a dynamic texture, the model can also be seen as a multi-state dynamic texture, which is capable of assigning different dynamics and appearance to different image regions. We consider two models for the LDT, that differ in the way they enforce consistency of layer dynamics. One model enforces stronger consistency but has no closed-form solution for parameter estimates (which require sampling), while the second enforces weaker consistency but is simpler to learn. The models are applied to the segmentation and synthesis of sequences that are challenging for traditional vision representations. It is shown that stronger consistency leads to superior performance, demonstrating the benefits of sophisticated layered representations. The paper is organized as follows. In Section 2, we introduce the two layered dynamic texture models. In Section 3 we present the EM algorithm for learning both models from training data. Finally, in Section 4 we present an experimental evaluation in the context of segmentation and synthesis. 2 Layered dynamic textures We start with a brief review of dynamic textures, and then introduce the layered dynamic texture model. 2.1 Dynamic texture A dynamic texture [7] is a generative model for video, based on a linear dynamical system. The basic idea is to separate the visual component and the underlying dynamics into two processes. 
While the dynamics are represented as a time-evolving state process x_t ∈ ℝⁿ, the appearance of frame y_t ∈ ℝᴺ is a linear function of the current state vector, plus some observation noise. Formally, the system is described by
$$x_t = A x_{t-1} + B v_t, \qquad y_t = C x_t + \sqrt{r}\, w_t, \quad (1)$$
where A ∈ ℝ^{n×n} is a transition matrix, C ∈ ℝ^{N×n} a transformation matrix, B v_t ∼_{iid} N(0, Q) and √r w_t ∼_{iid} N(0, r I_N) the state and observation noise processes parameterized by B ∈ ℝ^{n×n} and r ∈ ℝ, and the initial state x_0 ∈ ℝⁿ is a constant. One interpretation of the dynamic texture model is that the columns of C are the principal components of the video frames, and the state vectors the PCA coefficients for each video frame. This is the case when the model is learned with the method of [7].

Figure 1: The layered dynamic texture (left), and the approximate layered dynamic texture (right). y_i is an observed pixel over time, x_j is a hidden state process, and Z is the collection of layer assignment variables z_i that assigns each pixel to one of the state processes.

An alternative interpretation considers a single pixel as it evolves over time. Each coordinate of the state vector x_t defines a one-dimensional random trajectory in time. A pixel is then represented as a weighted sum of random trajectories, where the weighting coefficients are contained in the corresponding row of C. This is analogous to the discrete Fourier transform in signal processing, where a signal is represented as a weighted sum of complex exponentials, although, for the dynamic texture, the trajectories are not necessarily orthogonal. This interpretation illustrates the ability of the dynamic texture to model the same motion under different intensity levels (e.g. cars moving from the shade into sunlight) by simply scaling the rows of C. Regardless of interpretation, the simple dynamic texture model has only one state process, which restricts the efficacy of the model to video where the motion is homogeneous. 
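The generative equations (1) can be sketched as follows (not the authors' code; the dimensions and parameter values are illustrative):

```python
import numpy as np

def sample_dynamic_texture(A, C, B, r, x0, T, rng):
    """Draw T frames from x_t = A x_{t-1} + B v_t, y_t = C x_t + sqrt(r) w_t."""
    n, N = A.shape[0], C.shape[0]
    X, Y = np.zeros((T, n)), np.zeros((T, N))
    x = x0
    for t in range(T):
        x = A @ x + B @ rng.normal(size=n)              # state evolution
        Y[t] = C @ x + np.sqrt(r) * rng.normal(size=N)  # observed frame
        X[t] = x
    return X, Y

rng = np.random.default_rng(0)
n, N, T = 10, 64, 50                       # state dim, pixels, frames
A = 0.9 * np.eye(n)                        # stable dynamics
C = rng.normal(size=(N, n)) / np.sqrt(n)   # per-pixel read-out rows
X, Y = sample_dynamic_texture(A, C, 0.1 * np.eye(n), 0.01,
                              np.zeros(n), T, rng)
```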
2.2 Layered dynamic textures We now introduce the layered dynamic texture (LDT), which is shown in Figure 1 (left). The model addresses the limitations of the dynamic texture by relying on a set of state processes X = {x^{(j)}}_{j=1}^{K} to model different video dynamics. The layer assignment variable z_i assigns pixel y_i to one of the state processes (layers), and conditioned on the layer assignments, the pixels in the same layer are modeled as a dynamic texture. In addition, the collection of layer assignments Z = {z_i}_{i=1}^{N} is modeled as a Markov random field (MRF) to ensure spatial layer consistency. The linear system equations for the layered dynamic texture are
$$x^{(j)}_t = A^{(j)} x^{(j)}_{t-1} + B^{(j)} v^{(j)}_t, \quad j \in \{1, \dots, K\}$$
$$y_{i,t} = C^{(z_i)}_i x^{(z_i)}_t + \sqrt{r^{(z_i)}}\, w_{i,t}, \quad i \in \{1, \dots, N\} \quad (2)$$
where C^{(j)}_i ∈ ℝ^{1×n} is the transformation from the hidden state to the observed pixel domain for each pixel y_i and each layer j, the noise parameters are B^{(j)} ∈ ℝ^{n×n} and r^{(j)} ∈ ℝ, the iid noise processes are w_{i,t} ∼_{iid} N(0, 1) and v^{(j)}_t ∼_{iid} N(0, I_n), and the initial states are drawn from x^{(j)}_1 ∼ N(µ^{(j)}, S^{(j)}). As a generative model, the layered dynamic texture assumes that the state processes X and the layer assignments Z are independent, i.e. layer motion is independent of layer location, and vice versa. As will be seen in Section 3, this makes the expectation-step of the EM algorithm intractable to compute in closed-form. To address this issue, we also consider a slightly different model. 2.3 Approximate layered dynamic texture We now consider a different model, the approximate layered dynamic texture (ALDT), shown in Figure 1 (right). Each pixel y_i is associated with its own state process x_i, and a different dynamic texture is defined for each pixel. However, dynamic textures associated with the same layer share the same set of dynamic parameters, which are assigned by the layer assignment variable z_i. 
Again, the collection of layer assignments Z is modeled as an MRF but, unlike the first model, conditioning on the layer assignments makes all the pixels independent. The model is described by the following linear system equations
$$x_{i,t} = A^{(z_i)} x_{i,t-1} + B^{(z_i)} v_{i,t}, \qquad y_{i,t} = C^{(z_i)}_i x_{i,t} + \sqrt{r^{(z_i)}}\, w_{i,t}, \quad i \in \{1, \dots, N\} \quad (3)$$
where the noise processes are w_{i,t} ∼_{iid} N(0, 1) and v_{i,t} ∼_{iid} N(0, I_n), and the initial states are given by x_{i,1} ∼ N(µ^{(z_i)}, S^{(z_i)}). This model can also be seen as a video extension of the popular image MRF models [11], where class variables for each pixel form an MRF grid and each class (e.g. pixels in the same segment) has some class-conditional distribution (in our case a linear dynamical system). The main difference between the two proposed models is in the enforcement of consistency of dynamics within a layer. With the LDT, consistency of dynamics is strongly enforced by requiring each pixel in the layer to be associated with the same state process. On the other hand, for the ALDT, consistency within a layer is weakly enforced by allowing the pixels to be associated with many instantiations of the state process (instantiations associated with the same layer sharing the same dynamic parameters). This weaker dependency structure enables a more efficient learning algorithm.

2.4 Modeling layer assignments The MRF which determines layer assignments has the following distribution
$$p(Z) = \frac{1}{\mathcal Z} \prod_i \psi_i(z_i) \prod_{(i,j)\in E} \psi_{i,j}(z_i, z_j) \quad (4)$$
where E is the set of edges in the MRF grid, 𝒵 a normalization constant (partition function), and ψ_i and ψ_{i,j} potential functions of the form
$$\psi_i(z_i) = \alpha_k \ \text{for}\ z_i = k \in \{1, \dots, K\}, \qquad \psi_{i,j}(z_i, z_j) = \begin{cases} \gamma_1, & z_i = z_j \\ \gamma_2, & z_i \ne z_j \end{cases} \quad (5)$$
The potential function ψ_i defines a prior likelihood for each layer, while ψ_{i,j} attributes higher probability to configurations where neighboring pixels are in the same layer. 
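The generative process of equation (2) can be sketched as follows (illustrative code, not the authors'): each layer runs one shared state process, and every pixel reads the state of its assigned layer through its own row of C.

```python
import numpy as np

def sample_ldt(layers, z, T, rng):
    """Sample an LDT video: layers is a list of per-layer parameters
    (A, B, C, r, mu); z[i] assigns pixel i to one of the layers."""
    N = len(z)
    Y = np.zeros((T, N))
    for j, (A, B, C, r, mu) in enumerate(layers):
        n = A.shape[0]
        x, traj = mu.copy(), np.zeros((T, n))
        for t in range(T):
            x = A @ x + B @ rng.normal(size=n)   # one state process per layer
            traj[t] = x
        pix = np.flatnonzero(z == j)
        # pixels in layer j share the state but have individual rows C[i]
        Y[:, pix] = traj @ C[pix].T + np.sqrt(r) * rng.normal(size=(T, pix.size))
    return Y

rng = np.random.default_rng(1)
n, N, T = 4, 30, 20
z = np.array([0] * 15 + [1] * 15)                # two layers
layers = [(0.8 * np.eye(n), 0.1 * np.eye(n),
           rng.normal(size=(N, n)), 0.01, np.zeros(n)) for _ in range(2)]
Y = sample_ldt(layers, z, T, rng)
```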
While the parameters for the potential functions could be learned for each model, we instead treat them as constants that can be estimated from a database of manually segmented training video.

3 Parameter estimation The parameters of the model are learned using the Expectation-Maximization (EM) algorithm [12], which iterates between estimating the hidden state variables X and hidden layer assignments Z from the current parameters, and updating the parameters given the current hidden variable estimates. One iteration of the EM algorithm contains the following two steps:
• E-step: Q(Θ; Θ̂) = E_{X,Z|Y; Θ̂}[log p(X, Y, Z; Θ)]
• M-step: Θ̂* = argmax_Θ Q(Θ; Θ̂)
In the remainder of this section, we briefly describe the EM algorithm for the two proposed models. Due to the limited space available, we refer the reader to the companion technical report [13] for further details.

3.1 EM for the layered dynamic texture The E-step for the layered dynamic texture computes the conditional mean and covariance of x^{(j)}_t given the observed video Y. These expectations are intractable to compute in closed-form, since it is not known to which state process each of the pixels y_i is assigned, and it is therefore necessary to marginalize over all configurations of Z. This problem also appears for the computation of the posterior layer assignment probability p(z_i = j|Y). The method of approximating these expectations which we currently adopt is to simply average over draws from the posterior p(X, Z|Y) using a Gibbs sampler. Other approximations, e.g. variational methods or belief propagation, could be used as well; we plan to consider them in the future. Once the expectations are known, the M-step parameter updates are analogous to those required to learn a regular linear dynamical system [15, 16], with a minor modification of the updates of the transformation matrices C^{(j)}_i. See [13] for details. 
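A single-site Gibbs sweep over the layer assignments, of the kind used to approximate the E-step expectations, can be sketched as follows (illustrative code: the per-pixel, per-layer data log-likelihoods `loglik` are placeholders standing in for the LDS terms computed by the actual algorithm):

```python
import numpy as np

def gibbs_sweep(z, loglik, nbrs, gamma1, gamma2, rng):
    """Resample each z_i from p(z_i | z_rest, Y), proportional to the data
    likelihood times the Potts edge potentials of the neighbors."""
    N, K = loglik.shape
    for i in range(N):
        logp = loglik[i].copy()
        for j in nbrs[i]:
            same = np.arange(K) == z[j]
            logp += np.where(same, np.log(gamma1), np.log(gamma2))
        p = np.exp(logp - logp.max())            # stable normalization
        z[i] = rng.choice(K, p=p / p.sum())
    return z

rng = np.random.default_rng(3)
N, K = 6, 2
loglik = np.zeros((N, K))
loglik[:3, 0] = 2.0                              # first half prefers layer 0
loglik[3:, 1] = 2.0                              # second half prefers layer 1
nbrs = {i: [j for j in (i - 1, i + 1) if 0 <= j < N] for i in range(N)}
z = rng.integers(K, size=N)
for _ in range(20):                              # a few burn-in sweeps
    z = gibbs_sweep(z, loglik, nbrs, 0.99, 0.01, rng)
```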
3.2 EM for the approximate layered dynamic texture The ALDT model is similar to the mixture of dynamic textures [14], a video clustering model that treats a collection of videos as a sample from a collection of dynamic textures. Since, for the ALDT model, each pixel is sampled from a set of one-dimensional dynamic textures, the EM algorithm is similar to that of the mixture of dynamic textures. There are only two differences. First, the E-step computes the posterior assignment probability p(zi|Y ) given all the observed data, rather than conditioned on a single data point p(zi|yi). The posterior p(zi|Y ) can be approximated by sampling from the full posterior p(Z|Y ) using Markov-Chain Monte Carlo [11], or with other methods, such as loopy belief propagation. Second, the transformation matrix C(j) i is different for each pixel, and the E and M steps must be modified accordingly. Once again, the details are available in [13]. 4 Experiments In this section, we show the efficacy of the proposed model for segmentation and synthesis of several videos with multiple regions of distinct motion. Figure 2 shows the three video sequences used in testing. The first (top) is a composite of three distinct video textures of water, smoke, and fire. The second (middle) is of laundry spinning in a dryer. The laundry in the bottom left of the video is spinning in place in a circular motion, and the laundry around the outside is spinning faster. The final video (bottom) is of a highway [17] where the traffic in each lane is traveling at a different speed. The first, second and fourth lanes (from left to right) move faster than the third and fifth. All three videos have multiple regions of motion and are therefore properly modeled by the models proposed in this paper, but not by a regular dynamic texture. Four variations of the video models were fit to each of the three videos. 
The four models were the layered dynamic texture and the approximate layered dynamic texture (LDT and ALDT), and those two models without the MRF layer assignment (LDT-iid and ALDT-iid). In the latter two cases, the layer assignments z_i are distributed as iid multinomials. In all the experiments, the dimension of the state space was n = 10. The MRF grid was based on the eight-neighbor system (with cliques of size 2), and the parameters of the potential functions were γ1 = 0.99, γ2 = 0.01, and α_j = 1/K. The expectations required by the EM algorithm were approximated using Gibbs sampling for the LDT and LDT-iid models, and MCMC for the ALDT model. We first present segmentation results, to show that the models can effectively separate layers with different dynamics, and then discuss results relative to video synthesis from the learned models.

4.1 Segmentation The videos were segmented by assigning each of the pixels to the most probable layer conditioned on the observed video, i.e.
$$z_i^* = \operatorname*{argmax}_j \; p(z_i = j \mid Y) \quad (6)$$
Another possibility would be to assign the pixels by maximizing the joint posterior of all the pixels, p(Z|Y). While this maximizes the true posterior, in practice we obtained similar results with the two methods. The former method was chosen because the individual posterior distributions are already computed during the E-step of EM. The columns of Figure 3 show the segmentation results obtained with the four models: LDT and LDT-iid in columns (a) and (b), and ALDT and ALDT-iid in columns (c) and (d). The segmented video is also available at [18]. From the segmentations produced by the iid models, it can be concluded that the composite and laundry videos can be reasonably well segmented without the MRF prior. This confirms the intuition that the various video regions contain very distinct dynamics, which can only be modeled with separate state processes. 
Otherwise, the pixels would be either randomly assigned among the various layers, or uniformly assigned to one of them. The segmentations of the traffic video using the iid models are poor. While the dynamics are different, the differences are significantly more subtle, and segmentation requires stronger enforcement of layer consistency. In general, the segmentations using LDT-iid are better than those of the ALDT-iid, due to the weaker form of layer consistency imposed by the ALDT model. While this deficiency is offset by the introduction of the MRF prior, the stronger consistency enforced by the LDT model always results in better segmentations. This illustrates the need for the design of sophisticated layered representations when the goal is to model video with subtle inter-layer variations. As expected, the introduction of the MRF prior improves the segmentations produced by both models. For example, in the composite sequence all erroneous segments in the water region are removed, and in the traffic sequence most of the speckled segmentation also disappears. In terms of overall segmentation quality, both LDT and ALDT are able to segment the composite video perfectly. The segmentation of the laundry video by both models is plausible, as the laundry tumbling around the edge of the dryer moves faster than that spinning in place. The two models also produce reasonable segmentations of the traffic video, with the segments roughly corresponding to the different lanes of traffic. Many of the errors correspond to regions that either contain intermittent motion (e.g. the region between the lanes) or almost no motion (e.g. the truck in the upper-right corner and the flat-bed truck in the third lane). Some of these errors could be eliminated by filtering the video before segmentation, but we have attempted no pre- or post-processing. 
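With posterior samples of Z in hand (e.g. from the Gibbs sampler), the per-pixel MAP rule of equation (6) amounts to an argmax over empirical frequencies; a minimal sketch (not the authors' code):

```python
import numpy as np

def segment_from_samples(z_samples):
    """Estimate p(z_i = j | Y) by empirical frequency over posterior draws,
    then label each pixel with its most probable layer (equation (6))."""
    S, N = z_samples.shape
    K = int(z_samples.max()) + 1
    counts = np.zeros((N, K))
    for s in range(S):
        counts[np.arange(N), z_samples[s]] += 1
    return counts.argmax(axis=1)

draws = np.array([[0, 0, 1, 1],   # three hypothetical posterior draws
                  [0, 1, 1, 1],   # over four pixels
                  [0, 0, 1, 0]])
labels = segment_from_samples(draws)
```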
Finally, we note that the laundry and traffic videos are not trivial to segment with standard computer vision techniques, namely methods based on optical flow. This is particularly true in the case of the traffic video, where the abundance of straight lines and flat regions makes computing the correct optical flow difficult due to the aperture problem.

4.2 Synthesis

The layered dynamic texture is a generative model, and hence a video can be synthesized by drawing a sample from the learned model. A synthesized composite video using the LDT, ALDT, and the normal dynamic texture can be found at [18]. When modeling a video with multiple motions, the regular dynamic texture will average the different dynamics. This is noticeable in the synthesized video, where the fire region does not flicker at the same speed as in the original video. Furthermore, the motions in different regions are coupled, e.g. when the fire begins to flicker faster, the water region ceases to move smoothly. In contrast, the video synthesized from the layered dynamic texture is more realistic, as the fire region flickers at the correct speed, and the different regions follow their own motion patterns. The video synthesized from the ALDT appears noisy because the pixels evolve from different instantiations of the state process. Once again this illustrates the need for sophisticated layered models.

Figure 2: Frames from the test video sequences: (top) composite of water, smoke, and fire video textures; (middle) spinning laundry in a dryer; and (bottom) highway traffic with lanes traveling at different speeds.

Figure 3: Segmentation results for each of the test videos using: (a) the layered dynamic texture, and (b) the layered dynamic texture without MRF; (c) the approximate layered dynamic texture, and (d) the approximate LDT without MRF.

References

[1] B. K. P. Horn. Robot Vision. McGraw-Hill Book Company, New York, 1986.
[2] B. Horn and B. Schunk. Determining optical flow.
Artificial Intelligence, vol. 17, 1981.
[3] B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. Proc. DARPA Image Understanding Workshop, 1981.
[4] J. Barron, D. Fleet, and S. Beauchemin. Performance of optical flow techniques. International Journal of Computer Vision, vol. 12, 1994.
[5] J. Wang and E. Adelson. Representing moving images with layers. IEEE Trans. on Image Processing, vol. 3, September 1994.
[6] B. Frey and N. Jojic. Estimating mixture models of images and inferring spatial transformations using the EM algorithm. In IEEE Conference on Computer Vision and Pattern Recognition, 1999.
[7] G. Doretto, A. Chiuso, Y. N. Wu, and S. Soatto. Dynamic textures. International Journal of Computer Vision, vol. 2, pp. 91-109, 2003.
[8] G. Doretto, D. Cremers, P. Favaro, and S. Soatto. Dynamic texture segmentation. In IEEE International Conference on Computer Vision, vol. 2, pp. 1236-42, 2003.
[9] P. Saisan, G. Doretto, Y. Wu, and S. Soatto. Dynamic texture recognition. In IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 58-63, 2001.
[10] A. B. Chan and N. Vasconcelos. Probabilistic kernels for the classification of auto-regressive visual processes. In IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 846-51, 2005.
[11] S. Geman and D. Geman. Stochastic relaxation, Gibbs distribution, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 6(6), pp. 721-41, 1984.
[12] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, vol. 39, pp. 1-38, 1977.
[13] A. B. Chan and N. Vasconcelos. The EM algorithm for layered dynamic textures. Technical Report SVCL-TR-2005-03, June 2005. http://www.svcl.ucsd.edu/.
[14] A. B. Chan and N. Vasconcelos. Mixtures of dynamic textures.
In IEEE International Conference on Computer Vision, vol. 1, pp. 641-47, 2005.
[15] R. H. Shumway and D. S. Stoffer. An approach to time series smoothing and forecasting using the EM algorithm. Journal of Time Series Analysis, vol. 3(4), pp. 253-64, 1982.
[16] S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models. Neural Computation, vol. 11, pp. 305-45, 1999.
[17] http://www.wsdot.wa.gov
[18] http://www.svcl.ucsd.edu/~abc/nips05/
2005
Subsequence Kernels for Relation Extraction

Razvan C. Bunescu, Department of Computer Sciences, University of Texas at Austin, 1 University Station C0500, Austin, TX 78712, razvan@cs.utexas.edu
Raymond J. Mooney, Department of Computer Sciences, University of Texas at Austin, 1 University Station C0500, Austin, TX 78712, mooney@cs.utexas.edu

Abstract

We present a new kernel method for extracting semantic relations between entities in natural language text, based on a generalization of subsequence kernels. This kernel uses three types of subsequence patterns that are typically employed in natural language to assert relationships between two entities. Experiments on extracting protein interactions from biomedical corpora and top-level relations from newspaper corpora demonstrate the advantages of this approach.

1 Introduction

Information Extraction (IE) is an important task in natural language processing, with many practical applications. It involves the analysis of text documents, with the aim of identifying particular types of entities and relations among them. Reliably extracting relations between entities in natural-language documents is still a difficult, unsolved problem. Its inherent difficulty is compounded by the emergence of new application domains, with new types of narrative that challenge systems developed for other, well-studied domains. Traditionally, IE systems have been trained to recognize names of people, organizations, locations and relations between them (MUC [1], ACE [2]). For example, in the sentence “protesters seized several pumping stations”, the task is to identify a LOCATED AT relationship between protesters (a PERSON entity) and stations (a LOCATION entity).
Recently, substantial resources have been allocated for automatically extracting information from biomedical corpora, and consequently much effort is currently spent on automatically identifying biologically relevant entities, as well as on extracting useful biological relationships such as protein interactions or subcellular localizations. For example, the sentence “TR6 specifically binds Fas ligand” asserts an interaction relationship between the two proteins TR6 and Fas ligand. As in the case of the more traditional applications of IE, systems based on manually developed extraction rules [3, 4] were soon superseded by information extractors learned through training on supervised corpora [5, 6]. One challenge posed by the biological domain is that current systems for part-of-speech (POS) tagging or parsing do not perform as well on biomedical narrative as on the newspaper corpora on which they were originally trained. Consequently, IE systems developed for biological corpora need to be robust to POS or parsing errors, or to give reasonable performance using shallower but more reliable information, such as chunking instead of parsing. Motivated by the task of extracting protein-protein interactions from biomedical corpora, we present a generalization of the subsequence kernel from [7] that works with sequences containing combinations of words and word classes. This generalized kernel is further tailored for the task of relation extraction. Experimental results show that the new relation kernel outperforms two previous rule-based methods for interaction extraction. With a small modification, the same kernel is used for extracting top-level relations from ACE corpora, providing better results than a recent approach based on dependency tree kernels.

2 Background

One of the first approaches to extracting protein interactions is that of Blaschke et al., described in [3, 4].
Their system is based on a set of manually developed rules, where each rule (or frame) is a sequence of words (or POS tags) and two protein-name tokens. Between every two adjacent words is a number indicating the maximum number of intervening words allowed when matching the rule to a sentence. An example rule is “interaction of (3) <P> (3) with (3) <P>”, where ‘<P>’ is used to denote a protein name. A sentence matches the rule if and only if it satisfies the word constraints in the given order and respects the respective word gaps. In [6] the authors described a new method, ELCS (Extraction using Longest Common Subsequences), that automatically learns such rules. ELCS’ rule representation is similar to that in [3, 4], except that it currently does not use POS tags, but allows disjunctions of words. An example rule learned by this system is “- (7) interaction (0) [between | of] (5) <P> (9) <P> (17) .”. Words in square brackets separated by ‘|’ indicate disjunctive lexical constraints, i.e. one of the given words must match the sentence at that position. The numbers in parentheses between adjacent constraints indicate the maximum number of unconstrained words allowed between the two.

3 Extraction using a Relation Kernel

Both Blaschke and ELCS perform interaction extraction based on a limited set of matching rules, where a rule is simply a sparse (gappy) subsequence of words or POS tags anchored on the two protein-name tokens. Therefore, the two methods share a common limitation: either through manual selection (Blaschke), or as a result of the greedy learning procedure (ELCS), they end up using only a subset of all possible anchored sparse subsequences. Ideally, we would want to use all such anchored sparse subsequences as features, with weights reflecting their relative accuracy. However, explicitly creating for each sentence a vector with a position for each such feature is infeasible, due to the high dimensionality of the feature space.
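To make the gappy-rule semantics concrete, a minimal matcher can be sketched as follows. The rule encoding (an anchor word plus a list of (max_gap, word) constraints) and the greedy left-to-right matching are our own simplification of the rules in Blaschke et al. and ELCS — a full matcher would also need backtracking and POS-tag or disjunction constraints:

```python
def matches_rule(tokens, anchor, constraints):
    """Return True if the token sequence matches a gappy rule.

    tokens:      sentence tokens, with protein names replaced by '<P>'
    anchor:      the first rule word (may occur anywhere in the sentence)
    constraints: list of (max_gap, word) pairs; each word must occur with
                 at most max_gap intervening tokens after the previous match
    """
    for i, tok in enumerate(tokens):
        if tok != anchor:
            continue
        pos, ok = i, True
        for max_gap, word in constraints:
            # window of the next (max_gap + 1) tokens after the last match
            window = tokens[pos + 1 : pos + 2 + max_gap]
            if word in window:
                pos = pos + 1 + window.index(word)  # greedy: take first hit
            else:
                ok = False
                break
        if ok:
            return True
    return False

sent = ["the", "interaction", "of", "<P>", "with", "<P>", "was", "measured"]
rule = ("interaction", [(0, "of"), (3, "<P>"), (3, "with"), (3, "<P>")])
```

With this encoding, the example rule “interaction of (3) <P> (3) with (3) <P>” matches the sentence above.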
Here we can exploit dual learning algorithms that process examples only via computing their dot-products, such as Support Vector Machines (SVMs) [8]. Computing the dot-product between two such vectors amounts to calculating the number of common anchored subsequences between the two sentences. This can be done very efficiently by modifying the dynamic programming algorithm used in the string kernel from [7] to account only for common sparse subsequences constrained to contain the two protein-name tokens. We further prune down the feature space by utilizing the following property of natural language statements: when a sentence asserts a relationship between two entity mentions, it generally does this using one of the following three patterns:

• [FB] Fore–Between: words before and between the two entity mentions are simultaneously used to express the relationship. Examples: ‘interaction of ⟨P1⟩ with ⟨P2⟩‘, ‘activation of ⟨P1⟩ by ⟨P2⟩‘.
• [B] Between: only words between the two entities are essential for asserting the relationship. Examples: ‘⟨P1⟩ interacts with ⟨P2⟩‘, ‘⟨P1⟩ is activated by ⟨P2⟩‘.
• [BA] Between–After: words between and after the two entity mentions are simultaneously used to express the relationship. Examples: ‘⟨P1⟩ – ⟨P2⟩ complex‘, ‘⟨P1⟩ and ⟨P2⟩ interact‘.

Another observation is that all these patterns use at most 4 words to express the relationship (not counting the two entity names). Consequently, when computing the relation kernel, we restrict the counting of common anchored subsequences to those having one of the three types described above, with a maximum word length of 4. This type of feature selection leads not only to a faster kernel computation, but also to less overfitting, which results in increased accuracy (see Section 5 for comparative experiments). The patterns enumerated above are completely lexicalized, and consequently their performance is limited by data sparsity.
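To see why the feature space is so large, the [B] patterns for even a short between-segment can be enumerated explicitly. This enumeration is illustrative only — the kernel never materializes these features, it counts them implicitly via dynamic programming:

```python
from itertools import combinations

def between_patterns(between_tokens, max_len=4):
    """Enumerate [B]-type sparse subsequence patterns anchored on the two
    entities: every (gappy) subsequence of the tokens between the entity
    mentions, up to max_len words (an illustrative sketch)."""
    pats = set()
    for n in range(1, min(max_len, len(between_tokens)) + 1):
        for idx in combinations(range(len(between_tokens)), n):
            pats.add(tuple(between_tokens[i] for i in idx))
    return pats

# Between-segment of '<P1> is activated by <P2>':
pats = between_patterns(["is", "activated", "by"])
```

Already three between-words generate seven sparse patterns; with word classes added, the explicit feature vector becomes intractable, which motivates the kernel formulation.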
This can be alleviated by categorizing words into classes with varying degrees of generality, and then allowing patterns to use both words and their classes. Examples of word classes are POS tags and generalizations over POS tags such as Noun, Active Verb or Passive Verb. The entity type can also be used, if the word is part of a known named entity, as well as the type of the chunk containing the word, when chunking information is available. Content words such as nouns and verbs can also be related to their synsets via WordNet. Patterns will then consist of sparse subsequences of words, POS tags, general POS (GPOS) tags, entity and chunk types, or WordNet synsets. For example, ‘Noun of ⟨P1⟩ by ⟨P2⟩‘ is an FB pattern based on words and general POS tags.

4 Subsequence Kernels for Relation Extraction

We are going to show how to compute the relation kernel described in the previous section in two steps. First, in Section 4.1 we present a generalization of the subsequence kernel from [7]. This new kernel works with patterns construed as mixtures of words and word classes. Based on this generalized subsequence kernel, in Section 4.2 we formally define and show the efficient computation of the relation kernel used in our experiments.

4.1 A Generalized Subsequence Kernel

Let Σ_1, Σ_2, ..., Σ_k be some disjoint feature spaces. Following the example in Section 3, Σ_1 could be the set of words, Σ_2 the set of POS tags, etc. Let Σ_× = Σ_1 × Σ_2 × ... × Σ_k be the set of all possible feature vectors, where a feature vector is associated with each position in a sentence. Given two feature vectors x, y ∈ Σ_×, let c(x, y) denote the number of common features between x and y. The next notation follows that introduced in [7]. Thus, let s, t be two sequences over the finite set Σ_×, and let |s| denote the length of s = s_1 ... s_|s|. The sequence s[i : j] is the contiguous subsequence s_i ... s_j of s. Let i = (i_1, ..., i_|i|) be a sequence of |i| indices in s, in ascending order.
We define the length l(i) of the index sequence i in s as l(i) = i_{|i|} − i_1 + 1. Similarly, j is a sequence of |j| indices in t. Let Σ_∪ = Σ_1 ∪ Σ_2 ∪ ... ∪ Σ_k be the set of all possible features. We say that the sequence u ∈ Σ_∪^* is a (sparse) subsequence of s if there is a sequence of |u| indices i such that u_k ∈ s_{i_k}, for all k = 1, ..., |u|. Equivalently, we write u ≺ s[i] as a shorthand for the component-wise ‘∈‘ relationship between u and s[i]. Finally, let K_n(s, t, λ) (Equation 1) be the number of weighted sparse subsequences u of length n common to s and t (i.e. u ≺ s[i], u ≺ t[j]), where the weight of u is λ^{l(i)+l(j)}, for some λ ≤ 1.

  K_n(s, t, λ) = Σ_{u ∈ Σ_∪^n} Σ_{i: u ≺ s[i]} Σ_{j: u ≺ t[j]} λ^{l(i)+l(j)}    (1)

Because for two fixed index sequences i and j, both of length n, the size of the set {u ∈ Σ_∪^n | u ≺ s[i], u ≺ t[j]} is ∏_{k=1}^{n} c(s_{i_k}, t_{j_k}), we can rewrite K_n(s, t, λ) as in Equation 2:

  K_n(s, t, λ) = Σ_{i: |i|=n} Σ_{j: |j|=n} [∏_{k=1}^{n} c(s_{i_k}, t_{j_k})] λ^{l(i)+l(j)}    (2)

We use λ as a decaying factor that penalizes longer subsequences. For sparse subsequences, this means that wider gaps will be penalized more, which is exactly the desired behavior for our patterns. Through them, we try to capture head-modifier dependencies that are important for relation extraction; for lack of reliable dependency information, the larger the word gap between two words, the less confident we are in the existence of a head-modifier relationship between them. To enable an efficient computation of K_n, we use the auxiliary function K'_n, with a definition similar to K_n, the only difference being that it counts the length from the beginning of the particular subsequence u to the end of the strings s and t, as illustrated in Equation 3:

  K'_n(s, t, λ) = Σ_{u ∈ Σ_∪^n} Σ_{i: u ≺ s[i]} Σ_{j: u ≺ t[j]} λ^{|s|+|t|−i_1−j_1+2}    (3)

An equivalent formula for K'_n(s, t, λ) is obtained by changing the exponent of λ in Equation 2 to |s| + |t| − i_1 − j_1 + 2.
Based on all the definitions above, K_n can be computed in O(kn|s||t|) time by modifying the recursive computation from [7] with the new factor c(x, y), as shown in Figure 1. In this figure, the sequence sx is the result of appending x to s (with ty defined in a similar way). To avoid clutter, the parameter λ is not shown in the argument list of K and K', unless it is instantiated to a specific constant.

  K'_0(s, t) = 1, for all s, t
  K''_i(sx, ty) = λ K''_i(sx, t) + λ^2 K'_{i−1}(s, t) · c(x, y)
  K'_i(sx, t) = λ K'_i(s, t) + K''_i(sx, t)
  K_n(sx, t) = K_n(s, t) + Σ_j λ^2 K'_{n−1}(s, t[1 : j−1]) · c(x, t[j])

Figure 1: Computation of the subsequence kernel.

4.2 Computing the Relation Kernel

As described in Section 2, the input consists of a set of sentences, where each sentence contains exactly two entities (protein names in the case of interaction extraction). In Figure 2 we show the segments that will be used for computing the relation kernel between two example sentences s and t. In sentence s, for instance, x1 and x2 are the two entities, s_f is the sentence segment before x1, s_b is the segment between x1 and x2, and s_a is the sentence segment after x2. For convenience, we also include the auxiliary segment s'_b = x1 s_b x2, whose span is computed as l(s'_b) = l(s_b) + 2 (in all length computations, we consider x1 and x2 as contributing one unit only).

Figure 2: Sentence segments.

The relation kernel computes the number of common patterns between two sentences s and t, where the set of patterns is restricted to the three types introduced in Section 3.
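The recursion in Figure 1 above transcribes almost directly into a dynamic program over prefix tables. The sketch below represents sequences as lists of feature-vector tuples; variable names and table layout are ours, not the paper's:

```python
def subsequence_kernel(s, t, n, lam=0.75):
    """Generalized subsequence kernel K_n of Figure 1 (a sketch).

    s, t: lists of feature-vector tuples (e.g. (word, POS, ...))
    n:    subsequence length; lam: decay factor lambda.
    """
    def c(x, y):  # number of common features between two feature vectors
        return sum(1 for a, b in zip(x, y) if a == b)

    S, T = len(s), len(t)
    # Kp[i][p][q] holds K'_i(s[:p], t[:q]); K'_0 = 1 everywhere
    Kp = [[[1.0] * (T + 1) for _ in range(S + 1)]]
    for i in range(1, n):
        Kp.append([[0.0] * (T + 1) for _ in range(S + 1)])
        for p in range(i, S + 1):
            Kpp = 0.0  # running K''_i(s[:p], t[:q]) as q grows
            for q in range(i, T + 1):
                # K''_i(sx, ty) = lam*K''_i(sx, t) + lam^2*K'_{i-1}(s, t)*c(x, y)
                Kpp = lam * Kpp + lam ** 2 * Kp[i - 1][p - 1][q - 1] * c(s[p - 1], t[q - 1])
                # K'_i(sx, t) = lam*K'_i(s, t) + K''_i(sx, t)
                Kp[i][p][q] = lam * Kp[i][p - 1][q] + Kpp
    # K_n(sx, t) = K_n(s, t) + sum_j lam^2 * K'_{n-1}(s, t[1:j-1]) * c(x, t[j])
    K = 0.0
    for p in range(n, S + 1):
        for q in range(n, T + 1):
            K += lam ** 2 * Kp[n - 1][p - 1][q - 1] * c(s[p - 1], t[q - 1])
    return K

s = [("interacts", "VBZ"), ("with", "IN")]
k = subsequence_kernel(s, s, n=2, lam=0.75)
```

For the two-token example above, the only common length-2 subsequence is the whole segment, contributing c = 2 per position and weight λ^(2+2), so k = 4·λ^4.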
Therefore, the kernel rK(s, t) is expressed as the sum of three sub-kernels: fbK(s, t) counting the number of common fore–between patterns, bK(s, t) for between patterns, and baK(s, t) for between–after patterns, as in Figure 3.

  rK(s, t) = fbK(s, t) + bK(s, t) + baK(s, t)
  bK_i(s, t) = K_i(s_b, t_b, 1) · c(x1, y1) · c(x2, y2) · λ^{l(s'_b)+l(t'_b)}
  fbK(s, t) = Σ_{i,j} bK_i(s, t) · K'_j(s_f, t_f),   1 ≤ i, 1 ≤ j, i + j < fb_max
  bK(s, t) = Σ_i bK_i(s, t),   1 ≤ i ≤ b_max
  baK(s, t) = Σ_{i,j} bK_i(s, t) · K'_j(s_a^−, t_a^−),   1 ≤ i, 1 ≤ j, i + j < ba_max

Figure 3: Computation of the relation kernel.

All three sub-kernels include in their computation the counting of common subsequences between s'_b and t'_b. In order to speed up the computation, all these common counts can be calculated separately in bK_i, which is defined as the number of common subsequences of length i between s'_b and t'_b, anchored at x1/x2 and y1/y2 respectively (i.e. constrained to start at x1/y1 and to end at x2/y2). Then fbK simply counts the number of subsequences that match j positions before the first entity and i positions between the entities, constrained to have total length less than the constant fb_max. To obtain a similar formula for baK we simply use the reversed (mirror) versions of segments s_a and t_a (i.e. s_a^− and t_a^−). In Section 3 we observed that all three subsequence patterns use at most 4 words to express a relation; therefore we set the constants fb_max, b_max and ba_max to 4. The kernels K and K' are computed using the procedure described in Section 4.1.

5 Experimental Results

The relation kernel (ERK) is evaluated on the task of extracting relations from two corpora with different types of narrative, which are described in more detail in the following sections. In both cases, we assume that the entities and their labels are known. All preprocessing steps – sentence segmentation, tokenization, POS tagging and chunking – were performed using the OpenNLP package.
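Given the per-length anchored counts bK_i and the prefix kernels on the outer segments, assembling rK from Figure 3 above is just a pair of double sums. The sketch below assumes these pieces are precomputed (all names are hypothetical; the paper computes them with the dynamic program of Section 4.1):

```python
def relation_kernel(bKi, Kf, Ka, fbmax=4, bmax=4, bamax=4):
    """Combine the sub-kernels of Figure 3 (a sketch over precomputed pieces).

    bKi[i]: number of common anchored 'between' subsequences of length i
            (bK_i(s, t)); index 0 unused.
    Kf[j]:  K'_j on the fore segments (s_f, t_f); index 0 unused.
    Ka[j]:  K'_j on the reversed after segments; index 0 unused.
    """
    bK = sum(bKi[i] for i in range(1, min(bmax, len(bKi) - 1) + 1))
    fbK = sum(bKi[i] * Kf[j]
              for i in range(1, len(bKi))
              for j in range(1, len(Kf))
              if i + j < fbmax)
    baK = sum(bKi[i] * Ka[j]
              for i in range(1, len(bKi))
              for j in range(1, len(Ka))
              if i + j < bamax)
    return fbK + bK + baK
```

For example, with bKi = [0, 2, 1], Kf = [0, 3] and Ka = [0, 1], the between term is 3, the fore–between term 9, and the between–after term 3.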
If a sentence contains n entities (n ≥ 2), it is replicated into C(n, 2) sentences, each containing only two entities. If the two entities are known to be in a relationship, then the replicated sentence is added to the set of corresponding positive sentences; otherwise it is added to the set of negative sentences. During testing, a sentence having n entities (n ≥ 2) is again replicated into C(n, 2) sentences in a similar way. The relation kernel is used in conjunction with SVM learning in order to find a decision hyperplane that best separates the positive examples from the negative examples. We modified the LibSVM package by plugging in the kernel described above. In all experiments, the decay factor λ is set to 0.75. The performance is measured using precision (percentage of correctly extracted relations out of the total extracted) and recall (percentage of correctly extracted relations out of the total number of relations annotated in the corpus). When PR curves are reported, the precision and recall are computed using output from 10-fold cross-validation. The graph points are obtained by varying a threshold on the minimum acceptable extraction confidence, based on the probability estimates from LibSVM.

Footnotes: (1) OpenNLP URL: http://opennlp.sourceforge.net; (2) LibSVM URL: http://www.csie.ntu.edu.tw/~cjlin/libsvm/

5.1 Interaction Extraction from AImed

We did comparative experiments on the AImed corpus, which has been previously used for training the protein interaction extraction systems in [6]. It consists of 225 Medline abstracts, of which 200 are known to describe interactions between human proteins, while the other 25 do not refer to any interaction. There are 4084 protein references and around 1000 tagged interactions in this dataset. We compare the following three systems on the task of retrieving protein interactions from AImed (assuming gold standard proteins):

• [Manual]: We report the performance of the rule-based system of [3, 4].
• [ELCS]: We report the 10-fold cross-validated results from [6] as a PR graph.
• [ERK]: Based on the same splits as ELCS, we compute the corresponding precision-recall graph. In order to have a fair comparison with the other two systems, which use only lexical information, we do not use any word classes here.

The results, summarized in Figure 4(a), show that the relation kernel outperforms both ELCS and the manually written rules.

Figure 4: PR curves for interaction extractors: (a) ERK vs. ELCS (and Manual); (b) ERK vs. ERK-A.

To evaluate the impact that the three types of patterns have on performance, we compare ERK with an ablated system (ERK-A) that uses all possible patterns, constrained only to be anchored on the two entity names. As can be seen in Figure 4(b), the three patterns (FB, B, BA) do lead to a significant increase in performance, especially at higher recall levels.

5.2 Relation Extraction from ACE

To evaluate how well this relation kernel ports to other types of narrative, we applied it to the problem of extracting top-level relations from the ACE corpus [2], the version used for the September 2002 evaluation. The training part of this dataset consists of 422 documents, with a separate set of 97 documents allocated for testing. This version of the ACE corpus contains three types of annotations: coreference, named entities and relations. There are five types of entities – PERSON, ORGANIZATION, FACILITY, LOCATION, and GEO-POLITICAL ENTITY – which can participate in five general, top-level relations: ROLE, PART, LOCATED, NEAR, and SOCIAL. A recent approach to extracting relations is described in [9].
The authors use a generalized version of the tree kernel from [10] to compute a kernel over relation examples, where a relation example consists of the smallest dependency tree containing the two entities of the relation. Precision and recall values are reported for the task of extracting the 5 top-level relations in the ACE corpus under two different scenarios:

– [S1] This is the classic setting: one multi-class SVM is learned to discriminate among the 5 top-level classes, plus one more class for the no-relation cases.
– [S2] One binary SVM is trained for relation detection, meaning that all positive relation instances are combined into one class. The thresholded output of this binary classifier is used as training data for a second multi-class SVM, trained for relation classification.

We trained our relation kernel, under the first scenario, to recognize the same 5 top-level relation types. While for interaction extraction we used only the lexicalized version of the kernel, here we utilize more features, corresponding to the following feature spaces: Σ_1 is the word vocabulary, Σ_2 is the set of POS tags, Σ_3 is the set of generic POS tags, and Σ_4 contains the 5 entity types. We also used chunking information as follows: all (sparse) subsequences were created exclusively from the chunk heads, where a head is defined as the last word in a chunk. The same criterion was used for computing the length of a subsequence – all words other than head words were ignored. This is based on the observation that, in general, words other than the chunk head do not contribute to establishing a relationship between two entities outside of that chunk. One exception is when both entities in the example sentence are contained in the same chunk. This happens very often due to noun-noun (‘U.S. troops’) or adjective-noun (‘Serbian general’) compounds. In these cases, we let one chunk contribute both entity heads.
Also, an important difference from the interaction extraction case is that often the two entities in a relation do not have any words separating them, as for example in noun-noun compounds. None of the three patterns from Section 3 captures this type of dependency; therefore we introduced a fourth type of pattern, the modifier pattern M. This pattern consists of a sequence of length two formed from the head words (or their word classes) of the two entities. Correspondingly, we updated the relation kernel from Figure 3 with a new kernel term mK, as illustrated in Equation 4:

  rK(s, t) = fbK(s, t) + bK(s, t) + baK(s, t) + mK(s, t)    (4)

The sub-kernel mK corresponds to a product of counts, as shown in Equation 5:

  mK(s, t) = c(x1, y1) · c(x2, y2) · λ^{2+2}    (5)

We present in Table 1 the results of using our updated relation kernel to extract relations from ACE, under the first scenario. We also show the results presented in [9] for their best performing kernel K4 (a sum between a bag-of-words kernel and the dependency kernel) under both scenarios.

Table 1: Extraction Performance on ACE.

  Method     Precision  Recall  F-measure
  (S1) ERK   73.9       35.2    47.7
  (S1) K4    70.3       26.3    38.0
  (S2) K4    67.1       35.0    45.8

Even though it uses less sophisticated syntactic and semantic information, ERK in S1 significantly outperforms the dependency kernel. Also, ERK already performs a few percentage points better than K4 in S2. Therefore we expect to get an even more significant increase in performance by training our relation kernel in the same cascaded fashion.

6 Related Work

In [10], a tree kernel is defined over shallow parse representations of text, together with an efficient algorithm for computing it. Experiments on extracting PERSON-AFFILIATION and ORGANIZATION-LOCATION relations from 200 news articles show the advantage of using this new type of tree kernel over three feature-based algorithms.
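Equation 5 above is simple enough to write down directly. The (word, entity-type) feature vectors in the example are our own illustration, not data from the paper:

```python
def modifier_kernel(x1, y1, x2, y2, lam=0.75):
    """[M] modifier-pattern sub-kernel of Equation 5: a length-two pattern
    over the two entity heads, weighted by lambda^(2+2)."""
    def c(x, y):  # number of common features between two feature vectors
        return sum(1 for a, b in zip(x, y) if a == b)
    return c(x1, y1) * c(x2, y2) * lam ** 4

# Hypothetical entity heads as (word, entity-type) feature vectors:
mk = modifier_kernel(("U.S.", "GPE"), ("Serbian", "GPE"),
                     ("troops", "PERSON"), ("general", "PERSON"))
```

Here the words differ but the entity types agree, so c = 1 for each pair and mk = λ^4.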
The same kernel was slightly generalized in [9] and applied on dependency tree representations of sentences, with dependency trees being created from head-modifier relationships extracted from syntactic parse trees. Experimental results show a clear win of the dependency tree kernel over a bag-of-words kernel. However, in a bag-of-words approach the word order is completely lost. For relation extraction, word order is important, and our experimental results support this claim – all subsequence patterns used in our approach retain the order between words. The tree kernels used in the two methods above are opaque, in the sense that the semantics of the dimensions in the corresponding Hilbert space is not obvious. For subsequence kernels, the semantics is known by definition: each subsequence pattern corresponds to a dimension in the Hilbert space. This enabled us to easily restrict the types of patterns counted by the kernel to the three types that we deemed relevant for relation extraction.

7 Conclusion and Future Work

We have presented a new relation extraction method based on a generalization of subsequence kernels. When evaluated on a protein interaction dataset, the new method showed better performance than two previous rule-based systems. After a small modification, the same kernel was evaluated on the task of extracting top-level relations from the ACE corpus, showing better performance when compared with a recent dependency tree kernel. An experiment that we expect to lead to better performance was already suggested in Section 5.2 – using the relation kernel in a cascaded fashion, in order to improve the low recall caused by the highly unbalanced data distribution. Another performance gain may come from setting the factor λ to a more appropriate value based on a development dataset. Currently, the method assumes the named entities are known. A natural extension is to integrate named entity recognition with relation extraction.
Recent research [11] indicates that a global model that captures the mutual influences between the two tasks can lead to significant improvements in accuracy.

8 Acknowledgements

This work was supported by grants IIS-0117308 and IIS-0325116 from the NSF. We would like to thank Rohit J. Kate and the anonymous reviewers for helpful observations.

References

[1] R. Grishman, Message Understanding Conference 6, http://cs.nyu.edu/cs/faculty/grishman/muc6.html (1995).
[2] NIST, ACE – Automatic Content Extraction, http://www.nist.gov/speech/tests/ace (2000).
[3] C. Blaschke, A. Valencia, Can bibliographic pointers for known biological data be found automatically? Protein interactions as a case study, Comparative and Functional Genomics 2 (2001) 196–206.
[4] C. Blaschke, A. Valencia, The frame-based module of the Suiseki information extraction system, IEEE Intelligent Systems 17 (2002) 14–20.
[5] S. Ray, M. Craven, Representing sentence structure in hidden Markov models for information extraction, in: Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence (IJCAI-2001), Seattle, WA, 2001, pp. 1273–1279.
[6] R. Bunescu, R. Ge, R. J. Kate, E. M. Marcotte, R. J. Mooney, A. K. Ramani, Y. W. Wong, Comparative experiments on learning information extractors for proteins and their interactions, Artificial Intelligence in Medicine (special issue on Summarization and Information Extraction from Medical Documents) 33 (2) (2005) 139–155.
[7] H. Lodhi, C. Saunders, J. Shawe-Taylor, N. Cristianini, C. Watkins, Text classification using string kernels, Journal of Machine Learning Research 2 (2002) 419–444.
[8] V. N. Vapnik, Statistical Learning Theory, John Wiley & Sons, 1998.
[9] A. Culotta, J. Sorensen, Dependency tree kernels for relation extraction, in: Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), Barcelona, Spain, 2004, pp. 423–429.
[10] D. Zelenko, C. Aone, A.
Richardella, Kernel methods for relation extraction, Journal of Machine Learning Research 3 (2003) 1083–1106.
[11] D. Roth, W. Yih, A linear programming formulation for global inference in natural language tasks, in: Proceedings of the Annual Conference on Computational Natural Language Learning (CoNLL), Boston, MA, 2004, pp. 1–8.
2005
Optimal cue selection strategy

Vidhya Navalpakkam, Department of Computer Science, USC, Los Angeles, navalpak@usc.edu
Laurent Itti, Department of Computer Science, USC, Los Angeles, itti@usc.edu

Abstract

Survival in the natural world demands the selection of relevant visual cues to rapidly and reliably guide attention towards prey and predators in cluttered environments. We investigate whether our visual system selects cues that guide search in an optimal manner. We formally obtain the optimal cue selection strategy by maximizing the signal-to-noise ratio (SNR) between a search target and surrounding distractors. This optimal strategy successfully accounts for several phenomena in visual search behavior, including the effect of target-distractor discriminability, uncertainty in the target's features, distractor heterogeneity, and linear separability. Furthermore, the theory generates a new prediction, which we verify through psychophysical experiments with human subjects. Our results provide direct experimental evidence that humans select visual cues so as to maximize the SNR between the targets and surrounding clutter.

1 Introduction

Detecting a yellow tiger among distracting foliage in different shades of yellow and brown requires efficient top-down strategies that select relevant visual cues to enable rapid and reliable detection of the target among several distractors. For simple scenarios such as searching for a red target, the Guided Search theory [17] predicts that search efficiency can be improved by boosting the red feature in a top-down manner. But for more complex and natural scenarios such as detecting a tiger in the jungle or looking for a face in a crowd, finding the optimum amount of top-down enhancement to be applied to each low-level feature dimension encoded by the early visual system is non-trivial. It must not only consider the features present in the target, but also those present in the distractors.
In this paper, we formally obtain the optimal cue selection strategy and investigate whether our visual system has evolved to deploy it. In section 2, we formulate cue selection as an optimization problem where the relevant goal is to maximize the signal to noise ratio (SNR) of the saliency map, so that the target becomes most salient and quickly draws attention, thereby minimizing search time. Next, we show through simulations that this optimal top-down guided search theory successfully accounts for several observed phenomena in visual search behavior, such as the effect of target-distractor discriminability, uncertainty in target’s features, distractor heterogeneity, linear separability, and more. In section 4, we describe the design and analysis of psychophysics experiments to test new, counter-intuitive predictions of the theory. The results of our study suggest that humans deploy optimal cue selection strategies to detect targets in cluttered and distracting environments. 2 Formalizing visual search as an optimization problem To quickly find a target among distractors, we wish to maximize the salience of the target relative to the distractors. Thus we can define the signal to noise ratio (SNR) as the ratio of salience of the target to the distractors. Assuming that visual cues or features are encoded by populations of neurons in early visual areas, we define the optimal cue selection strategy as the best choice of neural response gain that maximizes the signal to noise ratio (SNR). In the rest of this section, we formally obtain the optimal choice of gain in neural responses that will maximize SNR. SNR in a visual search paradigm: In a typical visual search paradigm, the salience of the target and distractors is a random variable that depends on their location in the search array, their features, the spatial configuration of target and distractors, and that varies between identical repeated trials due to internal noise in neural response to the visual input. 
Hence, we express SNR as the ratio of the expected salience of the target to the expected salience of the distractors, with the expectation taken over all possible target and distractor locations, their features and spatial configurations, and over several repeated trials: SNR = (mean salience of the target) / (mean salience of the distractors). Search array and its stimuli: Let search array A be a two-dimensional display that consists of one target T and several distractors D_j (j = 1...N^2 − 1). Let the display be divided into an invisible N × N grid, with one item occurring at each cell (x, y) in the grid. Let the color, contrast, orientation and other target parameters θ_T be chosen from a distribution P(θ|T). Similarly, for each distractor D_j, let its parameters θ_{D_j} be sampled independently from a distribution P(θ|D). Thus, search array A has a fixed choice of target and distractor parameters. Next, the spatial configuration C is decided by a random permutation of an assignment of the target and distractors to the N^2 cells in A (such that there is exactly one item in each cell). Thus, for a given search array A, the spatial configuration as well as the stimulus parameters are fixed. Finally, given a choice of parameter θ and its spatial location (x, y), we generate an image pattern R(θ) (a set of pixels and their values) and embed it at location (x, y) in search array A. Thus, we generate search array A. Saliency computation: Let the input search array A be processed by a population of neurons with Gaussian tuning curves tuned to different stimulus parameters µ_1, µ_2, ..., µ_n. The output of this early visual processing stage is used to compute saliency maps s_i(x, y, A) of search array A, which consist of the visual salience at every location (x, y) for feature values µ_i (i = 1...n). Let the s_i(x, y, A) be combined linearly to form S(x, y, A), the overall salience at location (x, y).
Further, assuming a multiplicative gain g_i on the ith saliency map, we obtain: S(x, y, A) = \sum_i g_i s_i(x, y, A) (1). Salience of the target and distractors: Let S_T(A) be a random variable representing the salience of the target T in search array A. To factor out the variability due to internal noise η, we consider E_η[S_T(A)], the mean salience of the target over repeated identical presentations of A. Further, let E_C[S_T(A)] be the mean salience of the target averaged over all spatial configurations of a given set of target and distractor parameters. Similarly, E_{θ|T}[S_T(A)] is the mean salience of the target over all target parameters. The mean salience of the target, combined over several repeated presentations of the search array A (to factor out internal noise η), over all spatial configurations C, and over all choices of target parameters θ|T, is given below. Since η, C and θ are independent random variables, we can write the joint expectation as: E[S_T(A)] = E_{θ|T}[E_C[E_η[S_T(A)]]] (2). Let S_D(A) represent the mean salience of the distractors D_j (j = 1...N^2 − 1) in search array A: S_D(A) = E_{D_j}[S_{D_j}(A)] (3). As with the target, we find the mean salience of the distractors over all η, C and θ|D: E[S_D(A)] = E_{θ|D}[E_C[E_η[S_D(A)]]] (4). SNR and its optimization: The additive salience and multiplicative gain hypothesis in Eqn. 1 yields: E[S_T(A)] = \sum_{i=1}^n g_i E_{θ|T}[E_C[E_η[s_{iT}(A)]]] (5) and, similarly, E[S_D(A)] = \sum_{i=1}^n g_i E_{θ|D}[E_C[E_η[s_{iD}(A)]]] (6). SNR can then be expressed in terms of salience as: SNR = \sum_{i=1}^n g_i E_{θ|T}[E_C[E_η[s_{iT}(A)]]] / \sum_{i=1}^n g_i E_{θ|D}[E_C[E_η[s_{iD}(A)]]] (7). We wish to find the optimal choice of g_i that maximises SNR.
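The gain-weighted combination of saliency maps in Eqn. 1 and the resulting SNR of Eqn. 7 can be sketched in a few lines of code. This is an illustrative toy, not the authors' implementation: the per-band mean saliences below are invented numbers standing in for E[s_iT(A)] and E[s_iD(A)].

```python
import numpy as np

# Toy stand-ins for the per-band expected saliences E[s_iT(A)] and
# E[s_iD(A)] (values are invented for illustration).
mean_target_sal = np.array([5.0, 8.0, 3.0, 1.0])
mean_distractor_sal = np.array([4.0, 2.0, 3.0, 1.0])
gains = np.ones(4)  # baseline gains g_i = 1

def snr(gains, s_t, s_d):
    """SNR = sum_i g_i E[s_iT] / sum_i g_i E[s_iD]  (Eqn. 7)."""
    return np.dot(gains, s_t) / np.dot(gains, s_d)

baseline = snr(gains, mean_target_sal, mean_distractor_sal)
```

Boosting the band with the highest target-to-distractor salience ratio (here the second band) raises the SNR above its baseline, which is exactly the behavior the optimization that follows formalizes.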
Hence, we differentiate SNR with respect to g_i to obtain: \frac{∂ SNR}{∂ g_i} = \frac{1}{α_i} \left( \frac{E_{θ|T}[E_C[E_η[s_{iT}(A)]]]}{E_{θ|D}[E_C[E_η[s_{iD}(A)]]]} − \frac{\sum_{j=1}^n g_j E_{θ|T}[E_C[E_η[s_{jT}(A)]]]}{\sum_{j=1}^n g_j E_{θ|D}[E_C[E_η[s_{jD}(A)]]]} \right) (8) = \frac{SNR_i − SNR}{α_i} (9), where α_i = \sum_{j=1}^n g_j E_{θ|D}[E_C[E_η[s_{jD}(A)]]] / E_{θ|D}[E_C[E_η[s_{iD}(A)]]] is a normalization term and SNR_i is the signal-to-noise ratio of the ith saliency map: SNR_i = E_{θ|T}[E_C[E_η[s_{iT}(A)]]] / E_{θ|D}[E_C[E_η[s_{iD}(A)]]] (10). The sign of the derivative \frac{d}{dg_i} SNR at g_i = 1 tells us whether g_i should be increased, decreased or maintained at the baseline activation of 1 in order to maximize SNR: SNR_i/SNR < 1 ⇒ \frac{d}{dg_i} SNR < 0 ⇒ SNR increases as g_i decreases ⇒ g_i < 1 (11); SNR_i/SNR = 1 ⇒ \frac{d}{dg_i} SNR = 0 ⇒ SNR does not change with g_i ⇒ g_i = 1 (12); SNR_i/SNR > 1 ⇒ \frac{d}{dg_i} SNR > 0 ⇒ SNR increases as g_i increases ⇒ g_i > 1 (13). Thus, we obtain an intuitive result: g_i increases as SNR_i/SNR increases. We simplify this monotonic relationship by assuming proportionality. Further, if we impose the restriction that the gains cannot be increased indiscriminately but must sum to some constant, say the total number of saliency maps n, we have: let g_i ∝ SNR_i/SNR (14); if \sum_i g_i = n, then g_i = \frac{SNR_i}{\sum_i SNR_i} n (15). Thus the gain of a saliency map tuned to a band of feature-values depends on the strength of the signal-to-noise ratio in that band compared to the mean signal-to-noise ratio in all bands in that feature dimension. 3 Predictions of the optimal cue selection strategy To understand the implications of biasing features according to the optimal cue selection strategy, we simulate a simple model of early visual cortex. We assume that each feature dimension is encoded by a population of neurons with overlapping gaussian tuning curves that are broadly tuned to different features in that dimension. Let f_i(θ) represent the tuning curve of the ith neuron in a population of broadly tuned neurons with overlapping tuning curves.
Let the tuning width σ and amplitude a be equal for all neurons, and let µ_i represent the preferred stimulus parameter (or feature) of the ith neuron: f_i(θ) = \frac{a}{σ} \exp\left( \frac{−(θ − µ_i)^2}{2σ^2} \right) (16). Let ⃗r(Θ(x, y, A)) = {r_1(Θ(x, y, A)), ..., r_n(Θ(x, y, A))} be the population response to a stimulus parameter Θ(x, y, A) at a location (x, y) in search array A, where r_i refers to the response of the ith neuron and n is the total number of neurons in the population. Let the neural response r_i(Θ(x, y, A)) be a Poisson random variable: P(r_i(Θ(x, y, A)) = z) = P_{f_i(Θ(x,y,A))}(z) (17). For simplicity, we assume that the local neural response r_i(Θ(x, y, A)) is a measure of the salience s_i(x, y, A). Using Eqns. 2, 4, 10, 16 and 17, we can derive the mean salience of the target and distractors and use it to compute SNR_i: s_i(x, y, A) = r_i(Θ(x, y, A)) (18); E[s_{iT}(A)] = E_{θ|T}[f_i(θ)] (19); E[s_{iD}(A)] = E_{θ|D}[f_i(θ)] (20); SNR_i = E_{θ|T}[f_i(θ)] / E_{θ|D}[f_i(θ)] (21). Finally, the gains g_i on each saliency map can be found using Eqn. 15. Thus, for a given distribution of stimulus parameters for the target P(θ|T) and distractors P(θ|D), we simulate the above model of early visual cortex, compute the salience of target and distractors, compute SNR_i and obtain g_i. In the rest of this section, we plot the distribution of the optimal choice of gains g_i for an exhaustive list of conditions in which knowledge of the target and distractors ranges from complete certainty to complete uncertainty. Unknown target and distractors: In the trivial case where there is no knowledge of the target and distractors, all cues are equally relevant and the optimal choice of gains is the baseline activation (unity). SNR is at its minimum, leading to a slow search. This prediction is consistent with visual search experiments that observe slow search when the target and distractors are unknown due to reversal between trials [1, 2].
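The population model of Eqns. 15, 16 and 21 is simple enough to simulate directly. The sketch below assumes deterministic target and distractor features (a known 55° target among 50° distractors, as in the experiments of Section 4) and illustrative values for the tuning amplitude, width and preferred features; it is not the authors' simulation code.

```python
import numpy as np

a, sigma = 10.0, 15.0                      # tuning amplitude and width (assumed)
mus = np.linspace(0.0, 180.0, 19)          # preferred features of the population

def f(theta, mu):
    """Gaussian tuning curve f_i(theta) of Eqn. 16."""
    return (a / sigma) * np.exp(-(theta - mu) ** 2 / (2.0 * sigma ** 2))

theta_T, theta_D = 55.0, 50.0              # known target and distractor features
snr_i = f(theta_T, mus) / f(theta_D, mus)  # Eqn. 21 with point distributions
gains = len(mus) * snr_i / snr_i.sum()     # Eqn. 15: gains sum to n
```

With these point-mass feature distributions SNR_i grows monotonically with distance from the distractor, so in this toy the largest gains go to neurons tuned well away from 50°; the full model instead uses the distributions P(θ|T) and P(θ|D) of Figure 1, which yields the intermediate-peak predictions discussed below.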
Search for a known target: During search for a known target, the optimal strategy predicts that SNR can be maximised by boosting neurons according to how strongly they respond to the target feature (as shown in figure 1, predicted SNR is 12.2 dB). Thus, a neuron that is optimally tuned to the target feature receives maximal gain. This prediction is consistent with single unit recordings on feature-based attention which show that the gain in neural response depends on the similarity between the neuron’s preferred feature and the target feature [3, 4]. Role of uncertainty in target features: When there’s uncertainty in the target’s features, i.e., when the target’s parameter assumes multiple values according to some probability distribution P(θ|T ), the optimal strategy predicts that SNR decreases, leading to a slower search (as shown in figure 1, SNR decreases from 12.2 dB to 9 dB ). This result is consistent with psychophysics experiments which suggest that better knowledge of the target leads to faster search [5, 6]. Distractor heterogeneity: While searching for an unknown target among known distractors, the optimal strategy predicts that SNR can be maximised by suppressing the neurons tuned to the distractors (see figure 1). But as we increase distractor heterogeneity or the number of distractor types, it predicts a decrease in SNR (from 36 dB to 17 dB, figure 1). This result is consistent with experimental data [10]. Discriminability between target and distractors: Several experiments and theories have studied the effect of target-distractor discriminability [10]-[17]. The optimal cue selection strategy also shows that if the target and distractors are very different or highly discriminable, SNR is high and the search is efficient (SNR = 51.4 dB, see figure 1). Otherwise, if they are similar and not well separated in feature space, SNR is low and the search is hard (SNR = 16.3 dB, see figure 1). 
Moreover, during search for a target that is less discriminable from the distractors, the optimal strategy predicts that the neuron optimally tuned to the target may not be boosted maximally. Instead, a neuron that is sub-optimally tuned to the target and farther away from the distractors receives maximal gain. This new and counterintuitive prediction is tested by the visual search experiments described in the next section. Linear separability effect: The optimal strategy also predicts the linear separability effect [18, 19], which suggests that when the target and distractors are less discriminable, search is easier if the target and distractors can be separated by a line in feature space (see figure 1). This effect has been demonstrated in size (e.g., search for the smallest or largest item is faster than search for a medium-sized item in the display) [20], chromaticity and luminance [21, 19], and orientation [22, 23]. 4 Testing new predictions of the optimal cue selection strategy In this section, we describe the design and analysis of psychophysics experiments to verify the counter-intuitive prediction mentioned in the previous section, i.e., that during search for a target that is less discriminable from the distractors, a neuron that is sub-optimally tuned to the target's feature will be boosted more than a neuron that is optimally tuned to the target's feature. 4.1 Design of psychophysics experiments Our experiments are designed in two phases: phase 1 to set up the top-down bias and phase 2 to measure it. Phase 1 - Set up the top-down bias: Subjects perform the primary task T1, a visual search for the target among distractors. This task sets the top-down bias on cues so that the target becomes the most salient item in the display, thus accelerating target detection. Subjects are trained on T1 trials until their performance stabilises with at least 80% accuracy. They are instructed to find the target (55° tilt) among several distractors (50° tilt).
The target and distractors are the same for all T1 trials. To avoid false reports (which may occur due to boredom or lack of attention) and to verify that subjects indeed find the target, we introduce a novel no-cheat scheme: after finding the target among distractors, subjects press any key. Following the key press, we briefly (120 ms) flash a grid of fine-print random numbers and ask subjects to report the number at the target's location. Online feedback on the accuracy of the report is provided. Thus, the top-down bias is set up by performing T1 trials. Figure 1 (each condition shows, left to right, the priors P(θ|T) and P(θ|D) over the stimulus parameter, the mean firing rate in response to T and D as a function of the neuron's preferred feature, and the optimal response gain): a) Search for a known target – left: prior knowledge P(θ|T) has a peak at the known target feature and P(θ|D) is flat as the distractor is unknown; middle: the expected response of a population of neurons to the target is highest for neurons tuned around the target's θ, while the expected response to the distractors is flat; right: the optimal response gain in this situation is to boost the gain of the neurons that are tuned around the target's θ; b) search for an uncertain target; c) unknown target among a known distractor; d) presence of heterogeneous distractors; e) high discriminability between target and distractors; f) low discriminability; g) search for an extreme feature (linearly separable) among others; h) search for a mid feature (non-linearly separable) among others. Figure 2 (bar plots of the number of reports for the cues presented at 80°, 60°, 55° and 50°, one panel per subject): The results of the T2 trials described in section 4.1 (phase 2) are shown here.
For each of the four subjects, the number of reports on the steepest (80°), relevant (60°), target (55°) and distractor (50°) cues are shown in these bar plots. As predicted by the theory, a paired t-test reveals that the number of reports on the relevant cue is significantly higher (p < 0.05) than the number of reports on the target, distractor and steepest cues, as indicated by the blue star. Phase 2 - Measure the top-down bias: To measure the top-down bias generated by the above task, we randomly insert T2 trials in between T1 trials. Our theory predicts that during search for the target (55°) among distractors (50°), the most relevant cue will be around 60° and not 55°. To test this, we briefly (200 ms) flash four cues - steepest (S, 80°), relevant as predicted by our theory (R, 60°), target (T, 55°) and distractor (D, 50°). A cue that is biased more appears more salient, attracts a saccade, and gets reported. In other words, the greater the top-down bias on a cue, the higher the number of its reports. According to our theory, there should be a higher number of reports on R than on T. Experimental details: We ran 4 naïve subjects. All were aged 22-30, had normal or corrected vision, and volunteered or participated for course credit. As mentioned earlier, each subject received training on T1 trials for a few days until performance (search speed) stabilised with at least 80% accuracy. To become familiar with the secondary task, they were trained on 50 T2 trials. Finally, each subject performed 10 blocks of 50 trials each, with T2 trials randomly inserted in between T1 trials. 4.2 Results For each of the four subjects, we extracted the number of reports on the steepest (N_S), relevant (N_R), target (N_T) and distractor (N_D) cues, for each block. We used a paired t-test to check for statistically significant differences between N_R and N_T, N_D, N_S. Results are shown in figure 2.
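The block-wise comparison of report counts can be checked with a standard paired t-test. The sketch below uses only the standard library and invented per-block counts (the paper's raw counts are not reproduced here); the critical value 2.262 is the two-tailed 5% threshold for 9 degrees of freedom.

```python
import math
from statistics import mean, stdev

# Hypothetical per-block report counts for the relevant (N_R) and
# target (N_T) cues over 10 blocks -- illustrative numbers only.
n_relevant = [7, 6, 8, 7, 9, 6, 8, 7, 7, 8]
n_target   = [3, 4, 2, 3, 4, 3, 2, 4, 3, 3]

diffs = [r - t for r, t in zip(n_relevant, n_target)]
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

# Two-tailed critical value for df = 9 at alpha = 0.05 is about 2.262,
# so t_stat > 2.262 indicates a significant difference (p < 0.05).
```

With these toy counts the test comes out clearly significant, mirroring the paper's finding that N_R exceeds N_T in every subject.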
As predicted by the theory, we found a significantly higher number of reports on the relevant cue than on the target cue. 5 Discussion In this paper, we have investigated whether our visual system has evolved to use optimal top-down strategies to select relevant cues that quickly and reliably detect the target among distracting environments. We formally obtained the optimal cue selection strategy, in which cues are chosen such that the signal-to-noise ratio (SNR) of the saliency map is maximized, thus maximizing the target's salience relative to the distractors. The resulting optimal strategy is to boost a cue or feature if it provides a higher signal-to-noise ratio than average. Through simulations, we confirmed the predictions of the optimal strategy with existing experimental data on visual search behavior, including the effect of distractor heterogeneity [10], uncertainty in the target's features [5, 6], target-distractor discriminability [10], and the linear separability effect [18, 19]. Our study complements recent work on optimal eye movement strategies [24]. While we focus on an early stage of visual processing - optimal cue selection in order to create a saliency map with maximum SNR - their study focuses on a later stage of visual processing - optimal saccade generation such that, for a given saliency map, the probability of subsequent target detection is maximized. Thus, both optimal cue selection and saccade generation are necessary for optimal visual search. Acknowledgements This work was supported by the National Science Foundation, National Eye Institute, National Imagery and Mapping Agency, Zumberge Innovation Fund, and Charles Lee Powell Foundation. References [1] V. Maljkovic and K. Nakayama. Mem Cognit, 22(6):657–672, Nov 1994. [2] J. M. Wolfe, S. J. Butcher, and M. Hyle. J Exp Psychol Hum Percept Perform, 29(2):483–502, 2003. [3] S. Treue and J. C. Martinez Trujillo. Nature, 399(6736):575–579, Jun 1999. [4] J. C. Martinez-Trujillo and S. Treue.
Curr Biol, 14(9):744–751, May 2004. [5] J. M. Wolfe, T. S. Horowitz, N. Kenner, M. Hyle, and N. Vasan. Vision Res, 44(12):1411–1426, Jun 2004. [6] Timothy J. Vickery, Li-Wei King, and Yuhong Jiang. J Vis, 5(1):81–92, Feb 2005. [7] A. Treisman and J. Souther. Journal of Experimental Psychology: Human Perception and Performance, 14:107–141, 1986. [8] A. Treisman and S. Gormican. Psychological Review, 95(1):15–48, 1988. [9] R. Rosenholtz. Percept Psychophys, 63(3):476–489, Apr 2001. [10] J. Duncan and G. W. Humphreys. Psychological Rev, 96:433–458, 1989. [11] A. L. Nagy and R. R. Sanchez. Journal of the Optical Society of America A, 7(7):1209–1217, 1990. [12] H. Pashler. Percept Psychophys, 41(4):385–392, Apr 1987. [13] K. Rayner and D. L. Fisher. Percept Psychophys, 42(1):87–100, Jul 1987. [14] A. Treisman. J Exp Psychol Hum Percept Perform, 17(3):652–676, Aug 1991. [15] J. Palmer, P. Verghese, and M. Pavel. Vision Res, 40(10-12):1227–1268, 2000. [16] J. M. Wolfe, K. R. Cave, and S. L. Franzel. J. Exper. Psychol., 15:419–433, 1989. [17] J. M. Wolfe. Psychonomic Bulletin and Review, 1(2):202–238, 1994. [18] M. D'Zmura. Vision Research, 31(6):951–966, 1991. [19] B. Bauer, P. Jolicoeur, and W. B. Cowan. Vision Research, 36(10):1439–1465, 1996. [20] A. Treisman and G. Gelade. Cognitive Psychology, 12:97–136, 1980. [21] B. Bauer, P. Jolicoeur, and W. B. Cowan. Vision Res, 36(10):1439–1465, May 1996. [22] J. M. Wolfe, S. R. Friedman-Hill, M. I. Stewart, and K. M. O'Connell. J Exp Psychol Hum Percept Perform, 18(1):34–49, Feb 1992. [23] W. F. Alkhateeb, R. J. Morris, and K. H. Ruddock. Spat Vis, 5(2):129–141, 1990. [24] J. Najemnik and W. S. Geisler. Nature, 434(7031):387–391, Mar 2005.
|
2005
|
38
|
2,853
|
Infinite Latent Feature Models and the Indian Buffet Process Thomas L. Griffiths Zoubin Ghahramani Cognitive and Linguistic Sciences Gatsby Computational Neuroscience Unit Brown University, Providence RI University College London, London tom griffiths@brown.edu zoubin@gatsby.ucl.ac.uk Abstract We define a probability distribution over equivalence classes of binary matrices with a finite number of rows and an unbounded number of columns. This distribution is suitable for use as a prior in probabilistic models that represent objects using a potentially infinite array of features. We identify a simple generative process that results in the same distribution over equivalence classes, which we call the Indian buffet process. We illustrate the use of this distribution as a prior in an infinite latent feature model, deriving a Markov chain Monte Carlo algorithm for inference in this model and applying the algorithm to an image dataset. 1 Introduction The statistical models typically used in unsupervised learning draw upon a relatively small repertoire of representations. The simplest representation, used in mixture models, associates each object with a single latent class. This approach is appropriate when objects can be partitioned into relatively homogeneous subsets. However, the properties of many objects are better captured by representing each object using multiple latent features. For instance, we could choose to represent each object as a binary vector, with entries indicating the presence or absence of each feature [1], allow each feature to take on a continuous value, representing objects with points in a latent space [2], or define a factorial model, in which each feature takes on one of a discrete set of values [3, 4]. A critical question in all of these approaches is the dimensionality of the representation: how many classes or features are needed to express the latent structure expressed by a set of objects. 
Often, determining the dimensionality of the representation is treated as a model selection problem, with a particular dimensionality being chosen based upon some measure of simplicity or generalization performance. This assumes that there is a single, finite-dimensional representation that correctly characterizes the properties of the observed objects. An alternative is to assume that the true dimensionality is unbounded, and that the observed objects manifest only a finite subset of classes or features [5]. This alternative is pursued in nonparametric Bayesian models, such as Dirichlet process mixture models [6, 7, 8, 9]. In a Dirichlet process mixture model, each object is assigned to a latent class, and each class is associated with a distribution over observable properties. The prior distribution over assignments of objects to classes is defined in such a way that the number of classes used by the model is bounded only by the number of objects, making Dirichlet process mixture models “infinite” mixture models [10]. The prior distribution assumed in a Dirichlet process mixture model can be specified in terms of a sequential process called the Chinese restaurant process (CRP) [11, 12]. In the CRP, N customers enter a restaurant with infinitely many tables, each with infinite seating capacity. The ith customer chooses an already-occupied table k with probability m_k/(i − 1 + α), where m_k is the number of current occupants, and chooses a new table with probability α/(i − 1 + α). Customers are exchangeable under this process: the probability of a particular seating arrangement depends only on the number of people at each table, and not on the order in which they enter the restaurant. If we replace customers with objects and tables with classes, the CRP specifies a distribution over partitions of objects into classes. A partition is a division of the set of N objects into subsets, where each object belongs to a single subset and the ordering of the subsets does not matter.
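The CRP described above is easy to simulate; the sketch below (standard library only, not from the paper) seats customers one at a time using exactly the probabilities m_k/(i − 1 + α) for an occupied table and α/(i − 1 + α) for a new table.

```python
import random

def crp(n_customers, alpha, rng=random.Random(0)):
    """Simulate the Chinese restaurant process; returns table occupancies m_k."""
    tables = []                       # tables[k] = number of occupants m_k
    for i in range(1, n_customers + 1):
        u = rng.random() * (i - 1 + alpha)  # total unnormalized mass
        acc = 0.0
        for k, m_k in enumerate(tables):
            acc += m_k
            if u < acc:
                tables[k] += 1        # join occupied table k (prob m_k/(i-1+alpha))
                break
        else:
            tables.append(1)          # start a new table (prob alpha/(i-1+alpha))
    return tables

seating = crp(100, alpha=2.0)
```

The number of occupied tables grows roughly as α log N, which is why the CRP (and, below, the IBP) gives a model whose effective dimensionality is bounded only by the number of objects.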
Two assignments of objects to classes that result in the same division of objects correspond to the same partition. For example, if we had three objects, the class assignments {c1, c2, c3} = {1, 1, 2} would correspond to the same partition as {2, 2, 1}, since all that differs between these two cases is the labels of the classes. A partition thus defines an equivalence class of assignment vectors. The distribution over partitions implied by the CRP can be derived by taking the limit of the probability of the corresponding equivalence class of assignment vectors in a model where class assignments are generated from a multinomial distribution with a Dirichlet prior [9, 10]. In this paper, we derive an infinitely exchangeable distribution over infinite binary matrices by pursuing this strategy of taking the limit of a finite model. We also describe a stochastic process (the Indian buffet process, akin to the CRP) which generates this distribution. Finally, we demonstrate how this distribution can be used as a prior in statistical models in which each object is represented by a sparse subset of an unbounded number of features. Further discussion of the properties of this distribution, some generalizations, and additional experiments, are available in the longer version of this paper [13]. 2 A distribution on infinite binary matrices In a latent feature model, each object is represented by a vector of latent feature values f i, and the observable properties of that object xi are generated from a distribution determined by its latent features. Latent feature values can be continuous, as in principal component analysis (PCA) [2], or discrete, as in cooperative vector quantization (CVQ) [3, 4]. In the remainder of this section, we will assume that feature values are continuous. 
Using the matrix F = [f_1^T f_2^T · · · f_N^T]^T to indicate the latent feature values for all N objects, the model is specified by a prior over features, p(F), and a distribution over observed property matrices conditioned on those features, p(X|F), where p(·) is a probability density function. These distributions can be dealt with separately: p(F) specifies the number of features and the distribution over values associated with each feature, while p(X|F) determines how these features relate to the properties of objects. Our focus will be on p(F), showing how such a prior can be defined without limiting the number of features. We can break F into two components: a binary matrix Z indicating which features are possessed by each object, with z_ik = 1 if object i has feature k and 0 otherwise, and a matrix V indicating the value of each feature for each object. F is the elementwise product of Z and V, F = Z ⊗ V, as illustrated in Figure 1. In many latent feature models (e.g., PCA) objects have non-zero values on every feature, and every entry of Z is 1. In sparse latent feature models (e.g., sparse PCA [14, 15]) only a subset of features take on non-zero values for each object, and Z picks out these subsets. A prior on F can be defined by specifying priors for Z and V, with p(F) = P(Z)p(V), where P(·) is a probability mass function. We will focus on defining a prior on Z, since the effective dimensionality of a latent feature model is determined by Z. Assuming that Z is sparse, we can define a prior for infinite latent feature models by defining a distribution over infinite binary matrices. Our discussion of the Chinese restaurant process provides two desiderata for such a distribution: objects Figure 1: A binary matrix Z, as shown in (a), indicates which features take non-zero values.
Elementwise multiplication of Z by a matrix V of continuous values produces a representation like (b). If V contains discrete values, we obtain a representation like (c). should be exchangeable, and posterior inference should be tractable. It also suggests a method by which these desiderata can be satisfied: start with a model that assumes a finite number of features, and consider the limit as the number of features approaches infinity. 2.1 A finite feature model We have N objects and K features, and the possession of feature k by object i is indicated by a binary variable z_ik. The z_ik form a binary N × K feature matrix, Z. Assume that each object possesses feature k with probability π_k, and that the features are generated independently. Under this model, the probability of Z given π = {π_1, π_2, ..., π_K} is P(Z|π) = \prod_{k=1}^K \prod_{i=1}^N P(z_ik|π_k) = \prod_{k=1}^K π_k^{m_k} (1 − π_k)^{N − m_k}, (1) where m_k = \sum_{i=1}^N z_ik is the number of objects possessing feature k. We can define a prior on π by assuming that each π_k follows a beta distribution, to give π_k | α ∼ Beta(α/K, 1), z_ik | π_k ∼ Bernoulli(π_k). Each z_ik is independent of all other assignments, conditioned on π_k, and the π_k are generated independently. We can integrate out π to obtain the probability of Z, which is P(Z) = \prod_{k=1}^K \frac{(α/K) Γ(m_k + α/K) Γ(N − m_k + 1)}{Γ(N + 1 + α/K)}. (2) This distribution is exchangeable, since m_k is not affected by the ordering of the objects. 2.2 Equivalence classes In order to find the limit of the distribution specified by Equation 2 as K → ∞, we need to define equivalence classes of binary matrices – the analogue of partitions for class assignments. Our equivalence classes will be defined with respect to a function on binary matrices, lof(·). This function maps binary matrices to left-ordered binary matrices.
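Equation 2 can be evaluated directly in log space with the standard library's `lgamma`. The helper name below is my own; the formula is exactly the integrated-out probability of Z in the finite beta-Bernoulli model.

```python
import math

def log_p_z(Z, alpha):
    """log P(Z) of Eqn. 2: for each column k,
    log[(alpha/K) Gamma(m_k + alpha/K) Gamma(N - m_k + 1) / Gamma(N + 1 + alpha/K)]."""
    N, K = len(Z), len(Z[0])
    a = alpha / K
    total = 0.0
    for k in range(K):
        m_k = sum(Z[i][k] for i in range(N))  # objects possessing feature k
        total += (math.log(a) + math.lgamma(m_k + a)
                  + math.lgamma(N - m_k + 1) - math.lgamma(N + 1 + a))
    return total
```

Because only the counts m_k enter, permuting the rows (objects) of Z leaves log P(Z) unchanged, which is the exchangeability property noted in the text.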
lof(Z) is obtained by ordering the columns of the binary matrix Z from left to right by the magnitude of the binary number expressed by that column, taking the first row as the most significant bit. The left-ordering of a binary matrix is shown in Figure 2. In the first row of the left-ordered matrix, the columns for which z_1k = 1 are grouped at the left. In the second row, the columns for which z_2k = 1 are grouped at the left of the sets for which z_1k = 1. This grouping structure persists throughout the matrix. The history of feature k at object i is defined to be (z_1k, ..., z_(i−1)k). Where no object is specified, we will use history to refer to the full history of feature k, (z_1k, ..., z_Nk). Figure 2: Left-ordered form. A binary matrix is transformed into a left-ordered binary matrix by the function lof(·). The entries in the left-ordered matrix were generated from the Indian buffet process with α = 10. Empty columns are omitted from both matrices. We will individuate the histories of features using the decimal equivalent of the binary numbers corresponding to the column entries. For example, at object 3, features can have one of four histories: 0, corresponding to a feature with no previous assignments; 1, being a feature for which z_2k = 1 but z_1k = 0; 2, being a feature for which z_1k = 1 but z_2k = 0; and 3, being a feature possessed by both previous objects. K_h will denote the number of features possessing the history h, with K_0 being the number of features for which m_k = 0 and K_+ = \sum_{h=1}^{2^N − 1} K_h being the number of features for which m_k > 0, so K = K_0 + K_+. Two binary matrices Y and Z are lof-equivalent if lof(Y) = lof(Z). The lof-equivalence class of a binary matrix Z, denoted [Z], is the set of binary matrices that are lof-equivalent to Z.
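The lof(·) mapping is straightforward to implement: treat each column as a binary number with the first row as the most significant bit and sort the columns in decreasing order of that number. A minimal sketch (my own helper, not from the paper):

```python
def lof(Z):
    """Left-order a binary matrix: sort columns by the binary number they
    encode, first row = most significant bit, largest number leftmost."""
    n_rows = len(Z)
    cols = list(zip(*Z))                              # columns as tuples
    def magnitude(col):
        return sum(bit << (n_rows - 1 - i) for i, bit in enumerate(col))
    cols.sort(key=magnitude, reverse=True)            # largest first = leftmost
    return [list(row) for row in zip(*cols)]

Z = [[0, 1, 0],
     [1, 1, 0],
     [0, 0, 1]]
# Column magnitudes (first row = MSB): 010 -> 2, 110 -> 6, 001 -> 1,
# so the left-ordered column order is 6, 2, 1.
```

Two matrices are lof-equivalent exactly when this function returns the same result for both, and applying lof twice changes nothing.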
lof-equivalence classes play the role for binary matrices that partitions play for assignment vectors: they collapse together all binary matrices (assignment vectors) that differ only in column ordering (class labels). lof-equivalence classes are preserved through permutation of the rows or the columns of a matrix, provided the same permutations are applied to the other members of the equivalence class. Performing inference at the level of lof-equivalence classes is appropriate in models where feature order is not identifiable, with p(X|F) being unaffected by the order of the columns of F. Any model in which the probability of X is specified in terms of a linear function of F, such as PCA or CVQ, has this property. The cardinality of the lof-equivalence class [Z] is

\binom{K}{K_0 \ldots K_{2^N - 1}} = \frac{K!}{\prod_{h=0}^{2^N - 1} K_h!},

where K_h is the number of columns with full history h.

2.3 Taking the infinite limit

Under the distribution defined by Equation 2, the probability of a particular lof-equivalence class of binary matrices, [Z], is

P([Z]) = \sum_{Z \in [Z]} P(Z) = \frac{K!}{\prod_{h=0}^{2^N - 1} K_h!} \prod_{k=1}^{K} \frac{\frac{\alpha}{K} \, \Gamma(m_k + \frac{\alpha}{K}) \, \Gamma(N - m_k + 1)}{\Gamma(N + 1 + \frac{\alpha}{K})}.   (3)

Rearranging terms, and using the fact that Γ(x) = (x − 1)Γ(x − 1) for x > 1, we can compute the limit of P([Z]) as K approaches infinity:

\lim_{K \to \infty} \frac{\alpha^{K_+}}{\prod_{h=1}^{2^N - 1} K_h!} \cdot \frac{K!}{K_0! \, K^{K_+}} \cdot \left( \frac{N!}{\prod_{j=1}^{N} (j + \frac{\alpha}{K})} \right)^{K} \cdot \prod_{k=1}^{K_+} \frac{(N - m_k)! \prod_{j=1}^{m_k - 1} (j + \frac{\alpha}{K})}{N!}
 = \frac{\alpha^{K_+}}{\prod_{h=1}^{2^N - 1} K_h!} \cdot 1 \cdot \exp\{-\alpha H_N\} \cdot \prod_{k=1}^{K_+} \frac{(N - m_k)! \, (m_k - 1)!}{N!},   (4)

where H_N is the Nth harmonic number, H_N = \sum_{j=1}^{N} \frac{1}{j}. This distribution is infinitely exchangeable, since neither K_h nor m_k are affected by the ordering on objects. Technical details of this limit are provided in [13].

2.4 The Indian buffet process

The probability distribution defined in Equation 4 can be derived from a simple stochastic process. Due to the similarity to the Chinese restaurant process, we will also use a culinary metaphor, appropriately adjusted for geography.
Indian restaurants in London offer buffets with an apparently infinite number of dishes. We will define a distribution over infinite binary matrices by specifying how customers (objects) choose dishes (features). In our Indian buffet process (IBP), N customers enter a restaurant one after another. Each customer encounters a buffet consisting of infinitely many dishes arranged in a line. The first customer starts at the left of the buffet and takes a serving from each dish, stopping after a Poisson(α) number of dishes. The ith customer moves along the buffet, sampling dishes in proportion to their popularity, taking dish k with probability m_k/i, where m_k is the number of previous customers who have sampled that dish. Having reached the end of all previously sampled dishes, the ith customer then tries a Poisson(α/i) number of new dishes. We can indicate which customers chose which dishes using a binary matrix Z with N rows and infinitely many columns, where z_{ik} = 1 if the ith customer sampled the kth dish. Using K_1^{(i)} to indicate the number of new dishes sampled by the ith customer, the probability of any particular matrix being produced by the IBP is

P(Z) = \frac{\alpha^{K_+}}{\prod_{i=1}^{N} K_1^{(i)}!} \exp\{-\alpha H_N\} \prod_{k=1}^{K_+} \frac{(N - m_k)! \, (m_k - 1)!}{N!}.   (5)

The matrices produced by this process are generally not in left-ordered form. These matrices are also not ordered arbitrarily, because the Poisson draws always result in choices of new dishes that are to the right of the previously sampled dishes. Customers are not exchangeable under this distribution, as the number of dishes counted as K_1^{(i)} depends upon the order in which the customers make their choices. However, if we only pay attention to the lof-equivalence classes of the matrices generated by this process, we obtain the infinitely exchangeable distribution P([Z]) given by Equation 4:

\frac{\prod_{i=1}^{N} K_1^{(i)}!}{\prod_{h=1}^{2^N - 1} K_h!}
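The buffet metaphor translates directly into a generative sampler. The sketch below (numpy; the function name is our own) draws Z one customer at a time; a standard property of the process, E[K_+] = αH_N, provides a quick sanity check.

```python
import numpy as np

def indian_buffet(N, alpha, rng):
    """Draw a feature matrix from the Indian buffet process: customer i takes
    each previously sampled dish k with probability m_k / i, then samples
    Poisson(alpha / i) brand-new dishes."""
    counts = []                                        # counts[k] = m_k so far
    rows = []
    for i in range(1, N + 1):
        row = [int(rng.random() < m / i) for m in counts]
        counts = [m + z for m, z in zip(counts, row)]  # update dish popularities
        new = rng.poisson(alpha / i)                   # brand-new dishes
        row.extend([1] * new)
        counts.extend([1] * new)
        rows.append(row)
    Z = np.zeros((N, len(counts)), dtype=int)          # pad rows to full width
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

rng = np.random.default_rng(1)
Z = indian_buffet(8, 3.0, rng)
print(Z.shape[1])   # K_+ for this draw; its expectation is alpha * H_8
```

New dishes always appear to the right of previously sampled dishes, reproducing the ordering property noted above.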
matrices generated via this process map to the same left-ordered form, and P([Z]) is obtained by multiplying P(Z) from Equation 5 by this quantity. A similar but slightly more complicated process can be defined to produce left-ordered matrices directly [13].

2.5 Conditional distributions

To define a Gibbs sampler for models using the IBP, we need to know the conditional distribution on feature assignments, P(z_{ik} = 1 | Z_{-(ik)}). In the finite model, where P(Z) is given by Equation 2, it is straightforward to compute this conditional distribution for any z_{ik}. Integrating over π_k gives

P(z_{ik} = 1 \mid z_{-i,k}) = \frac{m_{-i,k} + \frac{\alpha}{K}}{N + \frac{\alpha}{K}},   (6)

where z_{-i,k} is the set of assignments of other objects, not including i, for feature k, and m_{-i,k} is the number of objects possessing feature k, not including i. We need only condition on z_{-i,k} rather than Z_{-(ik)} because the columns of the matrix are independent. In the infinite case, we can derive the conditional distribution from the (exchangeable) IBP. Choosing an ordering on objects such that the ith object corresponds to the last customer to visit the buffet, we obtain

P(z_{ik} = 1 \mid z_{-i,k}) = \frac{m_{-i,k}}{N},   (7)

for any k such that m_{-i,k} > 0. The same result can be obtained by taking the limit of Equation 6 as K → ∞. The number of new features associated with object i should be drawn from a Poisson(α/N) distribution. This can also be derived from Equation 6, using the same kind of limiting argument as that presented above.

3 A linear-Gaussian binary latent feature model

To illustrate how the IBP can be used as a prior in models for unsupervised learning, we derived and tested a linear-Gaussian latent feature model in which the features are binary. In this case the feature matrix F reduces to the binary matrix Z. As above, we will start with a finite model and then consider the infinite limit.
In our finite model, the D-dimensional vector of properties of an object i, x_i, is generated from a Gaussian distribution with mean z_i A and covariance matrix Σ_X = σ_X² I, where z_i is a K-dimensional binary vector, and A is a K × D matrix of weights. In matrix notation, E[X] = ZA. If Z is a feature matrix, this is a form of binary factor analysis. The distribution of X given Z, A, and σ_X is matrix Gaussian with mean ZA and covariance matrix σ_X² I, where I is the identity matrix. The prior on A is also matrix Gaussian, with mean 0 and covariance matrix σ_A² I. Integrating out A, we have

p(X \mid Z, \sigma_X, \sigma_A) = \frac{1}{(2\pi)^{ND/2} \, \sigma_X^{(N-K)D} \, \sigma_A^{KD} \, |Z^T Z + \frac{\sigma_X^2}{\sigma_A^2} I|^{D/2}} \exp\left\{ -\frac{1}{2\sigma_X^2} \mathrm{tr}\left( X^T \left( I - Z \left( Z^T Z + \frac{\sigma_X^2}{\sigma_A^2} I \right)^{-1} Z^T \right) X \right) \right\}.   (8)

This result is intuitive: the exponentiated term is the difference between the inner product of X and its projection onto the space spanned by Z, regularized to an extent determined by the ratio of the variance of the noise in X to the variance of the prior on A. It follows that p(X|Z, σ_X, σ_A) depends only on the non-zero columns of Z, and thus remains well-defined when we take the limit as K → ∞ (for more details see [13]). We can define a Gibbs sampler for this model by computing the full conditional distribution

P(z_{ik} \mid X, Z_{-(i,k)}, \sigma_X, \sigma_A) \propto p(X \mid Z, \sigma_X, \sigma_A) \, P(z_{ik} \mid z_{-i,k}).   (9)

The two terms on the right hand side can be evaluated using Equations 8 and 7 respectively. The Gibbs sampler is then straightforward. Assignments for features for which m_{-i,k} > 0 are drawn from the distribution specified by Equation 9. The distribution over the number of new features for each object can be approximated by truncation, computing probabilities for a range of values of K_1^{(i)} up to an upper bound. For each value, p(X|Z, σ_X, σ_A) can be computed from Equation 8, and the prior on the number of new features is Poisson(α/N). We will demonstrate this Gibbs sampler for the infinite binary linear-Gaussian model on a dataset consisting of 100 240 × 320 pixel images.
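The collapsed likelihood of Equation 8 is easy to evaluate with standard linear algebra. The sketch below (numpy; the function name is our own) computes its logarithm. By the Woodbury identity and the matrix determinant lemma it must agree with the direct Gaussian marginal of X given Z, whose per-column covariance is σ_A² ZZᵀ + σ_X² I, which makes a convenient correctness check.

```python
import numpy as np

def log_likelihood(X, Z, sigma_x, sigma_a):
    """Collapsed log-likelihood log p(X | Z, sigma_X, sigma_A) of Equation 8
    for the linear-Gaussian binary latent feature model (A integrated out)."""
    N, D = X.shape
    K = Z.shape[1]
    M = Z.T @ Z + (sigma_x ** 2 / sigma_a ** 2) * np.eye(K)
    _, logdet = np.linalg.slogdet(M)
    proj = np.eye(N) - Z @ np.linalg.solve(M, Z.T)   # I - Z M^{-1} Z^T
    return (-0.5 * N * D * np.log(2 * np.pi)
            - (N - K) * D * np.log(sigma_x)
            - K * D * np.log(sigma_a)
            - 0.5 * D * logdet
            - np.trace(X.T @ proj @ X) / (2 * sigma_x ** 2))

# Toy evaluation on random data (illustrative sizes only).
rng = np.random.default_rng(0)
Z = (rng.random((12, 3)) < 0.5).astype(float)
X = rng.standard_normal((12, 4))
print(log_likelihood(X, Z, 0.7, 1.3))
```

The same function supplies the likelihood term in the Gibbs update of Equation 9.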
We represented each image, x_i, using a 100-dimensional vector corresponding to the weights of the mean image and the first 99 principal components. Each image contained up to four everyday objects – a $20 bill, a Klein bottle, a prehistoric handaxe, and a cellular phone. Each object constituted a single latent feature responsible for the observed pixel values. The images were generated by sampling a feature vector, z_i, from a distribution under which each feature was present with probability 0.5, and then taking a photograph containing the appropriate objects using a LogiTech digital webcam. Sample images are shown in Figure 3 (a). The Gibbs sampler was initialized with K_+ = 1, choosing the feature assignments for the first column by setting z_{i1} = 1 with probability 0.5. σ_A, σ_X, and α were initially set to 0.5, 1.7, and 1 respectively, and then sampled by adding Metropolis steps to the MCMC algorithm. Figure 3 shows trace plots for the first 1000 iterations of MCMC for the number of features used by at least one object, K_+, and the model parameters σ_A, σ_X, and α. All of these quantities stabilized after approximately 100 iterations, with the algorithm

[Figure 3 appears here: panels (a)-(c) and trace plots of K_+, α, σ_X, and σ_A; see the caption below.]

Figure 3: Data and results for the demonstration of the infinite linear-Gaussian binary latent feature model. (a) Four sample images from the 100 in the dataset. Each image had 320 × 240 pixels, and contained from zero to four everyday objects. (b) The posterior mean of the weights (A) for the four most frequent binary features from the 1000th sample. Each image corresponds to a single feature. These features perfectly indicate the presence or absence of the four objects.
The first feature indicates the presence of the $20 bill, the other three indicate the absence of the Klein bottle, the handaxe, and the cellphone. (c) Reconstructions of the images in (a) using the binary codes inferred for those images. These reconstructions are based upon the posterior mean of A for the 1000th sample. For example, the code for the first image indicates that the $20 bill is absent, while the other three objects are not. The lower panels show trace plots for the dimensionality of the representation (K_+) and the parameters α, σ_X, and σ_A over 1000 iterations of sampling. The values of all parameters stabilize after approximately 100 iterations.

finding solutions with approximately seven latent features. The four most common features perfectly indicated the presence and absence of the four objects (shown in Figure 3 (b)), and three less common features coded for slight differences in the locations of those objects.

4 Conclusion

We have shown that the methods that have been used to define infinite latent class models [6, 7, 8, 9, 10, 11, 12] can be extended to models in which objects are represented in terms of a set of latent features, deriving a distribution on infinite binary matrices that can be used as a prior for such models. While we derived this prior as the infinite limit of a simple distribution on finite binary matrices, we have shown that the same distribution can be specified in terms of a simple stochastic process – the Indian buffet process. This distribution satisfies our two desiderata for a prior for infinite latent feature models: objects are exchangeable, and inference remains tractable. Our success in transferring the strategy of taking the limit of a finite model from latent classes to latent features suggests that a similar approach could be applied with other representations, expanding the forms of latent structure that can be recovered through unsupervised learning.

References

[1] N. Ueda and K. Saito.
Parametric mixture models for multi-labeled text. In Advances in Neural Information Processing Systems 15. MIT Press, Cambridge, MA, 2003.
[2] I. T. Jolliffe. Principal component analysis. Springer, New York, 1986.
[3] R. S. Zemel and G. E. Hinton. Developing population codes by minimizing description length. In Advances in Neural Information Processing Systems 6. Morgan Kaufmann, San Francisco, CA, 1994.
[4] Z. Ghahramani. Factorial learning and the EM algorithm. In Advances in Neural Information Processing Systems 7. Morgan Kaufmann, San Francisco, CA, 1995.
[5] C. E. Rasmussen and Z. Ghahramani. Occam's razor. In Advances in Neural Information Processing Systems 13. MIT Press, Cambridge, MA, 2001.
[6] C. Antoniak. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. The Annals of Statistics, 2:1152–1174, 1974.
[7] M. D. Escobar and M. West. Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association, 90:577–588, 1995.
[8] T. S. Ferguson. Bayesian density estimation by mixtures of normal distributions. In M. Rizvi, J. Rustagi, and D. Siegmund, editors, Recent advances in statistics, pages 287–302. Academic Press, New York, 1983.
[9] R. M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9:249–265, 2000.
[10] C. Rasmussen. The infinite Gaussian mixture model. In Advances in Neural Information Processing Systems 12. MIT Press, Cambridge, MA, 2000.
[11] D. Aldous. Exchangeability and related topics. In École d'été de probabilités de Saint-Flour, XIII—1983, pages 1–198. Springer, Berlin, 1985.
[12] J. Pitman. Combinatorial stochastic processes, 2002. Notes for Saint Flour Summer School.
[13] T. L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. Technical Report 2005-001, Gatsby Computational Neuroscience Unit, 2005.
[14] A. d'Aspremont, L. El Ghaoui, M. I. Jordan, and G. R. G. Lanckriet. A direct formulation for sparse PCA using semidefinite programming. In Advances in Neural Information Processing Systems 17. MIT Press, Cambridge, MA, 2005.
[15] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. Journal of Computational and Graphical Statistics, in press.
Recovery of Jointly Sparse Signals from Few Random Projections

Michael B. Wakin (wakin@rice.edu), Marco F. Duarte (duarte@rice.edu), Shriram Sarvotham (shri@rice.edu), Dror Baron (drorb@rice.edu), and Richard G. Baraniuk (richb@rice.edu)
ECE Department, Rice University

Abstract

Compressed sensing is an emerging field based on the revelation that a small group of linear projections of a sparse signal contains enough information for reconstruction. In this paper we introduce a new theory for distributed compressed sensing (DCS) that enables new distributed coding algorithms for multi-signal ensembles that exploit both intra- and inter-signal correlation structures. The DCS theory rests on a new concept that we term the joint sparsity of a signal ensemble. We study three simple models for jointly sparse signals, propose algorithms for joint recovery of multiple signals from incoherent projections, and characterize theoretically and empirically the number of measurements per sensor required for accurate reconstruction. In some sense DCS is a framework for distributed compression of sources with memory, which has remained a challenging problem in information theory for some time. DCS is immediately applicable to a range of problems in sensor networks and arrays.

1 Introduction

Distributed communication, sensing, and computing [13, 17] are emerging fields with numerous promising applications. In a typical setup, large groups of cheap and individually unreliable nodes may collaborate to perform a variety of data processing tasks such as sensing, data collection, classification, modeling, tracking, and so on. As individual nodes in such a network are often battery-operated, power consumption is a limiting factor, and the reduction of communication costs is crucial.
In such a setting, distributed source coding [8, 13, 14, 17] may allow the sensors to save on communication costs. In the Slepian-Wolf framework for lossless distributed coding [8, 14], the availability of correlated side information at the decoder enables the source encoder to communicate losslessly at the conditional entropy rate, rather than the individual entropy. Because sensor networks and arrays rely on data that often exhibit strong spatial correlations [13, 17], distributed compression can reduce the communication costs substantially, thus enhancing battery life. Unfortunately, distributed compression schemes for sources with memory are not yet mature [8, 13, 14, 17].

We propose a new approach for distributed coding of correlated sources whose signal correlations take the form of a sparse structure. Our approach is based on another emerging field known as compressed sensing (CS) [4, 9]. CS builds upon the groundbreaking work of Candès et al. [4] and Donoho [9], who showed that signals that are sparse relative to a known basis can be recovered from a small number of nonadaptive linear projections onto a second basis that is incoherent with the first. (A random basis provides such incoherence with high probability. Hence CS with random projections is universal — the signals can be reconstructed if they are sparse relative to any known basis.) The implications of CS for signal acquisition and compression are very promising. With no a priori knowledge of a signal's structure, a sensor node could simultaneously acquire and compress that signal, preserving the critical information that is extracted only later at a fusion center.

In our framework for distributed compressed sensing (DCS), this advantage is particularly compelling. In a typical DCS scenario, a number of sensors measure signals that are each individually sparse in some basis and also correlated from sensor to sensor.
Each sensor independently encodes its signal by projecting it onto another, incoherent basis (such as a random one) and then transmits just a few of the resulting coefficients to a single collection point. Under the right conditions, a decoder at the collection point can reconstruct each of the signals precisely.

The DCS theory rests on a concept that we term the joint sparsity of a signal ensemble. We study in detail three simple models for jointly sparse signals, propose tractable algorithms for joint recovery of signal ensembles from incoherent projections, and characterize theoretically and empirically the number of measurements per sensor required for reconstruction. While the sensors operate entirely without collaboration, joint decoding can recover signals using far fewer measurements per sensor than would be required for separable CS recovery. This paper presents our specific results for one of the three models; the other two are highlighted in our papers [1, 2, 11].

2 Sparse Signal Recovery from Incoherent Projections

In the traditional CS setting, we consider a single signal x ∈ R^N, which we assume to be sparse in a known orthonormal basis or frame Ψ = [ψ_1, ψ_2, ..., ψ_N]. That is, x = Ψθ for some θ, where ∥θ∥_0 = K.¹ The signal x is observed indirectly via an M × N measurement matrix Φ, where M < N. We let y = Φx be the observation vector, consisting of the M inner products of the measurement vectors against the signal. The M rows of Φ are the measurement vectors, against which the signal is projected. These rows are chosen to be incoherent with Ψ — that is, they each have non-sparse expansions in the basis Ψ [4, 9]. In general, Φ meets the necessary criteria when its entries are drawn randomly, for example independent and identically distributed (i.i.d.) Gaussian. Although the equation y = Φx is underdetermined, it is possible to recover x from y under certain conditions.
In general, due to the incoherence between Φ and Ψ, θ can be recovered by solving the ℓ_0 optimization problem

\hat{\theta} = \arg\min \|\theta\|_0 \quad \text{s.t.} \quad y = \Phi\Psi\theta.

In principle, remarkably few random measurements are required to recover a K-sparse signal via ℓ_0 minimization. Clearly, more than K measurements must be taken to avoid ambiguity; in theory, K + 1 random measurements will suffice [2]. Unfortunately, solving this ℓ_0 optimization problem appears to be NP-hard [6], requiring a combinatorial enumeration of the \binom{N}{K} possible sparse subspaces for θ. The amazing revelation that supports the CS theory is that a much simpler problem yields an equivalent solution (thanks again to the incoherence of the bases): we need only solve for the ℓ_1-sparsest vector θ that agrees with the observed coefficients y [4, 9]:

\hat{\theta} = \arg\min \|\theta\|_1 \quad \text{s.t.} \quad y = \Phi\Psi\theta.

[Footnote 1: The ℓ_0 "norm" ∥θ∥_0 merely counts the number of nonzero entries in the vector θ. CS theory also applies to signals for which ∥θ∥_p ≤ K, where 0 < p ≤ 1; such extensions for DCS are a topic of ongoing research.]

This optimization problem, known also as Basis Pursuit (BP) [7], is significantly more tractable and can be solved with traditional linear programming techniques. There is no free lunch, however; more than K + 1 measurements will be required in order to recover sparse signals. In general, there exists a constant oversampling factor c = c(K, N) such that cK measurements suffice to recover x with very high probability [4, 9]. Commonly quoted as c = O(log(N)), we have found that c ≈ log_2(1 + N/K) provides a useful rule-of-thumb [2]. At the expense of slightly more measurements, greedy algorithms have also been developed to recover x from y. One example, known as Orthogonal Matching Pursuit (OMP) [15], requires c ≈ 2 ln(N). We exploit both BP and greedy algorithms for recovering jointly sparse signals.
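As a concrete illustration of greedy recovery, here is a minimal Orthogonal Matching Pursuit sketch (numpy; the function name and toy parameters are our own, not the authors' implementation). With a comfortable number of Gaussian measurements, exact recovery of a K-sparse vector is typical.

```python
import numpy as np

def omp(A, y, K):
    """Orthogonal Matching Pursuit: greedily select the column of A most
    correlated with the residual, then refit by least squares, K times."""
    support, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(K):
        corr = np.abs(A.T @ residual)
        corr[support] = 0                       # never reselect a column
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    theta = np.zeros(A.shape[1])
    theta[support] = coef
    return theta

# Toy demo (illustrative parameters): recover a 5-sparse theta from
# M random Gaussian measurements y = A @ theta.
rng = np.random.default_rng(1)
N, K, M = 256, 5, 100
theta = np.zeros(N)
theta[rng.choice(N, K, replace=False)] = (1 + rng.random(K)) * rng.choice([-1.0, 1.0], size=K)
A = rng.standard_normal((M, N)) / np.sqrt(M)
theta_hat = omp(A, A @ theta, K)
print(np.max(np.abs(theta - theta_hat)))
```

The same routine serves as the single-signal building block for the joint greedy algorithms discussed later.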
3 Joint Sparsity Models

In this section, we generalize the notion of a signal being sparse in some basis to the notion of an ensemble of signals being jointly sparse. We consider three different joint sparsity models (JSMs) that apply in different situations. In most cases, each signal is itself sparse, and so we could use the CS framework from above to encode and decode each one separately. However, there also exists a framework wherein a joint representation for the ensemble uses fewer total vectors.

We use the following notation for our signal ensembles and measurement model. Denote the signals in the ensemble by x_j, j ∈ {1, 2, ..., J}, and assume that each signal x_j ∈ R^N. We assume that there exists a known sparse basis Ψ for R^N in which the x_j can be sparsely represented. Denote by Φ_j the measurement matrix for signal j; Φ_j is M_j × N and, in general, the entries of Φ_j are different for each j. Thus, y_j = Φ_j x_j consists of M_j < N incoherent measurements of x_j.

JSM-1: Sparse common component + innovations. In this model, all signals share a common sparse component while each individual signal contains a sparse innovation component; that is,

x_j = z_C + z_j, j ∈ {1, 2, ..., J} with z_C = Ψθ_C, ∥θ_C∥_0 = K and z_j = Ψθ_j, ∥θ_j∥_0 = K_j.

Thus, the signal z_C is common to all of the x_j and has sparsity K in basis Ψ. The signals z_j are the unique portions of the x_j and have sparsity K_j in the same basis. A practical situation well-modeled by JSM-1 is a group of sensors measuring temperatures at a number of outdoor locations throughout the day. The temperature readings x_j have both temporal (intra-signal) and spatial (inter-signal) correlations. Global factors, such as the sun and prevailing winds, could have an effect z_C that is both common to all sensors and structured enough to permit sparse representation. More local factors, such as shade, water, or animals, could contribute localized innovations z_j that are also structured (and hence sparse).
Similar scenarios could be imagined for a network of sensors recording other phenomena that change smoothly in time and in space and thus are highly correlated.

JSM-2: Common sparse supports. In this model, all signals are constructed from the same sparse set of basis vectors, but with different coefficients; that is,

x_j = Ψθ_j, j ∈ {1, 2, ..., J},

where each θ_j is supported only on the same Ω ⊂ {1, 2, ..., N} with |Ω| = K. Hence, all signals have ℓ_0 sparsity of K, and all are constructed from the same K basis elements, but with arbitrarily different coefficients. A practical situation well-modeled by JSM-2 is where multiple sensors acquire the same signal but with phase shifts and attenuations caused by signal propagation. In many cases it is critical to recover each one of the sensed signals, such as in many acoustic localization and array processing algorithms. Another useful application for JSM-2 is MIMO communication [16].

JSM-3: Nonsparse common + sparse innovations. This model extends JSM-1 so that the common component need no longer be sparse in any basis; that is,

x_j = z_C + z_j, j ∈ {1, 2, ..., J} with z_C = Ψθ_C and z_j = Ψθ_j, ∥θ_j∥_0 = K_j,

but z_C is not necessarily sparse in the basis Ψ. We also consider the case where the supports of the innovations are shared for all signals, which extends JSM-2. A practical situation well-modeled by JSM-3 is where several sources are recorded by different sensors together with a background signal that is not sparse in any basis. Consider, for example, a computer vision-based verification system in a device production plant. Cameras acquire snapshots of components in the production line; a computer system then checks for failures in the devices for quality control purposes. While each image could be extremely complicated, the ensemble of images will be highly correlated, since each camera is observing the same device with minor (sparse) variations. JSM-3 could also be useful in some non-distributed scenarios.
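The first two models are easy to instantiate synthetically. This sketch (our own helper names; Ψ taken as the identity for simplicity) generates JSM-1 and JSM-2 ensembles of the kind described above.

```python
import numpy as np

def jsm1_ensemble(J, N, K, Kj, rng):
    """JSM-1: x_j = z_C + z_j, with z_C K-sparse and each innovation z_j
    K_j-sparse (Psi = identity for simplicity)."""
    zC = np.zeros(N)
    zC[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    X = np.empty((J, N))
    for j in range(J):
        zj = np.zeros(N)
        zj[rng.choice(N, Kj, replace=False)] = rng.standard_normal(Kj)
        X[j] = zC + zj
    return X, zC

def jsm2_ensemble(J, N, K, rng):
    """JSM-2: every x_j is supported on the same K indices Omega, with
    arbitrarily different coefficients."""
    support = np.sort(rng.choice(N, K, replace=False))
    X = np.zeros((J, N))
    X[:, support] = rng.standard_normal((J, K))
    return X, support
```

Ensembles like these are convenient for exercising the joint recovery algorithms of the next section.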
For example, it motivates the compression of data such as video, where the innovations or differences between video frames may be sparse, even though a single frame may not be very sparse. In general, JSM-3 may be invoked for ensembles with significant inter-signal correlations but insignificant intra-signal correlations.

4 Recovery of Jointly Sparse Signals

In a setting where a network or array of sensors may encounter a collection of jointly sparse signals, and where a centralized reconstruction algorithm is feasible, the number of incoherent measurements required by each sensor can be reduced. For each JSM, we propose algorithms for joint signal recovery from incoherent projections and characterize theoretically and empirically the number of measurements per sensor required for accurate reconstruction. We focus in particular on JSM-3 in this paper but also overview our results for JSMs 1 and 2, which are discussed in further detail in our papers [1, 2, 11].

4.1 JSM-1: Sparse common component + innovations

For this model (see also [1, 2]), we have proposed an analytical framework inspired by the principles of information theory. This allows us to characterize the measurement rates M_j required to jointly reconstruct the signals x_j. The measurement rates relate directly to the signals' conditional sparsities, in parallel with the Slepian-Wolf theory. More specifically, we have formalized the following intuition. Consider the simple case of J = 2 signals. By employing the CS machinery, we might expect that (i) (K + K_1)c coefficients suffice to reconstruct x_1, (ii) (K + K_2)c coefficients suffice to reconstruct x_2, yet (iii) only (K + K_1 + K_2)c coefficients should suffice to reconstruct both x_1 and x_2, since we have K + K_1 + K_2 nonzero elements in x_1 and x_2. In addition, given the (K + K_1)c measurements for x_1 as side information, and assuming that the partitioning of x_1 into z_C and z_1 is known, cK_2 measurements that describe z_2 should allow reconstruction of x_2.
Formalizing these arguments allows us to establish theoretical lower bounds on the required measurement rates at each sensor; Fig. 1(a) shows such a bound for the case of J = 2 signals. We have also established upper bounds on the required measurement rates M_j by proposing a specific algorithm for reconstruction [1]. The algorithm uses carefully designed measurement matrices Φ_j (in which some rows are identical and some differ) so that the resulting measurements can be combined to allow step-by-step recovery of the sparse components. The theoretical rates M_j are below those required for separable CS recovery of each signal x_j (see Fig. 1(a)). We also proposed a reconstruction technique based on a single execution of a linear program, which seeks the sparsest components [z_C; z_1; ...; z_J] that

[Figure 1 appears here; see the caption below.]

Figure 1: (a) Converse bounds and achievable measurement rates for J = 2 signals with common sparse component and sparse innovations (JSM-1). We fix signal lengths N = 1000 and sparsities K = 200, K_1 = K_2 = 50. The measurement rates R_j := M_j/N reflect the number of measurements normalized by the signal length. Blue curves indicate our theoretical and anticipated converse bounds; red indicates a provably achievable region, and pink denotes the rates required for separable CS signal reconstruction. (b) Reconstructing a signal ensemble with common sparse supports (JSM-2). We plot the probability of perfect reconstruction via DCS-SOMP (solid lines) and independent CS reconstruction (dashed lines) as a function of the number of measurements per signal M and the number of signals J. We fix the signal length to N = 50 and the sparsity to K = 5.
An oracle encoder that knows the positions of the large coefficients would use 5 measurements per signal.

account for the observed measurements. Numerical simulations support such an approach (see Fig. 1(a)). Future work will extend JSM-1 to ℓ_p-compressible signals, 0 < p ≤ 1.

4.2 JSM-2: Common sparse supports

Under the JSM-2 signal ensemble model (see also [2, 11]), independent recovery of each signal via ℓ_1 minimization would require cK measurements per signal. However, algorithms inspired by conventional greedy pursuit algorithms (such as OMP [15]) can substantially reduce this number. In the single-signal case, OMP iteratively constructs the sparse support set Ω; decisions are based on inner products between the columns of ΦΨ and a residual. In the multi-signal case, there are more clues available for determining the elements of Ω.

To establish a theoretical justification for our approach, we first proposed a simple One-Step Greedy Algorithm (OSGA) [11] that combines all of the measurements and seeks the largest correlations with the columns of the Φ_jΨ. We established that, assuming that Φ_j has i.i.d. Gaussian entries and that the nonzero coefficients in the θ_j are i.i.d. Gaussian, then with M ≥ 1 measurements per signal, OSGA recovers Ω with probability approaching 1 as J → ∞. Moreover, with M ≥ K measurements per signal, OSGA recovers all x_j with probability approaching 1 as J → ∞. This meets the theoretical lower bound for M_j.

In practice, OSGA can be improved using an iterative greedy algorithm. We proposed a simple variant of Simultaneous Orthogonal Matching Pursuit (SOMP) [16] that we term DCS-SOMP [11]. For this algorithm, Fig. 1(b) plots the performance as the number of sensors varies from J = 1 to 32. We fix the signal lengths at N = 50 and the sparsity of each signal to K = 5. With DCS-SOMP, for perfect reconstruction of all signals the average number of measurements per signal decreases as a function of J.
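The one-step algorithm admits a very short implementation. The sketch below (our own function name; Ψ = identity; summing squared correlations is one plausible instantiation of "combining all of the measurements") keeps the K columns with the largest totals. With many sensors, the common support is recovered reliably, mirroring the J → ∞ guarantee.

```python
import numpy as np

def osga(Phis, ys, K):
    """One-Step Greedy Algorithm for JSM-2 (Psi = identity): sum each
    sensor's squared correlations (Phi_j^T y_j)^2 over sensors and keep
    the indices of the K largest totals as the common support estimate."""
    stats = np.zeros(Phis[0].shape[1])
    for Phi, y in zip(Phis, ys):
        stats += (Phi.T @ y) ** 2      # per-sensor squared correlations
    return np.sort(np.argsort(-stats)[:K])

# Toy JSM-2 ensemble (illustrative sizes): many sensors, few measurements each.
rng = np.random.default_rng(0)
J, M, N, K = 200, 20, 50, 5
support = np.sort(rng.choice(N, K, replace=False))
Phis, ys = [], []
for _ in range(J):
    x = np.zeros(N)
    x[support] = rng.standard_normal(K)   # different coefficients per sensor
    Phi = rng.standard_normal((M, N))
    Phis.append(Phi)
    ys.append(Phi @ x)
print(support, osga(Phis, ys, K))
```

DCS-SOMP improves on this one-shot estimate by iterating selection and least-squares refitting across all sensors simultaneously.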
The trend suggests that, for very large J, close to K measurements per signal should suffice. On the contrary, with independent CS reconstruction, for perfect reconstruction of all signals the number of measurements per sensor increases as a function of J. This surprise is due to the fact that each signal will experience an independent probability p ≤ 1 of successful reconstruction; therefore the overall probability of complete success is p^J. Consequently, each sensor must compensate by making additional measurements.

4.3 JSM-3: Nonsparse common + sparse innovations

The JSM-3 signal ensemble model provides a particularly compelling motivation for joint recovery. Under this model, no individual signal x_j is sparse, and so separate signal recovery would require fully N measurements per signal. As in the other JSMs, however, the commonality among the signals makes it possible to substantially reduce this number. Our recovery algorithms are based on the observation that if the common component z_C were known, then each innovation z_j could be estimated using the standard single-signal CS machinery on the adjusted measurements y_j − Φ_j z_C = Φ_j z_j. While z_C is not known in advance, it can be estimated from the measurements. In fact, across all J sensors, a total of \sum_j M_j random projections of z_C are observed (each corrupted by a contribution from one of the z_j). Since z_C is not sparse, it cannot be recovered via CS techniques, but when the number of measurements is sufficiently large (\sum_j M_j ≫ N), z_C can be estimated using standard tools from linear algebra. A key requirement for such a method to succeed in recovering z_C is that each Φ_j be different, so that their rows combine to span all of R^N. In the limit, z_C can be recovered while still allowing each sensor to operate at the minimum measurement rate dictated by the {z_j}.
A prototype algorithm, which we name Transpose Estimation of Common Component (TECC), is listed below, where we assume that each measurement matrix Φ_j has i.i.d. N(0, σ_j²) entries.
TECC Algorithm for JSM-3
1. Estimate common component: Define the matrix Φ̂ as the concatenation of the regularized individual measurement matrices Φ̂_j = (1/(M_j σ_j²)) Φ_j, that is, Φ̂ = [Φ̂_1, Φ̂_2, . . . , Φ̂_J]. Calculate the estimate of the common component as ẑ_C = (1/J) Φ̂⊤y.
2. Estimate measurements generated by innovations: Using the previous estimate, subtract the contribution of the common part on the measurements and generate estimates for the measurements caused by the innovations for each signal: ŷ_j = y_j − Φ_j ẑ_C.
3. Reconstruct innovations: Using a standard single-signal CS reconstruction algorithm, obtain estimates of the innovations ẑ_j from the estimated innovation measurements ŷ_j.
4. Obtain signal estimates: Sum the above estimates, letting x̂_j = ẑ_C + ẑ_j.
The following theorem shows that asymptotically, by using the TECC algorithm, each sensor need only measure at the rate dictated by the sparsity K_j.
Theorem 1 [2] Assume that the nonzero expansion coefficients of the sparse innovations z_j are i.i.d. Gaussian random variables and that their locations are uniformly distributed on {1, 2, ..., N}. Then the following statements hold:
1. Let the measurement matrices Φ_j contain i.i.d. N(0, σ_j²) entries with M_j ≥ K_j + 1. Then each signal x_j can be recovered using the TECC algorithm with probability approaching 1 as J → ∞.
2. Let Φ_j be a measurement matrix with M_j ≤ K_j for some j ∈ {1, 2, ..., J}. Then with probability 1, the signal x_j cannot be uniquely recovered by any algorithm for any J.
For large J, the measurement rates permitted by Statement 1 are the lowest possible for any reconstruction strategy on JSM-3 signals, even neglecting the presence of the nonsparse component. Thus, Theorem 1 provides a tight achievable and converse for JSM-3 signals.
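Step 1 of TECC is simple enough to sketch directly. The following simulation is illustrative (not the authors' code), with σ_j² = 1 for all sensors and parameter values of our choosing; it checks that averaging the regularized back-projections over many sensors yields a usable estimate of the nonsparse common component.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, J, M, sigma2 = 50, 5, 200, 10, 1.0

z_C = rng.standard_normal(N)                   # nonsparse common component
Z = np.zeros((J, N))                           # sparse innovations z_j
for j in range(J):
    idx = rng.choice(N, size=K, replace=False)
    Z[j, idx] = rng.standard_normal(K)

Phi = rng.standard_normal((J, M, N)) * np.sqrt(sigma2)
Y = np.einsum('jmn,jn->jm', Phi, z_C + Z)      # y_j = Phi_j (z_C + z_j)

# zC_hat = (1/J) Phi_hat^T y, with Phi_hat_j = Phi_j / (M_j sigma_j^2)
z_C_hat = np.einsum('jmn,jm->n', Phi, Y) / (J * M * sigma2)
rel_err = np.linalg.norm(z_C_hat - z_C) / np.linalg.norm(z_C)
```

The reason this works: E[Φ_j⊤Φ_j] = Mσ²I, so each term Φ̂_j⊤y_j is a noisy unbiased estimate of z_C + z_j, and averaging over sensors suppresses both the projection noise and the zero-mean innovations. Steps 2–4 then run standard single-signal CS recovery on y_j − Φ_j ẑ_C.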
The CS technique employed in Theorem 1 involves combinatorial searches for estimating the innovation components. More efficient techniques could also be employed (including several proposed for CS in the presence of noise [3, 5, 7, 10, 12]). While Theorem 1 suggests the theoretical gains from joint recovery as J → ∞, practical gains can also be realized with a moderate number of sensors. For example, suppose in the TECC algorithm that the initial estimate ẑ_C is not accurate enough to enable correct identification of the sparse innovation supports {Ω_j}. In such a case, it may still be possible for a rough approximation of the innovations {z_j} to help refine the estimate ẑ_C. This in turn could help to refine the estimates of the innovations. Since each component helps to estimate the others, we propose an iterative algorithm for JSM-3 recovery. The Alternating Common and Innovation Estimation (ACIE) algorithm exploits the observation that once the basis vectors comprising the innovation z_j have been identified in the index set Ω_j, their effect on the measurements y_j can be removed to aid in estimating z_C.
ACIE Algorithm for JSM-3
1. Initialize: Set Ω̂_j = ∅ for each j. Set the iteration counter ℓ = 1.
2. Estimate common component: Let Φ_{j,Ω̂_j} be the M_j × |Ω̂_j| submatrix obtained by sampling the columns Ω̂_j from Φ_j, and construct an M_j × (M_j − |Ω̂_j|) matrix Q_j = [q_{j,1} . . . q_{j,M_j−|Ω̂_j|}] having orthonormal columns that span the orthogonal complement of colspan(Φ_{j,Ω̂_j}). Remove the projection of the measurements into the aforementioned span to obtain measurements caused exclusively by vectors not in Ω̂_j, letting ỹ_j = Q_j⊤y_j and Φ̃_j = Q_j⊤Φ_j. Use the modified measurements Ỹ = [ỹ_1⊤ ỹ_2⊤ . . . ỹ_J⊤]⊤ and modified holographic basis Φ̃ = [Φ̃_1⊤ Φ̃_2⊤ . . . Φ̃_J⊤]⊤ to refine the estimate of the measurements caused by the common part of the signal, setting z̃_C = Φ̃†Ỹ, where A† = (A⊤A)⁻¹A⊤ denotes the pseudoinverse of matrix A.
3.
Estimate innovation supports: For each signal j, subtract z̃_C from the measurements, ŷ_j = y_j − Φ_j z̃_C, and estimate the sparse support of each innovation, Ω̂_j.
4. Iterate: If ℓ < L, a preset number of iterations, then increment ℓ and return to Step 2. Otherwise proceed to Step 5.
5. Estimate innovation coefficients: For each signal j, estimate the coefficients for the indices in Ω̂_j, setting θ̂_{j,Ω̂_j} = Φ†_{j,Ω̂_j}(y_j − Φ_j z̃_C), where θ̂_{j,Ω̂_j} is a sampled version of the innovation’s sparse coefficient vector estimate θ̂_j.
6. Reconstruct signals: Estimate each signal as x̂_j = z̃_C + ẑ_j = z̃_C + Ψθ̂_j.
In the case where the innovation support estimate is correct (Ω̂_j = Ω_j), the measurements ỹ_j will describe only the common component z_C. If this is true for every signal j and the number of remaining measurements satisfies Σ_j M_j − KJ ≥ N, then z_C can be perfectly recovered in Step 2. Because it may be difficult to correctly obtain all Ω_j in the first iteration, we find it preferable to run the algorithm for several iterations. Fig. 2(a) shows that, for sufficiently large J, we can recover all of the signals with significantly fewer than N measurements per signal. We note the following behavior in the graph. First, as J grows, it becomes more difficult to perfectly reconstruct all J signals. We believe this is inevitable, because even if z_C were known without error, then perfect ensemble recovery would require the successful execution of J independent runs of OMP. Second, for small J, the probability of success can decrease at high values of M. We believe this is due to the fact that initial errors in estimating z_C may tend to be somewhat sparse (since ẑ_C roughly becomes an average of the signals {x_j}), and these sparse errors can mislead the subsequent OMP processes. For more moderate M, it seems that the errors in estimating z_C (though greater) tend to be less sparse.
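The core of ACIE's Step 2, projecting each y_j onto the orthogonal complement of the identified innovation columns and solving a stacked least-squares problem for z_C, can be sketched as follows. This is an illustrative construction (not the authors' code): the supports are assumed already correctly identified, so by the remark above recovery is exact when Σ_j(M_j − K) ≥ N.

```python
import numpy as np

def complement_basis(A):
    """Orthonormal basis Q_j of the orthogonal complement of colspan(A)."""
    Q, _ = np.linalg.qr(A, mode='complete')   # full square orthonormal factor
    return Q[:, A.shape[1]:]                  # drop the columns spanning colspan(A)

rng = np.random.default_rng(2)
N, K, J, M = 20, 2, 8, 12                     # sum_j (M - K) = 80 >= N

z_C = rng.standard_normal(N)
tilde_Phi, tilde_y = [], []
for j in range(J):
    Phi_j = rng.standard_normal((M, N))
    idx = rng.choice(N, size=K, replace=False)        # true support Omega_j
    z_j = np.zeros(N)
    z_j[idx] = rng.standard_normal(K)
    y_j = Phi_j @ (z_C + z_j)
    Q_j = complement_basis(Phi_j[:, idx])     # Q_j^T Phi_{j,Omega_j} = 0
    tilde_y.append(Q_j.T @ y_j)               # the innovation's contribution is removed
    tilde_Phi.append(Q_j.T @ Phi_j)

# Stacked least squares: z_C_tilde = pinv(Phi_tilde) @ Y_tilde
z_C_tilde, *_ = np.linalg.lstsq(np.vstack(tilde_Phi), np.concatenate(tilde_y), rcond=None)
```

Because each Q_j annihilates the columns carrying z_j, the stacked system involves z_C alone; with 80 remaining rows for a 20-dimensional unknown, the least-squares solution recovers z_C exactly (up to numerical precision).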
We expect that a more sophisticated algorithm could alleviate such a problem, and we note that the problem is also mitigated at higher J. Fig. 2(b) shows that when the sparse innovations share common supports we see an even greater savings. As a point of reference, a traditional approach to signal encoding would require 1600 total measurements to reconstruct these J = 32 nonsparse signals of length N = 50. Our approach requires only about 10 per sensor for a total of 320 measurements.
[Figure 2: two panels, (a) and (b), plotting probability of exact reconstruction against the number of measurements per signal M, for J = 8, 16, 32 sensors.]
Figure 2: Reconstructing a signal ensemble with nonsparse common component and sparse innovations (JSM-3) using ACIE. (a) Reconstruction using OMP independently on each signal in Step 3 of the ACIE algorithm (innovations have arbitrary supports). (b) Reconstruction using DCS-SOMP jointly on all signals in Step 3 of the ACIE algorithm (innovations have identical supports). Signal length N = 50, sparsity K = 5. The common structure exploited by DCS-SOMP enables dramatic savings in the number of measurements. We average over 1000 simulation runs.
Acknowledgments: Thanks to Emmanuel Candès, Hyeokho Choi, and Joel Tropp for informative and inspiring conversations.
References
[1] D. Baron, M. F. Duarte, S. Sarvotham, M. B. Wakin, and R. G. Baraniuk. An information-theoretic approach to distributed compressed sensing. In Allerton Conf. Comm., Control, Comput., Sept. 2005.
[2] D. Baron, M. B. Wakin, M. F. Duarte, S. Sarvotham, and R. G. Baraniuk. Distributed compressed sensing. 2005. Preprint. Available at www.dsp.rice.edu/cs.
[3] E. Candès, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Applied Mathematics, 2005. To appear.
[4] E. Candès and T. Tao.
Near optimal signal recovery from random projections and universal encoding strategies. 2004. Preprint.
[5] E. Candès and T. Tao. The Dantzig selector: Statistical estimation when p is much larger than n. 2005. Preprint.
[6] E. Candès and T. Tao. Error correction via linear programming. 2005. Preprint.
[7] S. Chen, D. Donoho, and M. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33–61, 1998.
[8] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, New York, 1991.
[9] D. Donoho. Compressed sensing. 2004. Preprint.
[10] D. Donoho and Y. Tsaig. Extensions of compressed sensing. 2004. Preprint.
[11] M. F. Duarte, S. Sarvotham, D. Baron, M. B. Wakin, and R. G. Baraniuk. Distributed compressed sensing of jointly sparse signals. In Asilomar Conf. Signals, Sys., Comput., Nov. 2005.
[12] J. Haupt and R. Nowak. Signal reconstruction from noisy random projections. 2005. Preprint.
[13] S. Pradhan and K. Ramchandran. Distributed source coding using syndromes (DISCUS): Design and construction. IEEE Trans. Inform. Theory, 49:626–643, March 2003.
[14] D. Slepian and J. K. Wolf. Noiseless coding of correlated information sources. IEEE Trans. Inform. Theory, 19:471–480, July 1973.
[15] J. Tropp and A. C. Gilbert. Signal recovery from partial information via orthogonal matching pursuit. 2005. Preprint.
[16] J. Tropp, A. C. Gilbert, and M. J. Strauss. Simultaneous sparse approximation via greedy pursuit. In IEEE 2005 Int. Conf. Acoustics, Speech, Signal Processing, March 2005.
[17] Z. Xiong, A. Liveris, and S. Cheng. Distributed source coding for sensor networks. IEEE Signal Proc. Mag., 21:80–94, September 2004.
|
2005
|
4
|
2,855
|
Non-Gaussian Component Analysis: a Semi-parametric Framework for Linear Dimension Reduction
G. Blanchard¹, M. Sugiyama¹,², M. Kawanabe¹, V. Spokoiny³, K.-R. Müller¹,⁴
¹ Fraunhofer FIRST.IDA, Kekuléstr. 7, 12489 Berlin, Germany
² Dept. of CS, Tokyo Inst. of Tech., 2-12-1, O-okayama, Meguro-ku, Tokyo, 152-8552, Japan
³ Weierstrass Institute and Humboldt University, Mohrenstr. 39, 10117 Berlin, Germany
⁴ Dept. of CS, University of Potsdam, August-Bebel-Strasse 89, 14482 Potsdam, Germany
spokoiny@wias-berlin.de {blanchar,sugi,nabe,klaus}@first.fhg.de
Abstract
We propose a new linear method for dimension reduction to identify non-Gaussian components in high dimensional data. Our method, NGCA (non-Gaussian component analysis), uses a very general semi-parametric framework. In contrast to existing projection methods we define what is uninteresting (Gaussian): by projecting out uninterestingness, we can estimate the relevant non-Gaussian subspace. We show that the estimation error of finding the non-Gaussian components tends to zero at a parametric rate. Once NGCA components are identified and extracted, various tasks can be applied in the data analysis process, like data visualization, clustering, denoising or classification. A numerical study demonstrates the usefulness of our method.
1 Introduction
Suppose {X_i}_{i=1}^n are i.i.d. samples in a high-dimensional space R^d drawn from an unknown distribution with density p(x). A general multivariate distribution is typically too complex to analyze from the data, thus dimensionality reduction is necessary to decrease the complexity of the model (see, e.g., [4, 11, 10, 12, 1]). We will follow the rationale that in most real-world applications the ‘signal’ or ‘information’ contained in the high-dimensional data is essentially non-Gaussian while the ‘rest’ can be interpreted as high dimensional Gaussian noise. Thus we implicitly fix what is not interesting (Gaussian part) and learn its orthogonal complement, i.e.
what is interesting. We call this approach non-Gaussian component analysis (NGCA). We want to emphasize that we do not assume the Gaussian components to be of smaller order of magnitude than the signal components. This setting therefore excludes the use of common (nonlinear) dimensionality reduction methods such as Isomap [12] and LLE [10], which are based on the assumption that the data lies, say, on a lower dimensional manifold, up to some small noise distortion. In the restricted setting where the number of Gaussian components is at most one and all the non-Gaussian components are mutually independent, Independent Component Analysis (ICA) techniques (e.g., [9]) are applicable to identify the non-Gaussian subspace. A framework closer in spirit to NGCA is that of projection pursuit (PP) algorithms [5, 7, 9], where the goal is to extract non-Gaussian components in a general setting, i.e., the number of Gaussian components can be more than one and the non-Gaussian components can be dependent. Projection pursuit methods typically proceed by fixing a single index which measures the non-Gaussianity (or ’interestingness’) of a projection direction. This index is then optimized to find a good direction of projection, and the procedure is iterated to find further directions. Note that some projection indices are suitable for finding super-Gaussian components (heavy-tailed distributions) while others are suited for identifying sub-Gaussian components (light-tailed distributions) [9]. Therefore, traditional PP algorithms may not work effectively if the data contains, say, both super- and sub-Gaussian components. Technically, the NGCA approach to identify the non-Gaussian subspace uses a very general semi-parametric framework based on a central property: there exists a linear mapping h ↦ β(h) ∈ R^d which, to any arbitrary (smooth) nonlinear function h : R^d → R, associates a vector β lying in the non-Gaussian subspace.
Using a whole family of different nonlinear functions h then yields a family of different vectors β̂(h) which all approximately lie in, and span, the non-Gaussian subspace. We finally perform PCA on this family of vectors to extract the principal directions and estimate the target space. Our main theoretical contribution in this paper is to prove consistency of the NGCA procedure, i.e. that the above estimation error vanishes at a rate √(log(n)/n) with the sample size n. In practice, we consider functions of the particular form h_{ω,a}(x) = f_a(⟨ω, x⟩), where f is a function class parameterized, say, by a parameter a, and ∥ω∥ = 1. Apart from the conceptual point, defining uninterestingness as the point of departure instead of interestingness, another way to look at our method is to say that it allows the combination of information coming from different indices h: here the above function f_a (for fixed a) plays a role similar to that of a non-Gaussianity index in PP, but we do combine a rich family of such functions (by varying a and even by considering several function classes at the same time). The important point here is that while traditional projection pursuit does not provide a well-founded justification for combining directions obtained from different indices, our framework allows us to do precisely this – thus implicitly selecting, in a given family of indices, the ones which are the most informative for the data at hand (while always maintaining consistency). In the following section we will outline our main theoretical contribution, a novel semi-parametric theory for linear dimension reduction. Section 3 discusses the algorithmic procedures, and simulation results underline the usefulness of NGCA; finally a brief conclusion is given.
2 Theoretical framework
The model.
We assume the unknown probability density function p(x) of the observations in R^d is of the form
p(x) = g(Tx) φ_Γ(x),  (1)
where T is an unknown linear mapping from R^d to another space R^m with m ≤ d, g is an unknown function on R^m, and φ_Γ is a centered Gaussian density with unknown covariance matrix Γ. The above decomposition may be possible for any density p since g can be any function. Therefore, this decomposition is not restrictive in general. Note that the model (1) includes as particular cases both the pure parametric (m = 0) and pure non-parametric (m = d) models. We effectively consider an intermediate case where d is large and m is rather small. In what follows we denote by I the m-dimensional linear subspace in R^d generated by the dual operator T⊤: I = Ker(T)⊥ = Range(T⊤). We call I the non-Gaussian subspace. Note how this definition implements the general point of view outlined in the introduction: by this model we define rather what is considered uninteresting, i.e. the null space of T; the target space is defined indirectly as the orthogonal complement of the uninteresting component. More precisely, using the orthogonal decomposition X = X_0 + X_I, where X_0 ∈ Ker(T) and X_I ∈ I, equation (1) implies that conditionally on X_I, X_0 has a Gaussian distribution. X_0 is therefore ’not interesting’ and we wish to project it out. Our goal is therefore to estimate I by some subspace Î computed from i.i.d. samples {X_i}_{i=1}^n which follow the distribution with density p(x). In this paper we assume the effective dimension m to be known or fixed a priori by the user. Note that we do not estimate Γ, g, and T when estimating I.
Population analysis. The main idea underlying our approach is summed up in the following Proposition (proof in Appendix). Whenever the variable X has covariance matrix identity, this result allows, from an arbitrary smooth real function h on R^d, to find a vector β(h) ∈ I.
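For concreteness, data from model (1) can be generated by pairing an m-dimensional non-Gaussian component with independent Gaussian coordinates and applying a linear mix. The sketch below is illustrative only: the choice of g (uniform on the unit circle) and the mixing matrix are our assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, m = 1000, 10, 2

theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
X_ng = np.column_stack([np.cos(theta), np.sin(theta)])   # non-Gaussian part (m dims)
X_g = rng.standard_normal((n, d - m))                    # Gaussian part (d - m dims)
X = np.hstack([X_ng, X_g])

# An invertible linear mix hides the non-Gaussian coordinates; the index
# space I is then spanned by the first m rows of the unmixing matrix A^{-1}.
A = rng.standard_normal((d, d))
X_mixed = X @ A.T
```

Such mixed data still fits the form (1), since the quadratic (Gaussian) part of the log-density can be absorbed into φ_Γ while the remainder depends on x only through the first m unmixing directions.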
Proposition 1 Let X be a random variable whose density function p(x) satisfies (1) and suppose that h(x) is a smooth real function on R^d. Assume furthermore that Σ = E[XX⊤] = I_d. Then under mild regularity conditions the following vector belongs to the target space I:
β(h) = E[∇h(X) − Xh(X)].  (2)
Estimation using empirical data. Since the unknown density p(x) is used to define β by Eq. (2), one cannot directly use this formula in practice, and it must be approximated using the empirical data. We therefore have to estimate the population expectations using empirical ones. A bound on the corresponding approximation error is then given by the following theorem:
Theorem 1 Let h be a smooth function. Assume that sup_y max(∥∇h(y)∥, ∥h(y)∥) < B and that X has covariance matrix E[XX⊤] = I_d and is such that for some λ₀ > 0:
E[exp(λ₀∥X∥)] ≤ a₀ < ∞.  (3)
Denote h̃(x) = ∇h(x) − xh(x). Suppose X_1, . . . , X_n are i.i.d. copies of X and define
β̂(h) = (1/n) Σ_{i=1}^n h̃(X_i),  and  σ̂(h) = (1/n) Σ_{i=1}^n ∥h̃(X_i) − β̂(h)∥²;  (4)
then with probability 1 − 4δ the following holds:
dist(β̂(h), I) ≤ 2 √(σ̂(h)(log δ⁻¹ + log d)/n) + C(λ₀, a₀, B, d) (log(nδ⁻¹) log δ⁻¹ / n)^{3/4}.
Comments. 1. The proof of the theorem relies on standard tools using Chernoff’s bounding method and is omitted for space. In this theorem, the covariance matrix of X is assumed to be known and equal to identity, which is not a realistic assumption; in practice, we use a standard “whitening” procedure (see next section) using the empirical covariance matrix. Of course there is an additional error coming from this step, since the covariance matrix is also estimated empirically. In the extended version of the paper [3], we prove (under somewhat stronger assumptions) a bound for the entirely empirical procedure including whitening, resulting in an approximation error of the same order in n (up to a logarithmic factor). This result was omitted here due to space constraints. 2.
Fixing δ, Theorem 1 implies that the vector β̂(h) obtained from any h(x) converges to the unknown non-Gaussian subspace I at a “parametric” rate of order 1/√n. Furthermore, the theorem gives us an estimation of the relative size of the estimation error for different functions h through the (computable from the data) factor √σ̂(h) in the main term of the bound. This suggests using this quantity as a renormalizing factor so that the typical approximation error is (roughly) independent of the function h used. This normalization principle will be used in the main procedure. 3. Note the theorem results in an exponential deviation inequality (the dependence in the confidence level δ is logarithmic). As a consequence, using the union bound over a finite net, we can obtain as a corollary of the above theorem a uniform deviation bound of the same form over a (discretized) set of functions (where the log-cardinality of the set appears as an additional factor). For instance, if we consider a 1/n-discretization net of functions with d parameters, hence of size O(n^d), then the above bound holds uniformly when replacing the log δ⁻¹ term by d log n + log δ⁻¹. This does not change fundamentally the bound (up to an additional complexity factor √(d log n)), and justifies that we consider simultaneously such a family of functions in the main algorithm.
[Figure 1 sketches a family of functions h_1, . . . , h_5, each mapped to a vector β̂_1, . . . , β̂_5 lying close to the target space I.]
Figure 1: The NGCA main idea: from a varied family of real functions h, compute a family of vectors β̂ belonging to the target space up to small estimation error.
3 The NGCA algorithm
In the last section, we have established that given an arbitrary smooth real function h on R^d, we are able to construct a vector β̂(h) which belongs to the target space I up to a small estimation error. The main idea is now to consider a large family of such functions (h_k), giving rise to a family of vectors β̂_k (see Fig. 1).
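The estimator of Eq. (4) is immediate to implement. The sketch below is illustrative (the index function h(x) = tanh(⟨ω, x⟩) and all parameter values are our assumptions); it also checks the key fact behind Proposition 1: for purely Gaussian data with identity covariance, Stein's identity E[∇h(X)] = E[Xh(X)] forces β̂(h) ≈ 0.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 100_000, 5
X = rng.standard_normal((n, d))          # purely Gaussian data: I = {0}

w = np.full(d, d ** -0.5)                # a fixed unit direction omega
t = np.tanh(X @ w)
grad_h = (1.0 - t ** 2)[:, None] * w     # gradient of h(x) = tanh(<w, x>)
h_tilde = grad_h - X * t[:, None]        # h_tilde(x) = grad h(x) - x h(x)

beta_hat = h_tilde.mean(axis=0)                          # beta_hat(h), Eq. (4)
sigma_hat = ((h_tilde - beta_hat) ** 2).sum(axis=1).mean()  # sigma_hat(h), Eq. (4)
```

On non-Gaussian data, β̂(h) would instead pick up a component inside the non-Gaussian subspace, which is exactly what the NGCA algorithm of the next section exploits.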
Theorem 1 ensures that the estimation error remains controlled uniformly, and we can also normalize the vectors such that the estimation error is of the same order for all vectors (see Comments 2 and 3 above). Under this condition, it can be shown that vectors with a longer norm are more informative about the target subspace, and that vectors with too small a norm are uninformative. We therefore throw out the smaller vectors, then estimate the target space I by applying a principal components analysis to the remaining vector family. In the proposed algorithm we will restrict our attention to functions of the form h_{f,ω}(x) = f(⟨ω, x⟩), where ω ∈ R^d, ∥ω∥ = 1, and f belongs to a finite family F of smooth real functions of a real variable. Our theoretical setting allows us to ensure that the approximation error remains small uniformly over F and ω (rigorously, ω should be restricted to a finite ε-net of the unit sphere in order to consider a finite family of functions: in practice we will overlook this weak restriction). However, it is not feasible in practice to sample the whole parameter space for ω as soon as it has more than a few dimensions. To overcome this difficulty, we advocate using a well-known PP algorithm, FastICA [8], as a proxy to find good candidates for ω_f for a fixed f. Note that this does not make NGCA equivalent to FastICA: the important point is that FastICA, as a stand-alone procedure, requires fixing the “index function” f beforehand. The crucial novelty of our method is that we provide a theoretical setting and a methodology which allows combining the results of this projection pursuit method when used over a possibly large spectrum of arbitrary index functions f.
NGCA ALGORITHM.
Input: Data points (X_i) ∈ R^d, dimension m of target subspace.
Parameters: Number T_max of FastICA iterations; threshold ϵ; family of real functions (f_k).
Whitening. The data X_i is recentered by subtracting the empirical mean.
Let Σ̂ denote the empirical covariance matrix of the data sample (X_i); put Ŷ_i = Σ̂^{−1/2} X_i, the empirically whitened data.
Main Procedure. Loop on k = 1, . . . , L:
  Draw ω_0 at random on the unit sphere of R^d.
  Loop on t = 1, . . . , T_max: [FastICA loop]
    Put β̂_t ← (1/n) Σ_{i=1}^n ( Ŷ_i f_k(⟨ω_{t−1}, Ŷ_i⟩) − f′_k(⟨ω_{t−1}, Ŷ_i⟩) ω_{t−1} ).
    Put ω_t ← β̂_t/∥β̂_t∥.
  End Loop on t
  Let N_i be the trace of the empirical covariance matrix of β̂_{T_max}:
    N_i = (1/n) Σ_{i=1}^n ∥Ŷ_i f_k(⟨ω_{T_max−1}, Ŷ_i⟩) − f′_k(⟨ω_{T_max−1}, Ŷ_i⟩) ω_{T_max−1}∥² − ∥β̂_{T_max}∥².
  Store v(k) ← β̂_{T_max} · √(n/N_i). [Normalization]
End Loop on k
Thresholding. From the family v(k), throw away vectors having norm smaller than threshold ϵ.
PCA step. Perform PCA on the set of remaining v(k). Let V_m be the space spanned by the first m principal directions.
Pull back in original space. Output: W_m = Σ̂^{−1/2} V_m.
Summing up, the NGCA algorithm finally consists of the following steps (see above pseudocode): (1) Data whitening (see Comment 1 in the previous section), (2) Apply FastICA to each function f ∈ F to find a promising candidate value for ω_f, (3) Compute the corresponding family of vectors (β̂(h_{f,ω_f}))_{f∈F} (using Eq. (4)), (4) Normalize the vectors appropriately; threshold and throw out uninformative ones, (5) Apply PCA, (6) Pull back in original space (de-whitening). In the implementation tested, we have used the following forms of the functions f_k: f^{(1)}_σ(z) = z³ exp(−z²/2σ²) (Gauss-Pow3), f^{(2)}_b(z) = tanh(bz) (Hyperbolic Tangent), f^{(3)}_a(z) = {sin, cos}(az) (Fourier). More precisely, we consider discretized ranges for a ∈ [0, A], b ∈ [0, B], σ ∈ [σ_min, σ_max]; this gives rise to a finite family (f_k) (which includes simultaneously functions of the three different above families).
4 Numerical results
Parameters used. All the experiments presented were obtained with exactly the same set of parameters: a ∈ [0, 4] for the Fourier functions; b ∈ [0, 5] for the Hyperbolic Tangent functions; σ² ∈ [0.5, 5] for the Gauss-Pow3 functions.
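The pseudocode translates almost line-for-line into NumPy. The sketch below is a heavily simplified illustration, not the authors' implementation: it uses only two index functions (plain tanh and a Gauss-Pow3 with σ² = 2), a handful of random restarts in place of the full function family, and parameter values of our choosing.

```python
import numpy as np

def ngca(X, m, funcs, n_restarts=40, T=10, eps=1.5, seed=0):
    """Minimal NGCA sketch: whiten, run the FastICA-style fixed-point loop
    for each index function, normalize each resulting vector, threshold, PCA."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(Xc.T @ Xc / n)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T          # Sigma_hat^{-1/2}
    Y = Xc @ W                                            # whitened data
    vs = []
    for f, fprime in funcs:
        for _ in range(n_restarts):
            w = rng.standard_normal(d)
            w /= np.linalg.norm(w)
            for _ in range(T):                            # FastICA loop
                s = Y @ w
                beta = (Y * f(s)[:, None]).mean(axis=0) - fprime(s).mean() * w
                w = beta / np.linalg.norm(beta)
            s = Y @ w
            G = Y * f(s)[:, None] - np.outer(fprime(s), w)
            beta = G.mean(axis=0)
            Ni = (G ** 2).sum(axis=1).mean() - beta @ beta
            vs.append(beta * np.sqrt(n / Ni))             # normalization
    vs = np.array(vs)
    kept = vs[np.linalg.norm(vs, axis=1) > eps]           # thresholding
    if len(kept) == 0:
        kept = vs                                         # fallback for the sketch
    _, _, Vt = np.linalg.svd(kept, full_matrices=False)   # PCA directions
    return W @ Vt[:m].T                                   # pull back: d x m basis

funcs = [
    (np.tanh, lambda z: 1.0 - np.tanh(z) ** 2),
    (lambda z: z ** 3 * np.exp(-z ** 2 / 4.0),
     lambda z: (3.0 * z ** 2 - z ** 4 / 2.0) * np.exp(-z ** 2 / 4.0)),
]
```

On data with a single uniform (hence sub-Gaussian) coordinate among Gaussian ones, the recovered one-dimensional subspace aligns with that coordinate, since vectors from restarts that converge to it pass the norm threshold while vectors from purely Gaussian directions have normalized norm near 1.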
Each of these ranges was divided into 1000 equispaced values, thus yielding a family (f_k) of size 4000 (Fourier functions count twice because of the sine and cosine parts). Some preliminary calibration suggested to take ε = 1.5 as the threshold under which vectors are not informative. Finally we fixed the number of FastICA iterations T_max = 10. With this choice of parameters, with 1000 points of data the computation time is typically of the order of 10 seconds on a modern PC under a Matlab implementation.
[Figure 2 shows boxplots of the error criterion for PP(pow3), PP(tanh) and NGCA on four datasets (A)–(D).]
Figure 2: Boxplots of the error criterion E(Î, I) over 100 training samples of size 1000.
[Figure 3 shows scatter plots of the NGCA error against the FastICA error on the four datasets (A)–(D), with the pow3 index on top and the tanh index on the bottom.]
Figure 3: Sample-wise performance comparison plots (for error criterion E(Î, I)) of NGCA versus FastICA; top: versus pow3 index; bottom: versus tanh index. Each point represents a different sample of size 1000. In (C)-top, about 25% of the points corresponding to a failure of FastICA fall outside of the range and were not represented.
Tests in a controlled setting. We performed numerical experiments using various synthetic data. We report exemplary results using 4 data sets.
Each data set includes 1000 samples in 10 dimensions, and consists of 8-dimensional independent standard Gaussian and 2 non-Gaussian components as follows: (A) Simple Gaussian Mixture: 2-dimensional independent bimodal Gaussian mixtures; (B) Dependent super-Gaussian: 2-dimensional density proportional to exp(−∥x∥); (C) Dependent sub-Gaussian: 2-dimensional uniform on the unit circle; (D) Dependent super- and sub-Gaussian: 1-dimensional Laplacian with density proportional to exp(−|x_Lap|) and 1-dimensional dependent uniform U(c, c + 1), where c = 0 for |x_Lap| ≤ log 2 and c = −1 otherwise. We compare the NGCA method against stand-alone FastICA with two different index functions. Figure 2 shows boxplots and Figure 3 sample-wise comparison plots, over 100 samples, of the error criterion
E(Î, I) = (1/m) Σ_{i=1}^m ∥(I_d − P_I) v̂_i∥²,
where {v̂_i}_{i=1}^m is an orthonormal basis of Î, I_d is the identity matrix, and P_I denotes the orthogonal projection on I.
[Figure 4 shows four 2D projections of the oil-flow data, one per method.]
Figure 4: 2D projection of the “oil flow” (12-dimensional) data obtained by different algorithms, from left to right: PCA, Isomap, FastICA (tanh index), NGCA. In each case, the data was first projected in 3D using the respective methods, from which a 2D projection was chosen visually so as to yield the clearest cluster structure. Available label information was not used to determine the projections.
In datasets (A), (B), (C), NGCA appears to be on par with the best FastICA method. As expected, the best index for FastICA is data-dependent: the ’tanh’ index is more suited to the super-Gaussian data (B), while the ’pow3’ index works best with the sub-Gaussian data (C) (although, in this case, FastICA with this index has a tendency to get caught in local minima, leading to a disastrous result for about 25% of the samples. Note that NGCA does not suffer from this problem).
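The error criterion has a direct implementation. The helper below is a hypothetical sketch (names are ours): the true subspace I is given by a matrix whose columns span it, and the estimate by an orthonormal basis.

```python
import numpy as np

def subspace_error(V_hat, V):
    """E(I_hat, I) = (1/m) * sum_i ||(I_d - P_I) v_hat_i||^2, where the
    columns of V_hat are an orthonormal basis of the estimated subspace and
    P_I is the orthogonal projector onto colspan(V), the true subspace."""
    P = V @ np.linalg.pinv(V)              # orthogonal projector onto colspan(V)
    R = V_hat - P @ V_hat                  # residual components outside I
    return (R ** 2).sum() / V_hat.shape[1]

# Sanity checks: identical subspaces give error 0, orthogonal subspaces give 1.
E = np.eye(4)
err_same = subspace_error(E[:, :2], E[:, :2])
err_orth = subspace_error(E[:, 2:], E[:, :2])
```

The criterion thus ranges from 0 (perfect recovery) to 1 (estimated basis entirely orthogonal to the target).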
Finally, the advantage of the implicit index adaptation feature of NGCA can be clearly observed in the data set (D), which includes both sub- and super-Gaussian components. In this case, neither of the two FastICA index functions taken alone does well, and NGCA gives significantly lower error than either FastICA flavor.
Example of application for realistic data: visualization and clustering. We now give an example of application of NGCA to visualization and clustering of realistic data. We consider here “oil flow” data, which has been obtained by numerical simulation of a complex physical model. This data was already used before for testing techniques of dimension reduction [2]. The data is 12-dimensional and our goal is to visualize the data, and possibly exhibit a clustered structure. We compared results obtained with the NGCA methodology, regular PCA, FastICA with tanh index, and Isomap. The results are shown in Figure 4. A 3D projection of the data was first computed using these methods, which was in turn projected in 2D to draw the figure; this last projection was chosen manually so as to make the cluster structure as visible as possible in each case. The NGCA result appears better, with a clearer clustered structure appearing. This structure is only partly visible in the Isomap result; the NGCA method additionally has the advantage of a clear geometrical interpretation (linear orthogonal projection). Finally, datapoints in this dataset are distributed in 3 classes. This information was not used in the different procedures, but we can see a posteriori that only NGCA clearly separates the classes in distinct clusters. Clustering applications on other benchmark datasets are presented in the extended paper [3].
5 Conclusion
We proposed a new semi-parametric framework for constructing a linear projection to separate an uninteresting, possibly large-amplitude multivariate Gaussian ‘noise’ subspace from the ‘signal-of-interest’ subspace.
We provide generic consistency results on how well the non-Gaussian directions can be identified (Theorem 1). Once the low-dimensional ‘signal’ part is extracted, we can use it for a variety of applications such as data visualization, clustering, denoising or classification. Numerically we found comparable or superior performance to, e.g., FastICA in deflation mode as a generic representative of the family of PP algorithms. Note that in general, PP methods need to pre-specify a projection index with which they search non-Gaussian components. By contrast, an important advantage of our method is that we are able to simultaneously use several families of nonlinear functions; moreover, also within the same function family we are able to use an entire range of parameters (such as frequency for Fourier functions). Thus, NGCA provides higher flexibility, and less restrictive assumptions a priori on the data. In a sense, the functional indices that are the most relevant for the data at hand are automatically selected. Future research will adapt the theory to simultaneously estimate the dimension of the non-Gaussian subspace. Extending the proposed framework to non-linear projection scenarios [4, 11, 10, 12, 1, 6] and to finding the most discriminative directions using labels are examples for which the current theory could be taken as a basis.
Acknowledgements: This work was supported in part by the PASCAL Network of Excellence (EU # 506778).
Proof of Proposition 1
Put α = E[Xh(X)] and ψ(x) = h(x) − α⊤x. Note that ∇ψ = ∇h − α, hence β(h) = E[∇ψ(X)]. Furthermore, it holds by change of variable that
∫ ψ(x + u) p(x) dx = ∫ ψ(x) p(x − u) dx.
Under mild regularity conditions on p(x) and h(x), differentiating this with respect to u gives
E[∇ψ(X)] = ∫ ∇ψ(x) p(x) dx = −∫ ψ(x) ∇p(x) dx = −E[ψ(X) ∇ log p(X)],
where we have used ∇p(x) = ∇ log p(x) p(x).
Eq. (1) now implies ∇log p(x) = ∇log g(Tx) − Γ⁻¹x, hence β(ψ) = −E[ψ(X) ∇log g(TX)] + E[ψ(X) Γ⁻¹X] = −T⊤E[ψ(X) ∇g(TX)/g(TX)] + Γ⁻¹E[Xh(X) − XX⊤E[Xh(X)]]. The last term above vanishes because we assumed E[XX⊤] = Id. The first term belongs to I by definition. This concludes the proof. □ References [1] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003. [2] C.M. Bishop, M. Svensen and C.K.I. Williams. GTM: The generative topographic mapping. Neural Computation, 10(1):215–234, 1998. [3] G. Blanchard, M. Sugiyama, M. Kawanabe, V. Spokoiny and K.-R. Müller. In search of non-Gaussian components of a high-dimensional distribution. Technical report of the Weierstrass Institute for Applied Analysis and Stochastics, 2006. [4] T.F. Cox and M.A.A. Cox. Multidimensional Scaling. Chapman & Hall, London, 2001. [5] J.H. Friedman and J.W. Tukey. A projection pursuit algorithm for exploratory data analysis. IEEE Transactions on Computers, 23(9):881–890, 1975. [6] S. Harmeling, A. Ziehe, M. Kawanabe and K.-R. Müller. Kernel-based nonlinear blind source separation. Neural Computation, 15(5):1089–1124, 2003. [7] P.J. Huber. Projection pursuit. The Annals of Statistics, 13:435–475, 1985. [8] A. Hyvärinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks, 10(3):626–634, 1999. [9] A. Hyvärinen, J. Karhunen and E. Oja. Independent Component Analysis. Wiley, 2001. [10] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000. [11] B. Schölkopf, A.J. Smola and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5):1299–1319, 1998. [12] J.B. Tenenbaum, V. de Silva and J.C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
Divergences, surrogate loss functions and experimental design XuanLong Nguyen University of California Berkeley, CA 94720 xuanlong@cs.berkeley.edu Martin J. Wainwright University of California Berkeley, CA 94720 wainwrig@eecs.berkeley.edu Michael I. Jordan University of California Berkeley, CA 94720 jordan@cs.berkeley.edu Abstract In this paper, we provide a general theorem that establishes a correspondence between surrogate loss functions in classification and the family of f-divergences. Moreover, we provide constructive procedures for determining the f-divergence induced by a given surrogate loss, and conversely for finding all surrogate loss functions that realize a given f-divergence. Next we introduce the notion of universal equivalence among loss functions and corresponding f-divergences, and provide necessary and sufficient conditions for universal equivalence to hold. These ideas have applications to classification problems that also involve a component of experiment design; in particular, we leverage our results to prove consistency of a procedure for learning a classifier under decentralization requirements. 1 Introduction A unifying theme in the recent literature on classification is the notion of a surrogate loss function—a convex upper bound on the 0-1 loss. Many practical classification algorithms can be formulated in terms of the minimization of surrogate loss functions; well-known examples include the support vector machine (hinge loss) and Adaboost (exponential loss). Significant progress has been made on the theoretical front by analyzing the general statistical consequences of using surrogate loss functions [e.g., 2, 10, 13]. These recent developments have an interesting historical antecedent. Working in the context of experimental design, researchers in the 1960’s recast the (intractable) problem of minimizing the probability of classification error in terms of the maximization of various surrogate functions [e.g., 5, 8]. 
Examples of experimental design include the choice of a quantizer as a preprocessor for a classifier [12], or the choice of a “signal set” for a radar system [5]. The surrogate functions that were used included the Hellinger distance and various forms of KL divergence; maximization of these functions was proposed as a criterion for the choice of a design. Theoretical support for this approach was provided by a classical theorem on the comparison of experiments due to Blackwell [3]. An important outcome of this line of work was the definition of a general family of “f-divergences” (also known as “Ali-Silvey distances”), which includes Hellinger distance and KL divergence as special cases [1, 4]. In broad terms, the goal of the current paper is to bring together these two literatures, in particular by establishing a correspondence between the family of surrogate loss functions and the family of f-divergences. Several specific goals motivate us in this regard: (1) different f-divergences are related by various well-known inequalities [11], so that a correspondence between loss functions and f-divergences would allow these inequalities to be harnessed in analyzing surrogate loss functions; (2) a correspondence could allow the definition of interesting equivalence classes of losses or divergences; and (3) the problem of experimental design, which motivated the classical research on f-divergences, provides new venues for applying the loss function framework from machine learning. In particular, one natural extension—and one which we explore towards the end of this paper—is in requiring consistency not only in the choice of an optimal discriminant function but also in the choice of an optimal experiment design. The main technical contribution of this paper is to state and prove a general theorem relating surrogate loss functions and f-divergences. 
We show that the correspondence is quite strong: any surrogate loss induces a corresponding f-divergence, and any f-divergence satisfying certain conditions corresponds to a family of surrogate loss functions. Moreover, exploiting tools from convex analysis, we provide a constructive procedure for finding loss functions from f-divergences. We also introduce and analyze a notion of universal equivalence among loss functions (and corresponding f-divergences). Finally, we present an application of these ideas to the problem of proving consistency of classification algorithms with an additional decentralization requirement. 2 Background and elementary results Consider a covariate X ∈ X, where X is a compact topological space, and a random variable Y ∈ Y := {−1, +1}. The space (X × Y) is assumed to be endowed with a Borel regular probability measure P. In this paper, we consider a variant of the standard classification problem, in which the decision-maker, rather than having direct access to X, only observes some variable Z ∈ Z that is obtained via the conditional probability Q(Z|X). The stochastic map Q is referred to as an experiment in statistics; in the signal processing literature, where Z is generally taken to be discrete, it is referred to as a quantizer. We let Q denote the space of all stochastic Q and let Q0 denote its deterministic subset. Given a fixed experiment Q, we can formulate a standard binary classification problem as one of finding a measurable function γ ∈ Γ := {Z → R} that minimizes the Bayes risk P(Y ≠ sign(γ(Z))). Our focus is the broader question of determining both the classifier γ ∈ Γ, as well as the experiment choice Q ∈ Q, so as to minimize the Bayes risk. The Bayes risk corresponds to the expectation of the 0-1 loss. Given the non-convexity of this loss function, it is natural to consider a surrogate loss function φ that we optimize in place of the 0-1 loss. We refer to the quantity Rφ(γ, Q) := E φ(Y γ(Z)) as the φ-risk.
For each fixed quantization rule Q, the optimal φ-risk (as a function of Q) is defined as follows: Rφ(Q) := inf_{γ∈Γ} Rφ(γ, Q). (1) Given priors q = P(Y = −1) and p = P(Y = 1), define nonnegative measures µ and π: µ(z) = P(Y = 1, Z = z) = p ∫_x Q(z|x) dP(x|Y = 1) and π(z) = P(Y = −1, Z = z) = q ∫_x Q(z|x) dP(x|Y = −1). (Footnote 1: Proofs are omitted from this manuscript for lack of space; see the long version of the paper [7] for proofs of all of our results.) As a consequence of Lyapunov’s theorem, the space of {(µ, π)} obtained by varying Q ∈ Q (or Q0) is both compact and convex (see [12] for details). For simplicity, we assume that the space Q of Q is restricted such that both µ and π are strictly positive measures. One approach to choosing Q is to define an f-divergence between µ and π; indeed this is the classical approach referred to earlier [e.g., 8]. Rather than following this route, however, we take an alternative path, setting up the problem in terms of φ-risk and optimizing out the discriminant function γ. Note in particular that the φ-risk can be represented in terms of the measures µ and π as follows: Rφ(γ, Q) = Σ_z [φ(γ(z))µ(z) + φ(−γ(z))π(z)]. (2) This representation allows us to compute the optimal value of γ(z) for all z ∈ Z, as well as the optimal φ-risk for a fixed Q. We illustrate this calculation with several examples: 0-1 loss. If φ is the 0-1 loss, then γ(z) = sign(µ(z) − π(z)). Thus the optimal Bayes risk given a fixed Q takes the form Rbayes(Q) = Σ_{z∈Z} min{µ(z), π(z)} = 1/2 − (1/2) Σ_{z∈Z} |µ(z) − π(z)| =: (1/2)(1 − V(µ, π)), where V(µ, π) denotes the variational distance between the two measures µ and π. Hinge loss. Let φhinge(yγ(z)) = (1 − yγ(z))₊. In this case γ(z) = sign(µ(z) − π(z)), and the optimal risk takes the form Rhinge(Q) = Σ_{z∈Z} 2 min{µ(z), π(z)} = 1 − Σ_{z∈Z} |µ(z) − π(z)| = 1 − V(µ, π) = 2 Rbayes(Q). Least squares loss. Letting φsqr(yγ(z)) = (1 − yγ(z))², we have γ(z) = (µ(z) − π(z))/(µ(z) + π(z)).
The optimal risk takes the form Rsqr(Q) = Σ_{z∈Z} 4µ(z)π(z)/(µ(z) + π(z)) = 1 − Σ_{z∈Z} (µ(z) − π(z))²/(µ(z) + π(z)) =: 1 − ∆(µ, π), where ∆(µ, π) denotes the triangular discrimination distance. Logistic loss. Letting φlog(yγ(z)) := log(1 + exp(−yγ(z))), we have γ(z) = log(µ(z)/π(z)). The optimal risk for the logistic loss takes the form Rlog(Q) = Σ_{z∈Z} [µ(z) log((µ(z) + π(z))/µ(z)) + π(z) log((µ(z) + π(z))/π(z))] = log 2 − KL(µ || (µ + π)/2) − KL(π || (µ + π)/2) =: log 2 − C(µ, π), where C(U, V) denotes the capacitory discrimination distance. Exponential loss. Letting φexp(yγ(z)) = exp(−yγ(z)), we have γ(z) = (1/2) log(µ(z)/π(z)). The optimal risk for the exponential loss takes the form Rexp(Q) = Σ_{z∈Z} 2√(µ(z)π(z)) = 1 − Σ_{z∈Z} (√µ(z) − √π(z))² = 1 − 2h²(µ, π), where h(µ, π) denotes the Hellinger distance between the measures µ and π. All of the distances given above (e.g., variational, Hellinger) are particular instances of f-divergences. This fact points to an interesting correspondence between optimized φ-risks and f-divergences. How general is this correspondence? 3 The correspondence between loss functions and f-divergences In order to resolve this question, we begin with precise definitions of f-divergences and surrogate loss functions. An f-divergence functional is defined as follows [1, 4]: Definition 1. Given any continuous convex function f : [0, +∞) → R ∪ {+∞}, the f-divergence between measures µ and π is given by If(µ, π) := Σ_z π(z) f(µ(z)/π(z)). For instance, the variational distance is given by f(u) = |u − 1|, KL divergence by f(u) = u log u, triangular discrimination by f(u) = (u − 1)²/(u + 1), and Hellinger distance by f(u) = (1/2)(√u − 1)². Surrogate loss φ. First, we require that any surrogate loss function φ is continuous and convex. Second, the function φ must be classification-calibrated [2], meaning that for any a, b ≥ 0 with a ≠ b, inf_{α: α(a−b)<0} [φ(α)a + φ(−α)b] > inf_{α∈R} [φ(α)a + φ(−α)b].
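As a sanity check (ours, not from the paper), the closed forms above can be verified numerically: for random strictly positive measures µ, π with Σ_z (µ(z) + π(z)) = 1, minimizing φ(α)µ(z) + φ(−α)π(z) over α for each z should reproduce 1 − V(µ, π) for the hinge loss and 1 − 2h²(µ, π) for the exponential loss. The grid minimization below is a crude stand-in for the exact per-z optimization.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = 6
w = rng.random((2, Z))
w /= w.sum()                 # joint measure: mu(z) + pi(z) sums to 1 over z
mu, pi = w[0], w[1]

def optimal_risk(phi, alphas=np.linspace(-10.0, 10.0, 200001)):
    """R_phi(Q) = sum_z inf_alpha [phi(alpha) mu(z) + phi(-alpha) pi(z)]."""
    return sum(np.min(phi(alphas) * m + phi(-alphas) * p) for m, p in zip(mu, pi))

V = np.abs(mu - pi).sum()                             # variational distance V(mu, pi)
h2 = 0.5 * ((np.sqrt(mu) - np.sqrt(pi)) ** 2).sum()  # squared Hellinger h^2(mu, pi)

hinge = lambda a: np.maximum(0.0, 1.0 - a)
exp_loss = lambda a: np.exp(-a)

print(optimal_risk(hinge), 1 - V)        # hinge risk vs. 1 - V(mu, pi)
print(optimal_risk(exp_loss), 1 - 2*h2)  # exponential risk vs. 1 - 2 h^2(mu, pi)
```

The identity R_hinge = 1 − V follows from 2 min{m, p} = (m + p) − |m − p| summed over z.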
It can be shown [2] that in the convex case φ is classification-calibrated if and only if it is differentiable at 0 and φ′(0) < 0. Lastly, let α* = inf_α {φ(α) = inf φ}. If α* < +∞, then for any δ > 0 we require that φ(α* − δ) ≥ φ(α* + δ). The interpretation of this last assumption is that one should penalize deviations away from α* in the negative direction at least as strongly as deviations in the positive direction; this requirement is intuitively reasonable given the margin-based interpretation of α. From φ-risk to f-divergence. We begin with a simple result that formalizes how any φ-risk induces a corresponding f-divergence. More precisely, the following lemma proves that the optimal φ-risk for a fixed Q can be written as the negative of an f-divergence. Lemma 2. For each fixed Q, let γQ denote the optimal decision rule. The φ-risk for (Q, γQ) is an f-divergence between µ and π for some convex function f: Rφ(Q) = −If(µ, π). (3) Proof. The optimal φ-risk takes the form Rφ(Q) = Σ_{z∈Z} inf_α [φ(α)µ(z) + φ(−α)π(z)] = Σ_z π(z) inf_α [φ(−α) + φ(α)·µ(z)/π(z)]. For each z, let u = µ(z)/π(z); then inf_α (φ(−α) + φ(α)u) is a concave function of u (since a minimum over a set of linear functions is concave). Thus, the claim follows by defining (for u ∈ R) f(u) := −inf_α (φ(−α) + φ(α)u). (4) From f-divergence to φ-risk. In the remainder of this section, we explore the converse of Lemma 2. Given a divergence If(µ, π) for some convex function f, does there exist a loss function φ for which Rφ(Q) = −If(µ, π)? In the following, we provide a precise characterization of the set of f-divergences that can be realized in this way, as well as a constructive procedure for determining all φ that realize a given f-divergence. Our method requires the introduction of several intermediate functions. First, let us define, for each β, the inverse mapping φ⁻¹(β) := inf{α : φ(α) ≤ β}, where inf ∅ := +∞.
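Equation (4) can be probed numerically before continuing with the converse construction. In the sketch below (our illustration, not code from the paper), f(u) = −inf_α [φ(−α) + φ(α)u] is approximated by a grid search over α; for the hinge loss this should recover f(u) = −2 min(u, 1) (variational distance, up to the linear-term equivalence) and for the exponential loss f(u) = −2√u (Hellinger distance), matching the table of examples above.

```python
import numpy as np

def induced_f(phi, u, alphas=np.linspace(-12.0, 12.0, 240001)):
    """Approximate f(u) = -inf_alpha [phi(-alpha) + phi(alpha) * u]  (eq. (4))."""
    return -np.min(phi(-alphas) + phi(alphas) * u)

hinge = lambda a: np.maximum(0.0, 1.0 - a)     # phi_hinge
exp_loss = lambda a: np.exp(-a)                # phi_exp

for u in [0.25, 1.0, 4.0]:
    print(u, induced_f(hinge, u), -2 * min(u, 1.0))     # hinge -> variational f
    print(u, induced_f(exp_loss, u), -2 * np.sqrt(u))   # exponential -> Hellinger f
```

For the exponential loss the infimum is attained at α = (1/2) log(u), reproducing the closed-form derivation above.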
Using the function φ⁻¹, we then define a new function Ψ : R → R by Ψ(β) := φ(−φ⁻¹(β)) if φ⁻¹(β) ∈ R, and Ψ(β) := +∞ otherwise. (5) Note that the domain of Ψ is Dom(Ψ) = {β ∈ R : φ⁻¹(β) ∈ R}. Define β1 := inf{β : Ψ(β) < +∞} and β2 := inf{β : Ψ(β) = inf Ψ}. (6) It is simple to check that inf φ = inf Ψ = φ(α*), β1 = φ(α*) and β2 = φ(−α*). Furthermore, Ψ(β2) = φ(α*) = β1 and Ψ(β1) = φ(−α*) = β2. With this set-up, the following lemma captures several important properties of Ψ: Lemma 3. (a) Ψ is strictly decreasing in (β1, β2). If φ is decreasing, then Ψ is also decreasing in (−∞, +∞). In addition, Ψ(β) = +∞ for β < β1. (b) Ψ is convex in (−∞, β2]. If φ is decreasing, then Ψ is convex in (−∞, +∞). (c) Ψ is lower semi-continuous, and continuous in its domain. (d) There exists u* ∈ (β1, β2) such that Ψ(u*) = u*. (e) There holds Ψ(Ψ(β)) = β for all β ∈ (β1, β2). The connection between Ψ and an f-divergence arises from the following fact. Given the definition (5) of Ψ, it is possible to show that f(u) = sup_{β∈R} (−βu − Ψ(β)) = Ψ*(−u), (7) where Ψ* denotes the conjugate dual of the function Ψ. Hence, if Ψ is a lower semicontinuous convex function, it is possible to recover Ψ from f by means of convex duality [9]: Ψ(β) = f*(−β). Thus, equation (5) provides a means for recovering a loss function φ from Ψ. Indeed, the following theorem provides a constructive procedure for finding all such φ when Ψ satisfies the necessary conditions specified in Lemma 3: Theorem 4. (a) Given a lower semicontinuous convex function f : R → R, define Ψ(β) = f*(−β). (8) If Ψ is a decreasing function satisfying the properties specified in parts (c), (d) and (e) of Lemma 3, then there exists a convex continuous loss function φ for which (3) and (4) hold. (b) More precisely, all such functions φ are of the form: for any α ≥ 0, φ(α) = Ψ(g(α + u*)) and φ(−α) = g(α + u*), (9) where u* satisfies Ψ(u*) = u* for some u* ∈ (β1, β2), and g : [u*, +∞) → R is any increasing continuous convex function such that g(u*) = u*.
Moreover, g is differentiable at u*+ and g′(u*+) > 0. One interesting consequence of Theorem 4 is that any realizable f-divergence can in fact be obtained from a fairly large set of φ loss functions. More precisely, examining the statement of Theorem 4(b) reveals that for α ≤ 0 we are free to choose a function g that must satisfy only mild conditions; given a choice of g, φ is then specified for α > 0 accordingly by equation (9). We describe below how the Hellinger distance, for instance, is realized not only by the exponential loss (as described earlier) but also by many other surrogate loss functions. Additional examples can be found in [7]. Illustrative examples. Consider the Hellinger distance, which is an f-divergence with f(u) = −2√u. (Footnote 2: We consider the f-divergences for two convex functions f1 and f2 to be equivalent if f1 and f2 are related by a linear term, i.e., f1 = cf2 + au + b for some constants c > 0 and a, b, because then If1 and If2 differ by a constant.) Augment the domain of f with f(u) = +∞ for u < 0. Following the prescription of Theorem 4(a), we first recover Ψ from f: Ψ(β) = f*(−β) = sup_{u∈R} (−βu − f(u)) = 1/β when β > 0, and +∞ otherwise. Clearly, u* = 1. Now if we choose g(u) = e^{u−1}, then we obtain the exponential loss φ(α) = exp(−α). However, making the alternative choice g(u) = u, we obtain the function φ(α) = 1/(α + 1) and φ(−α) = α + 1, which also realizes the Hellinger distance. Recall that we have shown previously that the 0-1 loss induces the variational distance, which can be expressed as an f-divergence with fvar(u) = −2 min(u, 1) for u ≥ 0. It is thus of particular interest to determine other loss functions that also lead to the variational distance. If we augment the function fvar by defining fvar(u) = +∞ for u < 0, then we can recover Ψ from fvar as follows: Ψ(β) = fvar*(−β) = sup_{u∈R} (−βu − fvar(u)) = (2 − β)₊ when β ≥ 0, and +∞ when β < 0. Clearly u* = 1.
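The conjugate-duality step can also be checked numerically. The sketch below (an illustration under our own discretization, not code from the paper) approximates Ψ(β) = f*(−β) = sup_u (−βu − f(u)) for the Hellinger case f(u) = −2√u, confirming Ψ(β) = 1/β for β > 0, and then checks that equation (9) with u* = 1 and g(u) = e^{u−1} reproduces the exponential loss φ(α) = e^{−α} for α ≥ 0.

```python
import numpy as np

def Psi(beta, us=np.linspace(0.0, 400.0, 400001)):
    """Psi(beta) = f*(-beta) = sup_u (-beta * u - f(u)) with f(u) = -2 sqrt(u)."""
    return np.max(-beta * us + 2.0 * np.sqrt(us))

for beta in [0.5, 1.0, 2.0]:
    print(beta, Psi(beta), 1.0 / beta)   # Psi(beta) should match 1/beta

# eq. (9): for alpha >= 0, phi(alpha) = Psi(g(alpha + u*)) with u* = 1, g(u) = e^(u-1)
g = lambda u: np.exp(u - 1.0)
for alpha in [0.0, 0.5, 2.0]:
    print(alpha, Psi(g(alpha + 1.0)), np.exp(-alpha))   # recovers exp(-alpha)
```

The supremum is attained at u = 1/β², which is why a grid over a bounded range of u suffices for these values of β.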
Choosing g(u) = u leads to the hinge loss φ(α) = (1 − α)₊, which is consistent with our earlier findings. Making the alternative choice g(u) = e^{u−1} leads to a rather different loss, namely φ(α) = (2 − e^α)₊ for α ≥ 0 and φ(α) = e^{−α} for α < 0, that also realizes the variational distance. Using Theorem 4, it can be shown that an f-divergence is realizable by a margin-based surrogate loss if and only if it is symmetric [7]. Hence, the list of non-realizable f-divergences includes the KL divergence KL(µ||π) (as well as KL(π||µ)). The symmetric KL divergence KL(µ||π) + KL(π||µ) is a realizable f-divergence. Theorem 4 allows us to construct all φ losses that realize it. One of them turns out to have the simple closed form φ(α) = e^{−α} − α, but obtaining it requires some non-trivial calculations [7]. 4 On comparison of loss functions and quantization schemes The previous section was devoted to the study of the correspondence between f-divergences and the optimal φ-risk Rφ(Q) for a fixed experiment Q. Our ultimate goal, however, is that of choosing an optimal Q, a problem known as experimental design in the statistics literature [3]. One concrete application is the design of quantizers for performing decentralized detection [12, 6] in a sensor network. In this section, we address the experiment design problem via the joint optimization of the φ-risk (or more precisely, its empirical version) over both the decision rule γ and the choice of experiment Q (hereafter referred to as a quantizer). This procedure raises a natural theoretical question: for what loss functions φ does such joint optimization lead to minimum Bayes risk? Note that the minimum here is taken over both the decision rule γ and the space of experiments Q, so this question is not covered by standard consistency results [13, 10, 2]. Here we describe how the results of the previous section can be leveraged to resolve this issue of consistency.
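Plugging the two alternative losses above back into equation (4) confirms that they realize the claimed divergences. The snippet below (our numerical check, not from the paper) evaluates f(u) = −inf_α [φ(−α) + uφ(α)] by grid search for φ(α) = 1/(α + 1) (α ≥ 0), φ(−α) = α + 1, which should give the Hellinger f(u) = −2√u, and for φ(α) = (2 − e^α)₊ (α ≥ 0), φ(α) = e^{−α} (α < 0), which should give the variational f(u) = −2 min(u, 1).

```python
import numpy as np

def induced_f(phi, u, alphas=np.linspace(-12.0, 12.0, 240001)):
    # f(u) = -inf_alpha [phi(-alpha) + u * phi(alpha)]  (eq. (4))
    return -np.min(phi(-alphas) + u * phi(alphas))

def phi_hell(a):
    """Alternative loss realizing Hellinger distance (g(u) = u in Theorem 4(b))."""
    a = np.asarray(a, dtype=float)
    out = 1.0 - a                      # branch for a < 0, i.e. phi(-alpha) = alpha + 1
    pos = a >= 0
    out[pos] = 1.0 / (a[pos] + 1.0)    # branch for a >= 0
    return out

def phi_var(a):
    """Alternative loss realizing variational distance (g(u) = e^(u-1))."""
    a = np.asarray(a, dtype=float)
    out = np.exp(-a)                   # branch for a < 0
    pos = a >= 0
    out[pos] = np.maximum(0.0, 2.0 - np.exp(a[pos]))
    return out

for u in [0.2, 1.0, 5.0]:
    print(u, induced_f(phi_hell, u), -2.0 * np.sqrt(u))
    print(u, induced_f(phi_var, u), -2.0 * min(u, 1.0))
```

Both checks illustrate Theorem 4(b): very different choices of g produce very different losses that induce exactly the same f-divergence.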
4.1 Universal equivalence The connection between f-divergences and the 0-1 loss can be traced back to seminal work on the comparison of experiments [3]. Formally, we say that the quantization scheme Q1 dominates Q2 if Rbayes(Q1) ≤ Rbayes(Q2) for any prior probability q ∈ (0, 1). We have the following theorem [3] (see also [7] for a short proof): Theorem 5. Q1 dominates Q2 iff If(µ^{Q1}, π^{Q1}) ≥ If(µ^{Q2}, π^{Q2}) for all convex functions f. The superscripts denote the dependence of µ and π on the quantizer rules Q1 and Q2. Using Lemma 2, we can establish the following: Corollary 6. Q1 dominates Q2 iff Rφ(Q1) ≤ Rφ(Q2) for any surrogate loss φ. One implication of Corollary 6 is that if Rφ(Q1) ≤ Rφ(Q2) for some loss function φ, then Rbayes(Q1) ≤ Rbayes(Q2) for some set of prior probabilities on the labels Y. This fact justifies the use of a surrogate φ-loss as a proxy for the 0-1 loss, at least for a certain subset of prior probabilities. Typically, however, the goal is to select the optimal experiment Q for a pre-specified set of priors, in which context this implication is of limited use. We are thus motivated to consider a different method of determining which loss functions (or equivalently, f-divergences) lead to the same optimal experimental design as the 0-1 loss (respectively, the variational distance). More generally, we are interested in comparing two arbitrary loss functions φ1 and φ2, with corresponding divergences induced by f1 and f2 respectively: Definition 7. The surrogate loss functions φ1 and φ2 are universally equivalent, denoted by φ1 ≈ᵤ φ2 (and f1 ≈ᵤ f2), if for any P(X, Y) and quantization rules Q1, Q2, there holds: Rφ1(Q1) ≤ Rφ1(Q2) ⇔ Rφ2(Q1) ≤ Rφ2(Q2). (10) The following result provides necessary and sufficient conditions for universal equivalence: Theorem 8. Suppose that f1 and f2 are differentiable a.e., convex functions that map [0, +∞) to R. Then f1 ≈ᵤ f2 if and only if f1(u) = cf2(u) + au + b for some constants a, b ∈ R and c > 0.
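Theorem 8 has a direct numerical interpretation: since Σ_z µ(z) = p and Σ_z π(z) = q for every quantizer, the affine terms au + b contribute a constant to If, so f1 = cf2 + au + b orders quantizers exactly as f2 does. The sketch below (synthetic distributions and quantizers of our own making, not from the paper) checks this for the Hellinger f.

```python
import numpy as np

rng = np.random.default_rng(1)

def If(f, mu, pi):
    # I_f(mu, pi) = sum_z pi(z) f(mu(z)/pi(z))   (Definition 1)
    return float(np.sum(pi * f(mu / pi)))

def induced_measures(Q, Px_pos, Px_neg, p=0.3):
    # mu(z) = p * sum_x Q(z|x) P(x|Y=1);  pi(z) = (1-p) * sum_x Q(z|x) P(x|Y=-1)
    return p * Q @ Px_pos, (1.0 - p) * Q @ Px_neg

nX, nZ = 8, 3
Px_pos = rng.dirichlet(np.ones(nX))            # synthetic P(x | Y = +1)
Px_neg = rng.dirichlet(np.ones(nX))            # synthetic P(x | Y = -1)
Q1 = rng.dirichlet(np.ones(nZ), size=nX).T     # random quantizers: column x is Q(.|x)
Q2 = rng.dirichlet(np.ones(nZ), size=nX).T

f2 = lambda u: 0.5 * (np.sqrt(u) - 1.0) ** 2   # Hellinger f
c, a, b = 2.5, -0.7, 1.3
f1 = lambda u: c * f2(u) + a * u + b           # universally equivalent to f2

d1 = If(f1, *induced_measures(Q1, Px_pos, Px_neg)) - If(f1, *induced_measures(Q2, Px_pos, Px_neg))
d2 = If(f2, *induced_measures(Q1, Px_pos, Px_neg)) - If(f2, *induced_measures(Q2, Px_pos, Px_neg))
print(d1, c * d2)   # equal: the affine part cancels across quantizers
```

Because d1 = c·d2 with c > 0, the two divergences always rank Q1 and Q2 identically, which is exactly condition (10).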
If we restrict our attention to convex and differentiable a.e. functions f, then it follows that all f-divergences universally equivalent to the variational distance must have the form f(u) = −c min(u, 1) + au + b with c > 0. (11) As a consequence, the only φ-loss functions universally equivalent to the 0-1 loss are those that induce an f-divergence of this form (11). One well-known example of such a function is the hinge loss; more generally, Theorem 4 allows us to construct all such φ. 4.2 Consistency in experimental design The notion of universal equivalence might appear quite restrictive because condition (10) must hold for any underlying probability measure P(X, Y). However, this is precisely what we need when P(X, Y) is unknown. Assume that the knowledge about P(X, Y) comes from an empirical data sample (xi, yi), i = 1, …, n. Consider any algorithm (such as that proposed by Nguyen et al. [6]) that involves choosing a classifier-quantizer pair (γ, Q) ∈ Γ × Q by minimizing an empirical version of the φ-risk: R̂φ(γ, Q) := (1/n) Σ_{i=1}^n Σ_z φ(yi γ(z)) Q(z|xi). More formally, suppose that (Cn, Dn) is a sequence of increasing compact function classes such that C1 ⊆ C2 ⊆ … ⊆ Γ and D1 ⊆ D2 ⊆ … ⊆ Q. Let (γ*n, Q*n) be an optimal solution to the minimization problem min_{(γ,Q)∈(Cn,Dn)} R̂φ(γ, Q), and let R*bayes denote the minimum Bayes risk achieved over the space of decision rules (γ, Q) ∈ (Γ, Q). We call Rbayes(γ*n, Q*n) − R*bayes the Bayes error of our estimation procedure. We say that such a procedure is universally consistent if the Bayes error tends to 0 as n → ∞, i.e., if for any (unknown) Borel probability measure P on X × Y, lim_{n→∞} Rbayes(γ*n, Q*n) − R*bayes = 0 in probability. When the surrogate loss φ is universally equivalent to the 0-1 loss, we can prove that suitable learning procedures are indeed universally consistent.
Our approach is based on the framework developed by various authors [13, 10, 2] for the case of ordinary classification, using the strategy of decomposing the Bayes error into a combination of (a) the approximation error introduced by the bias of the function classes Cn ⊆ Γ: E0(Cn, Dn) = inf_{(γ,Q)∈(Cn,Dn)} Rφ(γ, Q) − R*φ, where R*φ := inf_{(γ,Q)∈(Γ,Q)} Rφ(γ, Q); and (b) the estimation error introduced by the variance of using the finite sample size n: E1(Cn, Dn) = E sup_{(γ,Q)∈(Cn,Dn)} |R̂φ(γ, Q) − Rφ(γ, Q)|, where the expectation is taken with respect to the (unknown) probability measure P(X, Y). Assumptions. Assume that the loss function φ is universally equivalent to the 0-1 loss. From Theorem 8, the corresponding f-divergence must be of the form f(u) = −c min(u, 1) + au + b, for a, b ∈ R and c > 0. Finally, we also assume that (a − b)(p − q) ≥ 0 and φ(0) ≥ 0. (Footnote 3: These technical conditions are needed so that the approximation error due to varying Q dominates the approximation error due to varying γ. Setting a = b is sufficient.) In addition, for each n = 1, 2, …, suppose that Mn := sup_{y,z} sup_{(γ,Q)∈(Cn,Dn)} |φ(yγ(z))| < +∞. The following lemma plays a key role in our proof: it links the excess φ-risk to the Bayes error when performing joint minimization: Lemma 9. For any (γ, Q), we have (c/2)(Rbayes(γ, Q) − R*bayes) ≤ Rφ(γ, Q) − R*φ. Finally, we can relate the Bayes error to the approximation error and estimation error, and provide general conditions for universal consistency: Theorem 10. (a) For any Borel probability measure P, with probability at least 1 − δ, there holds: Rbayes(γ*n, Q*n) − R*bayes ≤ (2/c)(2E1(Cn, Dn) + E0(Cn, Dn) + 2Mn √(2 ln(2/δ)/n)). (b) (Universal Consistency) If ∪_{n=1}^∞ Dn is dense in Q, and if ∪_{n=1}^∞ Cn is dense in Γ so that lim_{n→∞} E0(Cn, Dn) = 0, and if the sequence of function classes (Cn, Dn) grows slowly enough that lim_{n→∞} E1(Cn, Dn) = lim_{n→∞} Mn √(ln n / n) = 0, then lim_{n→∞} Rbayes(γ*n, Q*n) − R*bayes = 0 in probability.
5 Conclusions We have presented a general theoretical connection between surrogate loss functions and f-divergences. As illustrated by our application to decentralized detection, this connection can provide new domains of application for statistical learning theory. We also expect that this connection will provide new applications for f-divergences within learning theory; note in particular that bounds among f-divergences (of which many are known; see, e.g., [11]) induce corresponding bounds among loss functions. References [1] S. M. Ali and S. D. Silvey. A general class of coefficients of divergence of one distribution from another. J. Royal Stat. Soc. Series B, 28:131–142, 1966. [2] P. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification and risk bounds. Journal of the American Statistical Association, 2005. To appear. [3] D. Blackwell. Equivalent comparisons of experiments. Annals of Statistics, 24(2):265–272, 1953. [4] I. Csiszár. Information-type measures of difference of probability distributions and indirect observation. Studia Sci. Math. Hungar., 2:299–318, 1967. [5] T. Kailath. The divergence and Bhattacharyya distance measures in signal selection. IEEE Trans. on Communication Technology, 15(1):52–60, 1967. [6] X. Nguyen, M. J. Wainwright, and M. I. Jordan. Nonparametric decentralized detection using kernel methods. IEEE Transactions on Signal Processing, 53(11):4053–4066, 2005. [7] X. Nguyen, M. J. Wainwright, and M. I. Jordan. On divergences, surrogate loss functions and decentralized detection. Technical Report 695, Department of Statistics, University of California at Berkeley, September 2005. [8] H. V. Poor and J. B. Thomas. Applications of Ali-Silvey distance measures in the design of generalized quantizers for binary decision systems. IEEE Trans. on Communications, 25:893–900, 1977. [9] G. Rockafellar. Convex Analysis. Princeton University Press, Princeton, 1970. [10] I. Steinwart.
Consistency of support vector machines and other regularized kernel machines. IEEE Trans. Info. Theory, 51:128–142, 2005. [11] F. Topsoe. Some inequalities for information divergence and related measures of discrimination. IEEE Transactions on Information Theory, 46:1602–1609, 2000. [12] J. Tsitsiklis. Extremal properties of likelihood-ratio quantizers. IEEE Trans. on Communication, 41(4):550–558, 1993. [13] T. Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, 53:56–134, 2004.
Mixture Modeling by Affinity Propagation Brendan J. Frey and Delbert Dueck University of Toronto Software and demonstrations available at www.psi.toronto.edu Abstract Clustering is a fundamental problem in machine learning and has been approached in many ways. Two general and quite different approaches include iteratively fitting a mixture model (e.g., using EM) and linking together pairs of training cases that have high affinity (e.g., using spectral methods). Pair-wise clustering algorithms need not compute sufficient statistics and avoid poor solutions by directly placing similar examples in the same cluster. However, many applications require that each cluster of data be accurately described by a prototype or model, so affinity-based clustering – and its benefits – cannot be directly realized. We describe a technique called “affinity propagation”, which combines the advantages of both approaches. The method learns a mixture model of the data by recursively propagating affinity messages. We demonstrate affinity propagation on the problems of clustering image patches for image segmentation and learning mixtures of gene expression models from microarray data. We find that affinity propagation obtains better solutions than mixtures of Gaussians, the K-medoids algorithm, spectral clustering and hierarchical clustering, and is able both to find a pre-specified number of clusters and to determine the number of clusters automatically. Interestingly, affinity propagation can be viewed as belief propagation in a graphical model that accounts for pairwise training case likelihood functions and the identification of cluster centers. 1 Introduction Many machine learning tasks involve clustering data using a mixture model, so that the data in each cluster is accurately described by a probability model from a pre-defined, possibly parameterized, set of models [1].
For example, words can be grouped according to common usage across a reference set of documents, and segments of speech spectrograms can be grouped according to similar speaker and phonetic unit. As researchers increasingly confront more challenging and realistic problems, the appropriate class-conditional models become more sophisticated and much more difficult to optimize. By marginalizing over hidden variables, we can still view many hierarchical learning problems as mixture modeling, but the class-conditional models become complicated and nonlinear. While such class-conditional models may more accurately describe the problem at hand, the optimization of the mixture model often becomes much more difficult. Exact computation of the data likelihoods may not be feasible, and exact computation of the sufficient statistics needed to update parameterized models may not be feasible either. Further, the complexity of the model and the approximations used for the likelihoods and the sufficient statistics often produce an optimization surface with a large number of poor local minima. A different approach to clustering ignores the notion of a class-conditional model and links together pairs of data points that have high affinity. The affinity or similarity (a real number in [0, 1]) between two training cases gives a direct indication of whether they should be in the same cluster. Hierarchical clustering, along with its Bayesian variants [2], is a popular affinity-based clustering technique, whereby a binary tree is constructed greedily from the leaves to the root by recursively linking together pairs of training cases with high affinity. Another popular method uses a spectral decomposition of the normalized affinity matrix [4]. Viewing affinities as transition probabilities in a random walk on data points, modes of the affinity matrix correspond to clusters of points that are isolated in the walk [3, 5].
We describe a new method that, for the first time to our knowledge, combines the advantages of model-based clustering and affinity-based clustering. Unlike previous techniques that construct and learn probability models of transitions between data points [6, 7], our technique learns a probability model of the data itself. Like affinity-based clustering, our algorithm directly examines pairs of nearby training cases to help ascertain whether or not they should be in the same cluster. However, like model-based clustering, our technique uses a probability model that describes the data as a mixture of class-conditional distributions. Our method, called “affinity propagation”, can be viewed as the sum-product algorithm or the max-product algorithm in a graphical model describing the mixture model. 2 A greedy algorithm: K-medoids The first step in obtaining the benefit of pair-wise training case comparisons is to replace the parameters of the mixture model with pointers into the training data. A similar representation is used in K-medians or K-medoids clustering, where the goal is to identify K training cases, or exemplars, as cluster centers. Exact learning is known to be NP-hard (cf. [8]), but a hard-decision algorithm can be used to find approximate solutions. While the algorithm makes greedy hard decisions for the cluster centers, it is a useful intermediate step in introducing affinity propagation. For training cases x1, …, xN, suppose the likelihood of training case xi given that training case xk is its cluster center is P(xi|xi in xk) (e.g., a Gaussian likelihood would have the form e^{−(xi−xk)²/(2σ²)}/√(2πσ²)). Given the training data, this likelihood depends only on i and k, so we denote it by Lik. Lii is set to the Bayesian prior probability that xi is a cluster center. Initially, K training cases are chosen as exemplars, e.g., at random. Denote the current set of cluster center indices by K and the index of the current cluster center for xi by si.
K-medoids iterates between assigning training cases to exemplars (E step), and choosing a training case as the new exemplar for each cluster (M step). Assuming for simplicity that the mixing proportions are equal and denoting the responsibility likelihood ratio by $r_{ik} = P(x_i \mid x_i \in x_k)/P(x_i \mid x_i \notin x_k)$, the updates are:

E step. For $i = 1, \ldots, N$, for $k \in \mathcal{K}$:
$r_{ik} \leftarrow L_{ik} / \sum_{j: j \neq k} L_{ij}$, and then $s_i \leftarrow \arg\max_{k \in \mathcal{K}} r_{ik}$.

Greedy M step. For $k \in \mathcal{K}$: replace $k$ in $\mathcal{K}$ with $\arg\max_{j: s_j = k} \prod_{i: s_i = k} L_{ij}$.

(Note that using the traditional definition of responsibility, $r_{ik} \leftarrow L_{ik}/\sum_j L_{ij}$, will give the same decisions as using the likelihood ratio.)

This algorithm nicely replaces parameter-to-training case comparisons with pair-wise training case comparisons. However, in the greedy M step, specific training cases are chosen as exemplars. By not searching over all possible combinations of exemplars, the algorithm will frequently find poor local minima. We now introduce an algorithm that does approximately search over all possible combinations of exemplars.

3 Affinity propagation

The responsibilities in the greedy K-medoids algorithm can be viewed as messages that are sent from training cases to potential exemplars, providing soft evidence of the preference for each training case to be in each exemplar. To avoid making hard decisions for the cluster centers, we introduce messages called "availabilities". Availabilities are sent from exemplars to training cases and provide soft evidence of the preference for each exemplar to be available as a center for each training case. Responsibilities are computed using likelihoods and availabilities, and availabilities are computed using responsibilities, recursively. We refer to both responsibilities and availabilities as affinities and we refer to the message-passing scheme as affinity propagation.
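The E and M steps above can be sketched directly from a likelihood matrix. This is only an illustrative sketch (the function and variable names are ours); following the note in the text, the traditional responsibility $L_{ik}/\sum_j L_{ij}$ is used, since it yields the same hard decisions as the likelihood ratio:

```python
import numpy as np

def greedy_kmedoids(L, K, n_iter=20, seed=0):
    """Greedy K-medoids on a likelihood matrix L, where L[i, k] is the
    likelihood of case i given exemplar k and L[k, k] is the exemplar prior.
    With hard decisions, the E step reduces to picking the most likely
    current exemplar for each case."""
    rng = np.random.default_rng(seed)
    N = L.shape[0]
    centers = list(rng.choice(N, size=K, replace=False))
    for _ in range(n_iter):
        # E step: assign each case to its most likely current exemplar.
        s = [centers[int(np.argmax(L[i, centers]))] for i in range(N)]
        # Greedy M step: within each cluster, pick the member j maximizing
        # prod_{i in cluster} L[i, j] (sum of logs for numerical stability).
        new_centers = []
        for k in centers:
            members = [i for i in range(N) if s[i] == k]
            scores = [np.sum(np.log(L[members, j])) for j in members]
            new_centers.append(members[int(np.argmax(scores))])
        if sorted(new_centers) == sorted(centers):
            break
        centers = new_centers
    s = [centers[int(np.argmax(L[i, centers]))] for i in range(N)]
    return centers, s
```

As the text notes, different random initializations of `centers` can converge to different local minima.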
Here, we explain the update rules; in the next section, we show that affinity propagation can be derived as the sum-product algorithm in a graphical model describing the mixture model. Denote the availability sent from candidate exemplar $x_k$ to training case $x_i$ by $a_{ki}$. Initially, these messages are set equal, e.g., $a_{ki} = 1$ for all $i$ and $k$. Then, the affinity propagation update rules are recursively applied:

Responsibility updates:
$r_{ik} \leftarrow L_{ik} / \sum_{j: j \neq k} a_{ji} L_{ij}$

Availability updates:
$a_{kk} \leftarrow \prod_{j: j \neq k} (1 + r_{jk}) - 1$
$a_{ki} \leftarrow \Big[ \tfrac{1}{r_{kk}} \prod_{j: j \neq k, j \neq i} (1 + r_{jk})^{-1} + 1 - \prod_{j: j \neq k, j \neq i} (1 + r_{jk})^{-1} \Big]^{-1}$

The first update rule is quite similar to the update used in EM, except the likelihoods used to normalize the responsibilities are modulated by the availabilities of the competing exemplars. In this rule, the responsibility of a training case $x_i$ as its own cluster center, $r_{ii}$, is high if no other exemplars are highly available to $x_i$ and if $x_i$ has high probability under the Bayesian prior, $L_{ii}$. The second update rule also has an intuitive explanation. The availability of a training case $x_k$ as its own exemplar, $a_{kk}$, is high if at least one other training case places high responsibility on $x_k$ being an exemplar. The availability of $x_k$ as an exemplar for $x_i$, $a_{ki}$, is high if the self-responsibility $r_{kk}$ is high ($1/r_{kk} - 1$ approaches $-1$), but is decreased if other training cases compete in using $x_k$ as an exemplar (the term $1/r_{kk} - 1$ is scaled down if $r_{jk}$ is large for some other training case $x_j$). Messages may be propagated in parallel or sequentially. In our implementation, each candidate exemplar absorbs and emits affinities in parallel, and the centers are ordered according to the sum of their likelihoods, i.e. $\sum_i L_{ik}$. Direct implementation of the above propagation rules gives an $N^2$-time algorithm, but affinities need only be propagated between $i$ and $k$ if $L_{ik} > 0$. In practice, likelihoods below some threshold can be set to zero, leading to a sparse graph on which affinities are propagated.
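A direct transcription of these updates can be sketched as follows (a sketch only: variable names, the iteration count, and the final decoding step are our assumptions, and no damping is used):

```python
import numpy as np

def affinity_propagation(L, n_iter=30):
    """Transcription of the stated update rules. L[i, k] is the likelihood
    of case i under candidate exemplar k (L[k, k] is the exemplar prior).
    A[k, i] is the availability sent from exemplar k to case i; R[i, k]
    is the responsibility sent from case i toward exemplar k."""
    N = L.shape[0]
    A = np.ones((N, N))          # availabilities start equal
    R = np.zeros((N, N))
    for _ in range(n_iter):
        # r_ik <- L_ik / sum_{j != k} a_ji * L_ij
        M = A.T * L
        R = L / (M.sum(axis=1, keepdims=True) - M)
        full = np.prod(1.0 + R, axis=0)      # prod over all j, per column k
        for k in range(N):
            pk = full[k] / (1.0 + R[k, k])   # prod over j != k
            A[k, k] = pk - 1.0
            for i in range(N):
                if i != k:
                    P = pk / (1.0 + R[i, k])  # prod over j != k, j != i
                    # a_ki = 1 / (1 + (1/r_kk - 1) / P), an algebraic
                    # rearrangement of the stated update
                    A[k, i] = 1.0 / (1.0 + (1.0 / R[k, k] - 1.0) / P)
    # decode: each case picks the exemplar with the highest availability-
    # weighted likelihood (our choice of decoding, not from the text)
    return np.argmax(A.T * L, axis=1), R, A
```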
Affinity propagation accounts for a Bayesian prior pdf on the exemplars and is able to automatically search over the appropriate number of exemplars. (Note that the number of exemplars is not pre-specified in the above updates.) In applications where a particular number of clusters is desired, the update rule for the responsibilities (in particular, the self-responsibilities $r_{kk}$, which determine the availabilities of the exemplars) can be modified, as described in the next section. Later, we describe applications where $K$ is pre-specified and where $K$ is automatically selected by affinity propagation.

Figure 1: Affinity propagation can be viewed as belief propagation in this factor graph.

The affinity propagation update rules can be derived as an instance of the sum-product ("loopy BP") algorithm in a graphical model. Using $s_i$ to denote the index of the exemplar for $x_i$, the product of the likelihoods of the training cases and the priors on the exemplars is $\prod_{i=1}^N L_{i s_i}$. (If $s_i = i$, $x_i$ is an exemplar with a priori pdf $L_{ii}$.) The set of hidden variables $s_1, \ldots, s_N$ completely specifies the mixture model, but not all configurations of these variables are allowed: $s_i = k$ ($x_i$ in cluster $x_k$) implies $s_k = k$ ($x_k$ is an exemplar), and $s_k = k$ ($x_k$ is an exemplar) implies $s_i = k$ for some $i \neq k$ (some other training case is in cluster $x_k$). The global indicator function for the satisfaction of these constraints can be written $\prod_{k=1}^N f_k(s_1, \ldots, s_N)$, where $f_k$ is the constraint for candidate cluster $x_k$:

$f_k(s_1, \ldots, s_N) = \begin{cases} 0 & \text{if } s_k = k \text{ and } s_i \neq k \text{ for all } i \neq k, \\ 0 & \text{if } s_k \neq k \text{ and } s_i = k \text{ for some } i \neq k, \\ 1 & \text{otherwise.} \end{cases}$

Thus, the joint distribution of the mixture model and data factorizes as follows:

$P = \prod_{i=1}^N L_{i s_i} \prod_{k=1}^N f_k(s_1, \ldots, s_N).$

The factor graph [10] in Fig. 1 describes this factorization. Each black box corresponds to a term in the factorization, and it is connected to the variables on which the term depends.
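The constraint functions $f_k$ and the factorized joint are easy to state in code. A toy sketch (names are ours; $s$ is 0-indexed):

```python
import numpy as np

def f_k(s, k):
    """Constraint for candidate exemplar k: 0 if x_k names itself an
    exemplar but no other case joins it, or if some case names x_k while
    x_k does not name itself; 1 otherwise."""
    others = any(s[i] == k for i in range(len(s)) if i != k)
    if s[k] == k and not others:
        return 0
    if s[k] != k and others:
        return 0
    return 1

def joint(s, L):
    """Unnormalized joint: P = prod_i L[i, s_i] * prod_k f_k(s)."""
    p = 1.0
    for i in range(len(s)):
        p *= L[i, s[i]]
    for k in range(len(s)):
        p *= f_k(s, k)
    return p
```

Configurations that violate either branch of $f_k$ receive probability zero, so only valid exemplar/member partitions contribute to $P$.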
While exact inference in this factor graph is NP-hard, approximate inference algorithms can be used to infer the $s$ variables. It is straightforward to show that the updates for affinity propagation correspond to the message updates for the sum-product algorithm or loopy belief propagation (see [10] for a tutorial). The responsibilities correspond to messages sent from the $s$'s to the $f$'s, while the availabilities correspond to messages sent from the $f$'s to the $s$'s. If the goal is to find $K$ exemplars, an additional constraint $g(s_1, \ldots, s_N) = [K = \sum_{k=1}^N [s_k = k]]$ can be included, where $[\cdot]$ indicates Iverson's notation ($[\text{true}] = 1$ and $[\text{false}] = 0$). Messages can be propagated through this function in linear time, by implementing it as a Markov chain that accumulates exemplar counts.

Max-product affinity propagation. Max-product affinity propagation can be derived as an instance of the max-product algorithm, instead of the sum-product algorithm. The update equations for the affinities are modified and maximizations are used instead of summations. An advantage of max-product affinity propagation is that the algorithm is invariant to multiplicative constants in the log-likelihoods.

4 Image segmentation

A sensible model-based approach to image segmentation is to imagine that each patch in the image originates from one of a small number of prototype texture patches. The main difficulty is that in addition to standard additive or multiplicative pixel-level noise, another prevailing form of noise is due to transformations of the image features, and in particular translations. Pair-wise affinity-based techniques, and in particular spectral clustering, have been employed with some success [4, 9], with the main disadvantage being that without an underlying model there is no sound basis for selecting good class representatives. Having a model with class representatives enables efficient synthesis (generation) of patches, and classification of test patches, requiring only $K$ comparisons (to class centers) rather than $N$ comparisons (to training cases).

Figure 2: Segmentation of non-aligned gray-scale characters. Patches clustered by affinity propagation and K-medoids are colored according to classification (centers shown below solutions). Affinity propagation achieves a near-best score compared to 1000 runs of K-medoids.

We present results for segmenting two image types. First, as a toy example, we segment an image containing many noisy examples of the letters 'N', 'I', 'P' and 'S' (see Fig. 2). The original image is gray-scale with resolution 216 × 240 and intensities ranging from 0 (background color, white) to 1 (foreground color, black). Each training case $x_i$ is a 24×24 image patch and $x_i^m$ is the $m$th pixel in the patch. To account for translations, we include a hidden 2-D translation variable $T$. The match between patch $x_i$ and patch $x_k$ is measured by $\sum_m x_i^m \cdot f^m(x_k, T)$, where $f(x_k, T)$ is the patch obtained by applying a 2-D translation $T$ plus cropping to patch $x_k$, and $f^m$ is the $m$th pixel in the translated, cropped patch. This metric is used in the likelihood function:

$L_{ik} \propto \sum_T p(T) \, e^{\beta (\sum_m x_i^m \cdot f^m(x_k, T))/\bar{x}_i} \approx e^{\beta \max_T (\sum_m x_i^m \cdot f^m(x_k, T))/\bar{x}_i},$

where $\bar{x}_i = \frac{1}{24^2} \sum_m x_i^m$ is used to normalize the match by the amount of ink in $x_i$. $\beta$ controls how strictly $x_i$ should match $x_k$ to have high likelihood. Max-product affinity propagation is independent of the choice of $\beta$, and for sum-product affinity propagation we quite arbitrarily chose $\beta = 1$. The exemplar priors $L_{kk}$ were set to $\mathrm{median}_{i, k \neq i} L_{ik}$. We cut the image in Fig. 2 into a 9×10 grid of non-overlapping 24×24 patches, computed the pair-wise likelihoods, and clustered them into $K = 4$ classes using the greedy EM algorithm (randomly chosen initial exemplars) and affinity propagation. (Max-product and sum-product affinity propagation yielded identical results.)
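The translation-searched match and likelihood can be sketched as follows (the names, the shift range, and the zero-padded crop are our assumptions, not prescribed by the text):

```python
import numpy as np

def translate(patch, dy, dx):
    """Shift a patch by (dy, dx) with zero padding -- a stand-in for the
    translate-plus-crop operator f(x_k, T) in the text."""
    out = np.zeros_like(patch)
    h, w = patch.shape
    y0, y1 = max(dy, 0), min(h, h + dy)
    x0, x1 = max(dx, 0), min(w, w + dx)
    out[y0:y1, x0:x1] = patch[y0 - dy:y1 - dy, x0 - dx:x1 - dx]
    return out

def likelihood(xi, xk, beta=1.0, max_shift=2):
    """L_ik up to a constant: exp(beta * max_T sum_m x_i^m f^m(x_k, T) / xbar_i),
    i.e. the max-over-translations approximation; the search range for T
    is an arbitrary choice here."""
    best = max(np.sum(xi * translate(xk, dy, dx))
               for dy in range(-max_shift, max_shift + 1)
               for dx in range(-max_shift, max_shift + 1))
    return np.exp(beta * best / xi.mean())
```

Because the maximization searches over shifts of $x_k$, a patch and a slightly translated copy of it receive the same likelihood, which is the point of the hidden variable $T$.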
We then took a much larger set of overlapping patches, classified them into the 4 categories, and then colored each pixel in the image according to the most frequent class for the pixel. The results are shown in Fig. 2. While affinity propagation is deterministic, the EM algorithm depends on initialization. So, we ran the EM algorithm 1000 times and in Fig. 2 we plot the cumulative distribution of the log $P$ scores obtained by EM. The score for affinity propagation is also shown, and achieves near-best performance (98th percentile).

We next analyzed the more natural 192 × 192 image shown in Fig. 3. Since there is no natural background color, we use mean-squared pixel differences in HSV color space to measure similarity between the 24 × 24 patches:

$L_{ik} \propto e^{-\beta \min_T \sum_{m \in W} (x_i^m - f^m(x_k, T))^2},$

where $W$ is the set of indices corresponding to a 16 × 16 window centered in the patch and $f^m(x_k, T)$ is the same as above. As before, we arbitrarily set $\beta = 1$ and $L_{kk}$ to $\mathrm{median}_{i, k \neq i} L_{ik}$.

Figure 3: Segmentation results for several methods applied to a natural image. For methods other than affinity propagation, many parameter settings were tried and the best segmentation selected. The histograms show the percentile in score achieved by affinity propagation compared to 1000 runs of greedy EM, for different random training sets.

We cut the image in Fig. 3 into an 8 × 8 grid of non-overlapping 24 × 24 patches and clustered them into $K = 6$ classes using affinity propagation (both forms), greedy EM in our model, spectral clustering (using a normalized L-matrix based on a set of 29 × 29 overlapping patches), and mixtures of Gaussians. (For spectral clustering, we tried $\beta = 0.5$, 1 and 2, and for each of these tried clustering using 6, 8, 10, 12 and 14 eigenvectors. We then visually picked the best segmentation ($\beta = 1$, 10 eigenvectors). The eigenvector features were clustered using EM in a mixture of Gaussians and, out of 10 trials, the solution with highest likelihood was selected. For mixtures of Gaussians applied directly to the image patches, we picked the model with highest likelihood in 10 trials.) For greedy EM, the affinity propagation algorithms, and mixtures of Gaussians, we then chose all possible 24 × 24 overlapping patches and calculated their likelihoods given each of the 6 cluster centers, classifying each patch according to its maximum likelihood. Fig. 3 shows the segmentations for the various methods, where the central pixel of each patch is colored according to its class. Again, affinity propagation achieves a solution that is near-best compared to one thousand runs of greedy EM.

5 Learning mixtures of gene models

Currently, an important problem in genomics research is the discovery of genes and gene variants that are expressed as messenger RNAs (mRNAs) in normal tissues. In a recent study [11], we used DNA-based techniques to identify 837,251 possible exons ("putative exons") in the mouse genome. For each putative exon, we used an Agilent microarray probe to measure the amount of corresponding mRNA that was present in each of 12 mouse tissues. Each 12-D vector, called an "expression profile", can be viewed as a feature vector indicating the putative exon's function. By grouping together feature vectors for nearby probes, we can detect genes and variations of genes. Here, we compare affinity propagation with hierarchical clustering, which was previously used to find gene structures [12].

Figure 4: (a) A normalized subset of 837,251 tissue expression profiles (mRNA level versus tissue) for putative exons from the mouse genome (most profiles are much noisier than these). (b) The true exon detection rate (in known genes) versus the false discovery rate, for affinity propagation and hierarchical clustering.

Fig. 4a shows a normalized subset of the data and gives three examples of groups of nearby feature vectors that are similar enough to provide evidence of gene units. The actual data is generally much noisier, and includes multiplicative noise (exon probe sensitivity can vary by two orders of magnitude), correlated additive noise (a probe can cross-hybridize in a tissue-independent manner to background mRNA sources), and spurious additive noise (due to a noisy measurement procedure and biological effects such as alternative splicing). To account for noise, false putative exons, and the distance between exons in the same gene, we used the following likelihood function:

$L_{ij} = \lambda e^{-\lambda|i-j|} \Big[ q \cdot p_0(x_i) + (1-q) \int p(y, z, \sigma) \frac{e^{-\frac{1}{2\sigma^2} \sum_{m=1}^{12} (x_i^m - (y \cdot x_j^m + z))^2}}{(\sqrt{2\pi\sigma^2})^{12}} \, dy \, dz \, d\sigma \Big]$
$\approx \lambda e^{-\lambda|i-j|} \Big[ q \cdot p_0(x_i) + (1-q) \max_{y, z, \sigma} p(y, z, \sigma) \frac{e^{-\frac{1}{2\sigma^2} \sum_{m=1}^{12} (x_i^m - (y \cdot x_j^m + z))^2}}{(\sqrt{2\pi\sigma^2})^{12}} \Big],$

where $x_i^m$ is the expression level for the $m$th tissue in the $i$th probe (in genomic order). We found that in this application, the maximum is a sufficiently good approximation to the integral. The distribution over the distance $|i-j|$ between probes in the same gene is assumed to be geometric with parameter $\lambda$. $p_0(x_i)$ is a background distribution that accounts for false putative exons and $q$ is the probability of a false putative exon within a gene. We assumed $y$, $z$ and $\sigma$ are independent and uniformly distributed. The Bayesian prior probability that $x_k$ is an exemplar is set to $\theta \cdot p_0(x_k)$, where $\theta$ is a control knob used to vary the sensitivity of the system. Because of the term $\lambda e^{-\lambda|i-j|}$ and the additional assumption that genes on the same strand do not overlap, it is not necessary to propagate affinities between all $837{,}251^2$ pairs of training cases. We assume $L_{ij} = 0$ for $|i-j| > 100$, in which case it is not necessary to propagate affinities between $x_i$ and $x_j$. The assumption that genes do not overlap implies that if $s_i = k$, then $s_j = k$ for $j \in \{\min(i,k), \ldots, \max(i,k)\}$.
It turns out that this constraint causes the dependence structure in the update equations for the affinities to reduce to a chain, so affinities need only be propagated forward and backward along the genome. After affinity propagation is used to automatically select the number of mixture components and identify the mixture centers and the probes that belong to them (genes), each probe $x_i$ is labeled as an exon or a non-exon depending on which of the two terms in the above likelihood function ($q \cdot p_0(x_i)$ or the large term to its right) is larger. (Based on the experimental procedure and a set of previously-annotated genes (RefSeq), we estimated $\lambda = 0.05$, $q = 0.7$, $y \in [0.025, 40]$, $z \in [-\mu, \mu]$ where $\mu = \max_{i,m} x_i^m$, and $\sigma \in (0, \mu]$. We used a mixture of Gaussians for $p_0(x_i)$, which was learned from the entire training set.)

Fig. 4b shows the fraction of exons in known genes detected by affinity propagation versus the false detection rate. The curve is obtained by varying the sensitivity parameter, $\theta$. The false detection rate was estimated by randomly permuting the order of the probes in the training set, and applying affinity propagation. Even for quite low false discovery rates, affinity propagation identifies over one third of the known exons. Using a variety of metrics, including the above metric, we also used hierarchical clustering to detect exons. The performance of hierarchical clustering using the metric with highest sensitivity is also shown. Affinity propagation has significantly higher sensitivity, e.g., achieving a five-fold increase in true detection rate at a false detection rate of 0.4%.

6 Computational efficiency

The following table compares the MATLAB execution times of our implementations of the methods we compared on the problems we studied. For methods that first compute a likelihood or affinity matrix, we give the timing of this computation first.
Techniques denoted by "*" were run many times to obtain the shown results, but the given time is for a single run.

          Affinity Prop    K-medoids*      Spec Clust*     MOG EM*   Hierarch Clust
NIPS      12.9 s + 2.0 s   12.9 s + 0.2 s
Dog       12.0 s + 1.5 s   12.0 s + 0.1 s  12.0 s + 29 s   3.3 s
Genes     16 m + 43 m                                                16 m + 28 m

7 Summary

An advantage of affinity propagation is that the update rules are deterministic, quite simple, and can be derived as an instance of the sum-product algorithm in a factor graph. Using challenging applications, we showed that affinity propagation obtains better solutions (in terms of percentile log-likelihood, visual quality of image segmentation, and sensitivity-to-specificity) than other techniques, including K-medoids, spectral clustering, Gaussian mixture modeling and hierarchical clustering. To our knowledge, affinity propagation is the first algorithm to combine advantages of pair-wise clustering methods that make use of bottom-up evidence and model-based methods that seek to fit top-down global models to the data.

References
[1] C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, NY, 1995.
[2] K. A. Heller, Z. Ghahramani. Bayesian hierarchical clustering. ICML, 2005.
[3] M. Meila, J. Shi. Learning segmentation by random walks. NIPS 14, 2001.
[4] J. Shi, J. Malik. Normalized cuts and image segmentation. Proc. CVPR, 731-737, 1997.
[5] A. Ng, M. Jordan, Y. Weiss. On spectral clustering: Analysis and an algorithm. NIPS 14, 2001.
[6] N. Shental, A. Zomet, T. Hertz, Y. Weiss. Pairwise clustering and graphical models. NIPS 16, 2003.
[7] R. Rosales, B. J. Frey. Learning generative models of affinity matrices. Proc. UAI, 2003.
[8] M. Charikar, S. Guha, A. Tardos, D. B. Shmoys. A constant-factor approximation algorithm for the k-median problem. J. Comp. and Sys. Sci., 65:1, 129-149, 2002.
[9] J. Malik et al. Contour and texture analysis for image segmentation. IJCV 43:1, 2001.
[10] F. R. Kschischang, B. J. Frey, H.-A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Trans. Info. Theory 47:2, 498-519, 2001.
[11] B. J. Frey, Q. D. Morris, M. Robinson, T. R. Hughes. Finding novel transcripts in high-resolution genome-wide microarray data using the GenRate model. Proc. RECOMB 2005, 2005.
[12] D. D. Shoemaker et al. Experimental annotation of the human genome using microarray technology. Nature 409, 922-927, 2001.
Tensor Subspace Analysis Xiaofei He1 Deng Cai2 Partha Niyogi1 1 Department of Computer Science, University of Chicago {xiaofei, niyogi}@cs.uchicago.edu 2 Department of Computer Science, University of Illinois at Urbana-Champaign dengcai2@uiuc.edu Abstract Previous work has demonstrated that the image variations of many objects (human faces in particular) under variable lighting can be effectively modeled by low dimensional linear spaces. The typical linear subspace learning algorithms include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Locality Preserving Projection (LPP). All of these methods consider an n1 × n2 image as a high dimensional vector in Rn1×n2, while an image represented in the plane is intrinsically a matrix. In this paper, we propose a new algorithm called Tensor Subspace Analysis (TSA). TSA considers an image as the second order tensor in Rn1 ⊗Rn2, where Rn1 and Rn2 are two vector spaces. The relationship between the column vectors of the image matrix and that between the row vectors can be naturally characterized by TSA. TSA detects the intrinsic local geometrical structure of the tensor space by learning a lower dimensional tensor subspace. We compare our proposed approach with PCA, LDA and LPP methods on two standard databases. Experimental results demonstrate that TSA achieves better recognition rate, while being much more efficient. 1 Introduction There is currently a great deal of interest in appearance-based approaches to face recognition [1], [5], [8]. When using appearance-based approaches, we usually represent an image of size n1 × n2 pixels by a vector in Rn1×n2. Throughout this paper, we denote by face space the set of all the face images. The face space is generally a low dimensional manifold embedded in the ambient space [6], [7], [10]. 
The typical linear algorithms for learning such a face manifold for recognition include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Locality Preserving Projection (LPP) [4]. Most previous work on statistical image analysis represents an image by a vector in high-dimensional space. However, an image is intrinsically a matrix, or a second order tensor. The relationship between the row vectors of the matrix and that between the column vectors might be important for finding a projection, especially when the number of training samples is small. Recently, multilinear algebra, the algebra of higher-order tensors, was applied to analyzing the multifactor structure of image ensembles [9], [11], [12]. Vasilescu and Terzopoulos have proposed a novel face representation algorithm called Tensorface [9]. Tensorface represents the set of face images by a higher-order tensor and extends Singular Value Decomposition (SVD) to higher-order tensor data. In this way, the multiple factors related to expression, illumination and pose can be separated from the different dimensions of the tensor. In this paper, we propose a new algorithm for image (human faces in particular) representation based on considerations of multilinear algebra and differential geometry. We call it Tensor Subspace Analysis (TSA). An image of size $n_1 \times n_2$ is represented as a second order tensor (or, matrix) in the tensor space $\mathbb{R}^{n_1} \otimes \mathbb{R}^{n_2}$. On the other hand, the face space is generally a submanifold embedded in $\mathbb{R}^{n_1} \otimes \mathbb{R}^{n_2}$. Given some images sampled from the face manifold, we can build an adjacency graph to model the local geometrical structure of the manifold. TSA finds a projection that respects this graph structure. The obtained tensor subspace provides an optimal linear approximation to the face manifold in the sense of local isometry. Vasilescu shows how to extend SVD (PCA) to higher order tensor data; we extend the Laplacian-based idea to tensor data.
It is worthwhile to highlight several aspects of the proposed approach here:

1. While traditional linear dimensionality reduction algorithms like PCA, LDA and LPP find a map from $\mathbb{R}^n$ to $\mathbb{R}^l$ ($l < n$), TSA finds a map from $\mathbb{R}^{n_1} \otimes \mathbb{R}^{n_2}$ to $\mathbb{R}^{l_1} \otimes \mathbb{R}^{l_2}$ ($l_1 < n_1$, $l_2 < n_2$). This leads to structured dimensionality reduction.
2. TSA can be performed in a supervised, unsupervised, or semi-supervised manner. When label information is available, it can be easily incorporated into the graph structure. Also, by preserving neighborhood structure, TSA is less sensitive to noise and outliers.
3. The computation of TSA is very simple. It can be obtained by solving two eigenvector problems. The matrices in the eigen-problems are of size $n_1 \times n_1$ or $n_2 \times n_2$, which are much smaller than the matrices of size $n \times n$ ($n = n_1 \times n_2$) in PCA, LDA and LPP. Therefore, TSA is much more computationally efficient in time and storage. There are few parameters to be independently estimated, so performance on small data sets is very good.
4. TSA explicitly takes into account the manifold structure of the image space. The local geometrical structure is modeled by an adjacency graph.
5. This paper is primarily focused on second order tensors (or, matrices). However, the algorithm and analysis presented here can also be applied to higher order tensors.

2 Tensor Subspace Analysis

In this section, we introduce a new algorithm called Tensor Subspace Analysis for learning a tensor subspace which respects the geometrical and discriminative structures of the original data space.

2.1 Laplacian based Dimensionality Reduction

Problems of dimensionality reduction have been considered previously. One general approach is based on the graph Laplacian [2]. The objective function of the Laplacian eigenmap is as follows:

$\min_f \sum_{ij} (f(x_i) - f(x_j))^2 S_{ij}$

where $S$ is a similarity matrix. These optimal functions are nonlinear but may be expensive to compute.
A class of algorithms may be optimized by restricting the problem to more tractable families of functions. One natural approach restricts attention to linear functions, giving rise to LPP [4]. In this paper we consider a more structured subset of linear functions that arises out of tensor analysis, which provides greater computational benefits.

2.2 The Linear Dimensionality Reduction Problem in Tensor Space

The generic problem of linear dimensionality reduction in the second order tensor space is the following. Given a set of data points $X_1, \cdots, X_m$ in $\mathbb{R}^{n_1} \otimes \mathbb{R}^{n_2}$, find two transformation matrices $U$ of size $n_1 \times l_1$ and $V$ of size $n_2 \times l_2$ that map these $m$ points to a set of points $Y_1, \cdots, Y_m \in \mathbb{R}^{l_1} \otimes \mathbb{R}^{l_2}$ ($l_1 < n_1$, $l_2 < n_2$), such that $Y_i$ "represents" $X_i$, where $Y_i = U^T X_i V$. Our method is of particular applicability in the special case where $X_1, \cdots, X_m \in \mathcal{M}$ and $\mathcal{M}$ is a nonlinear submanifold embedded in $\mathbb{R}^{n_1} \otimes \mathbb{R}^{n_2}$.

2.3 Optimal Linear Embeddings

As we described previously, the face space is probably a nonlinear submanifold embedded in the tensor space. One hopes then to estimate geometrical and topological properties of the submanifold from random points ("scattered data") lying on this unknown submanifold. In this section, we consider the particular question of finding a linear subspace approximation to the submanifold in the sense of local isometry. Our method is fundamentally based on LPP [4]. Given $m$ data points $\mathbf{X} = \{X_1, \cdots, X_m\}$ sampled from the face submanifold $\mathcal{M} \subset \mathbb{R}^{n_1} \otimes \mathbb{R}^{n_2}$, one can build a nearest neighbor graph $G$ to model the local geometrical structure of $\mathcal{M}$. Let $S$ be the weight matrix of $G$. A possible definition of $S$ is as follows:

$S_{ij} = \begin{cases} e^{-\|X_i - X_j\|^2 / t}, & \text{if } X_i \text{ is among the } k \text{ nearest neighbors of } X_j, \text{ or } X_j \text{ is among the } k \text{ nearest neighbors of } X_i; \\ 0, & \text{otherwise,} \end{cases} \quad (1)$

where $t$ is a suitable constant. The function $\exp(-\|X_i - X_j\|^2/t)$ is the so-called heat kernel, which is intimately related to the manifold structure. $\|\cdot\|$ is the Frobenius norm of a matrix, i.e. $\|A\| = \sqrt{\sum_i \sum_j a_{ij}^2}$.
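The weight matrix of Eqn. (1) can be computed directly. A small sketch (names are ours; `X` is a stack of image matrices):

```python
import numpy as np

def heat_kernel_weights(X, k=2, t=1.0):
    """Weight matrix S of Eqn. (1): S_ij = exp(-||Xi - Xj||_F^2 / t) when
    Xi and Xj are k-nearest neighbours in either direction, else 0.
    X has shape (m, n1, n2): m image matrices."""
    m = X.shape[0]
    # squared Frobenius distances between all pairs of matrices
    d2 = np.array([[np.sum((X[i] - X[j]) ** 2) for j in range(m)]
                   for i in range(m)])
    # k nearest neighbours of each point, excluding the point itself
    nn = [set(np.argsort(d2[i])[1:k + 1]) for i in range(m)]
    S = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            if i != j and (j in nn[i] or i in nn[j]):
                S[i, j] = np.exp(-d2[i, j] / t)
    return S
```

The "either direction" rule makes $S$ symmetric by construction, which the derivations below rely on.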
When label information is available, it can be easily incorporated into the graph as follows:

$S_{ij} = \begin{cases} e^{-\|X_i - X_j\|^2 / t}, & \text{if } X_i \text{ and } X_j \text{ share the same label;} \\ 0, & \text{otherwise.} \end{cases} \quad (2)$

Let $U$ and $V$ be the transformation matrices. A reasonable transformation respecting the graph structure can be obtained by solving the following objective function:

$\min_{U,V} \sum_{ij} \|U^T X_i V - U^T X_j V\|^2 S_{ij} \quad (3)$

The objective function incurs a heavy penalty if neighboring points $X_i$ and $X_j$ are mapped far apart. Therefore, minimizing it is an attempt to ensure that if $X_i$ and $X_j$ are "close" then $U^T X_i V$ and $U^T X_j V$ are "close" as well. Let $Y_i = U^T X_i V$, and let $D$ be the diagonal matrix with $D_{ii} = \sum_j S_{ij}$. Since $\|A\|^2 = \mathrm{tr}(A A^T)$, we see that:

$\frac{1}{2} \sum_{ij} \|U^T X_i V - U^T X_j V\|^2 S_{ij} = \frac{1}{2} \sum_{ij} \mathrm{tr}\big((Y_i - Y_j)(Y_i - Y_j)^T\big) S_{ij}$
$= \frac{1}{2} \sum_{ij} \mathrm{tr}\big(Y_i Y_i^T + Y_j Y_j^T - Y_i Y_j^T - Y_j Y_i^T\big) S_{ij}$
$= \mathrm{tr}\Big(\sum_i D_{ii} Y_i Y_i^T - \sum_{ij} S_{ij} Y_i Y_j^T\Big)$
$= \mathrm{tr}\Big(\sum_i D_{ii} U^T X_i V V^T X_i^T U - \sum_{ij} S_{ij} U^T X_i V V^T X_j^T U\Big)$
$= \mathrm{tr}\Big(U^T \Big(\sum_i D_{ii} X_i V V^T X_i^T - \sum_{ij} S_{ij} X_i V V^T X_j^T\Big) U\Big)$
$\doteq \mathrm{tr}\big(U^T (D_V - S_V) U\big),$

where $D_V = \sum_i D_{ii} X_i V V^T X_i^T$ and $S_V = \sum_{ij} S_{ij} X_i V V^T X_j^T$. Similarly, $\|A\|^2 = \mathrm{tr}(A^T A)$, so we also have:

$\frac{1}{2} \sum_{ij} \|U^T X_i V - U^T X_j V\|^2 S_{ij} = \frac{1}{2} \sum_{ij} \mathrm{tr}\big((Y_i - Y_j)^T (Y_i - Y_j)\big) S_{ij}$
$= \frac{1}{2} \sum_{ij} \mathrm{tr}\big(Y_i^T Y_i + Y_j^T Y_j - Y_i^T Y_j - Y_j^T Y_i\big) S_{ij}$
$= \mathrm{tr}\Big(\sum_i D_{ii} Y_i^T Y_i - \sum_{ij} S_{ij} Y_i^T Y_j\Big)$
$= \mathrm{tr}\Big(V^T \Big(\sum_i D_{ii} X_i^T U U^T X_i - \sum_{ij} S_{ij} X_i^T U U^T X_j\Big) V\Big)$
$\doteq \mathrm{tr}\big(V^T (D_U - S_U) V\big),$

where $D_U = \sum_i D_{ii} X_i^T U U^T X_i$ and $S_U = \sum_{ij} S_{ij} X_i^T U U^T X_j$. Therefore, we should simultaneously minimize $\mathrm{tr}(U^T (D_V - S_V) U)$ and $\mathrm{tr}(V^T (D_U - S_U) V)$.

In addition to preserving the graph structure, we also aim at maximizing the global variance on the manifold. Recall that the variance of a random variable $x$ can be written as follows:

$\mathrm{var}(x) = \int_{\mathcal{M}} (x - \mu)^2 \, dP(x), \qquad \mu = \int_{\mathcal{M}} x \, dP(x),$

where $\mathcal{M}$ is the data manifold, $\mu$ is the expected value of $x$ and $dP$ is the probability measure on the manifold. By spectral graph theory [3], $dP$ can be discretely estimated by the diagonal matrix $D$ ($D_{ii} = \sum_j S_{ij}$) on the sample points.
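The two trace identities above can be checked numerically on random data. This is a self-contained verification script, not part of the paper; the dimensions and names are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n1, n2, l1, l2 = 5, 4, 3, 2, 2
X = rng.standard_normal((m, n1, n2))
S = rng.random((m, m)); S = (S + S.T) / 2       # symmetric weights
D = np.diag(S.sum(axis=1))                      # D_ii = sum_j S_ij
U = rng.standard_normal((n1, l1))
V = rng.standard_normal((n2, l2))

# left-hand side: (1/2) * sum_ij S_ij ||U^T X_i V - U^T X_j V||_F^2
lhs = 0.5 * sum(S[i, j] * np.sum((U.T @ X[i] @ V - U.T @ X[j] @ V) ** 2)
                for i in range(m) for j in range(m))

DV = sum(D[i, i] * X[i] @ V @ V.T @ X[i].T for i in range(m))
SV = sum(S[i, j] * X[i] @ V @ V.T @ X[j].T for i in range(m) for j in range(m))
DU = sum(D[i, i] * X[i].T @ U @ U.T @ X[i] for i in range(m))
SU = sum(S[i, j] * X[i].T @ U @ U.T @ X[j] for i in range(m) for j in range(m))

assert np.isclose(lhs, np.trace(U.T @ (DV - SV) @ U))
assert np.isclose(lhs, np.trace(V.T @ (DU - SU) @ V))
```

Note that the step introducing $D_{ii}$ uses the symmetry of $S$, which holds for both graph definitions (1) and (2).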
Let $Y = U^T X V$ denote the random variable in the tensor subspace and suppose the data points have zero mean. The weighted variance can then be estimated as follows:

$\mathrm{var}(Y) = \sum_i \|Y_i\|^2 D_{ii} = \sum_i \mathrm{tr}(Y_i^T Y_i) D_{ii} = \sum_i \mathrm{tr}(V^T X_i^T U U^T X_i V) D_{ii} = \mathrm{tr}\Big(V^T \Big(\sum_i D_{ii} X_i^T U U^T X_i\Big) V\Big) = \mathrm{tr}(V^T D_U V).$

Similarly, $\|Y_i\|^2 = \mathrm{tr}(Y_i Y_i^T)$, so we also have:

$\mathrm{var}(Y) = \sum_i \mathrm{tr}(Y_i Y_i^T) D_{ii} = \mathrm{tr}\Big(U^T \Big(\sum_i D_{ii} X_i V V^T X_i^T\Big) U\Big) = \mathrm{tr}(U^T D_V U).$

Finally, we obtain the following optimization problems:

$\min_{U,V} \frac{\mathrm{tr}\big(U^T (D_V - S_V) U\big)}{\mathrm{tr}(U^T D_V U)} \quad (4)$

$\min_{U,V} \frac{\mathrm{tr}\big(V^T (D_U - S_U) V\big)}{\mathrm{tr}(V^T D_U V)} \quad (5)$

The two minimization problems (4) and (5) depend on each other, and hence cannot be solved independently. In the following subsection, we describe a simple computational method to solve these two optimization problems.

2.4 Computation

In this subsection, we discuss how to solve the optimization problems (4) and (5). It is easy to see that the optimal $U$ should be the generalized eigenvectors of $(D_V - S_V, D_V)$ and the optimal $V$ should be the generalized eigenvectors of $(D_U - S_U, D_U)$. However, it is difficult to compute the optimal $U$ and $V$ simultaneously since the matrices $D_V$, $S_V$, $D_U$, $S_U$ are not fixed. In this paper, we compute $U$ and $V$ iteratively as follows. We first fix $U$; then $V$ can be computed by solving the following generalized eigenvector problem:

$(D_U - S_U) v = \lambda D_U v \quad (6)$

Once $V$ is obtained, $U$ can be updated by solving the following generalized eigenvector problem:

$(D_V - S_V) u = \lambda D_V u \quad (7)$

Thus, the optimal $U$ and $V$ can be obtained by iteratively computing the generalized eigenvectors of (6) and (7). In our experiments, $U$ is initially set to the identity matrix. It is easy to show that the matrices $D_U$, $D_V$, $D_U - S_U$, and $D_V - S_V$ are all symmetric and positive semi-definite.

3 Experimental Results

In this section, several experiments are carried out to show the efficiency and effectiveness of our proposed algorithm for face recognition.
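The alternating procedure can be sketched in a few lines. This is only a sketch under our naming: the generalized problems are solved by whitening with the right-hand matrix, and a small ridge term (our addition, not in the paper) keeps $D_U$ and $D_V$ invertible:

```python
import numpy as np

def gen_eig_smallest(A, B, l, ridge=1e-8):
    """Eigenvectors of A v = lambda B v for the l smallest eigenvalues,
    via whitening with B^{-1/2} (A, B symmetric, B positive semi-definite)."""
    wB, Q = np.linalg.eigh(B + ridge * np.eye(B.shape[0]))
    Bih = Q @ np.diag(1.0 / np.sqrt(wB)) @ Q.T
    w, E = np.linalg.eigh(Bih @ A @ Bih)
    return Bih @ E[:, :l]          # eigh returns ascending eigenvalues

def tsa(X, S, l1, l2, n_iter=3):
    """Alternate between Eqns. (6) and (7): U starts at the identity,
    V solves (D_U - S_U) v = lambda D_U v, then U solves (7), etc.
    X has shape (m, n1, n2); S is a symmetric weight matrix."""
    m, n1, n2 = X.shape
    d = S.sum(axis=1)
    U = np.eye(n1)
    for _ in range(n_iter):
        DU = sum(d[i] * X[i].T @ U @ U.T @ X[i] for i in range(m))
        SU = sum(S[i, j] * X[i].T @ U @ U.T @ X[j]
                 for i in range(m) for j in range(m))
        V = gen_eig_smallest(DU - SU, DU, l2)
        DV = sum(d[i] * X[i] @ V @ V.T @ X[i].T for i in range(m))
        SV = sum(S[i, j] * X[i] @ V @ V.T @ X[j].T
                 for i in range(m) for j in range(m))
        U = gen_eig_smallest(DV - SV, DV, l1)
    return U, V
```

A new image $X$ is then embedded as $Y = U^T X V$, an $l_1 \times l_2$ matrix.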
We compare our algorithm with the Eigenface (PCA) [8], Fisherface (LDA) [1], and Laplacianface (LPP) [5] methods, three of the most popular linear methods for face recognition. Two face databases were used. The first one is the PIE (Pose, Illumination, and Expression) database from CMU, and the second one is the ORL database. In all the experiments, preprocessing to locate the faces was applied. Original images were normalized (in scale and orientation) such that the two eyes were aligned at the same position. Then, the facial areas were cropped into the final images for matching. The size of each cropped image in all the experiments is 32×32 pixels, with 256 gray levels per pixel. No further preprocessing is done. For the Eigenface, Fisherface, and Laplacianface methods, the image is represented as a 1024-dimensional vector, while in our algorithm the image is represented as a (32 × 32)-dimensional matrix, or a second order tensor. The nearest neighbor classifier is used for classification for its simplicity. In short, the recognition process has three steps. First, we calculate the face subspace from the training set of face images; then the new face image to be identified is projected into the $d$-dimensional subspace (PCA, LDA, and LPP) or $(d \times d)$-dimensional tensor subspace (TSA); finally, the new face image is identified by the nearest neighbor classifier. In our TSA algorithm, the number of iterations is taken to be 3.

3.1 Experiments on PIE Database

The CMU PIE face database contains 68 subjects with 41,368 face images as a whole. The face images were captured by 13 synchronized cameras and 21 flashes, under varying pose, illumination and expression.
We choose the five near-frontal poses (C05, C07, C09, C27, C29) and use all the images under different illuminations and expressions; thus we get 170 images for each individual. For each individual, l (= 5, 10, 20, 30) images are randomly selected for training and the rest are used for testing. The training set is utilized to learn the subspace representation of the face manifold by using the Eigenface, Fisherface, Laplacianface and our algorithm. The testing images are projected into the face subspace in which recognition is then performed. For each given l, we average the results over 20 random splits.

[Figure 1: Error rate vs. dimensionality reduction on PIE database. Panels (a) 5 Train, (b) 10 Train, (c) 20 Train and (d) 30 Train plot the error rate (%) against the reduced dimensionality d (d × d for TSA) for TSA, Laplacianfaces (PCA+LPP), Fisherfaces (PCA+LDA), Eigenfaces (PCA) and the baseline.]

Table 1: Performance comparison on PIE database

                      5 Train                   10 Train
Method          error    dim   time(s)    error    dim   time(s)
Baseline        69.9%   1024      -       55.7%   1024      -
Eigenfaces      69.9%    338    0.907     55.7%    654    5.297
Fisherfaces     31.5%     67    1.843     22.4%     67    9.609
Laplacianfaces  30.8%     67    2.375     21.1%    134   11.516
TSA             27.9%    112    0.594     16.9%    132    2.063

                     20 Train                   30 Train
Method          error    dim   time(s)    error    dim   time(s)
Baseline        38.2%   1024      -       27.9%   1024      -
Eigenfaces      38.1%    889   14.328     27.9%    990   15.453
Fisherfaces     15.4%     67   35.828     7.77%     67   38.406
Laplacianfaces  14.1%    146   39.172     7.13%    131   47.610
TSA             9.64%    132    7.125     6.88%    122   15.688
It is important to note that the Laplacianface algorithm and our algorithm share the same graph structure as defined in Eqn. (2). Figure 1 shows the plots of error rate versus dimensionality reduction for the Eigenface, Fisherface, Laplacianface, TSA and baseline methods. For the baseline method, the recognition is simply performed in the original 1024-dimensional image space without any dimensionality reduction. Note that the upper bound of the dimensionality of Fisherface is c − 1, where c is the number of individuals. For our TSA algorithm, we only show its performance in the (d × d)-dimensional tensor subspace, i.e., 1, 4, 9, etc. As can be seen, the performance of the Eigenface, Fisherface, Laplacianface, and TSA algorithms varies with the number of dimensions. We show the best results obtained by them in Table 1; the corresponding face subspace is called the optimal face subspace for each method. It is found that our method outperforms the other four methods with different numbers of training samples (5, 10, 20, 30) per individual. The Eigenface method performs the worst. It does not obtain any improvement over the baseline method. The Fisherface and Laplacianface methods perform comparably to each other. The dimensions of the optimal subspaces are also given in Table 1. As we have discussed, TSA can be implemented very efficiently. We show the running time in seconds for each method in Table 1.
As can be seen, TSA is much faster than the Eigenface, Fisherface and Laplacianface methods. All the algorithms were implemented in Matlab 6.5 and run on an Intel P4 2.566GHz PC with 1GB memory.

[Figure 2: Error rate vs. dimensionality reduction on ORL database. Panels (a) 2 Train, (b) 3 Train, (c) 4 Train and (d) 5 Train plot the error rate (%) against the reduced dimensionality d for TSA, Laplacianfaces (PCA+LPP), Fisherfaces (PCA+LDA), Eigenfaces (PCA) and the baseline.]

Table 2: Performance comparison on ORL database

                      2 Train                    3 Train
Method          error    dim   time(ms)   error    dim   time(ms)
Baseline        30.2%   1024      -       22.4%   1024      -
Eigenfaces      30.2%     79    38.13     22.3%    113    85.16
Fisherfaces     25.2%     23    60.32     13.1%     39   119.69
Laplacianfaces  22.2%     39    62.65     12.5%     39   136.25
TSA             20.0%    102    65.00     10.7%    112   135.93

                      4 Train                    5 Train
Method          error    dim   time(ms)   error    dim   time(ms)
Baseline        16.0%   1024      -       11.7%   1024      -
Eigenfaces      15.9%    122   141.72     11.6%    182   224.69
Fisherfaces     9.17%     39   212.82     6.55%     39   355.63
Laplacianfaces  8.54%     39   248.90     5.45%     40   410.78
TSA             7.12%    102   201.40     4.75%    102   302.97

3.2 Experiments on ORL Database

The ORL (Olivetti Research Laboratory) face database is used in this test. It consists of a total of 400 face images of 40 people (10 samples per person). The images were captured at different times and have different variations including expressions (open or closed eyes, smiling or non-smiling) and facial details (glasses or no glasses). The images were taken with a tolerance for some tilting and rotation of the face up to 20 degrees.
For each individual, l (= 2, 3, 4, 5) images are randomly selected for training and the rest are used for testing. The experimental design is the same as that in the last subsection. For each given l, we average the results over 20 random splits. Figure 2 shows the plots of error rate versus dimensionality reduction for the Eigenface, Fisherface, Laplacianface, TSA and baseline methods. Note that the presentation of the performance of the TSA algorithm is different from that in the last subsection. Here, for a given d, we show its performance in the (d × d)-dimensional tensor subspace. This allows for better comparison, since the Eigenface and Laplacianface methods start to converge after 70 dimensions and there is no need to show their performance beyond that. The best result obtained in the optimal subspace and the running time (in milliseconds) of computing the eigenvectors for each method are shown in Table 2. As can be seen, our TSA algorithm performed the best in all cases. The Fisherface and Laplacianface methods performed comparably to our method, while the Eigenface method performed poorly.

4 Conclusions and Future Work

Tensor-based face analysis (representation and recognition) is introduced in this paper in order to detect the underlying nonlinear face manifold structure in the manner of tensor subspace learning. The manifold structure is approximated by the adjacency graph computed from the data points. The optimal tensor subspace respecting the graph structure is then obtained by solving an optimization problem. We call this the Tensor Subspace Analysis method. Most traditional appearance-based face recognition methods (i.e. Eigenface, Fisherface, and Laplacianface) consider an image as a vector in a high-dimensional space. Such a representation ignores the spatial relationships between the pixels in the image. In our work, an image is naturally represented as a matrix, or a second-order tensor.
Tensor representation makes our algorithm much more computationally efficient than PCA, LDA, and LPP. Experimental results on the PIE and ORL databases demonstrate the efficiency and effectiveness of our method. TSA is linear. Therefore, if the face manifold is highly nonlinear, it may fail to discover the intrinsic geometrical structure. It remains unclear how to generalize our algorithm to the nonlinear case. Also, in our algorithm, the adjacency graph is induced from the local geometry and class information. Different graph structures lead to different projections. It remains unclear how to define the optimal graph structure in the sense of discrimination.

References

[1] P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, “Eigenfaces vs. Fisherfaces: recognition using class specific linear projection,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, July 1997.
[2] M. Belkin and P. Niyogi, “Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering,” Advances in Neural Information Processing Systems 14, 2001.
[3] Fan R. K. Chung, Spectral Graph Theory, Regional Conference Series in Mathematics, number 92, 1997.
[4] X. He and P. Niyogi, “Locality Preserving Projections,” Advances in Neural Information Processing Systems 16, Vancouver, Canada, December 2003.
[5] X. He, S. Yan, Y. Hu, P. Niyogi, and H.-J. Zhang, “Face Recognition Using Laplacianfaces,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 3, 2005.
[6] S. Roweis and L. K. Saul, “Nonlinear Dimensionality Reduction by Locally Linear Embedding,” Science, vol. 290, 22 December 2000.
[7] J. B. Tenenbaum, V. de Silva, and J. C. Langford, “A Global Geometric Framework for Nonlinear Dimensionality Reduction,” Science, vol. 290, 22 December 2000.
[8] M. Turk and A. Pentland, “Eigenfaces for Recognition,” Journal of Cognitive Neuroscience, 3(1):71-86, 1991.
[9] M. A. O. Vasilescu and D.
Terzopoulos, “Multilinear Subspace Analysis for Image Ensembles,” IEEE Conference on Computer Vision and Pattern Recognition, 2003.
[10] K. Q. Weinberger and L. K. Saul, “Unsupervised Learning of Image Manifolds by Semidefinite Programming,” IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC, 2004.
[11] J. Yang, D. Zhang, A. Frangi, and J. Yang, “Two-dimensional PCA: a new approach to appearance-based face representation and recognition,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 26, no. 1, 2004.
[12] J. Ye, R. Janardan, and Q. Li, “Two-Dimensional Linear Discriminant Analysis,” Advances in Neural Information Processing Systems 17, 2004.
Query By Committee Made Real Ran Gilad-Bachrach†♦ Amir Navot‡ Naftali Tishby†‡ † School of Computer Science and Engineering ‡ Interdisciplinary Center for Neural Computation The Hebrew University, Jerusalem, Israel. ♦ Intel Research

Abstract

Training a learning algorithm is a costly task. A major goal of active learning is to reduce this cost. In this paper we introduce a new algorithm, KQBC, which is capable of actively learning large-scale problems by using selective sampling. The algorithm overcomes the costly sampling step of the well-known Query By Committee (QBC) algorithm by projecting onto a low-dimensional space. KQBC also enables the use of kernels, providing a simple way of extending QBC to the non-linear scenario. Sampling the low-dimensional space is done using the hit and run random walk. We demonstrate the success of this novel algorithm by applying it to both artificial and real-world problems.

1 Introduction

Stone's celebrated theorem proves that given a large enough training sequence, even naive algorithms such as k-nearest neighbors can be optimal. However, collecting large training sequences poses two main obstacles. First, collecting these sequences is a lengthy and costly task. Second, processing large datasets requires enormous resources. The selective sampling framework [1] suggests permitting the learner some control over the learning process. In this way, the learner can collect a short and informative training sequence. This is done by generating a large set of unlabeled instances and allowing the learner to select the instances to be labeled. The Query By Committee algorithm (QBC) [2] was the inspiration behind many algorithms in the selective sampling framework [3, 4, 5]. QBC is a simple yet powerful algorithm. During learning it maintains a version space, the space of all the classifiers which are consistent with all the previously labeled instances.
Whenever an unlabeled instance is available, QBC selects two random hypotheses from the version space and only queries for the label of the new instance if the two hypotheses disagree. Freund et al. [6] proved that when certain conditions apply, QBC will reach a generalization error of ε using only O(log 1/ε) labels. QBC works in an online fashion where each instance is considered only once to decide whether to query for its label or not. This is significant when there are a large number of unlabeled instances. In this scenario, batch processing of the data is infeasible (see e.g. [7]). However, QBC was never implemented as is, since it requires the ability to sample hypotheses from the version space, a task for which all known methods require an unreasonable amount of time [8]. The algorithm we present in this paper uses the same skeleton as QBC, but replaces sampling from the high-dimensional version space by sampling from a low-dimensional projection of it. By doing so, we obtain an algorithm which can cope with large-scale problems and at the same time permits the use of kernels. Although the algorithm uses linear classifiers at its core, the use of kernels makes it much broader in scope. This new sampling method is presented in section 2. Section 3 gives a detailed description of the kernelized version, the Kernel Query By Committee (KQBC) algorithm. The last building block is a method for sampling from convex bodies. We suggest the hit and run [9] random walk for this purpose in section 4. A Matlab implementation of KQBC is available at http://www.cs.huji.ac.il/labs/learning/code/qbc. The empirical part of this work is presented in section 5. We demonstrate how KQBC works on two binary classification tasks. The first is a synthetic linear classification task. The second involves differentiating between male and female facial images. We show that in both cases, KQBC learns faster than Support Vector Machines (SVM) [10].
KQBC can be used to select a subsample to which SVM is applied. In our experiments, this method was superior to SVM; however, KQBC outperformed both. Related work: Many algorithms for selective sampling have been suggested in the literature. However, only a few of them have a theoretical justification. As already mentioned, QBC has a theoretical analysis. Two other notable algorithms are the greedy active learning algorithm [11] and the perceptron-based active learning algorithm [12]. The greedy active learning algorithm has the remarkable property of being close to optimal in all settings. However, it operates in a batch setting, where selecting the next query point requires reevaluation of the whole set of unlabeled instances. This is problematic when the dataset is large. The perceptron-based active learning algorithm, on the other hand, is extremely efficient in its computational requirements, but is restricted to linear classifiers since it requires the explicit use of the input dimension. Graepel et al. [13] presented a billiard walk in the version space as part of the Bayes Point Machine. Similar to the method presented here, the billiard walk is capable of sampling hypotheses from the version space when kernels are used. The method presented here has several advantages: it has better theoretical grounding and it is easier to implement.

2 A New Method for Sampling the Version-Space

The Query By Committee algorithm [2] provides a general framework that can be used with any concept class. Whenever a new instance is presented, QBC generates two independent predictions for its label by sampling two hypotheses from the version space1. If the two predictions differ, QBC queries for the label of the instance at hand (see algorithm 1). The main obstacle in implementing QBC is the need to sample from the version space (step 2b). It is not clear how to do this with reasonable computational complexity.
As is the case for most research in machine learning, we first focus on the class of linear classifiers and then extend the discussion by using kernels. In the linear case, the dimension of the version space is the input dimension, which is typically large for real-world problems. Thus direct sampling is practically impossible. We overcome this obstacle by projecting the version space onto a low-dimensional subspace. Assume that the learner has seen the labeled sample $S = \{(x_i, y_i)\}_{i=1}^k$, where $x_i \in \mathbb{R}^d$ and $y_i \in \{\pm 1\}$. The version space is defined to be the set of all classifiers which correctly classify all the instances seen so far:

$$V = \{w : \|w\| \le 1 \text{ and } \forall i\; y_i\,(w \cdot x_i) > 0\} \qquad (1)$$

Footnote 1: The version space is the collection of hypotheses that are consistent with previous labels.

Algorithm 1: Query By Committee [2]
Inputs: a concept class C and a probability measure ν defined over C.
The algorithm:
1. Let S ← ∅, V ← C.
2. For t = 1, 2, ...
   (a) Receive an instance x.
   (b) Let h1, h2 be two random hypotheses selected from ν restricted to V.
   (c) If h1(x) ≠ h2(x) then
       i. Ask for the label y of x.
       ii. Add the pair (x, y) to S.
       iii. Let V ← {c ∈ C : ∀(x, y) ∈ S, c(x) = y}.

QBC assumes a prior ν over the class of linear classifiers. The sample S induces a posterior over the class of linear classifiers which is the restriction of ν to V. Thus, the probability that QBC will query for the label of an instance x is exactly

$$2\, \Pr_{w \sim \nu|_V}[w \cdot x > 0]\; \Pr_{w \sim \nu|_V}[w \cdot x < 0] \qquad (2)$$

where ν|_V is the restriction of ν to V. From (2) we see that there is no need to explicitly select two random hypotheses. Instead, we can use any stochastic approach that will query for the label with the same probability as in (2). Furthermore, if we can sample ŷ ∈ {±1} such that

$$\Pr[\hat y = 1] = \Pr_{w \sim \nu|_V}[w \cdot x > 0] \qquad (3)$$

$$\Pr[\hat y = -1] = \Pr_{w \sim \nu|_V}[w \cdot x < 0] \qquad (4)$$

we can use it instead, by querying the label of x with probability 2 Pr[ŷ = 1] Pr[ŷ = −1].
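The filtering step implied by Eqns. (2)-(4) takes only a few lines. The following is a minimal Python illustration of ours, not the paper's code; `sample_hypothesis` is a hypothetical callable that is assumed to draw a random w from ν restricted to V:

```python
import numpy as np

def qbc_step(sample_hypothesis, x, rng):
    """One QBC filtering step: draw two hypotheses from the version space
    and query for the label of x iff they disagree.  This queries with
    probability 2 Pr[w.x > 0] Pr[w.x < 0], matching Eqn. (2)."""
    w1 = sample_hypothesis(rng)
    w2 = sample_hypothesis(rng)
    return np.sign(w1 @ x) != np.sign(w2 @ x)
```

For a sampler that is symmetric around the hyperplane through x (so Pr[w · x > 0] = 1/2), the query probability is 2 · (1/2) · (1/2) = 1/2, which is easy to check empirically.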
Based on this observation, we introduce a stochastic algorithm which returns ŷ with probabilities as specified in (3) and (4). This procedure can replace the sampling step in the QBC algorithm. Let $S = \{(x_i, y_i)\}_{i=1}^k$ be a labeled sample and let x be an instance for which we need to decide whether to query for its label or not. We denote by V the version space as defined in (1) and by T the space spanned by x_1, ..., x_k and x. QBC asks for two random hypotheses from V and queries for the label of x only if these two hypotheses predict different labels for x. Our procedure does the same thing, but instead of sampling the hypotheses from V we sample them from V ∩ T. One main advantage of this new procedure is that it samples from a space of low dimension and therefore its computational complexity is much lower. This is true since T is a space of dimension k + 1 at most, where k is the number of label queries QBC has made so far. Hence, the body V ∩ T is a low-dimensional convex body (from the definition of the version space V it follows that it is a convex body), and thus sampling from it can be done efficiently. The input dimension plays a minor role in the sampling algorithm. Another important advantage is that it allows us to use kernels, and therefore gives a systematic way to extend QBC to the non-linear scenario. The use of kernels is described in detail in section 3. The following theorem proves that sampling from V ∩ T indeed produces the desired results. It shows that if the prior ν (see algorithm 1) is uniform, then sampling hypotheses uniformly from V or from V ∩ T generates the same results.

Theorem 1. Let $S = \{(x_i, y_i)\}_{i=1}^k$ be a labeled sample and x an instance. Let the version space be $V = \{w : \|w\| \le 1 \text{ and } \forall i\; y_i\,(w \cdot x_i) > 0\}$ and let $T = \mathrm{span}(x, x_1, \ldots, x_k)$. Then

$$\Pr_{w \sim U(V)}[w \cdot x > 0] = \Pr_{w \sim U(V \cap T)}[w \cdot x > 0]$$

and

$$\Pr_{w \sim U(V)}[w \cdot x < 0] = \Pr_{w \sim U(V \cap T)}[w \cdot x < 0]$$

where U(·) denotes the uniform distribution.
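Theorem 1 can be checked numerically on a toy example of our own construction: one labeled instance x_1 = e_1 with label +1 and query point x = (e_1 + e_2)/√2, so that T is the e_1-e_2 plane. The rejection-sampling scheme below is ours, used purely for verification, not the paper's sampling method:

```python
import numpy as np

def estimate_p(points, x):
    """Fraction of sampled hypotheses that label x positively."""
    return np.mean(points @ x > 0)

rng = np.random.default_rng(5)
x1 = np.array([1.0, 0.0, 0.0])                   # labeled instance, label +1
x = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)     # new instance

# Uniform samples from V = {w in the unit ball of R^3 : w . x1 > 0}
W = rng.standard_normal((200000, 3))
W *= (rng.random(len(W)) ** (1 / 3) / np.linalg.norm(W, axis=1))[:, None]
WV = W[W @ x1 > 0]

# Uniform samples from V ∩ T: the unit disk in the e1-e2 plane, same constraint
D = rng.standard_normal((200000, 2))
D *= (rng.random(len(D)) ** (1 / 2) / np.linalg.norm(D, axis=1))[:, None]
WT = np.hstack([D, np.zeros((len(D), 1))])
WT = WT[WT @ x1 > 0]

pV, pT = estimate_p(WV, x), estimate_p(WT, x)    # should agree, per Theorem 1
```

In this configuration both probabilities equal (π − π/4)/π = 3/4 analytically, since the sign constraints depend only on the direction of w.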
The proof of this theorem is given in the supplementary material [14].

3 Sampling with Kernels

In this section we show how the new sampling method presented in section 2 can be used together with kernels. QBC uses the random hypotheses for one purpose alone: to check the labels they predict for instances. In our new sampling method the hypotheses are sampled from V ∩ T, where T = span(x, x_1, ..., x_k). Hence, any hypothesis is represented by w ∈ V ∩ T, which has the form

$$w = \alpha_0\, x + \sum_{j=1}^{k} \alpha_j\, x_j \qquad (5)$$

The label w assigns to an instance x′ is

$$w \cdot x' = \Big(\alpha_0\, x + \sum_{j=1}^{k} \alpha_j\, x_j\Big) \cdot x' = \alpha_0\,(x \cdot x') + \sum_{j=1}^{k} \alpha_j\,(x_j \cdot x') \qquad (6)$$

Note that (6) uses only inner products, hence we can use kernels. Using these observations, we can sample a hypothesis by sampling α_0, ..., α_k and defining w as in (5). However, since the x_i's do not form an orthonormal basis of T, sampling the α's uniformly is not equivalent to sampling the w's uniformly. We overcome this problem by using an orthonormal basis of T. The following lemma shows a possible way in which an orthonormal basis for T can be computed when only inner products are used. The method presented here does not make use of the fact that we can build this basis incrementally.

Lemma 1. Let x_0, ..., x_k be a set of vectors, let T = span(x_0, ..., x_k) and let G = (g_{i,j}) be the Gram matrix such that g_{i,j} = x_i · x_j. Let λ_1, ..., λ_r be the non-zero eigenvalues of G with the corresponding eigenvectors γ_1, ..., γ_r. Then the vectors t_1, ..., t_r such that

$$t_i = \sum_{l=0}^{k} \frac{\gamma_i(l)}{\sqrt{\lambda_i}}\, x_l$$

form an orthonormal basis of the space T.

The proof of lemma 1 is given in the supplementary material [14]. This lemma is significant since the basis t_1, ..., t_r enables us to sample from V ∩ T using simple techniques. Note that a vector w ∈ T can be expressed as $w = \sum_{i=1}^r \alpha(i)\, t_i$. Since the t_i's form an orthonormal basis, ‖w‖ = ‖α‖.
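Lemma 1 translates directly into code. The small NumPy sketch below (ours, not the authors' implementation) returns the expansion coefficients of the t_i in terms of the x_l:

```python
import numpy as np

def gram_orthonormal_basis(G, tol=1e-10):
    """From the Gram matrix G (g_ij = x_i . x_j), return C such that
    C[i, l] = gamma_i(l) / sqrt(lambda_i), so that t_i = sum_l C[i, l] x_l
    form an orthonormal basis of T = span(x_0, ..., x_k), as in Lemma 1."""
    lam, gamma = np.linalg.eigh(G)      # eigenpairs of the Gram matrix
    keep = lam > tol                    # drop (numerically) zero eigenvalues
    return (gamma[:, keep] / np.sqrt(lam[keep])).T
```

Orthonormality can be verified from the Gram matrix alone: ⟨t_i, t_j⟩ = γ_i^T G γ_j / √(λ_i λ_j) = λ_j δ_ij / √(λ_i λ_j) = δ_ij.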
Furthermore, we can check the label w assigns to x_j by

$$w \cdot x_j = \sum_i \alpha(i)\,(t_i \cdot x_j) = \sum_{i,l} \frac{\alpha(i)\,\gamma_i(l)}{\sqrt{\lambda_i}}\,(x_l \cdot x_j)$$

which is a function of the Gram matrix. Therefore, sampling from V ∩ T boils down to the problem of sampling from convex bodies, where instead of sampling a vector directly we sample the coefficients of the orthonormal basis t_1, ..., t_r. There are several methods for generating the final hypothesis to be used in the generalization phase. In the experiments reported in section 5 we randomly selected a single hypothesis from V ∩ T and used it to make all predictions, where V is the version space at the time when the learning terminated and T is the span of all instances for which KQBC queried for a label during the learning process.

4 Hit and Run

Hit and run [9] is a method of sampling from a convex body K using a random walk. Let z ∈ K. A single step of hit and run begins by choosing a random point u from the unit sphere. Afterwards the algorithm moves to a random point selected uniformly from l ∩ K, where l is the line passing through z and z + u. Hit and run has several advantages over other random walks for sampling from convex bodies. First, its stationary distribution is indeed the uniform distribution, it mixes fast [9], and it does not require a "warm" starting point [15]. What makes it especially suitable for practical use is the fact that it does not require any parameter tuning other than the number of random steps. It is also very easy to implement. Current proofs [9, 15] show that O*(d^3) steps are needed for the random walk to mix. However, the constants in these bounds are very large; in practice hit and run mixes much faster than that. We used it to sample from the body V ∩ T. The number of steps we used was very small, ranging from a couple of hundred to a couple of thousand. Our empirical study shows that this suffices to obtain impressive results.
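A minimal hit-and-run sketch is given below. This is our own illustration, not KQBC's Matlab implementation: it assumes only a membership oracle for the convex body, and both the bisection search for the chord endpoints and the bound `t_max` on the body's extent are our implementation choices.

```python
import numpy as np

def chord_end(inside, z, u, t_max=1e3, iters=60):
    """Largest t with z + t*u still inside the body, found by bisection
    (assumes the body lies within distance t_max of z along u)."""
    lo, hi = 0.0, t_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if inside(z + mid * u):
            lo = mid
        else:
            hi = mid
    return lo

def hit_and_run(inside, z, n_steps, rng):
    """Hit-and-run random walk: pick a uniform direction on the sphere,
    then move to a uniform point on the chord through z in that direction."""
    for _ in range(n_steps):
        u = rng.standard_normal(len(z))
        u /= np.linalg.norm(u)                       # uniform direction
        t = rng.uniform(-chord_end(inside, z, -u), chord_end(inside, z, u))
        z = z + t * u
    return z
```

By convexity, every point on the located chord is inside the body, so the walk never leaves it; the number of steps plays the role of the single tuning parameter mentioned above.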
5 Empirical Study

In this section we present the results of applying our new kernelized version of the query by committee (KQBC) to two learning tasks. The first task requires classification of synthetic data while the second is a real-world problem.

5.1 Synthetic Data

In our first experiment we studied the task of learning a linear classifier in a d-dimensional space. The target classifier is the vector w* = (1, 0, ..., 0); thus the label of an instance x ∈ ℝ^d is the sign of its first coordinate. The instances were normally distributed, N(μ = 0, Σ = I_d). In each trial we used 10000 unlabeled instances and let KQBC select the instances for which to query for labels. We also applied a Support Vector Machine (SVM) to the same data in order to demonstrate the benefit of using active learning. The linear kernel was used for both KQBC and SVM. Since SVM is a passive learner, SVM was trained on prefixes of the training data of different sizes. The results are presented in figure 1. The difference between KQBC and SVM is notable. When both are applied to a 15-dimensional linear discrimination problem (figure 1b), SVM and KQBC have an error rate of ~6% and ~0.7% respectively after 120 labels. After such a short training sequence the difference is of an order of magnitude. The same qualitative results appear for all problem sizes. As expected, the generalization error of KQBC decreases exponentially fast as the number of queries is increased, whereas the generalization error of SVM decreases only at an inverse-polynomial rate (the rate is O*(1/k), where k is the number of labels). This should not come as a surprise since Freund et al. [6] proved that this is the expected behavior. Note also that the bound of 50 · 2^{−0.67k/d} on the generalization error that was proved in [6] was replicated in our experiments (figure 1c).
[Figure 1: Results on the synthetic data. The generalization error (y-axis, in percent, logarithmic scale) versus the number of queries (x-axis). Plots (a), (b) and (c) show the synthetic task in 5-, 15- and 45-dimensional spaces respectively, comparing Kernel Query By Committee against Support Vector Machine and the bounds 48 · 2^{−0.9k/5}, 53 · 2^{−0.76k/15} and 50 · 2^{−0.67k/45}. The results presented here are averaged over 50 trials. Note that the error rate of KQBC decreases exponentially fast. Recall that [6] proved a bound of 50 · 2^{−0.67k/d} on the generalization error, where k is the number of queries and d is the dimension.]

5.2 Face Images Classification

The learning algorithm was then applied in a more realistic setting. In the second task we used the AR face images dataset [16]. The people in these images are wearing different accessories, have different facial expressions, and the faces are lit from different directions. We selected a subset of 1456 images from this dataset. Each image was converted into grayscale and re-sized to 85×60 pixels, i.e. each image was represented as a 5100-dimensional vector. See figure 2 for sample images. The task was to distinguish male and female images. For this purpose we split the data into a training sequence of 1000 images and a test sequence of 456 images. To test statistical significance we repeated this process 20 times, each time splitting the dataset into training and testing sequences. We applied both KQBC and SVM to this dataset.
We used the Gaussian kernel $K(x_1, x_2) = \exp\!\big(-\|x_1 - x_2\|^2 / 2\sigma^2\big)$ with σ = 3500, which is the value favored by SVM. The results are presented in figure 3. It is apparent from figure 3 that KQBC outperforms SVM. When the budget allows for 100-140 labels, KQBC has an error rate 2-3 percent lower than that of SVM. When 140 labels are used, KQBC outperforms SVM by 3.6% on average. This difference is significant, as in 90% of the trials KQBC outperformed SVM by more than 1%. In one of the cases, KQBC was 11% better.

[Figure 2: Examples of face images used for the face recognition task.]

[Figure 3: The generalization error of KQBC and SVM for the faces dataset (averaged over 20 trials). The generalization error (y-axis) vs. the number of queries (x-axis) is compared for KQBC (solid) and SVM (dashed). When SVM was applied solely to the instances selected by KQBC (dotted line), the results are better than SVM but worse than KQBC.]

We also used KQBC as an active selection method for SVM. We trained SVM over the instances selected by KQBC. The generalization error obtained by this combined scheme was better than the passive SVM but worse than KQBC. In figure 4 we see the last images for which KQBC queried for labels. It is apparent that the selection made by KQBC is non-trivial. All the images are either highly saturated or partly covered by scarves and sunglasses. We conclude that KQBC indeed performs well even when kernels are used.

6 Summary and Further Study

In this paper we present a novel version of the QBC algorithm. This novel version is both efficient and rigorous. The time-complexity of our algorithm depends solely on the number of queries made and not on the input dimension or the VC-dimension of the class.
Furthermore, our technique only requires inner products of the labeled data points; thus it can be implemented with kernels as well. We showed a practical implementation of QBC using kernels and the hit and run random walk which is very close to the "provable" version. We conducted a couple of experiments with this novel algorithm. In all our experiments, KQBC outperformed SVM significantly. However, this experimental study needs to be extended. In the future, we would like to compare our algorithm with other active learning algorithms, over a variety of datasets.

[Figure 4: Images selected by KQBC. The last six faces for which KQBC queried for a label. Note that three of the images are saturated and that two of these are wearing a scarf that covers half of their faces.]

References

[1] D. Cohn, L. Atlas, and R. Ladner. Training connectionist networks with queries and selective sampling. Advances in Neural Information Processing Systems 2, 1990.
[2] H. S. Seung, M. Opper, and H. Sompolinsky. Query by committee. Proc. of the Fifth Workshop on Computational Learning Theory, pages 287–294, 1992.
[3] C. Campbell, N. Cristianini, and A. Smola. Query learning with large margin classifiers. In Proc. 17th International Conference on Machine Learning (ICML), 2000.
[4] S. Tong. Active Learning: Theory and Applications. PhD thesis, Stanford University, 2001.
[5] G. Tur, R. Schapire, and D. Hakkani-Tür. Active learning for spoken language understanding. In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, 2003.
[6] Y. Freund, H. Seung, E. Shamir, and N. Tishby. Selective sampling using the query by committee algorithm. Machine Learning, 28:133–168, 1997.
[7] H. Mamitsuka and N. Abe. Efficient data mining by active learning. In Progress in Discovery Science, pages 258–267, 2002.
[8] R. Bachrach, S. Fine, and E. Shamir. Query by committee, linear separation and random walks. Theoretical Computer Science, 284(1), 2002.
[9] L. Lovász and S. Vempala.
Hit and run is fast and fun. Technical Report MSR-TR-2003-05, Microsoft Research, 2003.
[10] B. Boser, I. Guyon, and V. Vapnik. Optimal margin classifiers. In Fifth Annual Workshop on Computational Learning Theory, pages 144–152, 1992.
[11] S. Dasgupta. Analysis of a greedy active learning strategy. In Neural Information Processing Systems (NIPS), 2004.
[12] S. Dasgupta, A. T. Kalai, and C. Monteleoni. Analysis of perceptron-based active learning. In Proceedings of the 18th Annual Conference on Learning Theory (COLT), 2005.
[13] R. Herbrich, T. Graepel, and C. Campbell. Bayes point machines. Journal of Machine Learning Research, 1:245–279, 2001.
[14] R. Gilad-Bachrach, A. Navot, and N. Tishby. Query by committee made real - supplementary material. http://www.cs.huji.ac.il/~ranb/kqcb supp.ps.
[15] L. Lovász and S. Vempala. Hit-and-run from a corner. In Proc. of the 36th ACM Symposium on the Theory of Computing (STOC), 2004.
[16] A.M. Martinez and R. Benavente. The AR face database. Technical report, CVC Tech. Rep. #24, 1998.
| 2005 | 44 | 2,860 |
An Application of Markov Random Fields to Range Sensing James Diebel and Sebastian Thrun Stanford AI Lab Stanford University, Stanford, CA 94305 Abstract This paper describes a highly successful application of MRFs to the problem of generating high-resolution range images. A new generation of range sensors combines the capture of low-resolution range images with the acquisition of registered high-resolution camera images. The MRF in this paper exploits the fact that discontinuities in range and coloring tend to co-align. This enables it to generate high-resolution, low-noise range images by integrating regular camera images into the range data. We show that by using such an MRF, we can substantially improve over existing range imaging technology. 1 Introduction In recent years, there has been an enormous interest in developing technologies for measuring range. The set of commercially available technologies includes passive stereo with two or more cameras, active stereo, triangulating light stripers, millimeter wavelength radar, and scanning and flash lidar. In the low-cost arena, systems such as the Swiss Ranger and the CanestaVision sensors provide means to acquire low-res range data along with passive camera images. Both of these devices capture high-res visual images along with lower-res depth information. This is the case for a number of devices at all price ranges, including the highly-praised range camera by 3DV Systems. This paper addresses a single shortcoming that (with the exception of stereo) is shared by most active range acquisition devices: namely, that range is captured at much lower resolution than images. This raises the question of whether we can turn a low-resolution depth imager into a high-resolution one by exploiting conventional camera images. A positive answer to this question would significantly advance the field of depth perception. Yet we lack techniques to fuse high-res conventional images with low-res depth images.
This paper applies graphical models to the problem of fusing low-res depth images with high-res camera images. Specifically, we propose a Markov Random Field (MRF) method for integrating both data sources. The intuition behind the MRF is that depth discontinuities in a scene often co-occur with color or brightness changes within the associated camera image. Since the camera image is commonly available at much higher resolution, this insight can be used to enhance the resolution and accuracy of the depth image. Our approach performs this data integration using a multi-resolution MRF, which ties together image and range data. The mode of the probability distribution defined by the MRF provides us with a high-res depth map. Because we are only interested in finding the mode, we can apply fast optimization techniques to the MRF inference problem, such as a conjugate gradient algorithm. This approach leads to a high-res depth map within seconds, increasing the resolution of our depth sensor by an order of magnitude while improving local accuracy. To back up this claim, we provide several example results obtained using a low-res laser range finder paired with a conventional point-and-shoot camera. While none of the modeling or inference techniques in this paper are new, we believe that this paper provides a significant application of graphical modeling techniques to a problem that can dramatically alter an entire growing industry.

Figure 1: The MRF is composed of 5 node types. The measurements are mapped to two types of variables: the range measurement variables labeled z, and the image pixel variables labeled x. The density of image pixels is larger than that of the range measurements. The reconstructed range nodes, labeled y, are unobservable, but their density matches that of the image pixels. Auxiliary nodes labeled w and u mediate the information from the image and the depth map, as described in the text.

2 The Image-Range MRF

Figure 1 shows the MRF designed for our task.
The input to the MRF occurs at two layers, through the variables labeled $x_i$ and the variables labeled $z_i$. The variables $x_i$ correspond to the image pixels, and their values are the three-dimensional RGB value of each pixel. The variables $z_i$ are the range measurements. The range measurements are sampled much less densely than the image pixels, as indicated in this figure. The key variables in this MRF are the ones labeled $y$, which model the reconstructed range at the same resolution as the image pixels. These variables are unobservable. Additional nodes labeled $u$ and $w$ leverage the image information into the estimated depth map $y$. Specifically, the MRF is defined through the following potentials:

1. The depth measurement potential is of the form
$$\Psi = \sum_{i \in L} k\,(y_i - z_i)^2 \quad (1)$$
Here $L$ is the set of indices for which a depth measurement is available, and $k$ is a constant weight placed on the depth measurements. This potential measures the quadratic distance between the estimated range in the high-res grid $y$ and the measured range in the variables $z$, where available.

2. A depth smoothness prior is expressed by a potential of the form
$$\Phi = \sum_i \sum_{j \in N(i)} w_{ij}\,(y_i - y_j)^2 \quad (2)$$
Here $N(i)$ is the set of nodes adjacent to $i$. $\Phi$ is a weighted quadratic distance between neighboring nodes.

3. The weighting factors $w_{ij}$ are a key element, in that they provide the link to the image layer in the MRF. Each $w_{ij}$ is a deterministic function of the corresponding two adjacent image pixels, calculated as follows:
$$w_{ij} = \exp(-c\,u_{ij}) \quad (3)$$
$$u_{ij} = \|x_i - x_j\|_2^2 \quad (4)$$
Here $c$ is a constant that quantifies how unwilling we are to have smoothing occur across edges in the image.

The resulting MRF is now defined through the potentials $\Psi$ and $\Phi$. The conditional distribution over the target variables $y$ is given by an expression of the form
$$p(y \mid x, z) = \frac{1}{Z}\exp\left(-\frac{1}{2}(\Psi + \Phi)\right) \quad (5)$$
where $Z$ is a normalizer (partition function).
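As an illustrative sketch (our own toy construction, not the authors' implementation), the potentials (1)-(4) and the mode of (5) can be reproduced on a 1-D chain of nodes. Since the log-posterior is quadratic in y, its mode is the solution of a sparse linear system, which we hand to a conjugate gradient solver. The function names and the constants k, c below are assumptions for this demo:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

def mrf_energy(y, z, mask, x, k=10.0, c=5.0):
    """Evaluate Psi + Phi of Eqs. (1)-(4) on a 1-D chain of nodes.

    y: candidate depth values; z: depth measurements (valid where mask is
    True, i.e. the set L); x: image intensities on the same grid."""
    psi = k * np.sum((y[mask] - z[mask]) ** 2)     # Eq. (1)
    u = (x[1:] - x[:-1]) ** 2                      # Eq. (4)
    w = np.exp(-c * u)                             # Eq. (3)
    phi = np.sum(w * (y[1:] - y[:-1]) ** 2)        # Eq. (2)
    return psi + phi

def reconstruct_depth(z, mask, x, k=10.0, c=5.0):
    """Mode of p(y | x, z): the quadratic energy yields a linear system
    A y = b, solved here by conjugate gradients."""
    n = len(x)
    w = np.exp(-c * (x[1:] - x[:-1]) ** 2)
    # Weighted chain Laplacian from the smoothness term Phi
    main = np.zeros(n)
    main[:-1] += w
    main[1:] += w
    A = diags([main, -w, -w], [0, -1, 1], format="csr") \
        + diags(k * mask.astype(float))            # measurement term Psi
    b = k * np.where(mask, z, 0.0)
    y, _ = cg(A, b, x0=np.where(mask, z, np.mean(z[mask])))
    return y
```

Note how sharp image edges make the corresponding w_ij small, so smoothing is suppressed exactly where the image suggests a depth discontinuity.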
3 Optimization

Unfortunately, computing the full posterior is impossible for such an MRF, not least because the MRF may possess many millions of nodes; even loopy belief propagation [19] requires enormous time for convergence. Instead, for the depth reconstruction problem we shall be content with computing the mode of the posterior. Finding the mode of the log-posterior is, of course, a least-squares optimization problem, which we solve with the well-known conjugate gradient (CG) algorithm [12]. A typical single-image optimization with $2 \cdot 10^5$ nodes takes about a second on a modern computer. The details of the CG algorithm are omitted for brevity, but can be found in contemporary texts. The resulting algorithm for probable depth image reconstruction is now remarkably simple: simply set y[0] by bilinear interpolation of z, and then iterate the CG update rule. The result is a probable reconstruction of the depth map at the same resolution as the camera image.

4 Results

Our experiments were performed with a SICK sweeping laser range finder and a Canon consumer digital camera with 5 megapixels per image. Both were mounted on a rotating platform controlled by a servo actuator. This configuration allows us to survey an entire room from a consistent vantage point and with known camera and laser positions at all times. The output of this system is a set of pre-aligned laser range measurements and camera images. Figure 2 shows a scan of a bookshelf in our lab. The top row contains several views of the raw measurements and the bottom row is the output of the MRF. The latter is clearly much sharper and less noisy; many features that are smaller than the resolution of the laser scanner are pulled out by the camera image. Figure 5 shows the same scene from much further back. A more detailed look is taken in Figure 3. Here we examine the painted metal door frame of an office.
The detailed structure is completely invisible in the raw laser scan but is easily drawn out when the image data is incorporated. It is notable that traditional mesh fairing algorithms would not be able to recover this fine structure, as there is simply insufficient evidence of it in the range data alone. Specifically, when running our MRF using a fixed value for $w_{ij}$, which effectively decouples the range image and the depth image, the depth reconstruction leads to a model that is either overly noisy (for $w_{ij} = 1$) or smooths out the edge features (for $w_{ij} = 5$). Our approach clearly recovers those corners, thanks to the use of the camera image.

Figure 2: Example result of our MRF approach. Panels (a-c) show the raw data: the low-res depth map, a 3D model constructed from this depth map, and the same model with image texture superimposed. Panels (d-f) show the results of our algorithm: the MRF high-res depth map, the MRF high-res 3D model, and the image mapped onto the 3D model. The depth map is now high-resolution, as is the 3D model. The 3D rendering is a substantial improvement over the raw sensor data; in fact, many small details are now visible.

Finally, in Fig. 4 we give one more example of a shipping crate next to a white wall. The coarse texture of the wooden surface is correctly inferred, in contrast to the smooth white wall. This brings up the obvious problem that sharp color gradients do frequently occur on smooth surfaces; take, for example, posters. While this fact can sometimes lead to falsely-textured surfaces, it has been our experience that these flaws are often unnoticeable and certainly no worse than the original scan. Clearly, the reconstruction of such depth maps is an ill-posed problem, and our approach generates a high-res model that is still significantly better than the original data. Notice, however, that the background wall is recovered accurately, and the corner of the room is visually enhanced.

Figure 3: The importance of the image information in depth recovery is illustrated in this figure. It shows a part of a door frame, for which a coarse depth map and a fine-grained image are available. Panel (a) shows the raw 3D model, with and without color from the image. The renderings in (b) show the results of our MRF when color is entirely ignored, for two different fixed values of the weights $w_{ij}$. The images in (c) are the results of our approach, which clearly retains the sharp corner of the door frame.

5 Related Work

One of the primary acquisition techniques for depth is stereo. A good survey and comparison of stereo algorithms is due to [14]. Our algorithm does not apply to stereo vision, since by definition the resolution of the image and the inferred depth map are equivalent.

Figure 4: This example illustrates that the amount of smoothing in the range data depends on the image texture. Panel (a) shows the 3D model based on the raw range data, with and without texture; panel (b) shows the refined and super-resolved model generated by our MRF. On the left is a wooden box with an unsmooth surface that causes significant color variations. The 3D model generated from the MRF provides relatively little smoothing. In the background is a white wall with almost no color variation. Here our approach smooths the mesh significantly; in fact, it enhances the visibility of the room corner.

Passive stereo, in which the sensor does not carry its own light source, is unable to estimate ranges in the absence of texture (e.g., when imaging a featureless wall). Active stereo techniques supply their own light [4].
However, those techniques differ in characteristics from laser-based systems to an extent that renders them practically inapplicable for many applications (most notably: long-range acquisition, where time-of-flight techniques are an order of magnitude more accurate than triangulation techniques, and bright-light outdoor environments). We remark that Markov Random Fields have become a defining methodology in stereo reconstruction [15], along with layered EM-style methods [2, 16]; see the comparison in [14]. Similar work due to [20] relies on a different set of image cues to improve stereo shape estimates. In particular, learned regression coefficients are used to predict the band-passed shape of a scene from a band-passed image of that scene. The regression coefficients are learned from laser-stripe-scanned reference models with registered images.

Figure 5: 3D model of a larger indoor environment, after applying our MRF.

For range images, surfaces, and point clouds, there exists a large literature on smoothing while preserving features such as edges. This includes work on diffusion processes [6], frequency-domain filtering [17], and anisotropic diffusion [5]; see also [3] and [1]. Most recently, [10] proposed an efficient non-iterative technique for feature-preserving mesh smoothing, [9] adapted bilateral filtering for application to mesh denoising, and [7] developed anisotropic MRF techniques. None of these techniques, however, integrates high-resolution images to guide the smoothing process. Instead, they all operate on monochromatic 3D surfaces. Our work can be viewed as generating super-resolution. Super-resolution techniques have long been popular in the computer vision field [8] and in aerial photogrammetry [11]. Here Bayesian techniques are often brought to bear for integrating multiple images into a single image of higher resolution. None of these techniques deal with range data.
Finally, multiple range scans are often integrated into a single model [13, 18], yet none of these techniques involve image data.

6 Conclusion

We have presented a Markov Random Field that integrates high-res image data into low-res range data, to recover range data at the same resolution as the image data. This approach is specifically aimed at a new wave of commercially available sensors, which provide range at lower resolution than image data. The significance of this work lies in the results. We have shown that our approach can truly fill the resolution gap between range and images, and use image data to effectively boost the resolution of a range finder. While none of the techniques used here are new (even though CG is usually not applied for inference in MRFs), we believe this is the first application of MRFs to multimodal data integration. A large number of scientific fields would benefit from better range sensing; the present approach provides a solution that endows low-cost range finders with unprecedented resolution and accuracy. References [1] C.L. Bajaj and G. Xu. Anisotropic diffusion of surfaces and functions on surfaces. In Proceedings of SIGGRAPH, pages 4–32, 2003. [2] S. Baker, R. Szeliski, and P. Anandan. A layered approach to stereo reconstruction. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 434–438, Santa Barbara, CA, 1998. [3] U. Clarenz, U. Diewald, and M. Rumpf. Anisotropic geometric diffusion in surface processing. In Proceedings of the IEEE Conference on Visualization, pages 397–405, 2000. [4] J. Davis, R. Ramamoorthi, and S. Rusinkiewicz. Spacetime stereo: A unifying framework for depth from triangulation. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), 2003. [5] M. Desbrun, M. Meyer, P. Schröder, and A. Barr. Anisotropic feature-preserving denoising of height fields and bivariate data. In Proceedings Graphics Interface, Montreal, Quebec, 2000. [6] M.
Desbrun, M. Meyer, P. Schröder, and A. H. Barr. Implicit fairing of irregular meshes using diffusion and curvature flow. In Proceedings of SIGGRAPH, 1999. [7] J. Diebel, S. Thrun, and M. Brüning. A Bayesian method for probable surface reconstruction and decimation. IEEE Transactions on Graphics, 2005. To appear. [8] M. Elad and A. Feuer. Restoration of single super-resolution image from several blurred. IEEE Transactions on Image Processing, 6(12):1646–1658, 1997. [9] S. Fleishman, I. Drori, and D. Cohen-Or. Bilateral mesh denoising. In Proceedings of SIGGRAPH, pages 950–953, 2003. [10] T.R. Jones, F. Durand, and M. Desbrun. Non-iterative, feature-preserving mesh smoothing. In Proceedings of SIGGRAPH, pages 943–949, 2003. [11] I. K. Jung and S. Lacroix. High resolution terrain mapping using low altitude aerial stereo imagery. In Proceedings of the International Conference on Computer Vision (ICCV), Nice, France, 2003. [12] W. H. Press. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, Cambridge; New York, 1988. [13] S. Rusinkiewicz and M. Levoy. Efficient variants of the ICP algorithm. In Proc. Third International Conference on 3D Digital Imaging and Modeling (3DIM), Quebec City, Canada, 2001. IEEE Computer Society. [14] D. Scharstein and R. Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision, 47(1-3):7–42, 2002. [15] J. Sun, H.-Y. Shum, and N.-N. Zheng. Stereo matching using belief propagation. IEEE Transactions on PAMI, 25(7), 2003. [16] R. Szeliski. Stereo algorithms and representations for image-based rendering. In Proceedings of the British Machine Vision Conference, Vol. 2, pages 314–328, 1999. [17] G. Taubin. A signal processing approach to fair surface design. In Proceedings of SIGGRAPH, pages 351–358, 1995. [18] S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. MIT Press, Cambridge, MA, 2005. [19] Y. Weiss and W.T. Freeman.
Correctness of belief propagation in Gaussian graphical models of arbitrary topology. Neural Computation, 13(10):2173–2200, 2001. [20] W. T. Freeman and A. Torralba. Shape recipes: Scene representations that refer to the image. In Advances in Neural Information Processing Systems (NIPS) 15, Cambridge, MA, 2003. MIT Press.
| 2005 | 45 | 2,861 |
Analysis of Spectral Kernel Design based Semi-supervised Learning Tong Zhang Yahoo! Inc. New York City, NY 10011 Rie Kubota Ando IBM T. J. Watson Research Center Yorktown Heights, NY 10598 Abstract We consider a framework for semi-supervised learning using spectral decomposition based unsupervised kernel design. This approach subsumes a class of previously proposed semi-supervised learning methods on data graphs. We examine various theoretical properties of such methods. In particular, we derive a generalization performance bound, and obtain the optimal kernel design by minimizing the bound. Based on the theoretical analysis, we are able to demonstrate why spectral kernel design based methods can often improve the predictive performance. Experiments are used to illustrate the main consequences of our analysis. 1 Introduction Spectral graph methods have been used both in clustering and in semi-supervised learning. This paper focuses on semi-supervised learning, where a classifier is constructed from both labeled and unlabeled training examples. Although previous studies showed that this class of methods works well for certain concrete problems (for example, see [1, 4, 5, 6]), there is no satisfactory theory demonstrating why (and under what circumstances) such methods should work. The purpose of this paper is to develop a more complete theoretical understanding of graph-based semi-supervised learning. In Theorem 2.1, we present a transductive formulation of kernel learning on graphs which is equivalent to supervised kernel learning. This new kernel learning formulation includes some previously proposed graph semi-supervised learning methods as special cases. A consequence is that we can view such graph-based semi-supervised learning methods as kernel design methods that utilize unlabeled data; the designed kernel is then used in the standard supervised learning setting.
This insight allows us to prove useful results concerning the behavior of graph-based semi-supervised learning from the more general view of spectral kernel design. Similar spectral kernel design ideas also appeared in [2]. However, they didn't present a graph-based learning formulation (Theorem 2.1 in this paper); nor did they study the theoretical properties of such methods. We focus on two issues for graph kernel learning formulations based on Theorem 2.1. First, we establish the convergence of graph-based semi-supervised learning (as the number of unlabeled data points increases). Second, we obtain a learning bound, which can be used to compare the performance of different kernels. This analysis gives insight into what good kernels are, and why graph-based spectral kernel design is often helpful in various applications. Examples are given to justify the theoretical analysis. Due to space limitations, proofs are not included in this paper.

2 Transductive Kernel Learning on Graphs

We shall start with notation for supervised learning. Consider the problem of predicting a real-valued output Y based on its corresponding input vector X. In the standard machine learning formulation, we assume that the data (X, Y) are drawn from an unknown underlying distribution D. Our goal is to find a predictor p(x) so that the expected true loss of p given below is as small as possible:
$$R(p(\cdot)) = \mathbb{E}_{(X,Y)\sim D}\, L(p(X), Y),$$
where we use $\mathbb{E}_{(X,Y)\sim D}$ to denote the expectation with respect to the true (but unknown) underlying distribution D. Typically, one needs to restrict the size of the hypothesis function family so that a stable estimate within the function family can be obtained from a finite number of samples. We are interested in learning in Hilbert spaces. For notational simplicity, we assume that there is a feature representation ψ(x) ∈ H, where H is a high (possibly infinite) dimensional feature space.
We denote ψ(x) by a column vector, so that the inner product in the Hilbert space H is the vector product. A linear classifier p(x) on H can be represented by a vector w ∈ H such that p(x) = w^T ψ(x). Let the training samples be (X_1, Y_1), ..., (X_n, Y_n). We consider the following regularized linear prediction method on H:
$$\hat p(x) = \hat w^T \psi(x), \qquad \hat w = \arg\min_{w\in H}\left[\frac{1}{n}\sum_{i=1}^n L(w^T\psi(X_i), Y_i) + \lambda w^T w\right]. \quad (1)$$
If H is an infinite dimensional space, then it is not feasible to solve (1) directly. A remedy is to use kernel methods. Given a feature representation ψ(x), we can define the kernel k(x, x') = ψ(x)^T ψ(x'). It is well known (the so-called representer theorem) that the solution of (1) can be represented as $\hat p(x) = \sum_{i=1}^n \hat\alpha_i k(X_i, x)$, where $[\hat\alpha_i]$ is given by
$$[\hat\alpha_i] = \arg\min_{[\alpha_i]\in R^n}\left[\frac{1}{n}\sum_{i=1}^n L\Big(\sum_{j=1}^n \alpha_j k(X_i, X_j),\, Y_i\Big) + \lambda\sum_{i,j=1}^n \alpha_i\alpha_j k(X_i, X_j)\right]. \quad (2)$$
The above formulations of kernel methods are standard. In the following, we present an equivalence of supervised kernel learning to a specific semi-supervised formulation. Although this representation is implicit in some earlier papers, the explicit form of this method is not well known. As we shall see later, this new kernel learning formulation is critical for analyzing a class of graph-based semi-supervised learning methods. In this framework, the data graph consists of nodes that are the data points X_j. The edge connecting two nodes X_i and X_j is weighted by k(X_i, X_j). The following theorem, which establishes the graph kernel learning formulation we will study in this paper, essentially implies that graph-based semi-supervised learning is equivalent to the supervised learning method which employs the same kernel. Theorem 2.1 (Graph Kernel Learning) Consider labeled data {(X_i, Y_i)}_{i=1,...,n} and unlabeled data X_j (j = n+1, ..., m). Consider real-valued vectors f = [f_1, . . .
, f_m]^T ∈ R^m, and the following semi-supervised learning method:
$$\hat f = \arg\inf_{f\in R^m}\left[\frac{1}{n}\sum_{i=1}^n L(f_i, Y_i) + \lambda f^T K^{-1} f\right], \quad (3)$$
where K (often called the gram matrix in kernel learning, or the affinity matrix in graph learning) is an m × m matrix with $K_{i,j} = k(X_i, X_j) = \psi(X_i)^T\psi(X_j)$. Let $\hat p$ be the solution of (1); then $\hat f_j = \hat p(X_j)$ for j = 1, ..., m.

The kernel gram matrix K is always positive semi-definite. However, if K is not full rank (singular), then the correct interpretation of $f^T K^{-1} f$ is $\lim_{\mu\to 0^+} f^T(K + \mu I_{m\times m})^{-1} f$, where $I_{m\times m}$ is the m × m identity matrix. If we start with a given kernel k and let $K = [k(X_i, X_j)]$, then a semi-supervised learning method of the form (3) is equivalent to the supervised method (1). It follows that with a formulation like (3), the only way to utilize unlabeled data is to replace K by a kernel $\bar K$ in (3), or k by $\bar k$ in (2), where $\bar K$ (or $\bar k$) depends on the unlabeled data. In other words, the only benefit of unlabeled data in this setting is to construct a good kernel based on unlabeled data. Some previously proposed graph-based semi-supervised learning methods employ the same formulation (3) with $K^{-1}$ replaced by the graph Laplacian operator L (which we will describe in Section 5). However, the equivalence of this formulation and supervised kernel learning (with kernel matrix $K = L^{-1}$) was not obtained in these earlier studies. This equivalence is important for good theoretical understanding, as we will see later in this paper. Moreover, by treating graph-based semi-supervised learning as unsupervised kernel design (see Figure 1), the scope of this paper is more general than graph Laplacian based methods.

Input: labeled data $[(X_i, Y_i)]_{i=1,...,n}$, unlabeled data $X_j$ (j = n+1, ..., m), shrinkage factors $s_j \ge 0$ (j = 1, ..., m), kernel function k(·, ·). Output: predictive values $\hat f'_j$ on $X_j$ (j = 1, ..., m). Form the kernel matrix K = [k(X_i, X_j)] (i, j = 1, . . .
, m). Compute the kernel eigen-decomposition $K = m\sum_{j=1}^m \mu_j v_j v_j^T$, where $(\mu_j, v_j)$ are eigenpairs of K/m ($v_j^T v_j = 1$). Modify the kernel matrix as $\bar K = m\sum_{j=1}^m s_j\mu_j v_j v_j^T$ (∗). Compute $\hat f' = \arg\min_{f\in R^m}\left[\frac{1}{n}\sum_{i=1}^n L(f_i, Y_i) + \lambda f^T \bar K^{-1} f\right]$.

Figure 1: Spectral kernel design based semi-supervised learning on graphs.

In Figure 1, we consider a general formulation of semi-supervised learning on a data graph through spectral kernel design. This is the method we will analyze in the paper. As a special case, we can let $s_j = g(\mu_j)$ in Figure 1, where g is a rational function; then $\bar K = g(K/m)K$. In this special case, we do not have to compute the eigen-decomposition of K. We therefore obtain a simpler algorithm, with the (∗) in Figure 1 replaced by
$$\bar K = g(K/m)K. \quad (4)$$
As mentioned earlier, the idea of using spectral kernel design has appeared in [2], although they didn't base their method on the graph formulation (3). However, we believe our analysis also sheds light on their methods. The semi-supervised learning method described in Figure 1 is useful only when $\hat f'$ is a better predictor than $\hat f$ in Theorem 2.1 (which uses the original kernel K), in other words, only when the new kernel $\bar K$ is better than K. In the next few sections, we will investigate the following issues concerning the theoretical behavior of this algorithm: (a) the limiting behavior of $\hat f'$ as m → ∞, that is, whether $\hat f'_j$ converges for each j; (b) the generalization performance of (3); (c) optimal kernel design by minimizing the generalization error, and its implications; (d) statistical models under which spectral kernel design based semi-supervised learning is effective.

3 The Limiting Behavior of Graph-based Semi-supervised Learning

We want to show that as m → ∞, the semi-supervised algorithm in Figure 1 is well-behaved; that is, $\hat f'_j$ converges as m → ∞. This is one of the most fundamental issues. Using the feature space representation, we have k(x, x') = ψ(x)^T ψ(x').
Therefore a change of kernel can be regarded as a change of feature mapping. In particular, we consider a feature transformation of the form $\bar\psi(x) = S^{1/2}\psi(x)$, where S is an appropriate positive semi-definite operator on H. The following result establishes an equivalent feature space formulation of the semi-supervised learning method in Figure 1.

Theorem 3.1 Use the notation of Figure 1, and assume k(x, x') = ψ(x)^T ψ(x'). Consider $S = \sum_{j=1}^m s_j u_j u_j^T$, where $u_j = \Psi v_j/\sqrt{m\mu_j}$ and $\Psi = [\psi(X_1), \ldots, \psi(X_m)]$; then $(\mu_j, u_j)$ is an eigenpair of $\Psi\Psi^T/m$. Let
$$\hat p'(x) = \hat w'^T S^{1/2}\psi(x), \qquad \hat w' = \arg\min_{w\in H}\left[\frac{1}{n}\sum_{i=1}^n L(w^T S^{1/2}\psi(X_i), Y_i) + \lambda w^T w\right].$$
Then $\hat f'_j = \hat p'(X_j)$ (j = 1, ..., m).

The asymptotic behavior of Figure 1 as m → ∞ can be easily understood from Theorem 3.1. In this case, we just replace $\Psi\Psi^T/m = \frac{1}{m}\sum_{j=1}^m \psi(X_j)\psi(X_j)^T$ by $E_X\,\psi(X)\psi(X)^T$. The spectral decomposition of $E_X\,\psi(X)\psi(X)^T$ corresponds to feature space PCA. It is clear that if S converges, then the feature space algorithm in Theorem 3.1 also converges. In general, S converges if the eigenvectors $u_j$ converge and the shrinkage factors $s_j$ are bounded. As a special case, we have the following result.

Theorem 3.2 Consider a sequence of data X_1, X_2, ... drawn from a distribution, with only the first n points labeled. Assume that as m → ∞, $\sum_{j=1}^m \psi(X_j)\psi(X_j)^T/m$ converges to $E_X\,\psi(X)\psi(X)^T$ almost surely, and g is a continuous function on the spectral range of $E_X\,\psi(X)\psi(X)^T$. Then in Figure 1 with (∗) given by (4) and kernel k(x, x') = ψ(x)^T ψ(x'), $\hat f'_j$ converges almost surely for each fixed j.

4 Generalization analysis on graphs

We study the generalization behavior of the graph-based semi-supervised learning algorithm (3), and use it to compare different kernels. We will then use this bound to justify the kernel design method given in Section 2. To measure the sample complexity, we consider m points (X_j, Y_j) for j = 1, ..., m. We randomly pick n distinct integers i_1, ..., i_n from {1, . . .
, m} uniformly (sampling without replacement), and regard them as the indices of the n labeled training data. We obtain predictive values $\hat f_j$ on the graph using the semi-supervised learning method (3) with the labeled data, and test on the remaining m − n data points. We are interested in the average predictive performance over all random draws.

Theorem 4.1 Consider (X_j, Y_j) for j = 1, ..., m. Assume that we randomly pick n distinct integers i_1, ..., i_n from {1, ..., m} uniformly (sampling without replacement), and denote the resulting set by Z_n. Let $\hat f(Z_n)$ be the semi-supervised learning method (3) using the training data in Z_n: $\hat f(Z_n) = \arg\min_{f\in R^m}\left[\frac{1}{n}\sum_{i\in Z_n} L(f_i, Y_i) + \lambda f^T K^{-1} f\right]$. If $|\frac{\partial}{\partial p}L(p, y)| \le \gamma$ and L(p, y) is convex with respect to p, then we have
$$E_{Z_n}\,\frac{1}{m-n}\sum_{j\notin Z_n} L(\hat f_j(Z_n), Y_j) \;\le\; \inf_{f\in R^m}\left[\frac{1}{m}\sum_{j=1}^m L(f_j, Y_j) + \lambda f^T K^{-1} f + \frac{\gamma^2\,\mathrm{tr}(K)}{2\lambda n m}\right].$$
The bound depends on the regularization parameter λ in addition to the kernel K. In order to compare different kernels, it is reasonable to compare the bound with the optimal λ for each K. That is, in addition to minimizing over f, we also minimize over λ on the right hand side of the bound. Note that in practice it is usually not difficult to find a nearly-optimal λ through cross validation, so it is reasonable to assume that we can choose the optimal λ in the bound. With the optimal λ, we obtain
$$E_{Z_n}\,\frac{1}{m-n}\sum_{j\notin Z_n} L(\hat f_j(Z_n), Y_j) \;\le\; \inf_{f\in R^m}\left[\frac{1}{m}\sum_{j=1}^m L(f_j, Y_j) + \frac{\gamma}{\sqrt{2n}}\sqrt{R(f, K)}\right],$$
where $R(f, K) = \mathrm{tr}(K/m)\, f^T K^{-1} f$ is the complexity of f with respect to the kernel K. If we define $\bar K$ as in Figure 1, then the complexity of a function $f = \sum_j \alpha_j v_j$ with respect to $\bar K$ is given by $R(f, \bar K) = \big(\sum_{j=1}^m s_j\mu_j\big)\big(\sum_{j=1}^m \alpha_j^2/(s_j\mu_j)\big)$. If we believe that a good approximate target function f can be expressed as $f = \sum_j \alpha_j v_j$ with $|\alpha_j| \le \beta_j$ for some known $\beta_j$, then based on this belief, the optimal choice of the shrinkage factor becomes $s_j = \beta_j/\mu_j$. That is, the kernel that optimizes the bound is $\bar K = \sum_j \beta_j v_j v_j^T$, where $v_j$ are normalized eigenvectors of K.
In this case, we have $R(f, \bar K) \le \big(\sum_j \beta_j\big)^2$. The eigenvalues of the optimal kernel are thus independent of K, and depend only on the spectral coefficient range $\beta_j$ of the approximate target function; there is no reason to believe that the eigenvalues $\mu_j$ of the original kernel K are proportional to the target spectral coefficient range. If we have some guess of the spectral coefficients of the target, then we may use this knowledge to obtain a better kernel. This justifies why spectral kernel design based algorithms can be potentially helpful (when we have some information on the target spectral coefficients). In practice, it is usually difficult to have a precise guess of $\beta_j$. However, for many application problems, we observe in practice that the eigenvalues of the kernel K decay more slowly than the target spectral coefficients. In this case, our analysis implies that we should use an alternative kernel with faster eigenvalue decay: for example, using $K^2$ instead of K. This has a dimension reduction effect; that is, we effectively project the data onto its principal components. The intuition is also quite clear: if the dimension of the target function is small (its spectral coefficients decay fast), then we should project the data onto those dimensions by reducing the remaining noisy dimensions (corresponding to fast kernel eigenvalue decay).

5 Spectral analysis: the effect of input noise

We provide a justification of why the spectral coefficients of the target function often decay faster than the eigenvalues of a natural kernel K. In essence, this is due to the fact that the input vector X is often corrupted with noise. Together with the results of the previous section, this means that in order to achieve optimal performance, we need to use a kernel with faster eigenvalue decay. We will demonstrate this phenomenon under a statistical model, using the feature space notation of Section 3. For simplicity, we assume that ψ(x) = x.
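As a quick numerical illustration of the preceding Section 4 argument (a toy construction of ours, not an experiment from the paper): for a kernel whose spectrum decays slowly and a target aligned with the leading eigenvector, replacing K by K²/m reduces the complexity R(f, K) that controls the bound.

```python
import numpy as np

def complexity(f, K):
    """R(f, K) = tr(K/m) * f^T K^{-1} f, the quantity in the bound."""
    m = K.shape[0]
    return np.trace(K) / m * (f @ np.linalg.solve(K, f))

# Toy kernel: eigenvalues of K/m decay slowly (mu_j = 2^-j); the target f
# is the leading eigenvector, so its spectral coefficients decay fast.
rng = np.random.default_rng(1)
m = 8
Q, _ = np.linalg.qr(rng.normal(size=(m, m)))   # random orthonormal basis
mu = 0.5 ** np.arange(m)
K = m * (Q * mu) @ Q.T                         # K = m * Q diag(mu) Q^T
f = Q[:, 0]

r1 = complexity(f, K)            # original kernel
r2 = complexity(f, K @ K / m)    # sharpened spectrum, as with Kbar = K^2/m
print(r1, r2)                    # r2 < r1: the modified kernel wins here
```

The gap grows as the kernel spectrum decays more slowly relative to the target's spectral coefficients, matching the dimension-reduction intuition above.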
We consider a two-class classification problem in $\mathbb{R}^\infty$ (with the standard 2-norm inner product), where the label Y = ±1. We first start with a noise-free model, in which the data can be partitioned into p clusters. Each cluster ℓ is composed of a single center point $\bar x_\ell$ (having zero variance) with label $\bar y_\ell = \pm 1$. In this model, assume that the centers are well separated, so that there is a weight vector $w_*$ such that $w_*^T w_* < \infty$ and $w_*^T \bar x_\ell = \bar y_\ell$. Without loss of generality, we may assume that $\bar x_\ell$ and $w_*$ belong to a p-dimensional subspace $V_p$. Let $V_p^\perp$ be its orthogonal complement. Assume now that the observed input data are corrupted with noise. We first generate a center index ℓ, and then noise δ (which may depend on ℓ). The observed input is the corrupted data $X = \bar x_\ell + \delta$, and the observed output is $Y = w_*^T \bar x_\ell$. In this model, letting $\ell(X_i)$ be the center corresponding to $X_i$, the observation can be decomposed as $X_i = \bar x_{\ell(X_i)} + \delta(X_i)$ and $Y_i = w_*^T \bar x_{\ell(X_i)}$. Given noise δ, we decompose it as $\delta = \delta_1 + \delta_2$, where $\delta_1$ is the orthogonal projection of δ onto $V_p$, and $\delta_2$ is the orthogonal projection of δ onto $V_p^\perp$. We assume that $\delta_1$ is a small noise component; the component $\delta_2$ can be large but has small variance in every direction.

Theorem 5.1 Consider the data generation model in this section, with observation $X = \bar x_\ell + \delta$ and $Y = w_*^T \bar x_\ell$. Assume that δ is conditionally zero-mean given ℓ: $\mathbf{E}_{\delta|\ell}\,\delta = 0$. Let $\mathbf{E}\, XX^T = \sum_j \mu_j u_j u_j^T$ be the spectral decomposition with decreasing eigenvalues $\mu_j$ ($u_j^T u_j = 1$). Then the following claims are valid:
• Let $\sigma_1^2 \ge \sigma_2^2 \ge \cdots$ be the eigenvalues of $\mathbf{E}\,\delta_2\delta_2^T$; then $\mu_j \ge \sigma_j^2$.
• If $\|\delta_1\|_2 \le b/\|w_*\|_2$, then $|w_*^T X_i - Y_i| \le b$.
• For all $t \ge 0$, $\sum_{j\ge 1} (w_*^T u_j)^2 \mu_j^{-t} \le w_*^T (\mathbf{E}\,\bar x_\ell \bar x_\ell^T)^{-t} w_*$.

Consider m points $X_1, \ldots, X_m$. Let $\Psi = [X_1, \ldots, X_m]$ and let $K = \Psi^T\Psi = m\sum_j \mu_j v_j v_j^T$ be the kernel spectral decomposition. Let $u_j = \Psi v_j/\sqrt{m\mu_j}$, $f_i = w_*^T X_i$, and $f = \sum_j \alpha_j v_j$. Then it is not difficult to verify that $\alpha_j = \sqrt{m\mu_j}\, w_*^T u_j$.
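The identity $\alpha_j = \sqrt{m\mu_j}\, w_*^T u_j$ follows from the singular value decomposition of Ψ and is easy to confirm on random data (a quick numerical check, not part of the original paper):

```python
import numpy as np

rng = np.random.default_rng(2)
m, dim = 40, 20
X = rng.standard_normal((dim, m))          # columns are the data points (Psi)
w = rng.standard_normal(dim)               # plays the role of w_*
K = X.T @ X                                # kernel Gram matrix K = Psi^T Psi
eigvals, V = np.linalg.eigh(K)
order = np.argsort(eigvals)[::-1][:dim]    # keep the nonzero spectrum
mu = eigvals[order] / m                    # K = m * sum_j mu_j v_j v_j^T
Vt = V[:, order]
U = X @ Vt / np.sqrt(m * mu)               # u_j = Psi v_j / sqrt(m mu_j)
f = X.T @ w                                # f_i = w^T X_i
alpha = Vt.T @ f                           # spectral coefficients of f
assert np.allclose(alpha, np.sqrt(m * mu) * (w @ U))
```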
If we assume that asymptotically $\frac{1}{m}\sum_{i=1}^m X_i X_i^T \to \mathbf{E}\, XX^T$, then we have the following consequences:
• $f_i = w_*^T X_i$ is a good approximate target when b is small. In particular, if b < 1, then this function always gives the correct class label.
• For all t > 0, the spectral coefficients $\alpha_j$ of f decay as $\frac{1}{m}\sum_{j=1}^m \alpha_j^2/\mu_j^{1+t} \le w_*^T(\mathbf{E}\,\bar x_\ell \bar x_\ell^T)^{-t} w_*$.
• The eigenvalues $\mu_j$ decay slowly when the noise spectrum decays slowly: $\mu_j \ge \sigma_j^2$.

If the clean data are well behaved, in that we can find a weight vector such that $w_*^T(\mathbf{E}_X\, \bar x_{\ell(X)} \bar x_{\ell(X)}^T)^{-t} w_*$ is bounded for some t > 1, then when the data are corrupted with noise, we can find a good approximate target whose spectral coefficients decay faster (on average) than the kernel eigenvalues. This analysis implies that if the feature representation associated with the original kernel is corrupted with noise, then it is often helpful to use a kernel with faster spectral decay: for example, instead of K, we may use $\bar K = K^2$. However, it may not be easy to estimate the exact decay rate of the target spectral coefficients; in practice, one may use cross validation to optimize the kernel.

A kernel with fast spectral decay projects the data onto the most prominent principal components. Therefore we are interested in designing kernels which achieve such a dimension reduction effect. Although one may use direct eigenvalue computation, an alternative is to use a function g(K/m)K for this effect, as in (4). For example, we may consider a normalized kernel such that $K/m = \sum_j \mu_j u_j u_j^T$ with $0 \le \mu_j \le 1$. A standard normalization method is to use $D^{-1/2} K D^{-1/2}$, where D is the diagonal matrix whose entries are the row sums of K. It follows that $g(K/m)K = m\sum_j g(\mu_j)\mu_j u_j u_j^T$. We are interested in a function g such that $g(\mu)\mu \approx 1$ when $\mu \in [\alpha, 1]$ for some α, and $g(\mu)\mu \approx 0$ when $\mu < \alpha$ (where α is close to 1). One such function is given by $g(\mu)\mu = (1-\alpha)/(1-\alpha\mu)$.
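As a quick illustration (not from the paper; the constant α is written as `a` here, with the value 0.999 that the experiments below use for ρ), tabulating $g(\mu)\mu = (1-a)/(1-a\mu)$ shows the step-like behavior, keeping eigenvalues near 1 and suppressing the rest:

```python
import numpy as np

def g_times_mu(mu, a=0.999):
    """Transformed eigenvalue g(mu) * mu = (1 - a) / (1 - a * mu)."""
    return (1.0 - a) / (1.0 - a * mu)

mu = np.array([1.0, 0.999, 0.9, 0.5, 0.1])
new = g_times_mu(mu)
assert new[0] == 1.0            # g(mu)*mu = 1 exactly at mu = 1
assert new[1] > 0.5             # still large very close to mu = 1 ...
assert np.all(new[2:] < 0.01)   # ... but sharply suppressed below it
```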
This is the function used in various graph Laplacian formulations with a normalized Gaussian kernel as the initial kernel K; see, for example, [5]. Our analysis suggests that it is the dimension reduction effect of this function that is important, rather than the connection to the graph Laplacian. As we shall see in the empirical examples, other kernels such as $K^2$, which achieve a similar dimension reduction effect (but have nothing to do with the graph Laplacian), also improve performance.

6 Empirical Examples

This section shows empirical examples that demonstrate some consequences of our theoretical analysis. We use the MNIST data set (http://yann.lecun.com/exdb/mnist/), consisting of hand-written digit images (representing 10 classes, from digit "0" to digit "9"). In the following experiments, we randomly draw m = 2000 samples. We regard n = 100 of them as labeled data, and the remaining m − n = 1900 as unlabeled test data.

Figure 2: Left: spectral coefficients; right: classification accuracy (normalized 25-NN kernel, MNIST).

Throughout the experiments, we use the least squares loss $L(p, y) = (p - y)^2$ for simplicity. We study the performance of various kernel design methods by changing the spectral coefficients of the initial gram matrix K, as in Figure 1. Below we write $\bar\mu_j$ for the new spectral coefficients of the new gram matrix $\bar K$, i.e., $\bar K = \sum_{i=1}^m \bar\mu_i v_i v_i^T$. We study the following kernel design methods (also see [2]), with a dimension cutoff parameter d, so that $\bar\mu_i = 0$ when i > d.
(a) [1, . . . , 1, 0, . . . , 0]: $\bar\mu_i = 1$ if i ≤ d, and 0 otherwise. This was used in spectral clustering [3].
(b) K: $\bar\mu_i = \mu_i$ if i ≤ d; 0 otherwise.
This method is essentially kernel principal component analysis, which keeps the d most significant principal components of K.
(c) $K^p$: $\bar\mu_i = \mu_i^p$ if i ≤ d; 0 otherwise. We set p = 2, 3, 4. This accelerates the eigenvalue decay of K.
(d) Inverse: $\bar\mu_i = 1/(1 - \rho\mu_i)$ if i ≤ d; 0 otherwise. Here ρ is a constant close to 1 (we used 0.999). This is essentially graph-Laplacian based semi-supervised learning with a normalized kernel (e.g., see [5]). Note that the standard graph-Laplacian formulation sets d = m.
(e) Y: $\bar\mu_i = |Y^T v_i|$ if i ≤ d; 0 otherwise. This is the oracle kernel that optimizes our generalization bound. The purpose of testing this oracle method is to validate our analysis, by checking whether a kernel that is good according to our theory produces good classification performance on real data. Note that in the experiments we use Y averaged over the ten classes. Therefore the resulting kernel will not be the best possible kernel for each specific class, and its performance may not always be optimal.

Figure 2 shows the spectral coefficients of the above kernel design methods and the corresponding classification performance. The initial kernel is the normalized 25-NN kernel, defined as $K = D^{-1/2} W D^{-1/2}$ (see the previous section), where $W_{ij} = 1$ if either the i-th example is one of the 25 nearest neighbors of the j-th example or vice versa, and 0 otherwise. As expected, the results demonstrate that the target spectral coefficients Y decay faster than those of the original kernel K. It is therefore useful to use kernel design methods that accelerate the eigenvalue decay. The accuracy plot on the right is consistent with our theory. The near-oracle kernel 'Y' performs well, especially when the dimension cutoff is large. With an appropriate dimension d, all methods perform better than the supervised baseline (original K), which is below 65%. With an appropriate dimension cutoff, all methods perform similarly (over 80%).
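For reference, the five eigenvalue maps can be prototyped in a few lines (a sketch on a random toy kernel, not the authors' code; the 'inverse' map assumes a normalized kernel with eigenvalues in [0, 1]):

```python
import numpy as np

def design_kernel(K, method, d, rho=0.999, Y=None):
    """Rebuild Kbar = sum_i mubar_i v_i v_i^T with mubar_i = 0 for i > d,
    following the spectral maps (a)-(e)."""
    mu, V = np.linalg.eigh(K)
    mu, V = mu[::-1], V[:, ::-1]              # eigenvalues in decreasing order
    if method == 'ones':                      # (a) [1,...,1,0,...,0]
        mubar = np.ones_like(mu)
    elif method == 'K':                       # (b) kernel PCA truncation
        mubar = mu.copy()
    elif method[0] == 'pow':                  # (c) K^p
        mubar = mu ** method[1]
    elif method == 'inverse':                 # (d) 1 / (1 - rho * mu)
        mubar = 1.0 / (1.0 - rho * mu)
    elif method == 'Y':                       # (e) oracle |Y^T v_i|
        mubar = np.abs(Y @ V)
    mubar[d:] = 0.0                           # dimension cutoff
    return (V * mubar) @ V.T

rng = np.random.default_rng(1)
m, d = 30, 5
A = rng.standard_normal((m, m))
K = A @ A.T / m                               # toy PSD gram matrix
Kbar = design_kernel(K, ('pow', 2), d)
mu = np.sort(np.linalg.eigvalsh(K))[::-1]
mubar = np.sort(np.linalg.eigvalsh(Kbar))[::-1]
assert np.allclose(mubar[:d], mu[:d] ** 2)    # top-d eigenvalues squared
assert np.allclose(mubar[d:], 0.0)            # everything past the cutoff dropped
```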
However, $K^p$ with p = 2, 3, 4 is less sensitive to the cutoff dimension d than the kernel principal component dimension reduction method K. Moreover, the hard threshold method from spectral clustering ([1, . . . , 1, 0, . . . , 0]) is not stable. Similar behavior can also be observed with other initial kernels. Figure 3 shows the classification accuracy with the standard Gaussian kernel as the initial kernel K, both with and without normalization. We also used different bandwidths t to illustrate that the behavior of the different methods is similar across t (in a reasonable range). The results show that normalization is not critical for achieving high performance, at least for this data. Again, we observe that the near-oracle method performs extremely well. The spectral clustering kernel is sensitive to the cutoff dimension, while $K^p$ with p = 2, 3, 4 is quite stable. The standard kernel principal component dimension reduction (method K) performs very well with an appropriately chosen dimension cutoff. The experiments are consistent with our theoretical analysis.

Figure 3: Classification accuracy with the Gaussian kernel $k(i, j) = \exp(-\|x_i - x_j\|_2^2/t)$. Left: normalized Gaussian (t = 0.1); right: unnormalized Gaussian (t = 0.3).

7 Conclusion

We investigated a class of graph-based semi-supervised learning methods. By establishing a graph-based formulation of kernel learning, we showed that this class of semi-supervised learning methods is equivalent to supervised kernel learning with unsupervised kernel design (explored in [2]).
We then obtained a generalization bound, which implies that the eigenvalues of the optimal kernel should decay at the same rate as the target spectral coefficients. Moreover, we showed that input noise can cause the target spectral coefficients to decay faster than the kernel spectral coefficients. The analysis explains why it is often helpful to modify the original kernel eigenvalues to achieve a dimension reduction effect.

References
[1] Mikhail Belkin and Partha Niyogi. Semi-supervised learning on Riemannian manifolds. Machine Learning, Special Issue on Clustering:209–239, 2004.
[2] Olivier Chapelle, Jason Weston, and Bernhard Schölkopf. Cluster kernels for semi-supervised learning. In NIPS, 2003.
[3] Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, pages 849–856, 2001.
[4] M. Szummer and T. Jaakkola. Partially labeled classification with Markov random walks. In NIPS 2001, 2002.
[5] D. Zhou, O. Bousquet, T.N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS 2003, pages 321–328, 2004.
[6] Xiaojin Zhu, Zoubin Ghahramani, and John Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML 2003, 2003.
| 2005 | 46 |
2,862 |
Representing Part-Whole Relationships in Recurrent Neural Networks

Viren Jain2, Valentin Zhigulin1,2, and H. Sebastian Seung1,2
1Howard Hughes Medical Institute and 2Brain & Cog. Sci. Dept., MIT
viren@mit.edu, valentin@mit.edu, seung@mit.edu

Abstract

There is little consensus about the computational function of top-down synaptic connections in the visual system. Here we explore the hypothesis that top-down connections, like bottom-up connections, reflect part-whole relationships. We analyze a recurrent network with bidirectional synaptic interactions between a layer of neurons representing parts and a layer of neurons representing wholes. Within each layer, there is lateral inhibition. When the network detects a whole, it can rigorously enforce part-whole relationships by ignoring parts that do not belong. The network can complete the whole by filling in missing parts. The network can refuse to recognize a whole, if the activated parts do not conform to a stored part-whole relationship. Parameter regimes in which these behaviors happen are identified using the theory of permitted and forbidden sets [3, 4]. The network behaviors are illustrated by recreating McClelland and Rumelhart's "interactive activation" model [7].

In neural network models of visual object recognition [2, 6, 8], patterns of synaptic connectivity often reflect part-whole relationships between the features that are represented by neurons. For example, the connections of Figure 1 reflect the fact that feature B both contains simpler features A1, A2, and A3, and is contained in more complex features C1, C2, and C3. Such connectivity allows neurons to follow the rule that existence of the part is evidence for existence of the whole. By combining synaptic input from multiple sources of evidence for a feature, a neuron can "decide" whether that feature is present.¹ The synapses shown in Figure 1 are purely bottom-up, directed from simple to complex features.
However, there are also top-down connections in the visual system, and there is little consensus about their function. One possibility is that top-down connections also reflect part-whole relationships. They allow feature detectors to make decisions using the rule that existence of the whole is evidence for existence of its parts. In this paper, we analyze the dynamics of a recurrent network in which part-whole relationships are stored as bidirectional synaptic interactions, rather than the unidirectional interactions of Figure 1. The network has a number of interesting computational capabilities. When the network detects a whole, it can rigorously enforce part-whole relationships by ignoring parts that do not belong. The network can complete the whole by filling in missing parts. The network can refuse to recognize a whole, if the activated parts do not conform to a stored part-whole relationship. Parameter regimes in which these behaviors happen are identified using the recently developed theory of permitted and forbidden sets [3, 4].

¹Synaptic connectivity may reflect other relationships besides part-whole. For example, invariances can be implemented by connecting detectors of several instances of the same feature to the same target, which is consequently an invariant detector of the feature.

Figure 1: The synaptic connections (arrows) of neuron B represent part-whole relationships. Feature B both contains simpler features (A1, A2, A3) and is contained in more complex features (C1, C2, C3). The synaptic interactions are drawn one-way, as in most models of visual object recognition. Existence of the part is regarded as evidence for existence of the whole. This paper makes the interactions bidirectional, allowing the existence of the whole to be evidence for the existence of its parts.
Our model is closely related to the interactive activation model of word recognition, which was proposed by McClelland and Rumelhart to explain the word superiority effect studied by visual psychologists [7]. Here our concern is not to model a psychological effect, but to characterize mathematically how computations involving part-whole relationships can be carried out by a recurrent network.

1 Network model

Suppose that we are given a set of part-whole relationships specified by
$$\xi_i^a = \begin{cases} 1, & \text{if part } i \text{ is contained in whole } a, \\ 0, & \text{otherwise.} \end{cases}$$
We assume that every whole contains at least one part, and every part is contained in at least one whole. The stimulus drives a layer of neurons that detect parts. These neurons also interact with a layer of neurons that detect wholes. We will refer to part-detectors as "P-neurons" and whole-detectors as "W-neurons." The part-whole relationships are directly stored in the synaptic connections between P and W neurons. If $\xi_i^a = 1$, the i-th neuron in the P layer and the a-th neuron in the W layer have an excitatory interaction of strength γ. If $\xi_i^a = 0$, the neurons have an inhibitory interaction of strength σ. Furthermore, the P-neurons inhibit each other with strength β, and the W-neurons inhibit each other with strength α. All of these interactions are symmetric, and all activation functions are the rectification nonlinearity $[z]_+ = \max\{z, 0\}$. The dynamics of the network then takes the form
$$\dot W_a + W_a = \Big[\gamma \sum_i P_i \xi_i^a - \sigma \sum_i (1-\xi_i^a) P_i - \alpha \sum_{b\neq a} W_b\Big]_+ , \qquad (1)$$
$$\dot P_i + P_i = \Big[\gamma \sum_a W_a \xi_i^a - \sigma \sum_a (1-\xi_i^a) W_a - \beta \sum_{j\neq i} P_j + B_i\Big]_+ , \qquad (2)$$
where $B_i$ is the input to the P layer from the stimulus. Figure 2 shows an example of a network with two wholes. Each whole contains two parts, and one of the parts is contained in both wholes.

Figure 2: Model in an example configuration: ξ = {(1, 1, 0), (0, 1, 1)}.
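Equations (1) and (2) are straightforward to integrate numerically. The sketch below uses a simple Euler scheme on the configuration of Figure 2, with illustrative parameter values (α = 2, β = 0.3, γ = 0.6, σ = 0.7, chosen so that α > 1 and the enforcement and completion conditions discussed below hold; they are not taken from the paper):

```python
import numpy as np

def simulate(xi, B, alpha=2.0, beta=0.3, gamma=0.6, sigma=0.7,
             dt=0.01, steps=20000):
    """Euler integration of Eqs. (1)-(2); returns the steady-state (W, P)."""
    n_w, n_p = xi.shape
    W, P = np.zeros(n_w), np.zeros(n_p)
    for _ in range(steps):
        in_w = (gamma * (xi @ P) - sigma * ((1 - xi) @ P)
                - alpha * (W.sum() - W))
        in_p = (gamma * (xi.T @ W) - sigma * ((1 - xi).T @ W)
                - beta * (P.sum() - P) + B)
        W = W + dt * (np.maximum(in_w, 0.0) - W)
        P = P + dt * (np.maximum(in_p, 0.0) - P)
    return W, P

# the example configuration of Figure 2: xi = {(1,1,0), (0,1,1)}
xi = np.array([[1.0, 1.0, 0.0],
               [0.0, 1.0, 1.0]])

# strong evidence for parts 1 and 2, weak spurious evidence for part 3
W, P = simulate(xi, B=np.array([1.0, 1.0, 0.2]))
assert np.sum(W > 1e-3) == 1 and W[0] > W[1]   # winner-take-all: whole a wins
assert P[2] < 1e-3                             # enforcement: part 3 suppressed

# part 2 missing from the stimulus; since gamma^2 > beta it is filled in
W, P = simulate(xi, B=np.array([1.0, 0.0, 0.0]))
assert W[0] > 0 and P[1] > 0.05                # completion of the missing part
```

With these values, $\sigma^2 + \beta^2 + \gamma^2 + 2\sigma\beta\gamma \approx 1.19 > 1$ and $\gamma^2 = 0.36 > \beta$, so the run exhibits both enforcement and completion.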
When a stimulus is presented, it activates some of the P-neurons, which activate some of the W-neurons. The network eventually converges to a stable steady state. We will assume that α > 1. In the Appendix, we prove that this leads to unconditional winner-take-all behavior in the W layer. In other words, no more than one W-neuron can be active at a stable steady state. If a single W-neuron is active, then a whole has been detected. Potentially there are also many P-neurons active, indicating detection of parts. This representation may have different properties, depending on the choice of the parameters β, γ, and σ. As discussed below, these include rigorous enforcement of part-whole relationships, completion of wholes by "filling in" missing parts, and non-recognition of parts that do not conform to a whole.

2 Enforcement of part-whole relationships

Suppose that a single W-neuron is active at a stable steady state, so that a whole has been detected. Part-whole relationships are said to be enforced if the network always ignores parts that are not contained in the detected whole, despite potentially strong bottom-up evidence for them. It can be shown that enforcement follows from the inequality
$$\sigma^2 + \beta^2 + \gamma^2 + 2\sigma\beta\gamma > 1, \qquad (3)$$
which guarantees that neuron i in the P layer is inactive if neuron a in the W layer is active and $\xi_i^a = 0$. When part-whole relations are enforced, prior knowledge about legal combinations of parts strictly constrains what may be perceived. This result is proven in the Appendix, and only an intuitive explanation is given here. Enforcement is easiest to understand when there is interlayer inhibition (σ > 0). In this case, the active W-neuron directly inhibits the forbidden P-neurons. The case of σ = 0 is more subtle. Then enforcement is mediated by lateral inhibition in the P layer. Excitatory feedback from the W-neuron has the effect of counteracting the lateral inhibition between the P-neurons that belong to the whole.
As a result, these P-neurons become activated strongly enough to inhibit the rest of the P layer.

3 Completion of wholes by filling in missing parts

If a W-neuron is active, it excites the P-neurons that belong to the whole. As a result, even if one of these P-neurons receives no bottom-up input ($B_i = 0$), it is still active. We call this phenomenon "completion," and it is guaranteed to happen when
$$\gamma > \sqrt{\beta} . \qquad (4)$$
The network may thus "imagine" parts that are consistent with the recognized whole, but are not actually present in the stimulus. As with enforcement, this condition depends on top-down connections. In the special case $\gamma = \sqrt{\beta}$, the interlayer excitation between a W-neuron and its P-neurons exactly cancels out the lateral inhibition between the P-neurons at a steady state. The recurrent connections then effectively vanish, letting the activity of the P-neurons be determined by their feedforward inputs. When the interlayer excitation is stronger than this, the inequality (4) holds, and completion occurs.

4 Non-recognition of a whole

If there is no interlayer inhibition (σ = 0), then a single W-neuron is always active, assuming that there is some activity in the P layer. To see this, suppose for the sake of contradiction that all the W-neurons are inactive. Then they receive no inhibition to counteract the excitation from the P layer. This means some of them must be active, which contradicts our assumption. Hence the network always recognizes a whole, even if the stimulus is very different from any part-whole combination that is stored in the network. However, if interlayer inhibition is sufficiently strong (large σ), the network may refuse to recognize a whole: neurons in the P layer are activated, but there is no activity in the W layer. Formal conditions on σ can be derived, but are not given here because of space limitations. In case of non-recognition, constraints on the P layer are not enforced.
It is possible for the network to detect a configuration of parts that is not consistent with any stored whole.

5 Example: Interactive Activation model

To illustrate the computational capabilities of our network, we use it to recreate the interactive activation (IA) model of McClelland and Rumelhart. Figure 3 shows numerical simulations of a network containing three layers of neurons representing strokes, letters, and words, respectively. There are 16 possible strokes in each of four letter positions. For each stroke, there are two neurons, one signaling the presence of the stroke and the other signaling its absence. Letter neurons represent each letter of the alphabet in each of four positions. Word neurons represent each of 1200 common four-letter words. The letter and word layers correspond to the P and W layers that were introduced previously. There are bidirectional interactions between the letter and word layers, and lateral inhibition within the layers. The letter neurons also receive input from the stroke neurons, but this interaction is unidirectional. Our network differs in two ways from the original IA model. First, all interactions involving letter and word neurons are symmetric; in the original model, the interactions between the letter and word layers were asymmetric (inhibitory connections only ran from letter neurons to word neurons, and not vice versa). Second, the only nonlinearity in our model is rectification. These two aspects allow us to apply the full machinery of the theory of permitted and forbidden sets. Figure 3 shows the result of presenting the stimulus "MO M" for four different settings of parameters. In each of the four cases, the word layer of the network converges to the same result, detecting the word "MOON", which is the closest stored word to the stimulus. However, the activity in the letter layer is different in the four cases.
Figure 3: Simulation of 4 different parameter regimes in a letter-word recognition network (panels arranged by enforcement vs. non-enforcement and completion vs. non-completion; each showing the input, P layer, reconstruction, and W layer). Within each panel, the middle column presents a feature-layer reconstruction based on the letter activity shown in the left column. W layer activity is shown in the right column. The top row shows the network state after 10 iterations of the dynamics. The bottom row shows the steady state.

In the left column, the parameters obey the inequality (3), so that part-whole relationships are enforced. The activity of the letter layer is visualized by activating the strokes corresponding to each active letter neuron. The activated letters are part of the word "MOON". In the top left, the inequality (4) is satisfied, so that the missing "O" in the stimulus is filled in. In the bottom left, completion does not occur. In the simulations of the right column, parameters are such that part-whole relationships are not enforced. Consequently, the letter layer is much more active. Bottom-up input provides evidence for several other letters, which is not suppressed. In the top right, the inequality (4) is satisfied, so that the missing "O" in the stimulus is filled in. In the bottom right, the "O" neuron is not activated in the third position, so there is no completion. However, some letter neurons for the third position are activated, due to the input from neurons that indicate the absence of strokes.

Figure 4: Simulation of a non-recognition event and an example of multistability (panels: input, multi-stability, non-recognition event).

Figure 4 shows simulations for large σ, deep in the enforcement regime where non-recognition is a possibility. From one initial condition, the network converges to a state in which no W-neurons are active, a non-recognition. From another initial condition, the network detects the word "NORM".
Deep in the enforcement regime, the top-down feedback can be so strong that the network has multiple stable states, many of which bear little resemblance to the stimulus at all. This is a problematic aspect of this network. It can be prevented by setting parameters at the edge of the enforcement regime.

6 Discussion

We have analyzed a recurrent network that performs computations involving part-whole relationships. The network can fill in missing parts and suppress parts that do not belong. These two computations are distinct and can be dissociated from each other, as shown in Figure 3. While these two computations can also be performed by associative memory models, they are not typically dissociable in those models. For example, in the Hopfield model, pattern completion and noise suppression are both the result of recall of one of a finite number of stereotyped activity patterns. We believe that our model is more appropriate for perceptual systems, because its behavior is piecewise linear, due to its reliance on the rectification nonlinearity. Therefore, analog aspects of computation are able to coexist with the part-whole relationships. Furthermore, in our model the stimulus is encoded in maintained synaptic input to the network, rather than as an initial condition of the dynamics.

A Appendix: Permitted and forbidden sets

Our mathematical results depend on the theory of permitted and forbidden sets [3, 4], which is summarized briefly here. The theory is applicable to neural networks with rectification nonlinearity, of the form $\dot x_i + x_i = \big[b_i + \sum_j W_{ij} x_j\big]_+$. Neuron i is said to be active when $x_i > 0$. For a network of N neurons, there are $2^N$ possible sets of active neurons. For each active set, consider the submatrix of $W_{ij}$ corresponding to the synapses between active neurons. If all eigenvalues of this submatrix have real parts less than or equal to unity, then the active set is said to be permitted. Otherwise the active set is said to be forbidden.
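This eigenvalue criterion is mechanical to check numerically. The sketch below (with illustrative parameter values, not from the paper) tests the three-neuron set consisting of a P-neuron inside a whole, a P-neuron outside it, and the W-neuron, and confirms that the set is forbidden exactly when inequality (3) holds:

```python
import numpy as np

def is_permitted(W_sub):
    """Permitted iff all eigenvalues of the active submatrix have
    real part <= 1."""
    return np.max(np.linalg.eigvals(W_sub).real) <= 1.0

def triple(beta, gamma, sigma):
    """Interactions among P_i (in the whole), P_j (outside it), and W_a."""
    return np.array([[0.0, -beta, gamma],
                     [-beta, 0.0, -sigma],
                     [gamma, -sigma, 0.0]])

for beta, gamma, sigma in [(0.3, 0.5, 0.8), (0.1, 0.2, 0.1)]:
    forbidden = sigma**2 + beta**2 + gamma**2 + 2 * sigma * beta * gamma > 1
    assert is_permitted(triple(beta, gamma, sigma)) == (not forbidden)
```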
A set is permitted if and only if there exists an input vector b such that those neurons are active at a stable steady state. Permitted sets can be regarded as memories stored in the synaptic connections $W_{ij}$. If $W_{ij}$ is a symmetric matrix, the nesting property holds: every subset of a permitted set is permitted, and every superset of a forbidden set is forbidden. The present model can be seen as a general method for storing permitted sets in a recurrent network. This method introduces a neuron for each permitted set, relying on a unary or "grandmother cell" representation. In contrast, Xie et al. [9] used lateral inhibition in a single layer of neurons to store permitted sets. By introducing extra neurons, the present model achieves superior storage capacity, much as unary models of associative memory [1] surpass distributed models [5].

A.1 Unconditional winner-take-all in the W layer

The synapses between two W-neurons have strengths
$$\begin{pmatrix} 0 & -\alpha \\ -\alpha & 0 \end{pmatrix}.$$
The eigenvalues of this matrix are ±α. Therefore two W-neurons constitute a forbidden set if α > 1. By the nesting property, it follows that any set of more than two W-neurons is also forbidden, and that the W layer has the unconditional winner-take-all property.

A.2 Part-whole combinations as permitted sets

Theorem 1. Suppose that β < 1. If $\gamma^2 < \beta + (1-\beta)/k$, then any combination of k ≥ 1 parts consistent with a whole corresponds to a permitted set.

Proof. Consider k parts belonging to a whole. They are represented by one W-neuron and k P-neurons, with synaptic connections given by the (k+1) × (k+1) matrix
$$M = \begin{pmatrix} -\beta(\mathbf{1}\mathbf{1}^T - I) & \gamma\mathbf{1} \\ \gamma\mathbf{1}^T & 0 \end{pmatrix}, \qquad (5)$$
where $\mathbf{1}$ is the k-dimensional vector whose elements are all equal to one. Two eigenvectors of M are of the form $(\mathbf{1}^T, c)$, and have the same eigenvalues as the 2 × 2 matrix
$$\begin{pmatrix} -\beta(k-1) & \gamma \\ \gamma k & 0 \end{pmatrix}.$$
This matrix has eigenvalues less than one when $\gamma^2 < \beta + (1-\beta)/k$ and $\beta(k-1) + 2 > 0$. The other k − 1 eigenvectors are of the form $(d^T, 0)$, where $d^T\mathbf{1} = 0$. These have eigenvalue β.
Therefore all eigenvalues of M are less than one if the condition of the theorem is satisfied.

A.3 Constraints on combining parts

Figure 5: A set of one W-neuron and two P-neurons is forbidden if one part belongs to the whole and the other does not.

Here we derive conditions under which the network can enforce the constraint that steady-state activity be confined to parts that constitute a whole.

Theorem 2. Suppose that β > 0 and $\sigma^2 + \beta^2 + \gamma^2 + 2\sigma\beta\gamma > 1$. If a W-neuron is active, then only P-neurons corresponding to parts contained in the relevant whole can be active at a stable steady state.

Proof. Consider P-neurons $P_i$, $P_j$ and W-neuron $W_a$. Suppose that $\xi_i^a = 1$ but $\xi_j^a = 0$. As shown in Figure 5, the matrix of connections is given by
$$W = \begin{pmatrix} 0 & -\beta & \gamma \\ -\beta & 0 & -\sigma \\ \gamma & -\sigma & 0 \end{pmatrix}. \qquad (6)$$
This set is permitted if all eigenvalues of W − I have negative real parts. The characteristic equation of W − I is $\lambda^3 + b_1\lambda^2 + b_2\lambda + b_3 = 0$, where $b_1 = 3$, $b_2 = 3 - \sigma^2 - \beta^2 - \gamma^2$, and $b_3 = 1 - 2\sigma\beta\gamma - \sigma^2 - \beta^2 - \gamma^2$. According to the Routh-Hurwitz theorem, all the eigenvalues have negative real parts if and only if $b_1 > 0$, $b_3 > 0$, and $b_1 b_2 > b_3$. Clearly, the first condition is always satisfied. The second condition is more restrictive than the third; it is satisfied only when $\sigma^2 + \beta^2 + \gamma^2 + 2\sigma\beta\gamma < 1$. Hence, one of the eigenvalues has a positive real part when this condition is broken, i.e., when $\sigma^2 + \beta^2 + \gamma^2 + 2\sigma\beta\gamma > 1$. By the nesting property, any larger set of P-neurons inconsistent with the W-neuron is also forbidden.

A.4 Completion of wholes

Theorem 3. If $\gamma > \sqrt{\beta}$ and a single W-neuron a is active at a steady state, then $P_i > 0$ for all i such that $\xi_i^a = 1$.

Proof. Suppose that the detected whole has k parts. At the steady state,
$$P_i = \frac{\xi_i^a}{1-\beta}\Big[B_i - (\beta - \gamma^2)\, P_{tot}\Big]_+ , \quad \text{where} \quad P_{tot} = \sum_i P_i = \frac{1}{1-\beta+(\beta-\gamma^2)k}\sum_{i=1}^k B_i \xi_i^a . \qquad (7)$$

A.5 Preventing runaway

If feedback loops cause the network activity to diverge, then the preceding analyses are not relevant.
Here we give a sufficient condition guaranteeing that runaway instability does not happen. It is not a necessary condition. Interestingly, the condition implies the condition of Theorem 1.

Theorem 4. Suppose that P and W obey the dynamics of Eqs. (1) and (2), and define the objective function
$$E = \frac{1-\alpha}{2}\sum_a W_a^2 + \frac{\alpha}{2}\Big(\sum_a W_a\Big)^2 + \frac{1-\beta}{2}\sum_i P_i^2 + \frac{\beta}{2}\Big(\sum_i P_i\Big)^2 - \sum_i B_i P_i - \gamma\sum_{ia} P_i W_a \xi_i^a + \sigma\sum_{ia}(1-\xi_i^a) P_i W_a . \qquad (8)$$
Then E is a Lyapunov-like function that, given $\beta > \gamma^2 - \frac{1-\gamma^2}{N-1}$, ensures convergence of the dynamics to a stable steady state.

Proof. (sketch) Differentiation of E with respect to time shows that E is nonincreasing in the nonnegative orthant and constant only at steady states of the network dynamics. We must also show that E is radially unbounded, which is true if the quadratic part of E is copositive definite. Note that the last term of E is lower-bounded by zero, and the previous term is upper-bounded by $\gamma\sum_{ia} P_i W_a$. We assume α > 1. Thus, we can use Cauchy's inequality, $\sum_i P_i^2 \ge \big(\sum_i P_i\big)^2/N$, and the fact that $\sum_a W_a^2 \le \big(\sum_a W_a\big)^2$ for $W_a \ge 0$, to derive
$$E \ge \frac{1}{2}\left(\Big(\sum_a W_a\Big)^2 + \frac{1-\beta+\beta N}{N}\Big(\sum_i P_i\Big)^2 - 2\gamma\Big(\sum_a W_a\Big)\Big(\sum_i P_i\Big)\right) - \sum_i B_i P_i . \qquad (9)$$
If $\beta > \gamma^2 - \frac{1-\gamma^2}{N-1}$, the quadratic form in the inequality is positive definite and E is radially unbounded.

References
[1] E. B. Baum, J. Moody, and F. Wilczek. Internal representations for associative memory. Biol. Cybern., 59:217–228, 1988.
[2] K. Fukushima. Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol. Cybern., 36(4):193–202, 1980.
[3] R.H. Hahnloser, R. Sarpeshkar, M.A. Mahowald, R.J. Douglas, and H.S. Seung. Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature, 405(6789):947–951, Jun 22 2000.
[4] R.H. Hahnloser, H.S. Seung, and J.-J. Slotine. Permitted and forbidden sets in symmetric threshold-linear networks. Neural Computation, 15:621–638, 2003.
[5] J.J. Hopfield.
Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci U S A, 79(8):2554–8, Apr 1982. [6] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Comput., 1:541–551, 1989. [7] J. L. McClelland and D. E. Rumelhart. An interactive activation model of context effects in letter perception: Part i. an account of basic findings. Psychological Review, 88(5):375–407, Sep 1981. [8] M Riesenhuber and T Poggio. Hierarchical models of object recognition in cortex. Nat Neurosci, 2(11):1019–25, Nov 1999. [9] X. Xie, R.H. Hahnloser, and H. S. Seung. Selectively grouping neurons in recurrent networks of lateral inhibition. Neural Computation, 14:2627–2646, 2002.
|
2005
|
47
|
2,863
|
Size Regularized Cut for Data Clustering Yixin Chen Department of CS Univ. of New Orleans yixin@cs.uno.edu Ya Zhang Department of EECS Univ. of Kansas yazhang@ittc.ku.edu Xiang Ji NEC-Labs America, Inc. xji@sv.nec-labs.com Abstract We present a novel spectral clustering method that enables users to incorporate prior knowledge of the size of clusters into the clustering process. The cost function, which is named size regularized cut (SRcut), is defined as the sum of the inter-cluster similarity and a regularization term measuring the relative size of two clusters. Finding a partition of the data set to minimize SRcut is proved to be NP-complete. An approximation algorithm is proposed to solve a relaxed version of the optimization problem as an eigenvalue problem. Evaluations over different data sets demonstrate that the method is not sensitive to outliers and performs better than normalized cut. 1 Introduction In recent years, spectral clustering based on graph partitioning theories has emerged as one of the most effective data clustering tools. These methods model the given data set as a weighted undirected graph. Each data instance is represented as a node. Each edge is assigned a weight describing the similarity between the two nodes connected by the edge. Clustering is then accomplished by finding the best cuts of the graph that optimize certain predefined cost functions. The optimization usually leads to the computation of the top eigenvectors of certain graph affinity matrices, and the clustering result can be derived from the obtained eigen-space [12, 6]. Many cost functions, such as the ratio cut [3], average association [15], spectral k-means [19], normalized cut [15], min-max cut [7], and a measure using conductance and cut [9] have been proposed along with the corresponding eigen-systems for the data clustering purpose.
The above data clustering methods, as well as most other methods in the literature, share a common characteristic: they aim to generate results that maximize the intra-cluster similarity and/or minimize the inter-cluster similarity. These approaches perform well in some cases, but fail drastically when target data sets possess complex, extreme data distributions, or when the user has special needs for the data clustering task. For example, it has been pointed out by several researchers that normalized cut sometimes displays sensitivity to outliers [7, 14]. Normalized cut tends to find a cluster consisting of a very small number of points if those points are far away from the center of the data set [14]. There has been an abundance of prior work on embedding the user’s prior knowledge of the data set in the clustering process. Kernighan and Lin [11] applied a local search procedure that maintained two equally sized clusters while trying to minimize the association between the clusters. Wagstaff et al. [16] modified the k-means method to deal with a priori knowledge about must-link and cannot-link constraints. Banerjee and Ghosh [2] proposed a method to balance the size of the clusters by considering an explicit soft constraint. Xing et al. [17] presented a method to learn a clustering metric over user-specified samples. Yu and Shi [18] introduced a method to include must-link grouping cues in normalized cut. Other related works include leaving an ϵ fraction of the points unclustered to avoid the effect of outliers [4] and enforcing a minimum cluster size constraint [10]. In this paper, we present a novel clustering method based on graph partitioning. The new method enables users to incorporate prior knowledge of the expected size of clusters into the clustering process. Specifically, the cost function of the new method is defined as the sum of the inter-cluster similarity and a regularization term that measures the relative size of two clusters.
An “optimal” partition corresponds to a tradeoff between the inter-cluster similarity and the relative size of two clusters. We show that the size of the clusters generated by the optimal partition can be controlled by adjusting the weight on the regularization term. We also prove that the optimization problem is NP-complete. We therefore present an approximation algorithm and demonstrate its performance using two document data sets.

2 Size regularized cut

We model a given data set using a weighted undirected graph G = G(V, E, W) where V, E, and W denote the vertex set, edge set, and graph affinity matrix, respectively. Each vertex i ∈ V represents a data point, and each edge (i, j) ∈ E is assigned a nonnegative weight Wij to reflect the similarity between the data points i and j. A graph partitioning method attempts to organize vertices into groups so that the intra-cluster similarity is high, and/or the inter-cluster similarity is low. A simple way to quantify the cost for partitioning vertices into two disjoint sets V1 and V2 is the cut size

cut(V1, V2) = Σ_{i∈V1, j∈V2} Wij ,

which can be viewed as the similarity or association between V1 and V2. Finding a binary partition of the graph that minimizes the cut size is known as the minimum cut problem. There exist efficient algorithms for solving this problem. However, the minimum cut criterion favors grouping small sets of isolated nodes in the graph [15]. To capture the need for more balanced clusters, it has been proposed to include the cluster size information as a multiplicative penalty factor in the cost function, such as average cut [3] and normalized cut [15]. Both cost functions can be uniformly written as [5]

cost(V1, V2) = cut(V1, V2) · (1/|V1|_β + 1/|V2|_β) . (1)

Here, β = [β1, · · · , βN]ᵀ is a weight vector where βi is a nonnegative weight associated with vertex i, and N is the total number of vertices in V.
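The cut size and the multiplicative cost (1) are straightforward to compute from an affinity matrix; a minimal sketch in Python/NumPy (function names are mine), where a boolean mask encodes membership in V1 and the weighted cluster sizes are sums of the β entries:

```python
import numpy as np

def cut_value(W, mask):
    # cut(V1, V2) = sum of W_ij over i in V1, j in V2; `mask` is a boolean
    # vector with True for vertices in V1
    return float(W[np.ix_(mask, ~mask)].sum())

def multiplicative_cost(W, beta, mask):
    # Cost function (1): cut(V1, V2) * (1/|V1|_beta + 1/|V2|_beta), where
    # |Vj|_beta is the weighted cluster size.  beta = ones gives average cut;
    # beta_i = sum_j W_ij gives normalized cut.
    s1, s2 = float(beta[mask].sum()), float(beta[~mask].sum())
    return cut_value(W, mask) * (1.0 / s1 + 1.0 / s2)
```

For a 3-vertex graph with the edge (1, 2) of weight 2 crossing the partition {0, 1} | {2} and unit vertex weights, the cut is 2 and the cost is 2 · (1/2 + 1/1) = 3.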
The penalty factor for “unbalanced partition” is determined by |Vj|_β (j = 1, 2), which is a weighted cardinality (or weighted size) of Vj, i.e.,

|Vj|_β = Σ_{i∈Vj} βi . (2)

Dhillon [5] showed that if βi = 1 (for all i), the cost function (1) becomes average cut. If βi = Σ_j Wij, then (1) turns out to be normalized cut. In contrast with minimum cut, average cut and normalized cut tend to generate more balanced clusters. However, due to the multiplicative nature of their cost functions, average cut and normalized cut are still sensitive to outliers. This is because the cut value for separating outliers from the rest of the data points is usually close to zero, and thus makes the multiplicative penalty factor void. To avoid the drawback of the above multiplicative cost functions, we introduce an additive cost function for graph bi-partitioning. The cost function is named size regularized cut (SRcut), and is defined as

SRcut(V1, V2) = cut(V1, V2) − α|V1|_β|V2|_β (3)

where |Vj|_β (j = 1, 2) is described in (2), and β and α > 0 are given a priori. The last term in (3), α|V1|_β|V2|_β, is the size regularization term, which can be interpreted as below. Since |V1|_β + |V2|_β = |V|_β = βᵀe, where e is a vector of 1’s, it is straightforward to show that the inequality

|V1|_β|V2|_β ≤ (βᵀe/2)²

holds for arbitrary V1, V2 ⊆ V satisfying V1 ∪ V2 = V and V1 ∩ V2 = ∅. In addition, the equality holds if and only if |V1|_β = |V2|_β = βᵀe/2. Therefore, |V1|_β|V2|_β achieves the maximum value when two clusters are of equal weighted size. Consequently, minimizing SRcut is equivalent to minimizing the similarity between two clusters and, at the same time, searching for a balanced partition. The tradeoff between the inter-cluster similarity and the balance of the cut depends on the α parameter, which needs to be determined by the prior information on the size of clusters. If α = 0, minimum SRcut will assign all vertices to one cluster.
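The additive cost (3) and the bound on its regularization term can be sketched directly (function names are mine, not from the paper):

```python
import numpy as np

def srcut(W, beta, alpha, mask):
    # Size regularized cut (3): cut(V1, V2) - alpha * |V1|_beta * |V2|_beta,
    # with `mask` a boolean membership vector for V1
    cut = float(W[np.ix_(mask, ~mask)].sum())
    s1, s2 = float(beta[mask].sum()), float(beta[~mask].sum())
    return cut - alpha * s1 * s2

def balance_bound(beta):
    # Upper bound (beta^T e / 2)^2 on the regularization term, attained only
    # by a perfectly balanced partition |V1|_beta = |V2|_beta
    return (float(beta.sum()) / 2.0) ** 2
```

A quick check on a random partition confirms that |V1|_β|V2|_β never exceeds the bound, and that with α = 0 the SRcut value reduces to the plain cut.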
On the other end, if α ≫ 0, minimum SRcut will generate two clusters of equal size (if N is an even number). We defer the discussion on the choice of α to Section 5. In a spirit similar to that of (3), we can define size regularized association (SRassoc) as

SRassoc(V1, V2) = Σ_{i=1,2} cut(Vi, Vi) + 2α|V1|_β|V2|_β

where cut(Vi, Vi) measures the intra-cluster similarity. An important property of SRassoc and SRcut is that they are naturally related:

SRcut(V1, V2) = [cut(V, V) − SRassoc(V1, V2)] / 2 .

Hence, minimizing size regularized cut is in fact identical to maximizing size regularized association. In other words, minimizing the size regularized inter-cluster similarity is equivalent to maximizing the size regularized intra-cluster similarity. In this paper, we will use SRcut as the clustering criterion.

3 Size ratio monotonicity

Let V1 and V2 be a partition of V. The size ratio r = min(|V1|_β, |V2|_β) / max(|V1|_β, |V2|_β) defines the relative size of two clusters. It is always within the interval [0, 1], and a larger value indicates a more balanced partition. The following theorem shows that by controlling the parameter α in the SRcut cost function, one can control the balance of the optimal partition. In addition, the size ratio increases monotonically as α increases.

Theorem 3.1 (Size Ratio Monotonicity) Let V1^(i) and V2^(i) be the clusters generated by the minimum SRcut with α = αi, and the corresponding size ratio, ri, be defined as

ri = min(|V1^(i)|_β, |V2^(i)|_β) / max(|V1^(i)|_β, |V2^(i)|_β) .

If α1 > α2 ≥ 0, then r1 ≥ r2.

Proof: Given vertex weight vector β, let S be the collection of all distinct values that the size regularization term in (3) can have, i.e.,

S = { S | V1 ∪ V2 = V, V1 ∩ V2 = ∅, S = |V1|_β|V2|_β } .

Clearly, |S|, the number of elements in S, is less than or equal to 2^(N−1), where N is the size of V. Hence we can write the elements in S in ascending order as

0 = S1 < S2 < · · · < S_|S| ≤ (βᵀe/2)² .
Next, we define cut_i to be the minimal cut satisfying |V1|_β|V2|_β = S_i, i.e.,

cut_i = min { cut(V1, V2) : |V1|_β|V2|_β = S_i, V1 ∪ V2 = V, V1 ∩ V2 = ∅ } ,

so that

min { SRcut(V1, V2) : V1 ∪ V2 = V, V1 ∩ V2 = ∅ } = min_{i=1,...,|S|} (cut_i − αS_i) .

If V1^(2) and V2^(2) are the clusters generated by the minimum SRcut with α = α2, then |V1^(2)|_β|V2^(2)|_β = S_{k*} where k* = argmin_{i=1,...,|S|} (cut_i − α2 S_i). Therefore, for any 1 ≤ t < k*,

cut_{k*} − α2 S_{k*} ≤ cut_t − α2 S_t . (4)

If α1 > α2, we have

(α2 − α1) S_{k*} < (α2 − α1) S_t . (5)

Adding (4) and (5) gives cut_{k*} − α1 S_{k*} < cut_t − α1 S_t, which implies

k* ≤ argmin_{i=1,...,|S|} (cut_i − α1 S_i) . (6)

Now, let V1^(1) and V2^(1) be the clusters generated by the minimum SRcut with α = α1, and |V1^(1)|_β|V2^(1)|_β = S_{j*} where j* = argmin_{i=1,...,|S|} (cut_i − α1 S_i). From (6) we have j* ≥ k*, therefore S_{j*} ≥ S_{k*}, or equivalently |V1^(1)|_β|V2^(1)|_β ≥ |V1^(2)|_β|V2^(2)|_β. Without loss of generality, we can assume that |V1^(1)|_β ≤ |V2^(1)|_β and |V1^(2)|_β ≤ |V2^(2)|_β, therefore |V1^(1)|_β ≤ |V|_β/2 and |V1^(2)|_β ≤ |V|_β/2. Considering the fact that f(x) = x(|V|_β − x) is strictly monotonically increasing for x ≤ |V|_β/2 and f(|V1^(1)|_β) ≥ f(|V1^(2)|_β), we have |V1^(1)|_β ≥ |V1^(2)|_β. This leads to r1 = |V1^(1)|_β / |V2^(1)|_β ≥ r2 = |V1^(2)|_β / |V2^(2)|_β. □

Unfortunately, minimizing size regularized cut for an arbitrary α is an NP-complete problem. This is proved in the following section.

4 Size regularized cut and graph bisection

The decision problem for minimum SRcut can be formulated as: whether, given an undirected graph G(V, E, W) with weight vector β and regularization parameter α, a partition exists such that SRcut is less than a given cost. This decision problem is clearly NP because we can verify in polynomial time the SRcut value for a given partition. Next we show that graph bisection can be reduced, in polynomial time, to minimum SRcut. Since graph bisection is a classical NP-complete problem [1], so is minimum SRcut.
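Theorem 3.1 can be illustrated (though of course not proved) by exhaustively enumerating all bipartitions of a tiny graph — exactly the computation that NP-completeness rules out at scale. A sketch on a toy 6-vertex graph of my own construction (a 2-clique weakly linked to a 4-clique), with ties broken in favor of the more balanced partition:

```python
import itertools
import numpy as np

def min_srcut_bruteforce(W, beta, alpha):
    # Exhaustive minimization of SRcut(3) over all bipartitions (vertex 0 is
    # fixed in V1 to break symmetry; the trivial partition V2 = {} is allowed,
    # matching the alpha = 0 behavior noted above).  Returns the size ratio r
    # of the optimal partition.
    N = len(beta)
    best = None
    for bits in itertools.product([False, True], repeat=N - 1):
        mask = np.array((True,) + bits)
        cut = float(W[np.ix_(mask, ~mask)].sum())
        s1, s2 = float(beta[mask].sum()), float(beta[~mask].sum())
        key = (cut - alpha * s1 * s2, -s1 * s2)  # tie-break toward balance
        if best is None or key < best[0]:
            best = (key, min(s1, s2) / max(s1, s2))
    return best[1]

# Toy graph: a 2-clique {0,1} weakly linked to a 4-clique {2,3,4,5}
W = np.zeros((6, 6))
W[0, 1] = W[1, 0] = 1.0
W[0, 2] = W[2, 0] = 0.5
for i in range(2, 6):
    for j in range(i + 1, 6):
        W[i, j] = W[j, i] = 1.0
beta = np.ones(6)
ratios = [min_srcut_bruteforce(W, beta, a) for a in (0.0, 0.1, 3.0)]
```

As α grows, the optimum moves from the trivial partition (r = 0) through the natural 2–4 split (r = 0.5) to a balanced 3–3 split (r = 1), so the ratios are nondecreasing, as the theorem guarantees.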
Definition 4.1 (Graph Bisection) Given an undirected graph G = G(V, E, W) with an even number of vertices, where W is the adjacency matrix, find a pair of disjoint subsets V1, V2 ⊂ V of equal size with V1 ∪ V2 = V, such that the number of edges between vertices in V1 and vertices in V2, i.e., cut(V1, V2), is minimal.

Theorem 4.2 (Reduction of Graph Bisection to SRcut) For any given undirected graph G = G(V, E, W), where W is the adjacency matrix, finding the minimum bisection of G is equivalent to finding a partition of G that minimizes the SRcut cost function with weights β = e and regularization parameter α > d*, where

d* = max_{i=1,...,N} Σ_{j=1,...,N} Wij .

Proof: Without loss of generality, we assume that N is even (if not, we can always add an isolated vertex). Let cut_i be the minimal cut such that the size of the smaller subset is i, i.e.,

cut_i = min { cut(V1, V2) : min(|V1|, |V2|) = i, V1 ∪ V2 = V, V1 ∩ V2 = ∅ } .

Clearly, we have d* ≥ cut_{i+1} − cut_i for 0 ≤ i ≤ N/2 − 1. If 0 ≤ i ≤ N/2 − 1, then N − 2i − 1 ≥ 1. Therefore, for any α > d*, we have

α(N − 2i − 1) > d* ≥ cut_{i+1} − cut_i .

This implies that

cut_i − αi(N − i) > cut_{i+1} − α(i + 1)(N − i − 1) ,

or, equivalently,

min { cut(V1, V2) − α|V1||V2| : min(|V1|, |V2|) = i } > min { cut(V1, V2) − α|V1||V2| : min(|V1|, |V2|) = i + 1 }

for 0 ≤ i ≤ N/2 − 1, where the minima are taken over partitions V1 ∪ V2 = V, V1 ∩ V2 = ∅. Hence, for any α > d*, minimizing SRcut is identical to minimizing cut(V1, V2) − α|V1||V2| with the constraint that |V1| = |V2| = N/2, V1 ∪ V2 = V, and V1 ∩ V2 = ∅, which is exactly the graph bisection problem since α|V1||V2| = αN²/4 is a constant. □

5 An approximation algorithm for SRcut

Given a partition of vertex set V into two sets V1 and V2, let x ∈ {−1, 1}^N be an indicator vector such that xi = 1 if i ∈ V1 and xi = −1 if i ∈ V2. It is not difficult to show that

cut(V1, V2) = ((e + x)ᵀ/2) W ((e − x)/2)   and   |V1|_β|V2|_β = ((e + x)ᵀ/2) ββᵀ ((e − x)/2) .
We can therefore rewrite SRcut in (3) as a function of the indicator vector x:

SRcut(V1, V2) = ((e + x)ᵀ/2) (W − αββᵀ) ((e − x)/2) = −(1/4) xᵀ(W − αββᵀ)x + (1/4) eᵀ(W − αββᵀ)e . (7)

Given W, α, and β, we have

argmin_{x∈{−1,1}^N} SRcut(x) = argmax_{x∈{−1,1}^N} xᵀ(W − αββᵀ)x .

If we define a normalized indicator vector, y = x/√N (i.e., ∥y∥ = 1), then minimum SRcut can be found by solving the following discrete optimization problem:

y = argmax_{y∈{−1/√N, 1/√N}^N} yᵀ(W − αββᵀ)y , (8)

which is NP-complete. However, if we relax all the elements in the indicator vector y from discrete values to real values and keep the unit length constraint on y, the above optimization problem can be easily solved. The solution is the eigenvector corresponding to the largest eigenvalue of W − αββᵀ (named the largest eigenvector). Similar to other spectral graph partitioning techniques that use top eigenvectors to approximate “optimal” partitions, the largest eigenvector of W − αββᵀ provides a linear search direction, along which a splitting point can be found. We use a simple approach by checking each element in the largest eigenvector as a possible splitting point. The vertices whose continuous indicators are greater than or equal to the splitting point are assigned to one cluster. The remaining vertices are assigned to the other cluster. The corresponding SRcut value is then computed. The final partition is determined by the splitting point with the minimum SRcut value. The relaxed optimization problem provides a lower bound on the optimal SRcut value, SRcut*. Let λ1 be the largest eigenvalue of W − αββᵀ. From (7) and (8), it is straightforward to show that

SRcut* ≥ (eᵀ(W − αββᵀ)e − Nλ1) / 4 .

The SRcut value of the partition generated by the largest eigenvector provides an upper bound for SRcut*. As implied by the SRcut cost function in (3), the partition of the dataset depends on the value of α, which determines the tradeoff between inter-cluster similarity and the balance of the partition.
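The relaxation just described is compact to implement. A sketch (my naming, using a dense NumPy eigendecomposition rather than the Lanczos method the paper suggests later): take the largest eigenvector of W − αββᵀ and sweep every element as a splitting point.

```python
import numpy as np

def srcut_spectral(W, beta, alpha):
    # Approximate minimum SRcut: take the eigenvector of the largest
    # eigenvalue of W - alpha*beta*beta^T, then try each of its elements
    # as a splitting point and keep the partition with the smallest SRcut.
    M = W - alpha * np.outer(beta, beta)
    eigvals, eigvecs = np.linalg.eigh(M)     # symmetric: ascending eigenvalues
    y = eigvecs[:, -1]                       # the "largest eigenvector"
    best_cost, best_mask = np.inf, None
    for t in np.unique(y):
        mask = y >= t
        if mask.all() or not mask.any():
            continue                         # keep both clusters nonempty
        cut = float(W[np.ix_(mask, ~mask)].sum())
        s1, s2 = float(beta[mask].sum()), float(beta[~mask].sum())
        cost = cut - alpha * s1 * s2
        if cost < best_cost:
            best_cost, best_mask = cost, mask
    return best_mask, best_cost
```

On a small two-clique graph (a 2-clique weakly linked to a 4-clique, unit vertex weights, α = 0.1) the sweep recovers the natural 2–4 split, which here happens to be the exact minimizer.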
Moreover, Theorem 3.1 indicates that as α increases, the size ratio of the clusters generated by the optimal partition increases monotonically, i.e., the partition becomes more balanced. Even though we do not have a counterpart of Theorem 3.1 for the approximated partition derived above, our empirical study shows that, in general, the size ratio of the approximated partition increases along with α. Therefore, we use the prior information on the size of the clusters to select α. Specifically, we define the expected size ratio, R, as R = min(s1, s2)/max(s1, s2), where s1 and s2 are the expected sizes of the two clusters (known a priori). We then search for a value of α such that the resulting size ratio is close to R. A simple one-dimensional search method based on bracketing and bisection is implemented [13]. The pseudo code of the searching algorithm is given in Algorithm 1 along with the rest of the clustering procedure. The input of the algorithm is the graph affinity matrix W, the weight vector β, the expected size ratio R, and α0 > 0 (the initial value of α). The output is a partition of V. In our experiments, α0 is chosen to be 10·eᵀWe/N². If the expected size ratio R is unknown, one can estimate R by assuming that the data are i.i.d. samples and a sample belongs to the smaller cluster with probability p ≤ 0.5 (i.e., R = p/(1 − p)). It is not difficult to prove that the sample proportion p̂ of n randomly selected samples from the data set is an unbiased estimator of p. Moreover, the distribution of p̂ can be well approximated by a normal distribution with mean p and variance p(1 − p)/n when n is sufficiently large (say n > 30). Hence p̂ converges to p as n increases. This suggests a simple strategy for SRcut with unknown R. One can manually examine n ≪ N randomly selected data instances to get p̂ and the 95% confidence interval [p_low, p_high], from which one can evaluate the interval [R_low, R_high] for R.
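The mapping from a sampled proportion to an interval for R can be sketched as follows (function name is mine; clipping p to [0, 0.5] is an extra assumption here, justified only by the definition p ≤ 0.5 for the smaller cluster):

```python
import math

def expected_size_ratio_interval(n_small, n, z=1.96):
    # Normal-approximation 95% CI for p (the probability that a sample falls
    # in the smaller cluster), mapped through R = p / (1 - p)
    p_hat = n_small / n
    half = z * math.sqrt(p_hat * (1.0 - p_hat) / n)
    p_low = max(p_hat - half, 0.0)
    p_high = min(p_hat + half, 0.5)
    return p_low / (1.0 - p_low), p_high / (1.0 - p_high)
```

For example, finding 30 of 100 sampled documents in the smaller topic gives R roughly in [0.27, 0.64].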
Algorithm 1 is then applied to a number of evenly distributed R’s within the interval to find the corresponding partitions. The final partition is chosen to be the one with the minimum cut value, by assuming that a “good” partition should have a small cut.

Algorithm 1: Size Regularized Cut
1  initialize αl to 2α0 and αh to α0/2
2  REPEAT
3    αl ← αl/2, y ← largest eigenvector of W − αl ββᵀ
4    partition V using y and compute size ratio r
5  UNTIL (r < R)
6  REPEAT
7    αh ← 2αh, y ← largest eigenvector of W − αh ββᵀ
8    partition V using y and compute size ratio r
9  UNTIL (r ≥ R)
10 REPEAT
11   α ← (αl + αh)/2, y ← largest eigenvector of W − αββᵀ
12   partition V using y and compute size ratio r
13   IF (r < R)
14     αl ← α
15   ELSE
16     αh ← α
17   END IF
18 UNTIL (|r − R| < 0.01R or αh − αl < 0.01α0)

6 Time complexity

The time complexity of each iteration is determined by that of computing the largest eigenvector. Using the power method or the Lanczos method [8], the running time is O(MN²), where M is the number of matrix-vector computations required and N is the number of vertices. Hence the overall time complexity is O(KMN²), where K is the number of iterations in searching for α. Similar to other spectral graph clustering methods, the time complexity of SRcut can be significantly reduced if the affinity matrix W is sparse, i.e., the graph is only locally connected. Although W − αββᵀ is in general not sparse, the time complexity of the power method is still O(MN). This is because (W − αββᵀ)y can be evaluated as Wy − αβ(βᵀy), each term requiring O(N) operations. Therefore, by enforcing the sparsity, the overall time complexity of SRcut is O(KMN).

7 Experiments

We test the SRcut algorithm using two data sets, the Reuters-21578 document corpus and 20-Newsgroups. The Reuters-21578 data set contains 21578 documents that have been manually assigned to 135 topics. In our experiments, we discarded documents with multiple category labels, and removed the topic classes containing less than 5 documents.
This leads to a data set of 50 clusters with a total of 9102 documents. The 20-Newsgroups data set contains about 20000 documents collected from 20 newsgroups, each corresponding to a distinct topic. The number of news articles in each cluster is roughly the same. We pair each cluster with every other cluster to form a data set, so that 190 test data sets are generated. Each document is represented by a term-frequency vector using TF-IDF weights. We use the normalized mutual information as our evaluation metric. Normalized mutual information is always within the interval [0, 1], with a larger value indicating a better performance. The simple sampling scheme described in Section 5 is used to estimate the expected size ratio. For the Reuters-21578 data set, 50 test runs were conducted, each on a test set created by mixing 2 topics randomly selected from the data set. The performance score in Table 1 was obtained by averaging the scores from the 50 test runs. The results for the 20-Newsgroups data set were obtained by averaging the scores from the 190 test data sets. Clearly, SRcut outperforms normalized cut on both data sets. SRcut performs significantly better than normalized cut on the 20-Newsgroups data set. In comparison with Reuters-21578, many topic classes in the 20-Newsgroups data set contain outliers. The results suggest that SRcut is less sensitive to outliers than normalized cut.

Table 1: Performance comparison for SRcut and Normalized Cut. The numbers shown are the normalized mutual information. A larger value indicates a better performance.

Algorithm        Reuters-21578   20-Newsgroups
SRcut            0.7330          0.7315
Normalized Cut   0.7102          0.2531

8 Conclusions

We proposed size regularized cut, a novel method that enables users to specify prior knowledge of the size of two clusters in spectral clustering. The SRcut cost function takes into account inter-cluster similarity and the relative size of two clusters.
The “optimal” partition of the data set corresponds to a tradeoff between the inter-cluster similarity and the balance of the partition. We proved that finding a partition with minimum SRcut is an NP-complete problem. We presented an approximation algorithm to solve a relaxed version of the optimization problem. Evaluations over different data sets indicate that the method is not sensitive to outliers and performs better than normalized cut. The SRcut model can be easily adapted to solve the multiple-clusters problem by applying the clustering method recursively/iteratively on data sets. Since graph bisection can be reduced to SRcut, the proposed approximation algorithm provides a new spectral technique for graph bisection. Comparing SRcut with other graph bisection algorithms is therefore an interesting direction for future work. References [1] S. Arora, D. Karger, and M. Karpinski, “Polynomial Time Approximation Schemes for Dense Instances of NP-hard Problems,” Proc. ACM Symp. on Theory of Computing, pp. 284-293, 1995. [2] A. Banerjee and J. Ghosh, “On Scaling up Balanced Clustering Algorithms,” Proc. SIAM Int’l Conf. on Data Mining, pp. 333-349, 2002. [3] P. K. Chan, D. F. Schlag, and J. Y. Zien, “Spectral k-Way Ratio-Cut Partitioning and Clustering,” IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, 13:1088-1096, 1994. [4] M. Charikar, S. Khuller, D. M. Mount, and G. Narasimhan, “Algorithms for Facility Location Problems with Outliers,” Proc. ACM-SIAM Symp. on Discrete Algorithms, pp. 642-651, 2001. [5] I. S. Dhillon, “Co-clustering Documents and Words using Bipartite Spectral Graph Partitioning,” Proc. ACM SIGKDD Conf. Knowledge Discovery and Data Mining, pp. 269-274, 2001. [6] C. Ding, “Data Clustering: Principal Components, Hopfield and Self-Aggregation Networks,” Proc. Int’l Joint Conf. on Artificial Intelligence, pp. 479-484, 2003. [7] C. Ding, X. He, H. Zha, M. Gu, and H. Simon, “Spectral Min-Max Cut for Graph Partitioning and Data Clustering,” Proc.
IEEE Int’l Conf. Data Mining, pp. 107-114, 2001. [8] G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, 1999. [9] R. Kannan, S. Vempala, and A. Vetta, “On Clusterings - Good, Bad and Spectral,” Proc. IEEE Symp. on Foundations of Computer Science, pp. 367-377, 2000. [10] D. R. Karger and M. Minkoff, “Building Steiner Trees with Incomplete Global Knowledge,” Proc. IEEE Symp. on Foundations of Computer Science, pp. 613-623, 2000. [11] B. Kernighan and S. Lin, “An Efficient Heuristic Procedure for Partitioning Graphs,” The Bell System Technical Journal, 49:291-307, 1970. [12] A. Y. Ng, M. I. Jordan, and Y. Weiss, “On Spectral Clustering: Analysis and an Algorithm,” Advances in Neural Information Processing Systems 14, pp. 849-856, 2001. [13] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C, second edition, Cambridge University Press, 1992. [14] A. Rahimi and B. Recht, “Clustering with Normalized Cuts is Clustering with a Hyperplane,” Statistical Learning in Computer Vision, 2004. [15] J. Shi and J. Malik, “Normalized Cuts and Image Segmentation,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 22:888-905, 2000. [16] K. Wagstaff, C. Cardie, S. Rogers, and S. Schrodl, “Constrained K-means Clustering with Background Knowledge,” Proc. Int’l Conf. on Machine Learning, pp. 577-584, 2001. [17] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell, “Distance Metric Learning, with Applications to Clustering with Side Information,” Advances in Neural Information Processing Systems 15, pp. 505-512, 2003. [18] X. Yu and J. Shi, “Segmentation Given Partial Grouping Constraints,” IEEE Trans. on Pattern Analysis and Machine Intelligence, 26:173-183, 2004. [19] H. Zha, X. He, C. Ding, H. Simon, and M. Gu, “Spectral Relaxation for K-means Clustering,” Advances in Neural Information Processing Systems 14, pp. 1057-1064, 2001.
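As a practical footnote to the evaluation metric of Section 7, normalized mutual information can be computed from scratch. The sketch below uses the geometric-mean normalization NMI = I(T; P)/√(H(T)·H(P)) — an assumption on my part, since the paper does not state which normalization it used:

```python
import math
from collections import Counter

def normalized_mutual_information(labels_true, labels_pred):
    # NMI = I(T;P) / sqrt(H(T) * H(P)); values lie in [0, 1]
    n = len(labels_true)
    ct, cp = Counter(labels_true), Counter(labels_pred)
    joint = Counter(zip(labels_true, labels_pred))
    mi = sum((c / n) * math.log((c / n) / ((ct[t] / n) * (cp[p] / n)))
             for (t, p), c in joint.items())
    h_t = -sum((c / n) * math.log(c / n) for c in ct.values())
    h_p = -sum((c / n) * math.log(c / n) for c in cp.values())
    if h_t == 0.0 or h_p == 0.0:
        return 1.0 if h_t == h_p else 0.0
    return mi / math.sqrt(h_t * h_p)
```

Identical clusterings (up to relabeling) score 1, and independent ones score 0.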
|
2005
|
48
|
2,864
|
How fast to work: Response vigor, motivation and tonic dopamine Yael Niv1,2 Nathaniel D. Daw2 Peter Dayan2 1ICNC, Hebrew University, Jerusalem 2Gatsby Computational Neuroscience Unit, UCL yaelniv@alice.nc.huji.ac.il {daw,dayan}@gatsby.ucl.ac.uk Abstract Reinforcement learning models have long promised to unify computational, psychological and neural accounts of appetitively conditioned behavior. However, the bulk of data on animal conditioning comes from free-operant experiments measuring how fast animals will work for reinforcement. Existing reinforcement learning (RL) models are silent about these tasks, because they lack any notion of vigor. They thus fail to address the simple observation that hungrier animals will work harder for food, as well as stranger facts such as their sometimes greater productivity even when working for irrelevant outcomes such as water. Here, we develop an RL framework for free-operant behavior, suggesting that subjects choose how vigorously to perform selected actions by optimally balancing the costs and benefits of quick responding. Motivational states such as hunger shift these factors, skewing the tradeoff. This accounts normatively for the effects of motivation on response rates, as well as many other classic findings. Finally, we suggest that tonic levels of dopamine may be involved in the computation linking motivational state to optimal responding, thereby explaining the complex vigor-related effects of pharmacological manipulation of dopamine. 1 Introduction A banal, but nonetheless valid, behaviorist observation is that hungry animals work harder to get food [1]. However, associated with this observation are two stranger experimental facts and a large theoretical failing. The first weird fact is that hungry animals will in some circumstances work more vigorously even for motivationally irrelevant outcomes such as water [2, 3], which seems highly counterintuitive. 
Second, contrary to the emphasis theoretical accounts have placed on the effects of dopamine (DA) on learning to choose between actions, the most overt behavioral effects of DA interventions are similar swings in undirected vigor [4], at least part of which appear immediately, without learning [5]. Finally, computational theories fail to deliver on the close link they trumpet between DA, behavior, and reinforcement learning (RL; eg [6]), as they do not address the whole experimental paradigm of free-operant tasks [7], whence hail those and many other results. Rather than the standard RL problem of discrete choices between alternatives at prespecified timesteps [8], free-operant experiments investigate tasks in which subjects pace their own responding (typically on a lever or other manipulandum). The primary choice in these tasks is of how rapidly/vigorously to behave, rather than what behavior to choose (as typically only one relevant action is available). RL models are silent about these aspects, and thus fail to offer a principled understanding of the policies selected by the animals.

[Figure 1 — axes: (a) rate per minute vs. seconds since reinforcement (Hungry/Sated); (b) responses/min vs. reinforcements per hour; (c) LPs in 30 minutes vs. FR schedule.]

Figure 1: (a) Leverpress (blue, right) and consummatory nose poke (red, left) response rates of rats leverpressing for food on a modified RI30 schedule. Hungry rats (open circles) clearly press the lever at a higher rate than sated rats (filled circles). Data from [11], averaged over 19 rats in each group. (b) The relationship between rate of responding and rate of reinforcement (reciprocal of the interval) on an RI schedule is hyperbolic (of the form y = B · x/(x + x0)). This is an instantiation of Herrnstein’s matching law for one response (adapted from [9]). (c) Total number of leverpresses per session averaged over five 30-minute sessions by rats pressing for food on different FR schedules.
Rats with nucleus accumbens 6-OHDA dopamine lesions (gray) press significantly less than control rats (black), with the difference larger for higher ratio requirements. Adapted from [12]. Here, we address these issues by constructing an RL account of behavior rates in free-operant settings (Sections 2, 3). We consider optimal control in a continuous-time Markov Decision Process (MDP), in which agents must choose both an action and the latency with which to emit it (ie how vigorously, or at what instantaneous rate, to perform it). Our model treats response vigor as being determined normatively, as the outcome of a battle between the cost of behaving more expeditiously and the benefit of achieving desirable outcomes more quickly. We show that this simple, normative framework captures many classic features of animal behavior that are obscure in our and others’ earlier treatments (Section 4). These include the characteristic time-dependent profiles of response rates on tasks with different payoff scheduling [7], the hyperbolic relationship between response rate and payoff [9], and the difference in response rates between tasks in which reinforcements are allocated based on the number of responses emitted and those allocating reinforcements based on the passage of time [10]. A key feature of this model is that response rates are strongly dependent on the expected average reward rate, because this determines the opportunity cost of sloth. By influencing the value of reinforcers — and through this, the average reward rate — motivational states such as hunger influence the output response latencies (and not only response choice). Thus, in our model, hungry animals should optimally also work harder for water, since in typical circumstances, this should allow them to return more quickly to working for food.
Further, we identify tonic levels of dopamine with the representation of average reward rate, and thereby suggest an account of a wealth of experiments showing that DA influences response vigor [4, 5], thus complementing existing ideas about the role of phasic DA signals in learned action selection (Section 5). 2 Free-operant behavior We consider the free-operant scenario common in experimental psychology, in which an animal is placed in an experimental chamber, and can choose freely which actions to emit and when. Most actions have no programmed consequences; however, one action (eg leverpressing; LP) is rewarded with food (which falls into a food magazine) according to an experimenter-determined schedule of reinforcement. Food delivery makes a characteristic sound, signalling its availability for harvesting via a nose poke (NP) into the magazine. The schedule of reinforcement defines the (possibly stochastic) relationship between the delivery of a reward and one or both of (a) the number of LPs, and (b) the time since the last reward was delivered. In common use are fixed-ratio (FR) schedules, in which a fixed number of LPs is required to obtain a reinforcer; random-ratio (RR) schedules, in which each LP has a constant probability of being reinforced; and random interval (RI) schedules, in which the first LP after an (exponentially distributed) interval of time has elapsed, is reinforced. Schedules are often labelled by their type and a parameter, so RI30 is a random interval schedule with the exponential waiting time having a mean of 30 seconds [7]. Different schedules induce different patterns of responding [7]. Fig 1a shows response metrics from rats leverpressing on an RI30 schedule. Leverpressing builds up to a relatively constant rate following a rather long pause after gaining each reward, during which the food is consumed. Hungry rats leverpress more vigorously than sated ones. 
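The three schedule types described above can be simulated directly. A minimal sketch (function and state names are mine, not from the paper) returns a predicate deciding whether a leverpress at time t is reinforced:

```python
import random

def make_schedule(kind, param, seed=0):
    # kind: "FR" (reward every `param`-th press), "RR" (each press rewarded
    # with probability 1/param), or "RI" (first press after an exponentially
    # distributed interval of mean `param` seconds since the last reward).
    rng = random.Random(seed)
    state = {"presses": 0, "armed_at": rng.expovariate(1.0 / param)}
    def press(t):
        if kind == "FR":
            state["presses"] += 1
            if state["presses"] >= param:
                state["presses"] = 0
                return True
            return False
        if kind == "RR":
            return rng.random() < 1.0 / param
        if kind == "RI":
            if t >= state["armed_at"]:
                # harvested: draw the next exponential arming interval
                state["armed_at"] = t + rng.expovariate(1.0 / param)
                return True
            return False
        raise ValueError(kind)
    return press
```

Pressing once per second on RI30 yields roughly one reward per 30 s no matter how many extra presses are emitted, while on FR5 exactly every fifth press pays off — the contrast behind the ratio/interval response-rate difference mentioned above.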
A similar overall pattern is also characteristic of responding on RR schedules. Figure 1b shows the total number of LP responses in a 30 minute session for different interval schedules. The hyperbolic relationship between the reward rate (the inverse of the interval) and the response rate is a classic hallmark of free operant behavior [9].

3 The model

We model a free-operant task as a continuous-time MDP. Based on its state, the agent chooses both an action (a), and a latency (τ) at which to emit it. After time τ has elapsed, the action is completed, the agent receives rewards and incurs costs associated with its choice, and then selects a new (a, τ) pair based on its new state. We define three possible actions a ∈ {LP, NP, other}, where we take a = other to include the various miscellaneous behaviors such as grooming, rearing, and sniffing which animals typically perform during the experiment. For simplicity we consider unit actions, with the latency τ related to the vigor with which this unit is performed. To account for consumption time (which is nonnegligible [11, 13]), if the agent nose-pokes and food is available, a predefined time t_eat passes before the next decision point (and the next state) is reached. Crucially, performing actions can incur costs as well as gain rewards. Following Staddon [14], we assume one part of the cost of an action to be proportional to the vigor of its execution, ie inversely proportional to τ. The constant of proportionality K_v depends on both the previous and the current action, since switching between different action types can require travel between different parts of the experimental chamber (say, from the magazine to the lever), and can thus be more costly. Each action also incurs a fixed ‘internal’ reward or cost of ρ(a) per unit, typically with other being rewarding. The reinforcement schedule defines the probability of reward delivery for each state-action-latency triplet.
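To make the ratio/interval distinction concrete, here is a minimal, hypothetical simulation of constant-rate leverpressing on RR and RI schedules (this is an illustrative sketch, not the model itself; the session length, press periods, and schedule parameters are arbitrary assumptions). On the ratio schedule the molar reward count scales with the press rate, while on the interval schedule it saturates near one reward per scheduled interval:

```python
import random

def simulate_rr(p_reward, press_period, session_len, seed=0):
    """Random-ratio schedule: each press is reinforced with a fixed probability."""
    rng = random.Random(seed)
    t, rewards = 0.0, 0
    while t < session_len:
        t += press_period                    # one leverpress every press_period seconds
        if rng.random() < p_reward:
            rewards += 1
    return rewards

def simulate_ri(mean_interval, press_period, session_len, seed=0):
    """Random-interval schedule: the first press after an exponentially
    distributed interval has elapsed is reinforced."""
    rng = random.Random(seed)
    t, rewards = 0.0, 0
    baited_at = rng.expovariate(1.0 / mean_interval)
    while t < session_len:
        t += press_period
        if t >= baited_at:                   # a reward was waiting; this press collects it
            rewards += 1
            baited_at = t + rng.expovariate(1.0 / mean_interval)
    return rewards

session = 36000.0                            # a long (10 hour) session, for stable counts
# On RR10, doubling the press rate roughly doubles the reward count ...
rr_slow = simulate_rr(0.1, 5.0, session)
rr_fast = simulate_rr(0.1, 2.5, session)
# ... but on RI30 it barely changes it: rewards arrive at most about once per 30 s.
ri_slow = simulate_ri(30.0, 5.0, session)
ri_fast = simulate_ri(30.0, 2.5, session)
```

This molar feedback difference is the backdrop for the ratio-versus-interval response-rate comparisons discussed below.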
An available reward can be harvested by a = NP into the magazine, and we assume that the thereby obtained subjective utility U(r) of the food reward is motivation-dependent, such that food is worth more to a hungry animal than to a sated one. We consider the simplified case of a state space comprised of all the parameters relevant to the task. Specifically, the state space includes the identity of the previous action, an indicator as to whether a reward is available in the food magazine, and, as necessary, the number of LPs since the previous reinforcement (for FR) or the elapsed time since the previous LP (for RI). The transitions between the states P(S′|S, a, τ) and the reward function P_r(S, a, τ) are defined by the dynamics of the schedule of reinforcement, and all rewards and costs are harvested at state transitions and considered as point events. In the following we treat the problem of optimising a policy (which action to take and with what latency, given the state) in order to maximize the average rate of return (rewards minus costs per time). An exponentially discounted model gives the same qualitative results.

Figure 2: Data generated by the model captures the essence of the behavioral data: Leverpress (solid blue; circles) and nose poke (dashed red; stars) response rates on (a) an RR10 schedule and (b) a matched (yoked) RI schedule show constant LP rates which are higher for the ratio schedule. (c) The relationship between the total number of responses (circles) and rate of reinforcement is hyperbolic (solid line: hyperbolic curve fit). The mean latency to leverpress (dashed line) decreases as the rate of reinforcement increases.

In the average reward case [15, 16], the Bellman equation for the long-term differential (or
average-adjusted) value of state S is:

$$ V^*(S) = \max_{a,\tau}\left\{ \rho(a) - \frac{K_v(a_{\mathrm{prev}}, a)}{\tau} + U(r)\,P_r(S, a, \tau) - \tau \cdot r + \int dS'\, P(S'|S, a, \tau)\, V^*(S') \right\} \quad (1) $$

where r is the long-term average reward rate (whose subtraction from the value quantifies the opportunity cost of delay). Building on ideas from [16], we suggest that the average reward rate is reported by tonic (baseline) levels of dopamine (and not serotonin [16]) in basal ganglia structures relevant for action selection, and that changes in tonic DA (eg as a result of pharmacological interventions) would thus alter the assumed average reward rate. In this paper, we eschew learning, and examine the steady state behavior that arises when actions are chosen stochastically (via the so-called softmax or Boltzmann distribution) from the optimal one-step look-ahead model-based Q(S, a, τ) state-action-latency values. For ratio schedules, the simple transition structure of the task allows the Bellman equation to be solved analytically to determine the Q values. For interval schedules, we use average-reward value iteration [15] with time discretized at a resolution of 100ms. For simulations (eg of dopaminergic manipulations) where r was assumed to change independently of any change in the task contingencies, we used value iteration to find values approximately satisfying the Bellman equation (which is no longer exactly solvable). Our overriding aim is to replicate basic aspects of free operant behavior qualitatively, in order to understand the normative foundations of response vigor. We do not fit the parameters of the model to experimental data in a quantitative way, and the results we describe below are general, robust characteristics of the model.

4 Results

Fig 2a depicts the behavior of our model on an RR10 schedule. In rough accordance with the behavior displayed by animals (which is similar to that shown in Fig 1a), the LP rate is constant over time, bar a pause for consumption.
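The tradeoff in eq. (1) between the vigor cost K_v/τ and the opportunity cost τ·r can be illustrated with a toy steady-state calculation (an illustrative sketch with made-up parameter values, not the full average-reward value iteration; it drops the schedule state, harvesting, and competing actions). Consider an agent that only leverpresses on a random-ratio schedule paying utility U with probability p per press, at a fixed latency τ; the long-run net reward rate is then (pU − K_v/τ)/τ. Maximizing this over a grid of latencies recovers the first-order condition that at the optimum the marginal vigor saving K_v/τ² equals the average reward rate, ie τ* = √(K_v/r):

```python
K_v, p, U = 1.0, 0.1, 10.0        # hypothetical vigor cost and RR10 reward parameters

def avg_rate(tau):
    """Steady-state net reward per second when pressing every tau seconds."""
    return (p * U - K_v / tau) / tau

# grid search over latencies from 0.01 s to 20.00 s
taus = [i / 100.0 for i in range(1, 2001)]
tau_star = max(taus, key=avg_rate)
r_bar = avg_rate(tau_star)

# at the optimum, the marginal vigor cost K_v / tau^2 equals the
# opportunity cost of time, i.e. the average reward rate r_bar
assert abs(K_v / tau_star ** 2 - r_bar) < 1e-2
assert abs(tau_star - (K_v / r_bar) ** 0.5) < 0.02
```

With these numbers the optimum is at τ* = 2 s, where the agent nets 0.25 utility units per second; pressing either faster or slower lowers the long-run rate.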
Fig 2b depicts the model’s behavior in a yoked random interval schedule, in which the intervals between rewards were set to match exactly the intervals obtained by the agent trained on the ratio schedule in Fig 2a. The response rate is again constant over time, but it is also considerably lower than that in the corresponding RR schedule, although the external reward density is similar. This phenomenon has also been observed experimentally, and although the apparent anomaly has been much discussed in the associative learning literature, its explanation is not fully resolved [10]. Our model suggests that it is the result of an optimal cost/benefit tradeoff. We can analyse this difference by considering the Q values for leverpressing at different latencies in random schedules:

$$ Q(S_{nr}, LP, \tau) = \rho(LP) - \frac{K_v(LP, LP)}{\tau} - \tau \cdot r + P(S_r|\tau)\,V^*(S_r) + \left[1 - P(S_r|\tau)\right]V^*(S_{nr}) \quad (2) $$

where we are looking at consecutive leverpresses in the absence of available reward, and S_r and S_nr designate the states in which a reward is or is not available in the magazine, respectively. In ratio schedules, since P(S_r|τ) is independent of τ, the optimizing latency is τ*_LP = √(K_v(LP, LP)/r), its inverse defining the optimal rate of leverpressing. In interval schedules, however, P(S_r|τ) = 1 − exp{−τ/T}, where T is the schedule interval. Taking the derivative of eq. (2) we find that the optimal latency to leverpress τ*_LP satisfies

$$ \frac{K_v(LP, LP)}{\tau^{*2}_{LP}} - r + \frac{1}{T}\left[V^*(S_r) - V^*(S_{nr})\right]\exp\{-\tau^*_{LP}/T\} = 0. $$

Although this is no longer analytically solvable, it is easily seen that this latency will always be longer than that found above for ratio schedules. Intuitively, since longer inter-response intervals increase the probability of reward per press in interval schedules but not in ratio schedules, the optimal leverpressing rate is lower in the former than in the latter. Fig 2c shows the average number of LPs in a 5 minute session for different interval schedules.
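This comparison can be checked numerically. Under illustrative parameter values (K_v, r, T, and the value difference ΔV = V*(S_r) − V*(S_nr) are all assumptions, not values fitted by the model), simple bisection on the first-order condition above locates the interval-schedule optimum, which indeed exceeds the ratio-schedule latency √(K_v/r):

```python
import math

K_v, r = 1.0, 0.1                  # hypothetical vigor-cost and average-reward values
T, dV = 30.0, 2.0                  # schedule interval and assumed V*(Sr) - V*(Snr)

tau_ratio = math.sqrt(K_v / r)     # analytic optimum for ratio schedules

def foc(tau):
    """First-order condition for leverpress latency on an interval schedule."""
    return K_v / tau ** 2 - r + (dV / T) * math.exp(-tau / T)

# foc is decreasing in tau, positive at tau_ratio (the extra term is positive),
# and negative for large tau, so bisection finds the interval-schedule optimum
lo, hi = tau_ratio, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if foc(mid) > 0 else (lo, mid)
tau_interval = 0.5 * (lo + hi)

# optimal responding is slower (longer latency) on the interval schedule
assert tau_interval > tau_ratio
```

Because foc(τ) at τ = √(K_v/r) reduces to the strictly positive term (ΔV/T)·exp(−τ/T), the root always lies to the right of the ratio-schedule latency, for any positive ΔV.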
This ‘molar’ measure of rate shows the well documented hyperbolic relationship (cf Fig 1b). On the ‘molecular’ level of single action choices, the mean latency ⟨τ_LP⟩ between consecutive LPs decreases as the probability of reinforcement increases. This measure of response vigor is actually more accurate than the overall response measure, as it is not contaminated by competition with other actions, or confounded with the number of reinforcers per session for different schedules (and the time forgone when consuming them). For this reason, although we (correctly; see [13]) predict that inter-response latency should slow for higher ratio requirements, raw LP counts can actually increase, as in Fig. 1c, probably due to fewer rewards and less time spent eating [13].

5 Drive and dopamine

Having provided a qualitative account of the basic patterns of free operant rates of behavior, we turn to the main theoretical conundrum — the effects of drive and DA manipulations on response vigor. The key to understanding these is the role that the average reward r plays in the tradeoffs determining optimal response vigor. In effect, the average expected reward per unit time quantifies the opportunity cost for doing nothing (and receiving no reward) for that time; its increase thus produces general pressure for faster work. A direct consequence of making the agent hungrier is that the subjective utility of food is enhanced. This will have interrelated effects on the optimal average reward r, the optimal values V*, and the resultant optimal action choices and vigors. Notably, so long as the policy obtains food, its average reward rate will increase. Consider a fixed or random ratio schedule. The increase in r will increase the optimal LP rate 1/τ*_LP = √(r/K_v(LP, LP)), as the higher reward utility offsets higher procurement costs. Importantly, because the optimal τ* has a similar dependence on r even for actions irrelevant to obtaining food, they also become more vigorous.
The explanation of this effect is presented graphically in Fig 3e. The higher r increases the cost of sloth, since every τ time without reward forgoes an expected (τ · r) mean reward. Higher average rewards penalize late actions more than they do early ones, thus tilting action selection toward faster behavior, for all pre-potent actions. Essentially, hunger encourages the agent to complete irrelevant actions faster, in order to be able to resume leverpressing more quickly. For other schedules, the same effects generally hold (although the analytical reasoning is complicated by the fact that the optimal latencies may in these cases depend not only on the new average reward but also on the new values V*). Fig 3a shows simulated responding on an RI25 schedule in which the internal reward for the food-irrelevant action other has been set high enough to warrant non-negligible base responding. Fig 3b shows that when the utility of food is increased by 50%, the agent chooses to leverpress more, at the expense of other actions.

Figure 3: The effects of drive on response rates. (a) Responding on a RI25 schedule, with high internal rewards (0.35) for a=other (open circles). (b) The effects of hunger: U(r) was changed from 10 to 15. (c) The effect of an irrelevant drive (hungry animals leverpressing for water rewards): r was increased by 4% compared to (a). (d) Mean latencies to responding ⟨τ⟩ for LP and other in baseline (a; black), increased hunger (b; white) and irrelevant drive (c; gray). (e) Q values for leverpressing at different latencies τ. In black (top) are the unadjusted Q values, before subtracting (τ·r). In red (middle, solid) and green (bottom, solid) are the values adjusted for two different average reward rates. The higher reward rate penalizes late actions more, thereby causing faster responding, as shown by the corresponding softmaxed action probability curves (dashed). (f) Simulation of DA depletion: overall leverpress count over 30 minute sessions (each bar averaging 15 sessions), for different FR requirements. In black is the control condition, and in gray is simulated DA depletion, attained by lowering r by 60%. The effects of the depletion seem more pronounced in higher schedules (compare to Fig 1c), but this actually results from the interaction with the number of rewards attained (see text).

This illustrates the ‘directing’ effect of motivation, by which the agent is directed more forcefully toward the motivationally relevant action [17]. Furthermore, the second, ‘driving’ effect, by which motivation increases vigor globally [17], is illustrated in Fig 3d, which shows that, in fact, the latency to both actions has decreased. Thus, although selected less often, when other is selected, it is performed more vigorously than it was when the agent was sated. This general drive effect can be better isolated if we examine hungry agents leverpressing for water (rather than food), without competition from actions for food. We can view our leverpressing MDP as a portion of a larger one, which also includes (for instance) occasional opportunities for visits to a home cage where food is available. Without explicitly specifying all this extra structure, a good approximation is to take hunger as again causing an increase in the global rate of reinforcement r, reflecting the increase in the utility of food received elsewhere. Fig 3c shows the effects on responding on an interval schedule, of estimating the average reward rate to be 4% higher than in Fig 3a, and deriving new Q values from the previous V* with this new r as illustrated in Fig 3e.
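The tilt of the softmaxed latency distribution toward faster responding can be reproduced in a few lines. Below is a hypothetical one-step Q value for a repeated action (internal reward ρ minus the vigor cost K_v/τ minus the opportunity cost τ·r, ignoring the schedule-dependent terms of the full model; all numbers are illustrative) softmaxed over a grid of latencies — raising the average reward rate r shifts the chosen latencies earlier:

```python
import math

def mean_softmax_latency(r, rho=1.0, K_v=1.0, temp=0.2):
    """Mean latency under a Boltzmann (softmax) policy over a grid of latencies."""
    taus = [i / 10.0 for i in range(1, 101)]           # 0.1 .. 10.0 seconds
    q = [rho - K_v / t - t * r for t in taus]          # opportunity-cost-adjusted Q
    m = max(q)
    w = [math.exp((qi - m) / temp) for qi in q]        # shift by max for stability
    z = sum(w)
    return sum(t * wi for t, wi in zip(taus, w)) / z

slow = mean_softmax_latency(r=0.05)   # low average reward rate
fast = mean_softmax_latency(r=0.20)   # higher r: sloth is more expensive
assert fast < slow                    # responding speeds up across the board
```

Note that r enters only through the linear penalty −τ·r, which leaves early latencies almost untouched while strongly penalizing late ones — exactly the tilt described for Fig 3e.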
As above, all behaviors are performed faster (Fig 3d, gray bars), as a result of the higher ‘drive’. How do these drive effects relate to dopamine? Pharmacological and lesion studies show that enhancing DA levels (through agonists such as amphetamine) increases general activity [5, 18, 19], while depleting or antagonising DA causes a general slowing of responding (eg [4]). Fig. 1c is representative of a host of results from the lab of Salamone [4, 12] which show that lower levels of DA in the nucleus accumbens (a structure in the basal ganglia implicated in action selection) result in lower response rates. This effect seems more pronounced in higher fixed-ratio schedules, those requiring more work per reinforcer. As a result of this apparent dependence on the response requirement, Salamone and his colleagues have hypothesized that DA enables animals to overcome higher work demands. We suggest that tonic levels of DA represent the average reward rate (a role tentatively proposed for serotonin in [16]). Thus a higher tonic level of DA represents a situation akin to higher drive, in which behavior is more vigorous, and lower tonic levels of DA cause a general slowing of behavior. Fig. 3f shows the simulated response counts for different FR schedules in two conditions. The control condition is the standard model described above; DA depletion was modeled by decreasing tonic DA levels (and therefore r) to 40% of their original levels. The results match the data in Fig. 1c. Here, the apparently small effect on the number of LPs for low ratio schedules actually arises because of the large amount of time spent eating. Thus, according to the model, DA is not really allowing animals to cope with higher work requirements, but rather is important for optimal choice of vigor at any work requirement, with the slowing effect of DA depletion more prominent (in the crude measure of LPs per session) when more time is spent leverpressing.
6 Discussion

The present model brings the computational machinery and neural grounding of RL models fully into contact with the vast reservoir of data from free-operant tasks. Classic quantitative accounts of operant behavior (such as Herrnstein’s matching law [9], and variations such as melioration) lack RL’s normative grounding in sound control theory, and tend instead toward descriptive curve-fitting. Most of these theories do not address the fine-scale (molecular) structure of behavior, and instead concentrate on fairly crude molar measures such as total number of leverpresses over long durations. In addition to the normative starting point it offers for investigations of response vigor, our theory provides a relatively fine scalpel for dissecting the temporal details of behavior, such as the distributions of inter-response intervals at particular state transitions. There is thus great scope for revealing re-analyses of many existing data sets. In particular, the effects of generalized drive have proved mixed and complex [17]. Our theory suggests that studies of inter-response intervals (eg Fig 3d) may reveal more robust changes in vigor, uncontaminated by shifts in overall action propensity. Response vigor and dopamine’s role in controlling it have appeared in previous RL models of behavior [20, 21], but only as fairly ad-hoc bolt-ons — for instance, using repeated choices between doing nothing versus something to capture response latency. Here, these aspects are wholly integrated into the explanatory framework: optimizing response vigor is treated as itself an RL problem, with a natural dopaminergic substrate. To account for immediate (unlearned) effects of motivational or dopaminergic manipulations, the main assumption we make is that tonic levels of DA can be sensitive to predicted changes in the average reward occasioned by changes in the motivational state, and that behavioral policies are in turn immediately affected.
This sensitivity would be easy to embed in a temporal-difference RL system, producing flexible adaptation of response vigor. By contrast, due to the way they cache outcome values, the action choices of such RL systems are characteristically insensitive to the ‘directing’ effects of motivational manipulations [22]. In animal behavior, ‘habitual actions’ (the ones associated with the DA system) are indeed motivationally insensitive for action choice, but show a direct effect of drive on vigor [23]. Our model is easy to accommodate within a framework of temporal difference (TD) learning. Thus, it naturally preserves the link between phasic DA signals and online learning of optimal values [24]. We further elaborate this link by suggesting an additional role for tonic levels of DA in online vigor selection. A major question remains as to whether phasic responses (which are known to correlate with response latency [25]) play an additional role in determining response vigor. Further, it is pressing to reconcile the present account with our previous suggestion (based on microdialysis findings) [16] that tonic levels of DA might track average punishment. The most critical avenues to develop this work will be an account of learning, and neurally and psychologically more plausible state and temporal representations. On-line value learning should be a straightforward adaptation of existing TD models of phasic DA based on the continuous-time semi-Markov setting [26]. The representation of state is more challenging — the assumption of a fully observable state space automatically appropriate for the schedule of reinforcement is not realistic. Indeed, apparently sub-optimal actions emitted by animals, eg engaging in excessive nose-poking even when a reward has not audibly dropped into the food magazine [11], may provide clues to this issue. 
Finally, it will be crucial to consider the fact that animals’ decisions about vigor may translate only noisily into response times, due, for instance, to the variability of internal timing [27].

Acknowledgments

This work was funded by the Gatsby Charitable Foundation, a Dan David fellowship (YN), the Royal Society (ND) and the EU BIBA project (ND and PD). We are grateful to Jonathan Williams for discussions on free operant behavior.

References

[1] Dickinson A. and Balleine B.W. The role of learning in the operation of motivational systems. In Steven’s Handbook of Experimental Psychology, Volume 3, pages 497–533. John Wiley & Sons, New York, 2002.
[2] Hull C.L. Principles of behavior: An introduction to behavior theory. Appleton-Century-Crofts, New York, 1943.
[3] Bélanger D. and Tétreau B. L’influence d’une motivation inappropriée sur le comportement du rat et sa fréquence cardiaque. Can. J. of Psych., 15:6–14, 1961.
[4] Salamone J.D. and Correa M. Motivational views of reinforcement: implications for understanding the behavioral functions of nucleus accumbens dopamine. Behavioural Brain Research, 137:3–25, 2002.
[5] Ikemoto S. and Panksepp J. The role of nucleus accumbens dopamine in motivated behavior: a unifying interpretation with special reference to reward-seeking. Brain Res. Rev., 31:6–41, 1999.
[6] Schultz W. Predictive reward signal of dopamine neurons. J. Neurophys., 80:1–27, 1998.
[7] Domjan M. The principles of learning and behavior. Brooks/Cole, Pacific Grove, California, 3rd edition, 1993.
[8] Sutton R.S. and Barto A.G. Reinforcement Learning: An Introduction. MIT Press, 1998.
[9] Herrnstein R.J. On the law of effect. J. of the Exp. Anal. of Behav., 13(2):243–266, 1970.
[10] Dawson G.R. and Dickinson A. Performance on ratio and interval schedules with matched reinforcement rates. Q. J. of Exp. Psych. B, 42:225–239, 1990.
[11] Niv Y., Daw N.D., Joel D., and Dayan P.
Motivational effects on behavior: Towards a reinforcement learning model of rates of responding. In CoSyNe, Salt Lake City, Utah, 2005.
[12] Aberman J.E. and Salamone J.D. Nucleus accumbens dopamine depletions make rats more sensitive to high ratio requirements but do not impair primary food reinforcement. Neuroscience, 92(2):545–552, 1999.
[13] Foster T.M., Blackman K.A., and Temple W. Open versus closed economies: performance of domestic hens under fixed-ratio schedules. J. of the Exp. Anal. of Behav., 67:67–89, 1997.
[14] Staddon J.E.R. Adaptive dynamics. MIT Press, Cambridge, Mass., 2001.
[15] Mahadevan S. Average reward reinforcement learning: Foundations, algorithms and empirical results. Machine Learning, 22:1–38, 1996.
[16] Daw N.D., Kakade S., and Dayan P. Opponent interactions between serotonin and dopamine. Neural Networks, 15(4-6):603–616, 2002.
[17] Bolles R.C. Theory of Motivation. Harper & Row, 1967.
[18] Carr G.D. and White N.M. Effects of systemic and intracranial amphetamine injections on behavior in the open field: a detailed analysis. Pharmacol. Biochem. Behav., 27:113–122, 1987.
[19] Jackson D.M., Andén N., and Dahlström A. A functional effect of dopamine in the nucleus accumbens and in some other dopamine-rich parts of the rat brain. Psychopharmacologia, 45:139–149, 1975.
[20] Dayan P. and Balleine B.W. Reward, motivation and reinforcement learning. Neuron, 36:285–298, 2002.
[21] McClure S.M., Daw N.D., and Montague P.R. A computational substrate for incentive salience. Trends in Neurosc., 26(8):423–428, 2003.
[22] Daw N.D., Niv Y., and Dayan P. Uncertainty based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature Neuroscience, 8(12):1704–1711, 2005.
[23] Dickinson A., Balleine B., Watt A., Gonzalez F., and Boakes R.A. Motivational control after extended instrumental training. Anim. Learn. and Behav., 23(2):197–206, 1995.
[24] Montague P.R., Dayan P., and Sejnowski T.J.
A framework for mesencephalic dopamine systems based on predictive Hebbian learning. J. of Neurosci., 16(5):1936–1947, 1996.
[25] Satoh T., Nakai S., Sato T., and Kimura M. Correlated coding of motivation and outcome of decision by dopamine neurons. J. of Neurosci., 23(30):9913–9923, 2003.
[26] Daw N.D., Courville A.C., and Touretzky D.S. Timing and partial observability in the dopamine system. In T.G. Dietterich, S. Becker, and Z. Ghahramani, editors, NIPS, volume 14, Cambridge, MA, 2002. MIT Press.
[27] Gallistel C.R. and Gibbon J. Time, rate and conditioning. Psych. Rev., 107:289–344, 2000.
Fixing two weaknesses of the Spectral Method

Kevin J. Lang
Yahoo Research
3333 Empire Ave, Burbank, CA 91504
langk@yahoo-inc.com

Abstract

We discuss two intrinsic weaknesses of the spectral graph partitioning method, both of which have practical consequences. The first is that spectral embeddings tend to hide the best cuts from the commonly used hyperplane rounding method. Rather than cleaning up the resulting suboptimal cuts with local search, we recommend the adoption of flow-based rounding. The second weakness is that for many “power law” graphs, the spectral method produces cuts that are highly unbalanced, thus decreasing the usefulness of the method for visualization (see figure 4(b)) or as a basis for divide-and-conquer algorithms. These balance problems, which occur even though the spectral method’s quotient-style objective function does encourage balance, can be fixed with a stricter balance constraint that turns the spectral mathematical program into an SDP that can be solved for million-node graphs by a method of Burer and Monteiro.

1 Background

Graph partitioning is the NP-hard problem of finding a small graph cut subject to the constraint that neither side of the resulting partitioning of the nodes is “too small”. We will be dealing with several versions: the graph bisection problem, which requires perfect 1/2 : 1/2 balance; the β-balanced cut problem (with β a fraction such as 1/3), which requires at least β : (1 − β) balance; and the quotient cut problem, which requires the small side to be large enough to “pay for” the edges in the cut. The quotient cut metric is c / min(a, b), where c is the cutsize and a and b are the sizes of the two sides of the cut. All of the well-known variants of the quotient cut metric (e.g. normalized cut [15]) have similar behavior with respect to the issues discussed in this paper. The spectral method for graph partitioning was introduced in 1973 by Fiedler and Donath & Hoffman [6].
In the mid-1980’s Alon & Milman [1] proved that spectral cuts can be at worst quadratically bad; in the mid-1990’s Guattery & Miller [10] proved that this analysis is tight by exhibiting a family of n-node graphs whose spectral bisections cut O(n^{2/3}) edges versus the optimal O(n^{1/3}) edges. On the other hand, Spielman & Teng [16] have proved stronger performance guarantees for the special case of spacelike graphs. The spectral method can be derived by relaxing a quadratic integer program which encodes the graph bisection problem (see section 3.1). The solution to this relaxation is the “Fiedler vector”, or second smallest eigenvector of the graph’s discrete Laplacian matrix, whose elements x_i can be interpreted as an embedding of the graph on the line. To obtain a specific cut, one must apply a “rounding method” to this embedding. The hyperplane rounding method chooses one of the n − 1 cuts which separate the nodes whose x_i values lie above and below some split value x̂.

Figure 1: The spectral embedding hides the best solution from hyperplane rounding. (A) Graph with nearly balanced 8-cut; (B) Spectral Embedding; (C) Notional Flow-based Embedding.

2 Using flow to find cuts that are hidden from hyperplane rounding

Theorists have long known that the spectral method cannot distinguish between deep cuts and long paths, and that this confusion can cause it to cut a graph in the wrong direction, thereby producing the spectral method’s worst-case behavior [10]. In this section we will show by example that even when the spectral method is not fooled into cutting in the wrong direction, the resulting embedding can hide the best cuts from the hyperplane rounding method. This is a possible explanation for the frequently made empirical observation (see e.g. [12]) that hyperplane roundings of spectral embeddings are noisy and therefore benefit from cleanup with a local search method such as Fiduccia-Mattheyses [8].
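The Fiedler-vector-plus-hyperplane-sweep pipeline can be sketched end to end in a few dozen lines. This is an illustrative toy, not the paper’s implementation: a pure-Python power iteration stands in for an ARPACK-style eigensolver, and the graph is a small “barbell” (two 4-cliques joined by one bridge edge) chosen so that the best quotient cut is obvious:

```python
import random

def fiedler_vector(adj, iters=3000, seed=0):
    """Second-smallest Laplacian eigenvector via power iteration on sigma*I - L,
    with the constant vector (the eigenvalue-0 mode) projected out each step."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    sigma = 2.0 * max(deg)                     # upper bound on Laplacian eigenvalues
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]
    for _ in range(iters):
        m = sum(x) / n
        x = [xi - m for xi in x]               # deflate the all-ones eigenvector
        y = [sigma * x[i] - (deg[i] * x[i] - sum(adj[i][j] * x[j] for j in range(n)))
             for i in range(n)]                # y = (sigma*I - L) x
        norm = sum(v * v for v in y) ** 0.5
        x = [v / norm for v in y]
    return x

def best_hyperplane_cut(adj, order):
    """Sweep the n-1 splits of the node ordering; return (score, cutsize, k)."""
    n = len(adj)
    best = None
    for k in range(1, n):
        left = set(order[:k])
        cut = sum(adj[i][j] for i in left for j in range(n) if j not in left)
        score = cut / min(k, n - k)            # quotient cut score c / min(a, b)
        if best is None or score < best[0]:
            best = (score, cut, k)
    return best

# barbell graph: two 4-cliques (nodes 0-3 and 4-7) joined by the edge (3, 4)
n = 8
adj = [[0] * n for _ in range(n)]
edges = [(a, b) for part in ([0, 1, 2, 3], [4, 5, 6, 7])
         for a in part for b in part if a < b]
edges.append((3, 4))
for a, b in edges:
    adj[a][b] = adj[b][a] = 1

x = fiedler_vector(adj)
order = sorted(range(n), key=lambda i: x[i])   # embedding on the line
score, cut, k = best_hyperplane_cut(adj, order)
assert cut == 1 and k == 4                     # the sweep finds the bridge edge
```

On this easy graph the sweep succeeds; the point of this section is that on embeddings like figure 1(b), no such sweep split can recover the best cut, however the threshold is chosen.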
Consider the graph in figure 1(a), which has a near-bisection cutting 8 edges. For this graph the spectral method produces the embedding shown in figure 1(b), and recommends that we make a vertical cut (across the horizontal dimension which is based on the Fiedler vector). This is correct in a generalized sense, but it is obvious that no hyperplane (or vertical line in this picture) can possibly extract the optimal 8-edge cut. Some insight into why spectral embeddings tend to have this problem can be obtained from the spectral method’s electrical interpretation. In this view the graph is represented by a resistor network [7]. Current flowing in this network causes voltage drops across the resistors, thus determining the nodes’ voltages and hence their positions. When current flows through a long series of resistors, it induces a progressive voltage drop. This is what causes the excessive length of the embeddings of the horizontal girder-like structures which are blocking all vertical hyperplane cuts in figure 1(b). If the embedding method were somehow not based on current, but rather on flow, which does not distinguish between a pipe and a series of pipes, then the long girders could retract into the two sides of the embedding, as suggested by figure 1(c), and the best cut would be revealed. Because theoretical flow-like embedding methods such as [14] are currently not practical, we point out that in cases like figure 1(b), where the spectral method has not chosen an incorrect direction for the cut, one can use an S-T max flow problem with the flow running in the recommended direction (horizontally for this embedding) to extract the good cut even though it is hidden from all hyperplanes. We currently use two different flow-based rounding methods. A method called MQI looks for quotient cuts, and is already described in [13]. Another method, that we shall call Midflow, looks for β-balanced cuts. 
The input to Midflow is a graph and an ordering of its nodes (obtained e.g. from a spectral embedding or from the projection of any embedding onto a line). We divide the graph’s nodes into 3 sets F, L, and U. The sets F and L respectively contain the first βn and last βn nodes in the ordering, and U contains the remaining n − 2βn nodes, which are “up for grabs”. We set up an S-T max flow problem with one node for every graph node plus 2 new nodes for the source and sink. For each graph edge there are two arcs, one in each direction, with unit capacity. Finally, the nodes in F are pinned to the source and the nodes in L are pinned to the sink by infinite capacity arcs. This max-flow problem can be solved by a good implementation of the push-relabel algorithm (such as Goldberg and Cherkassky’s hi_pr [4]) in time that empirically is nearly linear with a very good constant factor. Figure 6 shows that solving a MidFlow problem with hi_pr can be 1000 times cheaper than finding a spectral embedding with ARPACK.

Figure 2: A typical example (see section 2.1) where flow-based rounding beats hyperplane rounding, even when the hyperplane cuts are improved with Fiduccia-Mattheyses search. Note that for this spacelike graph, the best quotient cuts have reasonably good balance.
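The Midflow construction can be sketched concretely (an illustrative toy: a simple Edmonds-Karp max-flow solver stands in for an optimized push-relabel implementation, and an 8-node graph with an obvious bridge stands in for a real mesh). We pin the first βn and last βn nodes of the ordering, run max flow, and read off the source side of the minimum cut:

```python
from collections import deque, defaultdict

def min_cut(arcs, s, t):
    """Edmonds-Karp max flow; returns (flow value, source side of the min cut)."""
    cap = defaultdict(float)
    nbrs = defaultdict(set)
    for (u, v), c in arcs.items():
        cap[(u, v)] += c
        nbrs[u].add(v); nbrs[v].add(u)       # residual arcs exist in both directions
    flow = 0.0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:         # BFS for a shortest augmenting path
            u = q.popleft()
            for v in nbrs[u]:
                if v not in parent and cap[(u, v)] > 1e-9:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        aug = min(cap[e] for e in path)
        for u, v in path:                    # push flow, open reverse residual arcs
            cap[(u, v)] -= aug
            cap[(v, u)] += aug
        flow += aug
    side, q = {s}, deque([s])
    while q:                                 # nodes still reachable from the source
        u = q.popleft()
        for v in nbrs[u]:
            if v not in side and cap[(u, v)] > 1e-9:
                side.add(v); q.append(v)
    return flow, side

# toy instance: two 4-cliques joined by the edge (3, 4); node ordering is 0..7
edges = [(a, b) for part in ([0, 1, 2, 3], [4, 5, 6, 7])
         for a in part for b in part if a < b]
edges.append((3, 4))
arcs = {}
for a, b in edges:                           # two unit-capacity arcs per edge
    arcs[(a, b)] = arcs[(b, a)] = 1.0
beta_n = 2                                   # beta = 1/4 with n = 8
for f in range(beta_n):                      # pin set F to the source
    arcs[('s', f)] = float('inf')
for l in range(8 - beta_n, 8):               # pin set L to the sink
    arcs[(l, 't')] = float('inf')

flow, side = min_cut(arcs, 's', 't')
cut_side = sorted(side - {'s'})
assert flow == 1.0 and cut_side == [0, 1, 2, 3]   # the min cut is the bridge
```

The free “up for grabs” nodes (2, 3, 4, 5 here) land on whichever side the minimum cut dictates, which is exactly what lets this rounding consider candidate cuts that no single sweep threshold can express.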
When the goal is finding good β-balanced cuts, MidFlow rounding is strictly more powerful than hyperplane rounding; from a given node ordering hyperplane rounding chooses the best of U + 1 candidate cuts, while MidFlow rounding chooses the best of 2^U candidates, including all of those considered by hyperplane rounding. [Similarly, MQI rounding is strictly more powerful than hyperplane rounding for the task of finding good quotient cuts.]

2.1 A concrete example

The plot in figure 2 shows a number of cuts in a 324,800 node nearly planar graph derived from a 700x464 pixel downward-looking view of some clouds over some mountains.[1] The y-axis of the plot is quotient cut score; smaller values are better. We note in passing that the commonly used split point x̂ = 0 does not yield the best hyperplane cut. Our main point is that the two cuts generated by MidFlow rounding of the Fiedler vector (with β = 1/3 and β = 1/4) are nearly twice as good as the best hyperplane cut. Even after the best hyperplane cut has been improved by taking the best result of 100 runs of a version of Fiduccia-Mattheyses local search, it is still much worse than the cuts obtained by flow-based rounding.

[1] The graph’s edges are unweighted but are chosen by a randomized rule which is more likely to include an edge between two neighboring pixels if they have a similar grey value. Good cuts in the graph tend to run along discontinuities in the image, as one would expect.

Figure 3: This scatter plot of cuts in a 1.6 million node collaborative filtering graph shows a surprising relationship between cut quality and balance (see section 3). The SDP lower bound proves that all balanced cuts are worse than the unbalanced cuts seen on the left.
2.2 Effectiveness on real graphs and benchmarks

We have found the flow-based MidFlow and MQI rounding methods to be highly effective in practice on diverse classes of graphs including space-like graphs and power law graphs. Results for real-world power law graphs are shown in figure 5. Results for a number of FE meshes can be found on the Graph Partitioning Archive website http://staffweb.cms.gre.ac.uk/~c.walshaw/partition, which keeps track of the best nearly balanced cuts ever found for a number of classic benchmarks. Using flow-based rounding to extract cuts from spectral-type embeddings, we have found new record cuts for the majority of the largest graphs on the site, including fe_body, t60k, wing, brack2, fe_tooth, fe_rotor, 598a, 144, wave, m14b, and auto. It is interesting to note that the spectral method previously did not own any of the records for these classic benchmarks, although it could have if flow-based rounding had been used instead of hyperplane rounding.

3 Finding balanced cuts in "power law" graphs

The spectral method does not require cuts to have perfect balance, but the denominator in its quotient-style objective function does reward balance and punish imbalance. Thus one might expect the spectral method to produce cuts with fairly good balance, and this is what does happen for the class of spacelike graphs that inform much of our intuition. However, there are now many economically important "power law" [5] graphs whose best quotient cuts have extremely bad balance. Examples at Yahoo include the web graph, social graphs based on DBLP co-authorship and Yahoo IM buddy lists, a music similarity graph, and bipartite collaborative filtering graphs relating Yahoo Groups with users, and advertisers with search phrases. To save space we show one scatter plot (figure 3) of quotient cut scores versus balance that is typical for graphs from this class.
We see that apparently there is a tradeoff between these two quantities, and in fact the quotient cut score gets better as balance gets worse, which is exactly the opposite of what one would expect. When run on graphs of this type, the spectral method (and other quotient cut methods such as Metis+MQI [13]) wants to chop off tiny pieces. This has at least two bad practical effects. First, cutting off a tiny piece after paying for a computation on the whole graph kills the scalability of divide and conquer algorithms by causing their overall run time to increase e.g. from n log n to n^2. Second, low-dimensional spectral embeddings of these graphs (see e.g. figure 4(b)) are nearly useless for visualization, and are also very poor inputs for clustering schemes that use a small number of eigenvectors.

Figure 4: Left: a social graph with octopus structure as predicted by Chung and Lu [5]. Center: a "normalized cut" Spectral embedding chops off one tentacle per dimension. Right: an SDP embedding looks better and is more useful for finding balanced cuts.

These problems can be avoided by solving a semidefinite relaxation of graph bisection that has a much stronger balance constraint. This SDP (explained in the next section) has a long history, with connections to papers going all the way back to Donath and Hoffman [6] (via the concept of "eigenvalue optimization"). In 2004, Arora, Rao, and Vazirani [14] proved the best-ever approximation guarantee for graph partitioning by analysing a version of this SDP which was augmented with certain triangle inequalities that serve much the same purpose as flow (but which are too expensive to solve for large graphs).

3.1 A semidefinite program which strengthens the balance requirement

The graph bisection problem can be expressed as a Quadratic Integer Program as follows. There is an n-element column vector x of indicator variables x_i, each of which assigns one node to a particular side of the cut by assuming a value from the set {−1, 1}.
With these indicator values, the objective function (1/4) x^T L x (where L is the graph's discrete Laplacian matrix) works out to be equal to the number of edges crossing the cut. Finally, the requirement of perfect balance is expressed by the constraint x^T e = 0, where e is a vector of all ones. Since this QIP exactly encodes the graph bisection problem, solving it is NP-hard. The spectral relaxation of this QIP attains solvability by allowing the indicator variables to assume arbitrary real values, provided that their average squared magnitude is 1.0. After this change, the objective function (1/4) x^T L x is now just a lower bound on the cutsize. More interestingly for the present discussion, the balance constraint x^T e = 0 now permits a qualitatively different kind of balance where a tiny group of nodes moves a long way out from the origin, where the nodes acquire enough leverage to counterbalance everyone else. For graphs where the best quotient cut has good balance (e.g. meshes) this does not actually happen, but for graphs whose best quotient cut has bad balance, it does happen, as can be seen in figure 4(b). These undesired solutions could be ruled out by requiring the squared magnitudes of the indicator values to be 1.0 individually instead of on average. However, in one dimension that would require picking values from the set {−1, 1}, which would once again cause the problem to be NP-hard. Fortunately, there is a way to escape from this dilemma which was brought to the attention of the CS community by the Max Cut algorithm of Goemans and Williamson [9]: if we allow the indicator variables to assume values that are r-dimensional unit vectors for some sufficiently large r,² then the program is solvable even with the strict requirement that every vector has squared length 1.0.
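A quick numerical check of the cutsize identity (illustrative only): on a 5-cycle, the quadratic form (1/4) x^T L x counts exactly the edges crossing the cut encoded by x, since each crossing edge contributes (x_u − x_v)^2 = 4.

```python
import numpy as np

# Laplacian of a 5-cycle: L = D - A
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# indicator vector for the cut {0, 1} vs {2, 3, 4}; the crossing
# edges are (1, 2) and (4, 0), so the cutsize should be 2
x = np.array([1.0, 1.0, -1.0, -1.0, -1.0])
cutsize = 0.25 * x @ L @ x
```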
After a small change of notation to reflect the fact that the collected indicator variables now form an n by r matrix X rather than a vector, this idea results in the nonlinear program

min (1/4) L • (XX^T) : diag(XX^T) = e, e^T (XX^T) e = 0 (1)

which becomes an SDP by a change of variables from XX^T to the "Gram matrix" G:

min (1/4) L • G : diag(G) = e, e^T G e = 0, G ⪰ 0 (2)

The added constraint G ⪰ 0 requires G to be positive semidefinite, so that it can be factored to get back to the desired matrix of indicator vectors X.

3.2 Methods for solving the SDP for large graphs

Interior point methods cannot solve (2) for graphs with more than a few thousand nodes, but newer methods achieve better scaling by ensuring that all dense n by n matrices have only an implicit (and approximate) existence. A good example is Helmberg and Rendl's program SBmethod [11], which can solve the dual of (2) for graphs with about 50,000 nodes by converting it to an equivalent "eigenvalue optimization" problem. The output of SBmethod is a low-rank approximate spectral factorization of the Gram matrix, consisting of an estimated rank r, plus an n by r matrix X whose rows are the nodes' indicator vectors. SBmethod typically produces r-values that are much smaller than n or even √(2n). Moreover they seem to match the true dimensionality of simple spacelike graphs. For example, for a 3-d mesh we get r = 4, which is 3 dimensions for the manifold plus one more dimension for the hypersphere that it is wrapped around. Burer and Monteiro's direct low-rank solver SDP-LR scales even better [2]. Surprisingly, their approach is to essentially forget about the SDP (2) and instead use non-linear programming techniques to solve (1). Specifically, they use an augmented Lagrangian approach to move the constraints into the objective function, which they then minimize using limited memory BFGS.
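The following toy sketch imitates the spirit of this low-rank approach using plain projected gradient descent on program (1): a quadratic penalty enforces the balance constraint and row normalization enforces diag(XX^T) = e. It is a stand-in for illustration only, not Burer and Monteiro's augmented Lagrangian / limited-memory BFGS solver, and all names are ours.

```python
import numpy as np

def low_rank_bisection_sketch(L, r=2, mu=1.0, lr=0.02, iters=300, seed=0):
    # Minimize f(X) = (1/4) tr(X^T L X) + (mu/2) ||X^T e||^2 over n-by-r
    # matrices X, renormalizing every row to unit length after each step.
    n = L.shape[0]
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, r))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    e = np.ones(n)

    def f(X):
        return 0.25 * np.trace(X.T @ L @ X) + 0.5 * mu * np.sum((e @ X) ** 2)

    start_val = f(X)
    best_val, best_X = start_val, X.copy()
    for _ in range(iters):
        grad = 0.5 * L @ X + mu * np.outer(e, e @ X)
        X = X - lr * grad
        X /= np.linalg.norm(X, axis=1, keepdims=True)
        val = f(X)
        if val < best_val:
            best_val, best_X = val, X.copy()
    return start_val, best_val, best_X

# demo graph: two 4-cliques joined by a single edge
n = 8
A = np.zeros((n, n))
for grp in (range(0, 4), range(4, 8)):
    for i in grp:
        for j in grp:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0
Lap = np.diag(A.sum(axis=1)) - A
start_val, best_val, X = low_rank_bisection_sketch(Lap)
```

The returned X has unit-length rows by construction, and the best objective value found can never exceed the starting value.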
A follow-up paper [3] provides a theoretical explanation of why the method does not fall into bad local minima despite the apparent non-convexity of (1). We have successfully run Burer and Monteiro's code on large graphs containing more than a million nodes. We typically run it several times with different small fixed values of r, and then choose the smallest r which allows the objective function to reach its best known value. On medium-size graphs this produces estimates for r which are in rough agreement with those produced by SBmethod. The run time scaling of SDP-LR is compared with that of ARPACK and hi_pr in figure 6.

² In the original work r = n, but there are theoretical reasons for believing that r ∼ √(2n) is big enough [3], plus there is empirical evidence that much smaller values work in practice.

Figure 5: Each of these four plots (Social Graph: DBLP Coauthorship; Social Graph: Yahoo Instant Messenger; Bipartite Graph: Yahoo Groups vs Users; Web Graph: TREC WT10G; each showing quotient cut score versus size of the small side) contains two lines showing the results of sweeping a hyperplane through a spectral embedding and through one dimension of an SDP embedding.
In all four cases, the spectral line is lower on the left, and the SDP line is lower on the right, which means that Spectral produces better unbalanced cuts and the SDP produces better balanced cuts. Cuts obtained by rounding random 1-d projections of the SDP embedding using MidFlow (to produce β-balanced cuts) followed by MQI (to improve the quotient cut score) are also shown; these flow-based cuts are consistently better than hyperplane cuts.

3.3 Results

We have used the minbis program from Burer and Monteiro's SDP-LR v0.130301 package (with r < 10) to approximately solve (1) for several large graphs including: a 130,000 node social graph representing co-authorship in DBLP; a 1.9 million node social graph built from the buddy lists of a subset of the users of Yahoo Instant Messenger; a 1.6 million node bipartite graph relating Yahoo Groups and users; and a 1.5 million node graph made by symmetrizing the TREC WT10G web graph. It is clear from figure 5 that in all four cases the SDP embedding leads to better balanced cuts, and that flow-based rounding works better than hyperplane rounding. Also, figures 4(b) and 4(c) show 3-d Spectral and SDP embeddings of a small subset of the Yahoo IM social graph; the SDP embedding is qualitatively different and arguably better for visualization purposes.

Acknowledgments

We thank Satish Rao for many useful discussions.

References

[1] N. Alon and V.D. Milman. λ1, isoperimetric inequalities for graphs, and superconcentrators. Journal of Combinatorial Theory, Series B, 38:73–88, 1985.

Figure 6: Run time scaling on subsets of the Yahoo IM graph (run time in seconds versus graph size in nodes + edges, for solving the eigenproblem with ARPACK, solving the SDP with SDP-LR, bisecting the graph with Metis, and solving MidFlow with hi_pr). Finding Spectral and SDP embeddings with ARPACK and SDP-LR requires about the same amount of time, while MidFlow rounding with hi_pr is about 1000 times faster.
[2] Samuel Burer and Renato D.C. Monteiro. A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization. Mathematical Programming (series B), 95(2):329–357, 2003.
[3] Samuel Burer and Renato D.C. Monteiro. Local minima and convergence in low-rank semidefinite programming. Technical report, Department of Management Sciences, University of Iowa, September 2003.
[4] Boris V. Cherkassky and Andrew V. Goldberg. On implementing the push-relabel method for the maximum flow problem. Algorithmica, 19(4):390–410, 1997.
[5] F. Chung and L. Lu. Average distances in random graphs with given expected degree sequences. Proceedings of the National Academy of Sciences, 99:15879–15882, 2002.
[6] W.E. Donath and A.J. Hoffman. Lower bounds for partitioning of graphs. IBM J. Res. Develop., 17:420–425, 1973.
[7] Peter G. Doyle and J. Laurie Snell. Random walks and electric networks, 1984. Mathematical Association of America; now available under the GPL.
[8] C.M. Fiduccia and R.M. Mattheyses. A linear time heuristic for improving network partitions. In Design Automation Conference, pages 175–181, 1982.
[9] Michel X. Goemans and David P. Williamson. Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming. J. Assoc. Comput. Mach., 42:1115–1145, 1995.
[10] Stephen Guattery and Gary L. Miller. On the quality of spectral separators. SIAM Journal on Matrix Analysis and Applications, 19(3):701–719, 1998.
[11] C. Helmberg. Numerical evaluation of SBmethod. Math. Programming, 95(2):381–406, 2003.
[12] Bruce Hendrickson and Robert W. Leland. A multi-level algorithm for partitioning graphs. In Supercomputing, 1995.
[13] Kevin Lang and Satish Rao. A flow-based method for improving the expansion or conductance of graph cuts. In Integer Programming and Combinatorial Optimization, pages 325–337, 2003.
[14] Sanjeev Arora, Satish Rao, and Umesh V. Vazirani. Expander flows, geometric embeddings and graph partitioning.
In STOC, pages 222–231, 2004. [15] Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000. [16] Daniel A. Spielman and Shang-Hua Teng. Spectral partitioning works: Planar graphs and finite element meshes. In FOCS, pages 96–105, 1996.
|
2005
|
5
|
2,866
|
Robust design of biological experiments Patrick Flaherty EECS Department University of California Berkeley, CA 94720 flaherty@berkeley.edu Michael I. Jordan Computer Science and Statistics University of California Berkeley, CA 94720 jordan@cs.berkeley.edu Adam P. Arkin Bioengineering Department, LBL, Howard Hughes Medical Institute University of California Berkeley, CA 94720 aparkin@lbl.gov Abstract We address the problem of robust, computationally-efficient design of biological experiments. Classical optimal experiment design methods have not been widely adopted in biological practice, in part because the resulting designs can be very brittle if the nominal parameter estimates for the model are poor, and in part because of computational constraints. We present a method for robust experiment design based on a semidefinite programming relaxation. We present an application of this method to the design of experiments for a complex calcium signal transduction pathway, where we have found that the parameter estimates obtained from the robust design are better than those obtained from an “optimal” design. 1 Introduction Statistical machine learning methods are making increasing inroads in the area of biological data analysis, particularly in the context of genome-scale data, where computational efficiency is paramount. Learning methods are particularly valuable for their ability to fuse multiple sources of information, aiding the biologist to interpret a phenomenon in its appropriate cellular, genetic and evolutionary context. At least as important to the biologist, however, is to use the results of data analysis to aid in the design of further experiments. In this paper we take up this challenge—we show how recent developments in computationally-efficient optimization can be brought to bear on the problem of the design of experiments for complex biological data. 
We present results for a specific model of calcium signal transduction in which choices must be made among 17 kinds of RNAi knockdown experiments.

There are three main objectives for experiment design: parameter estimation, hypothesis testing and prediction. Our focus in this paper is parameter estimation, specifically in the setting of nonlinear kinetic models [1]. Suppose in particular that we have a nonlinear model y = f(x, θ) + ε, ε ∼ N(0, σ^2), where x ∈ X represents the controllable conditions of the experiment (such as dose or temperature), y is the experimental measurement and θ ∈ R^p is the set of parameters to be estimated. We consider a finite menu of available experiments X = {x_1, . . . , x_m}. Our objective is to select the best set of N experiments (with repeats) from the menu. Relaxing the problem to a continuous representation, we solve for a distribution over the design points and then multiply the weights by N at the end [2]. The experiment design is thus

ξ = { x_1, . . . , x_m ; w_1, . . . , w_m },  Σ_{i=1}^m w_i = 1,  w_i ≥ 0, ∀i, (1)

and it is our goal to select values of w_i that satisfy an experimental design criterion.

2 Background

We adopt a standard least-squares framework for parameter estimation. In the nonlinear setting this is done by making a Taylor series expansion of the model about an estimate θ_0 [3]:

f(x, θ) ≈ f(x, θ_0) + V(θ − θ_0), (2)

where V is the Jacobian matrix of the model; the i-th row of V is v_i^T = ∂f(x_i, θ)/∂θ |_{θ_0}. The least-squares estimate of θ is θ̂ = θ_0 + (V^T W V)^{-1} V^T W (y − f(x, θ_0)), where W = diag(w). The covariance matrix for the parameter estimate is cov(θ̂ | ξ) = σ^2 (V^T W V)^{-1}, which is the inverse of the observed Fisher information matrix. The aim of optimal experiment design methods is to minimize the covariance matrix of the parameter estimate [4, 5, 6].
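These formulas can be sketched numerically as follows. The Jacobian V below is a made-up two-parameter, three-point example, not from the paper; the helper name is ours.

```python
import numpy as np

def param_covariance(V, w, sigma2=1.0):
    # cov(theta_hat | xi) = sigma^2 (V^T W V)^{-1} with W = diag(w);
    # the inverse of the observed Fisher information matrix.
    M = V.T @ (w[:, None] * V)
    return sigma2 * np.linalg.inv(M)

# hypothetical Jacobian rows v_i^T over a three-point design menu,
# with uniform design weights
V = np.array([[1.0, 0.2],
              [0.5, 1.0],
              [0.8, 0.6]])
w = np.full(3, 1.0 / 3.0)
C = param_covariance(V, w)
```

The resulting covariance matrix is symmetric positive definite whenever the weighted design points span the parameter space.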
There are two well-known difficulties that must be surmounted in the case of nonlinear models [6]: • The optimal design depends on an evaluation of the derivative of the model with respect to the parameters at a particular parameter estimate. Given that our goal is parameter estimation, this involves a certain circularity. • Simple optimal design procedures tend to concentrate experimental weight on only a few design points [7]. Such designs are overly optimistic about the appropriateness of the model, and provide little information about possible lack of fit over a wider experimental range. There have been three main responses to these problems: sequential experiment design [7], Bayesian methods [8], and maximin approaches [9]. In the sequential approach, a working parameter estimate is first used to construct a tentative experiment design. Data are collected under that design and the parameter estimate is updated. The procedure is iterated in stages. While heuristically reasonable, this approach is often inapplicable in practice because of costs associated with experiment set-up time. In the Bayesian approach exemplified by [8], a proper prior distribution is constructed for the parameters to be estimated. The objective function is the KL divergence between the prior distribution and the expected posterior distribution; this KL divergence is maximized (thereby maximizing the amount of expected information in the experiment design). Sensitivity to priors is a serious concern, however, particularly in the biological setting in which it can be quite difficult to choose priors for quantities such as bulk rates for a complex process. The maximin approach considers a bounded range for each parameter and finds the optimal design for the worst case parameters in that range. The major difficulties with this approach are computational, and its main applications have been to specialized problems [7]. 
The approach that we present here is closest in spirit to the maximin approach. We view both of the problems discussed above as arguments for a robust design, one which is insensitive to the linearization point and to model error. We work within the framework of E-optimal design (see below) and consider perturbations to the rank-one Fisher information matrix for each design point. An optimization with respect to such perturbations yields a robust semidefinite program [10, 11, 12].

3 Optimal Experiment Design

The three most common scalar measures of the size of the parameter covariance matrix in optimal experiment design are:
• D-optimal design: determinant of the covariance matrix.
• A-optimal design: trace of the covariance matrix.
• E-optimal design: maximum eigenvalue of the covariance matrix.
We adopt the E-optimal design criterion, and formulate the design problem as follows:

P_0 : p*_0 = min_w λ_max[(Σ_{i=1}^m w_i v_i v_i^T)^{-1}]
s.t. Σ_{i=1}^m w_i = 1, w_i ≥ 0, ∀i, (3)

where λ_max[M] is the maximum eigenvalue of a matrix M. This problem can be recast as the following semidefinite program [5]:

P_0 : p*_0 = max_{w,s} s
s.t. Σ_{i=1}^m w_i v_i v_i^T ⪰ s I_p, Σ_{i=1}^m w_i = 1, w_i ≥ 0, ∀i, (4)

which forms the basis of the robust extension that we develop in the following section.

4 Robust Experiment Design

The uncertain parameters appear in the experiment design optimization problem through the Jacobian matrix, V. We consider additive unstructured perturbations on the Jacobian or "data" in this problem. The uncertain observed Fisher information matrix is F(w, ∆) = Σ_{i=1}^m w_i (v_i v_i^T − ∆_i), where ∆_i is a p × p matrix for i = 1, . . . , m. We consider a spectral norm bound on the magnitude of the perturbations such that ∥blkdiag(∆_1, . . . , ∆_m)∥ ≤ ρ. Incorporating the perturbations, the E-optimal experiment design problem with uncertainty based on (4) can be cast as the following minimax problem:

P_ρ : p*_ρ = min_{w,s} max_{∥∆∥≤ρ} −s
subject to Σ_{i=1}^m w_i (v_i v_i^T − ∆_i) ⪰ s I_p,
∆ = blkdiag(∆_1, . . .
, ∆_m),
Σ_{i=1}^m w_i = 1, w_i ≥ 0, ∀i. (5)

We will call equation (5) an E-robust experiment design. To implement the program efficiently, we can recast the linear matrix inequality in (5) in a linear fractional representation:

F(w, s, ∆) = F(w, s) + L ∆ R(w) + R(w)^T ∆^T L^T ⪰ 0,

where

F(w, s) = Σ_{i=1}^m w_i v_i v_i^T − s I_p,  R(w) = (1/√2)(w ⊗ I_p),  L = −(1/√2)(1_m^T ⊗ I_p),  ∆ = blkdiag(∆_1, . . . , ∆_m).

Taking ∆_1 = · · · = ∆_m, a special case of the S-procedure [11] yields the following semidefinite program:

P_ρ : p*_ρ = min_{w,s,τ} −s
subject to
[ Σ_{i=1}^m w_i v_i v_i^T − s I_p − (m/2) τ I_p   (ρ/√2)(w^T ⊗ I_p) ;
  (ρ/√2)(w ⊗ I_p)                                 τ I_{mp} ] ⪰ 0,
Σ_{i=1}^m w_i = 1, w_i ≥ 0, ∀i. (7)

If ρ = 0 we recover (4). Using the Schur complement, the first constraint in (7) can be further simplified to

Σ_{i=1}^m w_i v_i v_i^T − ρ √m ∥w∥_2 I_p ⪰ s I_p, (8)

which makes the regularization of the optimization problem (4) explicit. The uncertainty bound, ρ, serves as a weighting parameter for a Tikhonov regularization term.

5 Results

We demonstrate the robust experiment design on two models of biological systems. The first model is the Michaelis-Menten model of a simple enzyme reaction system. This model, derived from mass-action kinetics, is a fundamental building block of many mechanistic models of biological systems. The second example is a model of a complex calcium signal transduction pathway in macrophage immune cells. In this example we consider RNAi knockdowns at a variety of ligand doses for the estimation of receptor level parameters.

5.1 Michaelis-Menten Reaction Model

The Michaelis-Menten model is a common approximation to an enzyme-substrate reaction [13]. The basic chemical reaction that leads to this model is E + S ⇌ C → E + P, with association rate k_{+1}, dissociation rate k_{−1}, and catalytic rate k_{+2}, where E is the enzyme concentration, S is the substrate concentration and P is the product concentration. We employ mass action kinetics to develop a differential equation model for this reaction system [13]. The velocity of the reaction is defined to be the rate of product formation, V_0 = ∂P/∂t |_{t_0}.
The initial velocity of the reaction is

V_0 ≈ θ_1 x / (θ_2 + x), (9)

where

θ_1 = k_{+2} E_0,  θ_2 = (k_{−1} + k_{+2}) / k_{+1}. (10)

We have taken the controllable factor, x, in this system to be the initial substrate concentration S_0. The parameter θ_1 is the saturating velocity and θ_2 is the initial substrate concentration at which product is formed at one-half the maximal velocity. In this example θ_1 = 2 and θ_2 = 2 are the total enzyme and initial substrate concentrations. We consider six initial substrate concentrations as the menu of experiments, X = {1/8, 1, 2, 4, 8, 16}. Figure 1 shows the robust experiment design weights as a function of the uncertainty parameter with the Jacobian computed at the true parameter values. When ρ is small, the experimental weight is concentrated on only two design points. As ρ → ρ_max the design converges to a uniform distribution over the entire menu of design points. In a sense, this uniform allocation of experimental energy is most robust to parameter uncertainty. Intermediate values of ρ yield an allocation of design points that reflects a tradeoff between robustness and nominal optimality.

Figure 1: Michaelis-Menten model experiment design weights as a function of ρ (one curve per design point: 0.125, 1, 2, 4, 8, 16).

For moderate values of ρ we gain significantly in terms of robustness to errors in v_i v_i^T, at a moderate cost to the maximal value of the minimum eigenvalues of the parameter estimate covariance matrix. Figure 2 shows the efficiency of the experiment design as a function of ρ and the prior estimate θ_02 used to compute the Jacobian matrix. The E-efficiency of a design is defined to be

efficiency ≜ λ_max[cov(θ̂ | θ, ξ_0)] / λ_max[cov(θ̂ | θ_0, ξ_ρ)]. (11)

If the Jacobian is computed at the correct point in parameter space the optimal design achieves maximal efficiency. As the distance between θ_0 and θ grows the efficiency of the optimal design decreases rapidly.
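The Michaelis-Menten Jacobian and the design criteria above can be sketched numerically. The helper names below are ours, and robust_score only evaluates the regularized left-hand side of constraint (8) for a given design; it does not solve the SDP (7).

```python
import numpy as np

def mm_jacobian(menu, theta):
    # Rows v_i^T = d f(x_i, theta)/d theta for the Michaelis-Menten
    # velocity f(x, theta) = theta_1 x / (theta_2 + x) of equation (9).
    t1, t2 = theta
    x = np.asarray(menu, dtype=float)
    return np.column_stack([x / (t2 + x), -t1 * x / (t2 + x) ** 2])

def cov_max_eig(V, w, sigma2=1.0):
    # lambda_max of cov(theta_hat | xi) = sigma^2 (V^T W V)^{-1}
    M = V.T @ (w[:, None] * V)
    return np.linalg.eigvalsh(sigma2 * np.linalg.inv(M))[-1]

def robust_score(V, w, rho):
    # Left-hand side of (8): lambda_min(sum_i w_i v_i v_i^T) minus the
    # Tikhonov-like penalty rho * sqrt(m) * ||w||_2; rho = 0 recovers
    # the nominal E-optimality criterion of program (4).
    M = V.T @ (w[:, None] * V)
    return np.linalg.eigvalsh(M)[0] - rho * np.sqrt(len(w)) * np.linalg.norm(w)

theta_true = np.array([2.0, 2.0])
menu = np.array([0.125, 1.0, 2.0, 4.0, 8.0, 16.0])
V = mm_jacobian(menu, theta_true)

uniform = np.full(6, 1.0 / 6.0)
one_point = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0])  # all weight on x = 2
```

A design concentrated on a single point has a rank-one information matrix, so both its minimum eigenvalue and its robust score are poor compared to the uniform design.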
If the estimate, θ_02, is eight instead of the true value, two, the efficiency of the optimal design at θ_0 is 36% of the optimal design at θ. However, at the cost of a decrease in efficiency for parameter estimates close to the true parameter value, we guarantee the efficiency is better for points further from the true parameters with a robust design. For example, for ρ = 0.001 the robust design is less efficient for the range 0 < θ_02 < 7, but is more efficient for 7 < θ_02 < 16.

Figure 2: Efficiency of robust designs as a function of ρ and perturbations in the prior parameter estimate θ_02 (curves for ρ = 0, 10^-3, 10^-2, 10^-1, 10).

5.2 Calcium Signal Transduction Model

When certain small molecule ligands such as the anaphylatoxin C5a are introduced into the environment of an immune cell, a complex chain of chemical reactions leads to the transduction of the extracellular ligand concentration information and a transient increase in the intracellular calcium concentration. This chain of reactions can be mathematically modeled using the principles of mass-action kinetics and nonlinear ordinary differential equations. We consider specifically the model presented in [14], which was developed for the P2Y2 receptor, modifying the model for our data on the C5a receptor. The menu of available experiments is indexed by one of two different cell lines in combination with different ligand doses. The cell lines are: wild-type and a GRK2 knockdown line. GRK2 is a protein that represses signaling in the G-protein receptor complex. When its concentration is decreased with interfering RNA the repression of the signal due to GRK2 is reduced. There are 17 experiments on the menu and we choose to do 100 experiments allocated according to the experiment design. For each experiment we are able to measure the transient calcium spike peak height using a fluorescent calcium dye.
We are concerned with estimating three C5a receptor parameters: K1, kp, and kdeg, which are detailed in [14]. We have selected the initial parameter estimates based on a least-squares fit to a separate data set of 67 experiments on a wild-type cell line with a ligand concentration of 250nM. We have estimated, from experimental data, the mean and variance for all of the experiments in our menu. Observations are simulated from these data to obtain the least-squares parameter estimate for the optimal, robust (ρ = 1.5 × 10^-6) and uniform experiment designs. Figure 3 shows the model fits with associated 95% confidence bands for the wild-type and knockdown cell lines for the parameter estimates from the three experiment designs. A separate validation data set is generated uniformly across the design menu. Compared to the optimal design, the parameter estimates based on the robust design provide a better fit across the whole dose range for both cell types as measured by mean-squared residual error. Note also that the measured response at high ligand concentration is better fit with parameters estimated from the robust design. Near 1µM of C5a concentration the peak height is predicted to decrease slightly in the wild-type cell line, but plateaus for the GRK2 knockdown cell line. This matches the biochemical understanding that GRK2 acts as a repressor of signaling.

Figure 3: Model predictions based on the least squares parameter estimate using data observed from the optimal, robust and uniform designs (six panels: optimal, robust, and uniform design, each for the wild-type and GRK2 knockdown cell lines; x-axis [C5a] (µM), y-axis peak height (µM)).
The predicted peak height curve (black line) based on the robust design data is shifted to the left compared to the peak height curve based on the optimal design data and matches the validation sample (shown as blue dots) more accurately.

6 Discussion

The methodology of optimal experiment design leads to efficient algorithms for the construction of designs in general nonlinear situations [15]. However, these variance-minimizing designs fail to account for uncertainty in the nominal parameter estimate and the model. We present a methodology, based on recent advances in semidefinite programming, that retains the advantages of the general purpose algorithm while explicitly incorporating uncertainty. We demonstrated this robust experiment design method on two example systems. In the Michaelis-Menten model, we showed that the E-optimal design is recovered for ρ = 0 and the uniform design is recovered as ρ → ρ_max. It was also shown that the robust design is more efficient than the optimal for large perturbations of the nominal parameter estimate away from the true parameter. The second example, of a calcium signal transduction model, is a more realistic case of the need for experiment design in high-throughput biological research. The model captures some of the important kinetics of the system, but is far from complete. We require a reasonably accurate model to make further predictions about the system and drive a set of experiments to estimate critical parameters of the model more accurately. The resulting robust design spreads some experiments across the menu, but also concentrates on experiments that will help minimize the variance of the parameter estimates. These robust experiment designs were obtained using SeDuMi 1.05 [16]. The design for the calcium signal transduction model takes approximately one second on a 2GHz processor, which is less time than required to compute the Jacobian matrix for the model.
Research in machine learning has led to significant advances in computationally-efficient data analysis methods, allowing increasingly complex models to be fit to biological data. Challenges in experimental design are the flip side of this coin: for complex models to be useful in closing the loop in biological research it is essential to begin to focus on the development of computationally-efficient experimental design methods.

Acknowledgments

We would like to thank Andy Packard for helpful discussions. We would also like to thank Robert Rebres and William Seaman for the data used in the second example. PF and APA would like to acknowledge support from the Howard Hughes Medical Institute and from the Alliance for Cellular Signaling through the NIH Grant Number 5U54 GM62114-05. MIJ would like to thank NIH R33 HG003070 for funding.

References

[1] I. Ford, D.M. Titterington, and C.P. Kitsos. Recent advances in nonlinear experiment design. Technometrics, 31(1):49–60, 1989.
[2] L. Vandenberghe, S. Boyd, and S.-P. Wu. Determinant maximization with linear matrix inequality constraints. SIAM Journal on Matrix Analysis and Applications, 19(2):499–533, 1998.
[3] G.A.F. Seber and C.J. Wild. Nonlinear Regression. Wiley-Interscience, Hoboken, NJ, 2003.
[4] A.C. Atkinson and A.N. Donev. Optimum Experimental Designs. Oxford University Press, 1992.
[5] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2003.
[6] G.E.P. Box, W.G. Hunter, and J.S. Hunter. Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building. John Wiley and Sons, New York, 1978.
[7] S.D. Silvey. Optimal Design. Chapman and Hall, London, 1980.
[8] D.V. Lindley. On the measure of information provided by an experiment. The Annals of Mathematical Statistics, 27(4):986–1005, 1956.
[9] L. Pronzato and E. Walter. Robust experiment design via maximin optimization. Mathematical Biosciences, 89:161–176, 1988.
[10] L. Vandenberghe and S. Boyd.
Semidefinite programming. SIAM Review, 38(1):49–95, 1996. [11] L. El Ghaoui, L. Oustry, and H. Lebret. Robust solutions to uncertain semidefinite programs. SIAM J. Optimization, 9(1):33–52, 1998. [12] L. El Ghaoui and H. Lebret. Robust solutions to least squares problems with uncertain data. SIAM J. Matrix Anal. Appl., 18(4):1035–1064, 1997. [13] L.A. Segel and M. Slemrod. The quasi-steady state assumption: A case study in perturbation. SIAM Review, 31(3):446–477, 1989. [14] G. Lemon, W.G. Gibson, and M.R. Bennett. Metabotropic receptor activation, desensitization and sequestrationi: modelling calcium and inositol 1,4,5-trisphosphate dynamics following receptor activation. Journal of Theoretical Biology, 223(1):93–111, 2003. [15] A.C. Atkinson. The usefulness of optimum experiment designs. JRSS B, 58(1):59–76, 1996. [16] J.F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11:625–653, 1999.
On the Accuracy of Bounded Rationality: How Far from Optimal Is Fast and Frugal? Michael Schmitt Ludwig-Marum-Gymnasium Schlossgartenstraße 11 76327 Pfinztal, Germany mschmittm@googlemail.com Laura Martignon Institut für Mathematik und Informatik Pädagogische Hochschule Ludwigsburg Reuteallee 46, 71634 Ludwigsburg, Germany martignon@ph-ludwigsburg.de Abstract Fast and frugal heuristics are well studied models of bounded rationality. Psychological research has proposed the take-the-best heuristic as a successful strategy in decision making with limited resources. Take-the-best searches for a sufficiently good ordering of cues (features) in a task where objects are to be compared lexicographically. We investigate the complexity of the problem of approximating optimal cue permutations for lexicographic strategies. We show that no efficient algorithm can approximate the optimum to within any constant factor, if P ≠ NP. We further consider a greedy approach for building lexicographic strategies and derive tight bounds for the performance ratio of a new and simple algorithm. This algorithm is proven to perform better than take-the-best. 1 Introduction In many circumstances the human mind has to make decisions when time and knowledge are limited. Cognitive psychology categorizes human judgments made under such constraints as being boundedly rational if they are "satisficing" (Simon, 1982) or, more generally, if they do not fall too far behind the rational standards. A class of models for human reasoning studied in the context of bounded rationality consists of simple algorithms termed "fast and frugal heuristics". These were the topic of major psychological research (Gigerenzer and Goldstein, 1996; Gigerenzer et al., 1999).
Great efforts have been put into testing these heuristics by empirical means in experiments with human subjects (Bröder, 2000; Bröder and Schiffer, 2003; Lee and Cummins, 2004; Newell and Shanks, 2003; Newell et al., 2003; Slegers et al., 2000) or in simulations on computers (Bröder, 2002; Hogarth and Karelaia, 2003; Nellen, 2003; Todd and Dieckmann, 2005). (See also the discussion and controversies documented in the open peer commentaries on Todd and Gigerenzer, 2000.) Among the fast and frugal heuristics there is an algorithm called "take-the-best" (TTB) that is considered a process model for human judgments based on one-reason decision making. Which of the two cities has a larger population: (a) Düsseldorf (b) Hamburg? This is the task originally studied by Gigerenzer and Goldstein (1996) where German cities with a population of more than 100,000 inhabitants had to be compared. The available information on each city consists of the values of nine binary cues, or attributes, indicating presence or absence of a feature.

             Soccer Team   State Capital   License Plate
Hamburg           1              1               0
Essen             0              0               1
Düsseldorf        0              1               1
Validity          1             1/2              0

Table 1: Part of the German cities task of Gigerenzer and Goldstein (1996). Shown are profiles and validities of three cues for three cities. Cue validities are computed from the data as given here. The original data has different validities but the same cue ranking.

The cues being used are, for instance, whether the city is a state capital, whether it is indicated on car license plates by a single letter, or whether it has a soccer team in the national league. The judgment which city is larger is made on the basis of the two binary vectors, or cue profiles, representing the two cities. TTB performs a lexicographic strategy, comparing the cues one after the other and using the first cue that discriminates as the one reason to yield the final decision.
For instance, if one city has a university and the other does not, TTB would infer that the first city is larger than the second. If the cue values of both cities are equal, the algorithm passes on to the next cue. TTB examines the cues in a certain order. Gigerenzer and Goldstein (1996) introduced ecological validity as a numerical measure for ranking the cues. The validity of a cue is a real number in the interval [0, 1] that is computed in terms of the known outcomes of paired comparisons. It is defined as the number of pairs the cue discriminates correctly (i.e., where it makes a correct inference) divided by the number of pairs it discriminates (i.e., where it makes an inference, be it right or wrong). TTB always chooses a cue with the highest validity, that is, it "takes the best" among those cues not yet considered. Table 1 shows cue profiles and validities for three cities. The ordering defined by the size of their population is given by {⟨Düsseldorf, Essen⟩, ⟨Düsseldorf, Hamburg⟩, ⟨Essen, Hamburg⟩}, where a pair ⟨a, b⟩ indicates that a has fewer inhabitants than b. As an example for calculating the validity, the state-capital cue distinguishes the first and the third pair but is correct only on the latter. Hence, its validity has value 1/2. The order in which the cues are ranked is crucial for success or failure of TTB. In the example of Düsseldorf and Hamburg, the car-license-plate cue would yield that Düsseldorf (D) is larger than Hamburg (HH), whereas the soccer-team cue would correctly favor Hamburg. Thus, how successful a lexicographic strategy is in a comparison task consisting of a partial ordering of cue profiles depends on how well the cue ranking minimizes the number of incorrect comparisons. Specifically, the accuracy of TTB relies on the degree of optimality achieved by the ranking according to decreasing cue validities.
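The validity computation and TTB's one-reason comparison can be made concrete in code. The following Python sketch (illustrative, not the authors' code) uses the three-city, three-cue subset of Table 1:

```python
# Cue profiles from Table 1: (soccer team, state capital, license plate).
profiles = {
    "Hamburg":     (1, 1, 0),
    "Essen":       (0, 0, 1),
    "Duesseldorf": (0, 1, 1),
}
# Ordered pairs <a, b> meaning "a has fewer inhabitants than b".
pairs = [("Duesseldorf", "Essen"), ("Duesseldorf", "Hamburg"), ("Essen", "Hamburg")]

def validity(cue):
    """Correctly discriminated pairs divided by discriminated pairs."""
    disc = [(a, b) for a, b in pairs if profiles[a][cue] != profiles[b][cue]]
    correct = [(a, b) for a, b in disc if profiles[a][cue] < profiles[b][cue]]
    return len(correct) / len(disc) if disc else 0.0

validities = [validity(c) for c in range(3)]
print(validities)  # [1.0, 0.5, 0.0], matching Table 1

def ttb(a, b, order):
    """Compare a and b by the first discriminating cue in `order`."""
    for c in order:
        if profiles[a][c] != profiles[b][c]:
            return "<" if profiles[a][c] < profiles[b][c] else ">"
    return "="

# TTB ranks cues by decreasing validity: the soccer-team cue comes first.
order = sorted(range(3), key=lambda c: -validities[c])
print(ttb("Duesseldorf", "Hamburg", order))  # '<': the soccer cue correctly favors Hamburg
```

Putting the license-plate cue first instead (`ttb("Duesseldorf", "Hamburg", [2, 0, 1])`) yields the incorrect ">" mentioned in the text.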
For TTB and the German cities task, computer simulations have shown that TTB discriminates at least as accurately as other models (Gigerenzer and Goldstein, 1996; Gigerenzer et al., 1999; Todd and Dieckmann, 2005). TTB made as many correct inferences as standard algorithms proposed by cognitive psychology and even outperformed some of them. Partial results concerning the accuracy of TTB compared to the accuracy of other strategies have been obtained analytically by Martignon and Hoffrage (2002). Here we subject the problem of finding optimal cue orderings to a rigorous theoretical analysis employing methods from the theory of computational complexity (Ausiello et al., 1999). Obviously, TTB runs in polynomial time. Given a list of ordered pairs, it computes all cue validities in polynomially many computing steps in terms of the size of the list. We define the optimization problem MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY as the task of minimizing the number of incorrect inferences for the lexicographic strategy on a given list of pairs. We show that there is no polynomial-time approximation algorithm that computes solutions for MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY that are only a constant factor worse than the optimum, unless P = NP. This means that the approximating factor, or performance ratio, must grow with the size of the problem. As an extension of TTB we consider an algorithm for finding cue orderings that was called "TTB by Conditional Validity" in the context of bounded rationality. It is based on the greedy method, a principle widely used in algorithm design. This greedy algorithm runs in polynomial time and we derive tight bounds for it, showing that it approximates the optimum with a performance ratio proportional to the number of cues.
An important consequence of this result is a guarantee that for those instances that have a solution that discriminates all pairs correctly, the greedy algorithm always finds a permutation attaining this minimum. We are not aware that this quality has been established for any of the previously studied heuristics for paired comparison. In addition, we show that TTB does not have this property, concluding that the greedy method of constructing cue permutations performs provably better than TTB. For a more detailed account and further results we refer to the complete version of this work (Schmitt and Martignon, 2006). 2 Lexicographic Strategies A lexicographic strategy is a method for comparing elements of a set B ⊆ {0, 1}n. Each component 1, . . . , n of these vectors is referred to as a cue. Given a, b ∈ B, where a = (a1, . . . , an) and b = (b1, . . . , bn), the lexicographic strategy searches for the smallest cue index i ∈ {1, . . . , n} such that ai and bi are different. The strategy then outputs one of " < " or " > " according to whether ai < bi or ai > bi, assuming the usual order 0 < 1 of the truth values. If no such cue exists, the strategy returns " = ". Formally, let diff : B × B → {1, . . . , n + 1} be the function where diff(a, b) is the smallest cue index on which a and b are different, or n + 1 if they are equal, that is, diff(a, b) = min({i : ai ≠ bi} ∪ {n + 1}). Then, the function S : B × B → {" < ", " = ", " > "} computed by the lexicographic strategy is S(a, b) = " < " if diff(a, b) ≤ n and adiff(a,b) < bdiff(a,b), " > " if diff(a, b) ≤ n and adiff(a,b) > bdiff(a,b), and " = " otherwise. Lexicographic strategies may take into account that the cues come in an order that is different from 1, . . . , n. Let π : {1, . . . , n} → {1, . . . , n} be a permutation of the cues. It gives rise to a mapping π : {0, 1}n → {0, 1}n that permutes the components of Boolean vectors by π(a1, . . . , an) = (aπ(1), . . . , aπ(n)).
As this induced mapping on vectors is uniquely determined by π, we simplify the notation and write π for both. The lexicographic strategy under cue permutation π passes through the cues in the order π(1), . . . , π(n), that is, it computes the function Sπ : B × B → {" < ", " = ", " > "} defined as Sπ(a, b) = S(π(a), π(b)). The problem we study is that of finding a cue permutation that minimizes the number of incorrect comparisons in a given list of element pairs using the lexicographic strategy. An instance of this problem consists of a set B of elements and a set of pairs L ⊆ B × B. Each pair ⟨a, b⟩ ∈ L represents an inequality a ≤ b. Given a cue permutation π, we say that the lexicographic strategy under π infers the pair ⟨a, b⟩ correctly if Sπ(a, b) ∈ {" < ", " = "}, otherwise the inference is incorrect. The task is to find a permutation π such that the number of incorrect inferences in L using Sπ is minimal, that is, a permutation π that minimizes INCORRECT(π, L) = |{⟨a, b⟩ ∈ L : Sπ(a, b) = " > "}|. 3 Approximability of Optimal Cue Permutations A large class of optimization problems, denoted APX, can be solved efficiently if the solution is required to be only a constant factor worse than the optimum (see, e.g., Ausiello et al., 1999). Here, we prove that, if P ≠ NP, there is no polynomial-time algorithm whose solutions yield a number of incorrect comparisons that is at most a constant factor larger than the minimal number possible. It follows that the problem of approximating the optimal cue permutation is even harder than any problem in APX. The optimization problem is formally stated as follows. MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY Instance: A set B ⊆ {0, 1}n and a set L ⊆ B × B. Solution: A permutation π of the cues of B. Measure: The number of incorrect inferences in L for the lexicographic strategy under cue permutation π, that is, INCORRECT(π, L).
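These definitions translate almost verbatim into code. A minimal illustrative Python sketch (1-based cue indices, as in the text):

```python
def diff(a, b):
    """Smallest cue index (1-based) where a and b differ, or n+1 if equal."""
    n = len(a)
    return next((i + 1 for i in range(n) if a[i] != b[i]), n + 1)

def S(a, b):
    """The function computed by the lexicographic strategy."""
    i = diff(a, b)
    if i > len(a):
        return "="
    return "<" if a[i - 1] < b[i - 1] else ">"

def S_pi(a, b, pi):
    """Lexicographic strategy under cue permutation pi: S(pi(a), pi(b))."""
    perm = lambda x: tuple(x[p - 1] for p in pi)
    return S(perm(a), perm(b))

def incorrect(pi, L):
    """Number of pairs <a, b> in L inferred as 'a > b' under pi."""
    return sum(1 for a, b in L if S_pi(a, b, pi) == ">")

L = [((0, 0, 1), (0, 1, 0))]       # a single pair representing a <= b
print(incorrect((1, 2, 3), L))     # 0: cue 2 correctly yields '<'
print(incorrect((1, 3, 2), L))     # 1: putting cue 3 second yields '>'
```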
Given a real number r > 0, an algorithm is said to approximate MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY to within a factor of r if for every instance (B, L) the algorithm returns a permutation π such that INCORRECT(π, L) ≤ r · opt(L), where opt(L) is the minimal number of incorrect comparisons achievable on L by any permutation. The factor r is also known as the performance ratio of the algorithm. The following optimization problem plays a crucial role in the derivation of the lower bound for the approximability of MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY. MINIMUM HITTING SET Instance: A collection C of subsets of a finite set U. Solution: A hitting set for C, that is, a subset U′ ⊆ U such that U′ contains at least one element from each subset in C. Measure: The cardinality of the hitting set, that is, |U′|. MINIMUM HITTING SET is equivalent to MINIMUM SET COVER. Bellare et al. (1993) have shown that MINIMUM SET COVER cannot be approximated in polynomial time to within any constant factor, unless P = NP. Thus, if P ≠ NP, MINIMUM HITTING SET cannot be approximated in polynomial time to within any constant factor either. Theorem 1. For every r, there is no polynomial-time algorithm that approximates MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY to within a factor of r, unless P = NP. Proof. We show that the existence of a polynomial-time algorithm that approximates MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY to within some constant factor implies the existence of a polynomial-time algorithm that approximates MINIMUM HITTING SET to within the same factor. Then the statement follows from the equivalence of MINIMUM HITTING SET with MINIMUM SET COVER and the nonapproximability of the latter (Bellare et al., 1993). The main part of the proof consists in establishing a specific approximation preserving reduction, or AP-reduction, from MINIMUM HITTING SET to MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY. (See Ausiello et al., 1999, for a definition of the AP-reduction.) We first define a function f that is computable in polynomial time and maps each instance of MINIMUM HITTING SET to an instance of MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY. Let 1 denote the n-bit vector with a 1 everywhere and 1i1,...,iℓ the vector with 0 in positions i1, . . . , iℓ and 1 elsewhere. Given the collection C of subsets of the set U = {u1, . . . , un}, the function f maps C to (B, L), where B ⊆ {0, 1}n+1 is defined as follows: 1. Let (1, 0) ∈ B. 2. For i = 1, . . . , n, let (1i, 1) ∈ B. 3. For every {ui1, . . . , uiℓ} ∈ C, let (1i1,...,iℓ, 1) ∈ B. Further, the set L is constructed as L = {⟨(1, 0), (1i, 1)⟩ : i = 1, . . . , n} ∪ {⟨(1i1,...,iℓ, 1), (1, 0)⟩ : {ui1, . . . , uiℓ} ∈ C}. (1) In the following, a pair from the first and second set on the right-hand side of equation (1) is referred to as an element pair and a subset pair, respectively. Obviously, the function f is computable in polynomial time. It has the following property. Claim 1. Let f(C) = (B, L). If C has a hitting set of cardinality k or less then f(C) has a cue permutation π where INCORRECT(π, L) ≤ k. To prove this, assume without loss of generality that C has a hitting set U′ of cardinality exactly k, say U′ = {uj1, . . . , ujk}, and let U \ U′ = {ujk+1, . . . , ujn}. Then the cue permutation j1, . . . , jk, n + 1, jk+1, . . . , jn results in no more than k incorrect inferences in L. Indeed, consider an arbitrary subset pair ⟨(1i1,...,iℓ, 1), (1, 0)⟩. To not be an error, one of i1, . . . , iℓ must occur in the hitting set j1, . . . , jk. Hence, the first cue that distinguishes this pair has value 0 in (1i1,...,iℓ, 1) and value 1 in (1, 0), resulting in a correct comparison. Further, let ⟨(1, 0), (1i, 1)⟩ be an element pair with ui ∉ U′. This pair is distinguished correctly by cue n + 1. Finally, each element pair ⟨(1, 0), (1i, 1)⟩ with ui ∈ U′ is distinguished by cue i with a result that disagrees with the ordering given by L. Thus, only element pairs with ui ∈ U′ yield incorrect comparisons and no subset pair does. Hence, the number of incorrect inferences is not larger than |U′|. Next, we define a polynomial-time computable function g that maps each collection C of subsets of a finite set U and each cue permutation π for f(C) to a subset of U. Given that f(C) = (B, L), the set g(C, π) ⊆ U is defined as follows: 1. For every element pair ⟨(1, 0), (1i, 1)⟩ ∈ L that is compared incorrectly by π, let ui ∈ g(C, π). 2. For every subset pair ⟨(1i1,...,iℓ, 1), (1, 0)⟩ ∈ L that is compared incorrectly by π, let one of the elements ui1, . . . , uiℓ ∈ g(C, π). Clearly, the function g is computable in polynomial time. It satisfies the following condition. Claim 2. Let f(C) = (B, L). If INCORRECT(π, L) ≤ k then g(C, π) is a hitting set of cardinality k or less for C. Obviously, if INCORRECT(π, L) ≤ k then g(C, π) has cardinality at most k. To show that it is a hitting set, assume the subset {ui1, . . . , uiℓ} ∈ C is not hit by g(C, π). Then none of ui1, . . . , uiℓ is in g(C, π). Hence, we have correct comparisons for the element pairs corresponding to ui1, . . . , uiℓ and for the subset pair corresponding to {ui1, . . . , uiℓ}. As the subset pair is distinguished correctly, one of the cues i1, . . . , iℓ must be ranked before cue n + 1. But then at least one of the element pairs for ui1, . . . , uiℓ yields an incorrect comparison. This contradicts the assertion that the comparisons for these element pairs are all correct. Thus, g(C, π) is a hitting set and the claim is established. Assume now that there exists a polynomial-time algorithm A that approximates MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY to within a factor of r. Consider the algorithm that, for a given instance C of MINIMUM HITTING SET as input, calls algorithm A with input (B, L) = f(C), and returns g(C, π) where π is the output provided by A. Clearly, this new algorithm runs in polynomial time.
We show that it approximates MINIMUM HITTING SET to within a factor of r. By the assumed approximation property of algorithm A, we have INCORRECT(π, L) ≤ r · opt(L). Together with Claim 2, this implies that g(C, π) is a hitting set for C satisfying |g(C, π)| ≤ r · opt(L). From Claim 1 we obtain opt(L) ≤ opt(C) and, thus, |g(C, π)| ≤ r · opt(C). Thus, the proposed algorithm for MINIMUM HITTING SET violates the approximation lower bound that holds for this problem under the assumption P ≠ NP. This proves the statement of the theorem. 4 Greedy Approximation of Optimal Cue Permutations The so-called greedy approach to the solution of an approximation problem is helpful when it is not known which algorithm performs best. It is a simple heuristic that in practice provides satisfactory solutions in many situations. The algorithm GREEDY CUE PERMUTATION that we introduce here is based on the greedy method. The idea is to select the first cue according to which single cue makes a minimum number of incorrect inferences (choosing one arbitrarily if there are two or more). After that the algorithm removes those pairs that are distinguished by the selected cue, which is reasonable as the distinctions drawn by this cue cannot be undone by later cues. This procedure is then repeated on the set of pairs left. The description of GREEDY CUE PERMUTATION is given as Algorithm 1.

Algorithm 1 GREEDY CUE PERMUTATION
Input: a set B ⊆ {0, 1}n and a set L ⊆ B × B
Output: a cue permutation π for n cues
I := {1, . . . , n};
for i = 1, . . . , n do
  let j ∈ I be a cue where INCORRECT(j, L) = minj′∈I INCORRECT(j′, L);
  π(i) := j;
  I := I \ {j};
  L := L \ {⟨a, b⟩ : aj ≠ bj}
end for.

It employs an extension of the function INCORRECT applicable to single cues, such that for a cue i we have INCORRECT(i, L) = |{⟨a, b⟩ ∈ L : ai > bi}|. It is evident that Algorithm 1 runs in polynomial time, but how good is it?
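Algorithm 1 can be written out directly. The following Python sketch (illustrative, with 0-based cue indices) implements the greedy loop and, for comparison, the error count of an arbitrary cue ordering; the four-pair instance is one example where a validity-based ranking of the cues (validities 1, 1/2, 2/3, hence order cue 0, cue 2, cue 1) makes an error while the greedy order makes none:

```python
def greedy_cue_permutation(n, L):
    """Algorithm 1: per round, pick a cue with the fewest incorrect
    inferences on the remaining pairs, then drop the pairs it discriminates."""
    def errors(j, pairs):  # INCORRECT(j, L) for a single cue j
        return sum(1 for a, b in pairs if a[j] > b[j])
    remaining, cues, pi = list(L), list(range(n)), []
    for _ in range(n):
        j = min(cues, key=lambda c: errors(c, remaining))
        pi.append(j)
        cues.remove(j)
        remaining = [(a, b) for a, b in remaining if a[j] == b[j]]
    return pi

def lex_errors(order, L):
    """Incorrect inferences of the lexicographic strategy under `order`."""
    def infer(a, b):
        for c in order:
            if a[c] != b[c]:
                return "<" if a[c] < b[c] else ">"
        return "="
    return sum(1 for a, b in L if infer(a, b) == ">")

# Four lexicographically ordered pairs over three cues (a <= b per pair).
L = [((0, 0, 1), (0, 1, 0)), ((0, 1, 0), (1, 0, 0)),
     ((0, 1, 0), (1, 0, 1)), ((1, 0, 0), (1, 1, 1))]
pi = greedy_cue_permutation(3, L)
print(pi, lex_errors(pi, L))     # [0, 1, 2] 0 -- a perfect ordering
print(lex_errors([0, 2, 1], L))  # 1 -- the validity ranking errs on the first pair
```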
The least one should demand from a good heuristic is that, whenever a minimum of zero is attainable, it finds such a solution. This is indeed the case with GREEDY CUE PERMUTATION as we show in the following result. Moreover, it asserts a general performance ratio for the approximation of the optimum. Theorem 2. The algorithm GREEDY CUE PERMUTATION approximates MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY to within a factor of n, where n is the number of cues. In particular, it always finds a cue permutation with no incorrect inferences if one exists.

⟨001, 010⟩  ⟨010, 100⟩  ⟨010, 101⟩  ⟨100, 111⟩

Figure 1: A set of lexicographically ordered pairs with nondecreasing cue validities (1, 1/2, and 2/3). The cue ordering of TTB (1, 3, 2) causes an incorrect inference on the first pair. By Theorem 2, GREEDY CUE PERMUTATION finds the lexicographic ordering.

Proof. We show by induction on n that the permutation returned by the algorithm makes a number of incorrect inferences no larger than n · opt(L). If n = 1, the optimal cue permutation is definitely found. Let n > 1. Clearly, as the incorrect inferences of a cue cannot be reversed by other cues, there is a cue j with INCORRECT(j, L) ≤ opt(L). The algorithm selects such a cue in the first round of the loop. During the rest of the rounds, a permutation of n − 1 cues is constructed for the set of remaining pairs. Let j be the cue that is chosen in the first round, I′ = {1, . . . , j − 1, j + 1, . . . , n}, and L′ = L \ {⟨a, b⟩ : aj ≠ bj}. Further, let optI′(L′) denote the minimum number of incorrect inferences taken over the permutations of I′ on the set L′. Then, we observe that opt(L) ≥ opt(L′) = optI′(L′). The inequality is valid because of L ⊇ L′. (Note that opt(L′) refers to the minimum taken over the permutations of all cues.) The equality holds as cue j does not distinguish any pair in L′. By the induction hypothesis, rounds 2 to n of the loop determine a cue permutation π′ with INCORRECT(π′, L′) ≤ (n − 1) · optI′(L′).
Thus, the number of incorrect inferences made by the permutation π finally returned by the algorithm satisfies INCORRECT(π, L) ≤ INCORRECT(j, L) + (n − 1) · optI′(L′), which is, by the inequalities derived above, not larger than opt(L) + (n − 1) · opt(L) as stated. Corollary 3. On inputs that have a cue ordering without incorrect comparisons under the lexicographic strategy, GREEDY CUE PERMUTATION can be better than TTB. Proof. Figure 1 shows a set of four lexicographically ordered pairs. According to Theorem 2, GREEDY CUE PERMUTATION comes up with the given permutation of the cues. The validities are 1, 1/2, and 2/3. Thus, TTB ranks the cues as 1, 3, 2 whereupon the first pair is inferred incorrectly. Finally, we consider lower bounds on the performance ratio of GREEDY CUE PERMUTATION. Theorem 4. The performance ratio of GREEDY CUE PERMUTATION is at least max{n/2, |L|/2}. The proof of this claim is omitted here. 5 Conclusions The result that the optimization problem MINIMUM INCORRECT LEXICOGRAPHIC STRATEGY cannot be approximated in polynomial time to within any constant factor answers a long-standing question of psychological research into models of bounded rationality: How accurate are fast and frugal heuristics? It follows that no fast, that is, polynomial-time, algorithm can approximate the optimum well, under the widely accepted assumption that P ≠ NP. A further question is concerned with a specific fast and frugal heuristic: How accurate is TTB? The new algorithm GREEDY CUE PERMUTATION has been shown to perform provably better than TTB. In detail, it always finds accurate solutions when they exist, in contrast to TTB. With this contribution we pose a challenge to cognitive psychology: to study the relevance of the greedy method as a model for bounded rationality. Acknowledgment. The first author has been supported in part by the Deutsche Forschungsgemeinschaft (DFG).
References Ausiello, G., Crescenzi, P., Gambosi, G., Kann, V., Marchetti-Spaccamela, A., and Protasi, M. (1999). Complexity and Approximation: Combinatorial Problems and Their Approximability Properties. Springer-Verlag, Berlin. Bellare, M., Goldwasser, S., Lund, C., and Russell, A. (1993). Efficient probabilistically checkable proofs and applications to approximation. In Proceedings of the 25th Annual ACM Symposium on Theory of Computing, pages 294–304. ACM Press, New York, NY. Bröder, A. (2000). Assessing the empirical validity of the "take-the-best" heuristic as a model of human probabilistic inference. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26:1332–1346. Bröder, A. (2002). Take the best, Dawes' rule, and compensatory decision strategies: A regression-based classification method. Quality & Quantity, 36:219–238. Bröder, A. and Schiffer, S. (2003). Take the best versus simultaneous feature matching: Probabilistic inferences from memory and effects of representation format. Journal of Experimental Psychology: General, 132:277–293. Gigerenzer, G. and Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103:650–669. Gigerenzer, G., Todd, P. M., and the ABC Research Group (1999). Simple Heuristics That Make Us Smart. Oxford University Press, New York, NY. Hogarth, R. M. and Karelaia, N. (2003). "Take-the-best" and other simple strategies: Why and when they work "well" in binary choice. DEE Working Paper 709, Universitat Pompeu Fabra, Barcelona. Lee, M. D. and Cummins, T. D. R. (2004). Evidence accumulation in decision making: Unifying the "take the best" and the "rational" models. Psychonomic Bulletin & Review, 11:343–352. Martignon, L. and Hoffrage, U. (2002). Fast, frugal, and fit: Simple heuristics for paired comparison. Theory and Decision, 52:29–71. Nellen, S. (2003). The use of the "take the best" heuristic under different conditions, modeled with ACT-R. In Detje, F., Dörner, D., and Schaub, H., editors, Proceedings of the Fifth International Conference on Cognitive Modeling, pages 171–176, Universitätsverlag Bamberg, Bamberg. Newell, B. R. and Shanks, D. R. (2003). Take the best or look at the rest? Factors influencing "One-Reason" decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29:53–65. Newell, B. R., Weston, N. J., and Shanks, D. R. (2003). Empirical tests of a fast-and-frugal heuristic: Not everyone "takes-the-best". Organizational Behavior and Human Decision Processes, 91:82–96. Schmitt, M. and Martignon, L. (2006). On the complexity of learning lexicographic strategies. Journal of Machine Learning Research, 7(Jan):55–83. Simon, H. A. (1982). Models of Bounded Rationality, Volume 2. MIT Press, Cambridge, MA. Slegers, D. W., Brake, G. L., and Doherty, M. E. (2000). Probabilistic mental models with continuous predictors. Organizational Behavior and Human Decision Processes, 81:98–114. Todd, P. M. and Dieckmann, A. (2005). Heuristics for ordering cue search in decision making. In Saul, L. K., Weiss, Y., and Bottou, L., editors, Advances in Neural Information Processing Systems 17, pages 1393–1400. MIT Press, Cambridge, MA. Todd, P. M. and Gigerenzer, G. (2000). Précis of "Simple Heuristics That Make Us Smart". Behavioral and Brain Sciences, 23:727–741.
Assessing Approximations for Gaussian Process Classification Malte Kuss and Carl Edward Rasmussen Max Planck Institute for Biological Cybernetics Spemannstraße 38, 72076 Tübingen, Germany {kuss,carl}@tuebingen.mpg.de Abstract Gaussian processes are attractive models for probabilistic classification but unfortunately exact inference is analytically intractable. We compare Laplace's method and Expectation Propagation (EP) focusing on marginal likelihood estimates and predictive performance. We explain theoretically and corroborate empirically that EP is superior to Laplace. We also compare to a sophisticated MCMC scheme and show that EP is surprisingly accurate. In recent years models based on Gaussian process (GP) priors have attracted much attention in the machine learning community. Whereas inference in the GP regression model with Gaussian noise can be done analytically, probabilistic classification using GPs is analytically intractable. Several approaches to approximate Bayesian inference have been suggested, including Laplace's approximation, Expectation Propagation (EP), variational approximations and Markov chain Monte Carlo (MCMC) sampling, some of these in conjunction with generalisation bounds, online learning schemes and sparse approximations. Despite the abundance of recent work on probabilistic GP classifiers, most experimental studies provide only anecdotal evidence, and no clear picture has yet emerged as to when and why which algorithm should be preferred. Thus, from a practitioner's point of view probabilistic GP classification remains a jungle. In this paper, we set out to understand and compare two of the most widespread approximations: Laplace's method and Expectation Propagation (EP). We also compare to a sophisticated, but computationally demanding MCMC scheme to examine how close the approximations are to ground truth.
We examine two aspects of the approximation schemes: Firstly, the accuracy of approximations to the marginal likelihood, which is of central importance for model selection and model comparison. In any practical application of GPs in classification, (usually multiple) parameters of the covariance function (hyperparameters) have to be handled. Bayesian model selection provides a consistent framework for setting such parameters. Therefore, it is essential to evaluate the accuracy of the marginal likelihood approximations as a function of the hyperparameters, in order to assess the practical usefulness of the approach. Secondly, we need to assess the quality of the approximate probabilistic predictions. In the past, the probabilistic nature of the GP predictions has not received much attention, the focus being mostly on classification error rates. This unfortunate state of affairs is caused primarily by typical benchmarking problems being considered outside of a realistic context. The ability of a classifier to produce class probabilities or confidences has obvious relevance in most areas of application, e.g. medical diagnosis. We evaluate the predictive distributions of the approximate methods, and compare to the MCMC gold standard. 1 The Gaussian Process Model for Binary Classification Let y ∈ {−1, 1} denote the class label of an input x. Gaussian process classification (GPC) is discriminative in modelling p(y|x) for given x by a Bernoulli distribution. The probability of success p(y = 1|x) is related to an unconstrained latent function f(x) which is mapped to the unit interval by a sigmoid transformation, e.g. the logit or the probit. For reasons of analytic convenience we exclusively use the probit model p(y = 1|x) = Φ(f(x)), where Φ denotes the cumulative density function of the standard Normal distribution. In the GPC model Bayesian inference is performed about the latent function f in the light of observed data D = {(yi, xi) | i = 1, . . . , m}.
Let fi = f(xi) and f = [f1, . . . , fm]⊤ be shorthand for the values of the latent function, and let y = [y1, . . . , ym]⊤ and X = [x1, . . . , xm]⊤ collect the class labels and inputs respectively. Given the latent function the class labels are independent Bernoulli variables, so the joint likelihood factorises: p(y|f) = ∏_{i=1}^m p(yi|fi) = ∏_{i=1}^m Φ(yifi), and depends on f only through its value at the observed inputs. We use a zero-mean Gaussian process prior over the latent function f with a covariance function k(x, x′|θ), which may depend on hyperparameters θ [1]. The functional form and parameters of the covariance function encode assumptions about the latent function, and adaptation of these is part of the inference. The posterior distribution over latent function values f at the observed X for given hyperparameters θ becomes: p(f|D, θ) = (N(f|0, K)/p(D|θ)) ∏_{i=1}^m Φ(yifi), where p(D|θ) = ∫ p(y|f) p(f|X, θ) df denotes the marginal likelihood. Unfortunately neither the marginal likelihood, nor the posterior itself, nor predictions can be computed analytically, so approximations are needed. 2 Approximate Bayesian Inference For the GPC model approximations are either based on a Gaussian approximation to the posterior, p(f|D, θ) ≈ q(f|D, θ) = N(f|m, A), or involve Markov chain Monte Carlo (MCMC) sampling [2]. We compare Laplace's method and Expectation Propagation (EP), which are two alternative approaches to finding the parameters m and A of the Gaussian q(f|D, θ). Both methods also allow approximate evaluation of the marginal likelihood, which is useful for ML-II hyperparameter optimisation. Laplace's approximation (LA) is found by making a second-order Taylor approximation of the (un-normalised) log posterior [3]. The mean m is placed at the mode (MAP) and the covariance A equals the negative inverse Hessian of the log posterior density at m. The EP approximation [4] also gives a Gaussian approximation to the posterior.
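To make the model concrete, here is a small illustrative Python sketch of the factorising probit likelihood and the un-normalised log posterior. The squared-exponential covariance is an assumed example choice for k(x, x′|θ), and none of this is the authors' code:

```python
import numpy as np
from math import erf, sqrt, log, pi

def Phi(z):
    """Cumulative density function of the standard Normal distribution."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def log_likelihood(f, y):
    """log p(y|f) = sum_i log Phi(y_i f_i): the factorising probit likelihood."""
    return sum(log(Phi(yi * fi)) for yi, fi in zip(y, f))

def log_prior(f, K):
    """log N(f|0, K), evaluated via a Cholesky factorisation of K."""
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, f))
    return -0.5 * f @ alpha - np.log(np.diag(L)).sum() - 0.5 * len(f) * log(2 * pi)

def rbf(X, lengthscale=1.0):
    """Assumed covariance function: squared exponential plus jitter."""
    d2 = (X[:, None] - X[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2) + 1e-9 * np.eye(len(X))

X = np.array([0.0, 0.5, 2.0])
y = np.array([1, 1, -1])
K = rbf(X)
f = np.zeros(3)
# Un-normalised log posterior log p(y|f) + log N(f|0, K); the missing
# normaliser log p(D|theta) is the intractable marginal likelihood.
print(log_likelihood(f, y) + log_prior(f, K))
```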
The parameters m and A are found in an iterative scheme by matching the approximate marginal moments of p(fi|D, θ) by the marginals of the approximation N(fi|mi, Aii). Although we cannot prove the convergence of EP, we conjecture that it always converges for GPC with probit likelihood, and have never encountered an exception. A key insight is that a Gaussian approximation to the GPC posterior is equivalent to a GP approximation to the posterior distribution over latent functions.

Figure 1: Panel (a) provides a one-dimensional illustration of the approximations. The prior N(f|0, 5²) combined with the probit likelihood (y = 1) results in a skewed posterior. The likelihood uses the right axis, all other curves use the left axis. Laplace’s approximation peaks at the posterior mode, but places far too much mass over negative values of f and too little at large positive values. The EP approximation matches the first two posterior moments, which results in a larger mean and a more accurate placement of probability mass compared to Laplace’s approximation. In Panel (b) we caricature a high-dimensional zero-mean Gaussian prior as an ellipse. The gray shadow indicates that for a high-dimensional Gaussian most of the mass lies in a thin shell. For large latent signals (large entries in K), the likelihood essentially cuts off regions which are incompatible with the training labels (hatched area), leaving the upper right orthant as the posterior. The dot represents the mode of the posterior, which remains close to the origin.

For a test input x∗ the approximate predictive latent and class probabilities are:

q(f∗|D, θ, x∗) = N(µ∗, σ∗²),  and  q(y∗ = 1|D, x∗) = Φ(µ∗ / √(1 + σ∗²)),

where µ∗ = k∗⊤ K⁻¹ m and σ∗² = k(x∗, x∗) − k∗⊤ (K⁻¹ − K⁻¹AK⁻¹) k∗, and where the vector k∗= [k(x1, x∗), . . .
, k(xm, x∗)]⊤ collects the covariances between x∗ and the training inputs X. MCMC sampling has the advantage that it becomes exact in the limit of long runs and so provides a gold standard by which to measure the two analytic methods described above. Although MCMC methods can in principle be used to do inference over f and θ jointly [5], we compare to methods using ML-II optimisation over θ, thus we use MCMC to integrate over f only. Good marginal likelihood estimates are notoriously difficult to obtain; in our experiments we use Annealed Importance Sampling (AIS) [6], combining several Thermodynamic Integration runs into a single (unbiased) estimate of the marginal likelihood. Both analytic approximations have a computational complexity which is cubic, O(m³), as is common among non-sparse GP models due to inversions of m × m matrices. In our implementations LA and EP need similar running times, on the order of a few minutes for several hundred data points. Making AIS work efficiently requires some fine-tuning and a single estimate of p(D|θ) can take several hours for data sets of a few hundred examples, but this could conceivably be improved upon.

3 Structural Properties of the Posterior and its Approximations

Structural properties of the posterior can best be understood by examining its construction. The prior is a correlated m-dimensional Gaussian N(f|0, K) centred at the origin. Each likelihood term p(yi|fi) softly truncates the half-space from the prior that is incompatible with the observed label, see Figure 1. The resulting posterior is unimodal and skewed, similar to a multivariate Gaussian truncated to the orthant containing y. The mode of the posterior remains close to the origin, while the mass is placed in accordance with the observed class labels. Additionally, high-dimensional Gaussian distributions exhibit the property that most probability mass is contained in a thin ellipsoidal shell – depending on the covariance structure – away from the mean [7, ch. 29.2].
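The predictive equations quoted above translate directly into code. This sketch assumes the Gaussian approximation N(m, A) has already been computed (e.g. by LA or EP) and uses toy inputs:

```python
import numpy as np
from scipy.stats import norm


def predict_proba(K, k_star, k_ss, m, A):
    """Approximate q(y* = 1 | D, x*) from a Gaussian posterior N(m, A).

    K: train covariance, k_star: covariances to the test input,
    k_ss = k(x*, x*): prior variance at the test input."""
    alpha = np.linalg.solve(K, k_star)                 # K^{-1} k*
    mu = alpha @ m                                     # predictive latent mean
    var = k_ss - k_star @ alpha + alpha @ A @ alpha    # predictive latent variance
    return norm.cdf(mu / np.sqrt(1.0 + var))
```

A quick sanity check: with m = 0 and A = K (i.e. no data), the predictive latent variance reduces to the prior variance and the class probability is 1/2.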
Intuitively this occurs since in high dimensions the volume grows extremely rapidly with the radius. As a result the mode becomes less representative (typical) of the prior distribution as the dimension increases. For the GPC posterior this property persists: the mode of the posterior distribution stays relatively close to the origin, still being unrepresentative of the posterior distribution, while the mean moves towards the mass of the posterior, making mean and mode differ significantly. We cannot generally assume the posterior to be close to Gaussian, as in the often studied limit of low-dimensional parametric models with large amounts of data. Therefore in GPC we must be aware of making a Gaussian approximation to a non-Gaussian posterior. From the properties of the posterior it can be expected that Laplace’s method places m in the right orthant but too close to the origin, such that the approximation will overlap with regions having practically zero posterior mass. As a result the amplitude of the approximate latent posterior GP will be systematically underestimated, leading to overly cautious predictive distributions. The EP approximation does not rely on a local expansion, but assumes that the marginal distributions can be well approximated by Gaussians. This assumption will be examined empirically below.

4 Experiments

In this section we compare and inspect approximations for GPC using various benchmark data sets. The primary focus is not to optimise the absolute performance of GPC models but to compare the relative accuracy of approximations and to validate the arguments given in the previous section. In all experiments we use a covariance function of the form:

k(x, x′|θ) = σ² exp(−‖x − x′‖² / (2ℓ²)),    (1)

such that θ = [σ, ℓ]. We refer to σ² as the signal variance and to ℓ as the characteristic length-scale.
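Covariance function (1) can be sketched as follows (a straightforward vectorised implementation, not the authors' code):

```python
import numpy as np


def sq_exp_kernel(X1, X2, sigma, ell):
    """Covariance (1): k(x, x') = sigma^2 exp(-||x - x'||^2 / (2 ell^2)).

    X1: (n1, d) and X2: (n2, d) arrays of inputs; returns an (n1, n2) matrix."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return sigma ** 2 * np.exp(-0.5 * d2 / ell ** 2)
```

Note the diagonal of the resulting matrix equals the signal variance σ², consistent with the terminology above.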
Note that for many classification tasks it may be reasonable to use an individual length-scale parameter for every input dimension (ARD) or a different kind of covariance function. Nevertheless, for the sake of presentability we use the above covariance function, and we believe the conclusions about the accuracy of approximations to be independent of this choice, since they rely on arguments which are independent of the form of the covariance function. As a measure of the accuracy of predictive probabilities we use the average information, in bits, of the predictions about the test targets in excess of that of random guessing. Let p∗ = p(y∗ = 1|D, θ, x∗) be the model’s prediction; then we average

I(p∗i, yi) = (yi + 1)/2 log₂(p∗i) + (1 − yi)/2 log₂(1 − p∗i) + H    (2)

over all test cases, where H is the entropy of the training labels. The error rate E is equal to the percentage of erroneous class assignments if prediction is understood as a decision problem with symmetric costs. For the first set of experiments presented here the well-known USPS digits and the Ionosphere data set were used. A binary sub-problem from the USPS digits is defined by only considering 3’s vs. 5’s (which is probably the hardest of the binary sub-problems) and dividing the data into 767 cases for training and 773 for testing. The Ionosphere data is split into 200 training and 151 test cases. We do an exhaustive investigation on a fine regular grid of values for the log hyperparameters. For each θ on the grid we compute the approximated log marginal likelihood by LA, EP and AIS. Additionally we compute the respective predictive performance (2) on the test set. Results are shown in Figure 2.
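The information measure (2) can be sketched as follows; labels are assumed to be in {−1, +1}, and H is the entropy of the training labels:

```python
import numpy as np


def avg_information(p_star, y_test, y_train):
    """Average information (2), in bits, in excess of random guessing.

    p_star: predicted p(y = 1 | x) for each test case; labels in {-1, +1}."""
    q = np.mean(y_train == 1)
    H = -(q * np.log2(q) + (1 - q) * np.log2(1 - q))   # entropy of training labels
    I = ((y_test + 1) / 2 * np.log2(p_star)
         + (1 - y_test) / 2 * np.log2(1 - p_star) + H)
    return I.mean()
```

For balanced training labels H = 1 bit, so an uninformative predictor (p∗ = 0.5 everywhere) scores exactly 0 bits, while confident correct predictions approach 1 bit.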
Figure 2: Comparison of marginal likelihood approximations and predictive performances of different
approximation techniques for USPS 3s vs. 5s (upper half) and the Ionosphere data (lower half). The columns correspond to LA (a), EP (b), and MCMC (c). The rows show estimates of the log marginal likelihood (rows 1 & 3) and the corresponding predictive performance (2) on the test set (rows 2 & 4) respectively.

Figure 3: Panels (a) and (b) show two marginal distributions p(fi|D, θ) from a GPC posterior and its approximations. The true posterior is approximated by a normalised histogram of 9000 samples of fi obtained by MCMC sampling. Panel (c) shows a histogram of samples of a marginal distribution of a truncated high-dimensional Gaussian. The line describes a Gaussian with mean and variance estimated from the samples.

For all three approximation techniques we see an agreement between marginal likelihood estimates and test performance, which justifies the use of ML-II parameter estimation. But the shape of the contours and the values differ between the methods. The contours for Laplace’s method appear to be slanted compared to EP. The marginal likelihood estimates of EP and AIS agree surprisingly well¹, given that the marginal likelihood comes as a 767- respectively 200-dimensional integral. The EP predictions contain as much information about the test cases as the MCMC predictions, and significantly more than those of LA. Note that for small signal variances (roughly ln(σ²) < 1) LA and EP give very similar results. A possible explanation is that for small signal variances the likelihood does not truncate the prior but only down-weights the tail that disagrees with the observation. As a result the posterior will be less skewed and both approximations will lead to similar results. For the USPS 3’s vs.
5’s we now inspect the marginal distributions p(fi|D, θ) of single latent function values under the posterior approximations for a given value of θ. We have chosen the values ln(σ) = 3.35 and ln(ℓ) = 2.85, which are between the ML-II estimates of EP and LA. Hybrid MCMC was used to generate 9000 samples from the posterior p(f|D, θ). For LA and EP the approximate marginals are q(fi|D, θ) = N(fi|mi, Aii), where m and A are found by the respective approximation techniques. In general we observe that the marginal distributions of MCMC samples agree very well with the respective marginal distributions of the EP approximation. For Laplace’s approximation we find the mean to be underestimated and the marginal distributions to overlap with zero far more than the EP approximations. Figure (3a) displays the marginal distribution and its approximations for which the MCMC samples show maximal skewness. Figure (3b) shows a typical example where the EP approximation agrees very well with the MCMC samples. We show this particular example because under the EP approximation p(yi = 1|D, θ) < 0.1% but LA gives a wrong p(yi = 1|D, θ) ≈ 18%. In the experiment we saw that the marginal distributions of the posterior often agree very well with a Gaussian approximation. This seems to contradict the description given in the previous section, where we argued that the posterior is skewed by construction. In order to inspect the marginals of a truncated high-dimensional multivariate Gaussian distribution we made an additional synthetic experiment. We constructed a 767-dimensional Gaussian N(x|0, C) with a covariance matrix having one eigenvalue of 100 with eigenvector 1, and all other eigenvalues 1. We then truncate this distribution such that all xi ≥ 0.

¹ Note that the agreement between the two seems to be limited by the accuracy of the MCMC runs, as judged by the regularity of the contour lines; the tolerance is less than one unit on a (natural) log scale.
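The synthetic experiment can be reproduced in miniature. The sketch below assumes d = 20 rather than 767, so that plain rejection sampling remains feasible; the sampling scheme, exploiting the rank-one covariance structure, is our own choice and not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_samples = 20, 2000
ones = np.ones(d) / np.sqrt(d)   # unit vector along the all-ones direction

samples = []
while len(samples) < n_samples:
    # x ~ N(0, C) with C = I + 99 * ones ones^T: eigenvalue 100 along `ones`,
    # all other eigenvalues equal to 1
    x = rng.standard_normal(d) + np.sqrt(99.0) * rng.standard_normal() * ones
    if np.all(x >= 0.0):         # keep only samples in the positive orthant
        samples.append(x)
samples = np.asarray(samples)

# The mode of the truncated density is still at the origin, yet the mean has
# moved well into the positive orthant; each marginal remains close to Gaussian.
marginal_mean, marginal_std = samples[:, 0].mean(), samples[:, 0].std()
```

Plotting a histogram of `samples[:, 0]` against a Gaussian with the estimated mean and standard deviation reproduces the qualitative picture of Figure (3c).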
Note that the mode of the truncated Gaussian is still at zero, whereas the mean moves towards the remaining mass. Figure (3c) shows a normalised histogram of samples from a marginal distribution of one xi. The samples agree very well with a Gaussian approximation. In the previous section we described the somewhat surprising property that for a truncated high-dimensional Gaussian resembling the posterior, the mode (used by LA) may not be particularly representative of the distribution. Although the marginal is also truncated, it is still exceptionally well modelled by a Gaussian; the Laplace approximation centred on the origin, however, would be completely inappropriate. In a second set of experiments we compare the predictive performance of LA and EP for GPC on several well-known benchmark problems. Each data set is randomly split into 10 folds, of which one at a time is left out as a test set to measure the predictive performance of a model trained (or selected) on the remaining nine folds. All performance measures are averages over the 10 folds. For GPC we implement model selection by ML-II hyperparameter estimation, reporting results given the θ that maximised the respective approximate marginal likelihood p(D|θ). In order to get a better picture of the absolute performance we also compare to results obtained by C-SVM classification. The kernel we used is equivalent to the covariance function (1) without the signal variance parameter. For each fold the parameters C and ℓ are found in an inner loop of 5-fold cross-validation, in which the parameter grids are refined until the performance stabilises. Predictive probabilities for test cases are obtained by mapping the unthresholded output of the SVM to [0, 1] using a sigmoid function [8]. Results are summarised in Table 1. Comparing Laplace’s method to EP, the latter proves to be more accurate both in terms of error rate and information.
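The sigmoid mapping of SVM outputs to probabilities [8] can be sketched as follows; in practice the parameters A and B are fitted by maximum likelihood on held-out data, and the fixed values here are purely illustrative assumptions:

```python
import numpy as np


def platt_probability(svm_output, A=-1.0, B=0.0):
    """Map an unthresholded SVM output to a probability in [0, 1] via a
    sigmoid; A and B are illustrative placeholders, normally fitted to data."""
    return 1.0 / (1.0 + np.exp(A * svm_output + B))
```

With A < 0 the mapping is monotonically increasing, so larger SVM margins translate into larger class probabilities.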
While the error rates are relatively similar, the predictive distribution obtained by EP proves to be more informative about the test targets. Note that for GPC the error rate depends only on the sign of the mean µ∗ of the approximated posterior over latent functions, and not on the entire posterior predictive distribution. As is to be expected, the length of the mean vector ‖m‖ shows much larger values for the EP approximations. Comparing EP and SVMs, the results are mixed. For the Crabs data set all methods show the same error rate, but the information content of the predictive distributions differs dramatically. For some test cases the SVM predicts the wrong class with large certainty.

5 Summary & Conclusions

Our experiments reveal serious differences between Laplace’s method and EP when used in GPC models. From the structural properties of the posterior we described why LA systematically underestimates the mean m. The resulting posterior GP over latent functions will have too small an amplitude, although the sign of the mean function will be mostly correct. As a result LA gives over-conservative predictive probabilities, and diminished information about the test labels. This effect has been shown empirically on several real-world examples. Large discrepancies in the actual posterior probabilities were found, even at the training locations, which renders the predictive class probabilities produced under this approximation grossly inaccurate. Note that the difference becomes less dramatic if we only consider the classification error rates obtained by thresholding p∗ at 1/2. For this particular task, we have seen that the sign of the latent function tends to be correct (at least at the training locations).
                             Laplace                     EP                      SVM
Data Set        m    n     E%     I      ‖m‖        E%     I       ‖m‖        E%     I
Ionosphere    351   34    8.84  0.591    49.96     7.99  0.661    124.94     5.69  0.681
Wisconsin     683    9    3.21  0.804    62.62     3.21  0.805     84.95     3.21  0.795
Pima Indians  768    8   22.77  0.252    29.05    22.63  0.253     47.49    23.01  0.232
Crabs         200    7    2.0   0.682   112.34     2.0   0.908   2552.97     2.0   0.047
Sonar         208   60   15.36  0.439    26.86    13.85  0.537  15678.55    11.14  0.567
USPS 3 vs 5  1540  256    2.27  0.849   163.05     2.21  0.902  22011.70     2.01  0.918

Table 1: Results for benchmark data sets. The first three columns give the name of the data set, the number of observations m and the dimension of inputs n. For Laplace’s method and EP the table reports the average error rate E%, the average information I (2) and the average length ‖m‖ of the mean vector of the Gaussian approximation. For SVMs the error rate and the average information about the test targets are reported. Note that for the Crabs data set we use the sex (not the colour) of the crabs as class label.

The EP approximation has been shown to give results very close to MCMC, both in terms of predictive distributions and marginal likelihood estimates. We have shown and explained why the marginal distributions of the posterior can be well approximated by Gaussians. Further, the marginal likelihood values obtained by LA and EP differ systematically, which will lead to different results of ML-II hyperparameter estimation. The discrepancies are similar across tasks. Using AIS we were able to show the accuracy of marginal likelihood estimates, which to the best of our knowledge has never been done before. In summary, we found that EP is the method of choice for approximate inference in binary GPC models when the computational cost of MCMC is prohibitive. In contrast, the Laplace approximation is so inaccurate that we advise against its use, especially when predictive probabilities are to be taken seriously. Further experiments and a detailed description of the approximation schemes can be found in [2].
Acknowledgements

Both authors acknowledge support by the German Research Foundation (DFG) through grant RA 1030/1. This work was supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778. This publication only reflects the authors’ views.

References

[1] C. K. I. Williams and C. E. Rasmussen. Gaussian processes for regression. In David S. Touretzky, Michael C. Mozer, and Michael E. Hasselmo, editors, NIPS 8, pages 514–520. MIT Press, 1996.
[2] M. Kuss and C. E. Rasmussen. Assessing approximate inference for binary Gaussian process classification. Journal of Machine Learning Research, 6:1679–1704, 2005.
[3] C. K. I. Williams and D. Barber. Bayesian classification with Gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1342–1351, 1998.
[4] T. P. Minka. A Family of Algorithms for Approximate Bayesian Inference. PhD thesis, Department of Electrical Engineering and Computer Science, MIT, 2001.
[5] R. M. Neal. Regression and classification using Gaussian process priors. In J. M. Bernardo, J. O. Berger, A. P. Dawid, and A. F. M. Smith, editors, Bayesian Statistics 6, pages 475–501. Oxford University Press, 1998.
[6] R. M. Neal. Annealed importance sampling. Statistics and Computing, 11:125–139, 2001.
[7] D. J. C. MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003.
[8] J. C. Platt. Probabilities for SV machines. In Advances in Large Margin Classifiers, pages 61–73. MIT Press, 2000.
Fast Online Policy Gradient Learning with SMD Gain Vector Adaptation

Nicol N. Schraudolph, Jin Yu, Douglas Aberdeen
Statistical Machine Learning, National ICT Australia, Canberra
{nic.schraudolph,douglas.aberdeen}@nicta.com.au

Abstract

Reinforcement learning by direct policy gradient estimation is attractive in theory but in practice leads to notoriously ill-behaved optimization problems. We improve its robustness and speed of convergence with stochastic meta-descent, a gain vector adaptation method that employs fast Hessian-vector products. In our experiments the resulting algorithms outperform previously employed online stochastic, offline conjugate, and natural policy gradient methods.

1 Introduction

Policy gradient reinforcement learning (RL) methods train controllers by estimating the gradient of a long-term reward measure with respect to the parameters of the controller [1]. The advantage of policy gradient methods, compared to value-based RL, is that we avoid the often redundant step of accurately estimating a large number of values. Policy gradient methods are particularly appealing when large state spaces make representing the exact value function infeasible, or when partial observability is introduced. However, in practice policy gradient methods have shown slow convergence [2], not least due to the stochastic nature of the gradients being estimated. The stochastic meta-descent (SMD) gain adaptation algorithm [3, 4] can considerably accelerate the convergence of stochastic gradient descent. In contrast to other gain adaptation methods, SMD copes well not only with stochasticity, but also with non-i.i.d. sampling of observations, which necessarily occurs in RL. In this paper we derive SMD in the context of policy gradient RL, and obtain over an order of magnitude improvement in convergence rate compared to previously employed policy gradient algorithms.
2 Stochastic Meta-Descent

2.1 Gradient-based gain vector adaptation

Let R be a scalar objective function we wish to maximize with respect to its adaptive parameter vector θ ∈ Rⁿ, given a sequence of observations xt ∈ X at time t = 1, 2, . . . Where R is unavailable or expensive to compute, we use the stochastic approximation Rt : Rⁿ × X → R of R instead, and maximize the expectation Et[Rt(θt, xt)]. Assuming that Rt is twice differentiable wrt. θ, with gradient and Hessian given by

gt = ∂Rt(θ, xt)/∂θ |θ=θt  and  Ht = ∂²Rt(θ, xt)/∂θ ∂θ⊤ |θ=θt,    (1)

respectively, we maximize Et[Rt(θ)] by the stochastic gradient ascent

θt+1 = θt + γt · gt,    (2)

where · denotes element-wise (Hadamard) multiplication. The gain vector γt ∈ (R₊)ⁿ serves as a diagonal conditioner, providing each element of θ with its own positive gradient step size. We adapt γ by a simultaneous meta-level gradient ascent in the objective Rt. A straightforward implementation of this idea is the delta-delta algorithm [5], which would update γ via

γt+1 = γt + µ ∂Rt+1(θt+1)/∂γt = γt + µ ∂Rt+1(θt+1)/∂θt+1 · ∂θt+1/∂γt = γt + µ gt+1 · gt,    (3)

where µ ∈ R is a scalar meta-step size. In a nutshell, gains are decreased where a negative autocorrelation of the gradient indicates oscillation about a local optimum, and increased otherwise. Unfortunately such a simplistic approach has several problems. Firstly, (3) allows gains to become negative. This can be avoided by updating γ multiplicatively, e.g. via the exponentiated gradient algorithm [6]. Secondly, delta-delta’s cure is worse than the disease: individual gains are meant to address ill-conditioning, but (3) actually squares the condition number. The autocorrelation of the gradient must therefore be normalized before it can be used. A popular (if extreme) form of normalization is to consider only the sign of the autocorrelation.
Such sign-based methods [5, 7–9], however, do not cope well with stochastic approximation of the gradient, since the non-linear sign function does not commute with the expectation operator [10]. More recent algorithms [3, 4, 10] therefore use multiplicative (hence linear) normalization factors to condition the meta-level update. Finally, (3) fails to take into account that gain changes affect not only the current, but also future parameter updates. In recognition of this shortcoming, gt in (3) is often replaced with a running average of past gradients. Though such ad-hoc smoothing does improve performance, it does not properly capture long-term dependencies, the average still being one of immediate, single-step effects. By contrast, Sutton [11] modeled the long-term effect of gains on future parameter values in a linear system by carrying the relevant partials forward in time, and found that the resulting gain adaptation can outperform a less than perfectly matched Kalman filter. Stochastic meta-descent (SMD) extends this approach to arbitrary twice-differentiable nonlinear systems, takes into account the full Hessian instead of just the diagonal, and applies a decay to the partials being carried forward.

2.2 The SMD Algorithm

SMD employs two modifications to address the problems described above: it adjusts gains in log-space, and optimizes over an exponentially decaying trace of gradients. Thus ln γ is updated as follows:

ln γt+1 = ln γt + µ Σ_{i=0}^{t} λⁱ ∂R(θt+1)/∂ln γt−i = ln γt + µ ∂R(θt+1)/∂θt+1 · Σ_{i=0}^{t} λⁱ ∂θt+1/∂ln γt−i =: ln γt + µ gt+1 · vt+1,    (4)

where the vector v ∈ Rⁿ characterizes the long-term dependence of the system parameters on their gain history over a time scale governed by the decay factor 0 ≤ λ ≤ 1. Element-wise exponentiation of (4) yields the desired multiplicative update

γt+1 = γt · exp(µ gt+1 · vt+1) ≈ γt · max(½, 1 + µ gt+1 · vt+1).
(5)

The linearization eᵘ ≈ max(½, 1 + u) eliminates an expensive exponentiation for each gain update, improves its robustness by reducing the effect of outliers (|u| ≫ 0), and ensures that γ remains positive. To compute the gradient trace v efficiently, we expand θt+1 in terms of its recursive definition (2):

vt+1 = Σ_{i=0}^{t} λⁱ ∂θt+1/∂ln γt−i = Σ_{i=0}^{t} λⁱ ∂θt/∂ln γt−i + Σ_{i=0}^{t} λⁱ ∂(γt · gt)/∂ln γt−i    (6)
     ≈ λ vt + γt · gt + γt · [ (∂gt/∂θt) Σ_{i=0}^{t} λⁱ ∂θt/∂ln γt−i ].

Noting that ∂gt/∂θt is the Hessian Ht of Rt(θt), we arrive at the simple iterative update

vt+1 = λ vt + γt · (gt + λ Ht vt);  v0 = 0.    (7)

Although the Hessian of a system with n parameters has O(n²) entries, efficient indirect methods from algorithmic differentiation are available to compute its product with an arbitrary vector in the same time as 2–3 gradient evaluations [12, 13]. To improve stability, SMD employs an extended Gauss-Newton approximation of Ht, for which a similar (even faster) technique is available [4]. An iteration of SMD — comprising (5), (2), and (7) — thus requires less than 3 times the floating-point operations of simple gradient ascent. The extra computation is typically more than compensated for by the faster convergence of SMD. Fast convergence minimizes the number of expensive world interactions required, which in RL is typically of greater concern than computational cost.

3 Policy Gradient Reinforcement Learning

A Markov decision process (MDP) consists of a finite¹ set of states s ∈ S of the world, actions a ∈ A available to the agent in each state, and a (possibly stochastic) reward function r(s) for each state s. In a partially observable MDP (POMDP), the controller sees only an observation x ∈ X of the current state, sampled stochastically from an unknown distribution P(x|s). Each action a determines a stochastic matrix P(a) = [P(s′|s, a)] of transition probabilities from state s to state s′ given action a.
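One SMD iteration, comprising (5), (2) and (7), can be sketched as follows. The callables `grad` and `hess_vec` are assumptions standing in for the problem-specific gradient and fast Hessian-vector product; this is an illustrative sketch, not the authors' implementation:

```python
import numpy as np


def smd_step(theta, gamma, v, grad, hess_vec, mu=0.01, lam=0.99):
    """One SMD iteration for maximising a (possibly stochastic) objective."""
    g = grad(theta)
    hv = hess_vec(theta, v)                            # fast Hessian-vector product
    gamma = gamma * np.maximum(0.5, 1.0 + mu * g * v)  # eq. (5), linearised exp
    theta_new = theta + gamma * g                      # eq. (2), gradient ascent
    v_new = lam * v + gamma * (g + lam * hv)           # eq. (7), gradient trace
    return theta_new, gamma, v_new
```

On a simple quadratic objective R(θ) = −½‖θ‖² (gradient −θ, Hessian −I), repeated calls drive θ to the maximum at the origin while the gains adapt multiplicatively and stay positive.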
The methods discussed in this paper do not assume explicit knowledge of P(a) or of the observation process. All policies are stochastic, choosing action a in state s with probability P(a|θ, s), parameterised by θ ∈ Rⁿ. The evolution of the state s is Markovian, governed by an |S| × |S| transition probability matrix P(θ) = [P(s′|θ, s)] with entries given by

P(s′|θ, s) = Σ_{a∈A} P(a|θ, s) P(s′|s, a).    (8)

3.1 GPOMDP Monte Carlo estimates of gradient and Hessian

GPOMDP is an infinite-horizon policy gradient method [1] to compute the gradient of the long-term average reward

R(θ) := lim_{T→∞} (1/T) Eθ[ Σ_{t=1}^{T} r(st) ],    (9)

with respect to the policy parameters θ. The expectation Eθ is over the distribution of state trajectories {s0, s1, . . . } induced by P(θ).

Theorem 1 ([1]) Let I be the identity matrix, and u a column vector of ones. The gradient of the long-term average reward wrt. a policy parameter θi is

∇θi R(θ) = π(θ)⊤ ∇θi P(θ) [I − P(θ) + u π(θ)⊤]⁻¹ r,    (10)

where π(θ) is the stationary distribution of states induced by θ.

¹ For uncountably infinite state spaces, the derivation becomes more complex without substantially altering the resulting algorithms.

Note that (10) requires knowledge of the underlying transition probabilities P(θ), and the inversion of a potentially large matrix. The GPOMDP algorithm instead computes a Monte-Carlo approximation of (10): the agent interacts with the environment, producing an observation, action, reward sequence {x1, a1, r1, x2, . . . , xT, aT, rT}.² Under mild technical assumptions, including ergodicity and bounding all the terms involved, Baxter and Bartlett [1] obtain

∇̂θR = (1/T) Σ_{t=0}^{T−1} ∇θ ln P(at|θ, st) Σ_{τ=t+1}^{T} β^{τ−t−1} r(sτ),    (11)

where a discount factor β ∈ [0, 1) implicitly assumes that rewards are exponentially more likely to be due to recent actions. Without it, rewards would be assigned over a potentially infinite horizon, resulting in gradient estimates with infinite variance.
As β decreases, so does the variance, but the bias of the gradient estimate increases [1]. In practice, (11) is implemented efficiently via the discounted eligibility trace

et = β et−1 + δt,  where  δt := ∇θ P(at|θ, st) / P(at|θ, st).    (12)

Now gt = rt et is the gradient of R(θ) arising from assigning the instantaneous reward to all log-action gradients, where β gives exponentially more credit to recent actions. Likewise, Baxter and Bartlett [1] give the Monte Carlo estimate of the Hessian as Ht = rt(Et + et e⊤t), using an eligibility trace matrix

Et = β Et−1 + Gt − δt δ⊤t,  where  Gt := ∇²θ P(at|θ, st) / P(at|θ, st).    (13)

Maintaining E would be O(n²), and thus computationally expensive for large policy parameter spaces. Noting that SMD only requires the product of Ht with a vector v, we instead use

Ht v = rt [dt + et(e⊤t v)],  where  dt = β dt−1 + Gt v − δt(δ⊤t v)    (14)

is an eligibility trace vector that can be maintained in O(n). We describe the efficient computation of Gt v in (14) for a specific action selection method in Section 3.3 below.

3.2 GPOMDP-based optimization algorithms

Baxter et al. [2] proposed two optimization algorithms using GPOMDP’s policy gradient estimates gt. OLPOMDP is a simple online stochastic gradient descent (2) with scalar gain γt. Alternatively, CONJPOMDP performs Polak-Ribière conjugation of search directions, using a noise-tolerant line search to find the approximately best scalar step size in a given search direction. Since conjugate gradient methods are very sensitive to noise [14], CONJPOMDP must average gt over many steps to obtain a reliable gradient measurement; this makes the algorithm inherently inefficient (cf. Section 4). OLPOMDP, on the other hand, is robust to noise but converges only very slowly. We can, however, employ SMD’s gain vector adaptation to greatly accelerate it while retaining the benefits of high noise tolerance and online learning.
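The discounted eligibility trace (12) and the resulting online gradient estimate gt = rt et, averaged over a trajectory, can be sketched as follows; the per-step log-policy gradients δt are assumed to be supplied by the policy parameterisation:

```python
import numpy as np


def gpomdp_gradient(deltas, rewards, beta=0.9):
    """GPOMDP gradient estimate from a trajectory.

    deltas[t] = grad_theta log P(a_t | theta, s_t); returns the average of
    the instantaneous gradients g_t = r_t * e_t (eq. 12)."""
    e = np.zeros_like(deltas[0])
    g_sum = np.zeros_like(deltas[0])
    for delta, r in zip(deltas, rewards):
        e = beta * e + delta       # discounted eligibility trace (eq. 12)
        g_sum += r * e             # instantaneous gradient g_t = r_t e_t
    return g_sum / len(rewards)
```

Because the trace discounts past δt by β per step, a reward at time t is credited to earlier actions with exponentially decaying weight, exactly as in estimator (11).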
Experiments (Section 4) show that the resulting SMDPOMDP algorithm can greatly outperform OLPOMDP and CONJPOMDP. Kakade [15] has applied the natural gradient [16] to GPOMDP, premultiplying the policy gradient by the inverse of the online estimate

  F_t = (1 − 1/t) F_{t−1} + (1/t)(δ_t δ_t^⊤ + ϵI)   (15)

of the Fisher information matrix for the parameter update: θ_{t+1} = θ_t + γ_0 · r_t F_t^{−1} e_t. This approach can yield very fast convergence on small problems, but in our experience does not scale well at all to larger, more realistic tasks; see our experiments in Section 4.

²We use r_t as shorthand for r(s_t), making it clear that only the reward value is known, not the underlying state s_t.

3.3 Softmax action selection

For discrete action spaces, a vector of action probabilities z_t := P(a_t|y_t) can be generated from the output y_t := f(θ_t, x_t) of a parameterised function f : R^n × X → R^{|A|} (such as a neural network) via the softmax function:

  z_t := softmax(y_t) = e^{y_t} / Σ_{m=1}^{|A|} [e^{y_t}]_m .   (16)

Given action a_t ∼ z_t, GPOMDP's instantaneous log-action gradient w.r.t. y is then

  g̃_t := ∇_y [z_t]_{a_t} / [z_t]_{a_t} = u_{a_t} − z_t ,   (17)

where u_i is the unit vector in direction i. The action gradient w.r.t. θ is obtained by backpropagating g̃_t through f's adjoint system [13], performing an efficient multiplication by the transposed Jacobian of f. The resulting gradient δ_t := J_f^⊤ g̃_t is then accumulated in the eligibility trace (12). GPOMDP's instantaneous Hessian for softmax action selection is

  H̃_t := ∇²_y [z_t]_{a_t} / [z_t]_{a_t} = (u_{a_t} − z_t)(u_{a_t} − z_t)^⊤ + z_t z_t^⊤ − diag(z_t) .   (18)

It is indefinite but reasonably well-behaved: the Gerschgorin circle theorem can be employed to show that its eigenvalues must all lie in the interval [−1/4, 2]. Furthermore, its expectation over possible actions is zero:

  E_{z_t}(H̃_t) = [diag(z_t) − 2 z_t z_t^⊤ + z_t z_t^⊤] + z_t z_t^⊤ − diag(z_t) = 0 .   (19)

The extended Gauss-Newton matrix-vector product [4] employed by SMD is then given by

  G_t v_t := J_f^⊤ H̃_t J_f v_t ,   (20)

where the multiplication by the Jacobian of f (resp.
its transpose) is implemented efficiently by propagating v_t through f's tangent linear (resp. adjoint) system [13].

Algorithm 1 SMDPOMDP with softmax action selection

1. Given:
   (a) an ergodic POMDP with observations x_t ∈ X, actions a_t ∈ A, bounded rewards r_t ∈ R, and softmax action selection;
   (b) a differentiable parametric map f : R^n × X → R^{|A|} (neural network);
   (c) f's adjoint (u → J_f^⊤ u) and tangent linear (v → J_f v) maps;
   (d) free parameters: μ ∈ R_+; β, λ ∈ [0, 1]; γ_0 ∈ R^n_+; θ_1 ∈ R^n.
2. Initialize in R^n: e_0 = d_0 = v_0 = 0.
3. For t = 1 to ∞:
   (a) interact with POMDP:
      i. observe feature vector x_t
      ii. compute z_t := softmax(f(θ_t, x_t))
      iii. perform action a_t ∼ z_t
      iv. observe reward r_t
   (b) maintain eligibility traces:
      i. δ_t := J_f^⊤ (u_{a_t} − z_t)
      ii. p_t := J_f v_t
      iii. q_t := (u_{a_t} − z_t)(δ_t^⊤ v_t) + z_t (z_t^⊤ p_t) − z_t ∘ p_t, where ∘ denotes the elementwise product
      iv. e_t = β e_{t−1} + δ_t
      v. d_t = β d_{t−1} + J_f^⊤ q_t − δ_t (δ_t^⊤ v_t)
   (c) update SMD parameters:
      i. γ_t = γ_{t−1} · max(1/2, 1 + μ r_t e_t · v_t)
      ii. θ_{t+1} = θ_t + r_t γ_t · e_t
      iii. v_{t+1} = λ v_t + r_t γ_t · [(1 + λ e_t^⊤ v_t) e_t + λ d_t]

Fig. 1: Left: Baxter et al.'s simple 3-state POMDP. States are labelled with their observable features and instantaneous reward r; arrows indicate the 80% likely transition for the first (solid) resp. second (dashed) action. Right: our modified, more difficult 3-state POMDP.

4 Experiments

4.1 Simple Three-State POMDP

Fig. 1 (left) depicts the simple 3-state POMDP used by Baxter et al. [2, Tables 1&2]. Of the two possible transitions from each state, the preferred one occurs with 80% probability, the other with 20%. The preferred transition is determined by the action of a simple probabilistic adaptive controller that receives two state-dependent feature values as input, and is trained to maximize the expected average reward by policy gradient methods. Using the original code of Baxter et al.
[2], we replicated their experimental results for the OLPOMDP and CONJPOMDP algorithms on this simple POMDP. We can accurately reproduce all essential features of their graphed results on this problem [2, Figures 7&8]. We then implemented SMDPOMDP (Algorithm 1), and ran a comparison of algorithms, using the best free parameter settings found by Baxter et al. [2] (in particular: β = 0, γ_0 = 1), and μ = λ = 1 for SMDPOMDP. We always match random seeds across algorithms. Baxter et al. [2] collect and plot results for CONJPOMDP in terms of its T parameter, which specifies the number of Markov chain iterations per gradient evaluation. For a fair comparison of convergence speed we added code to record the total number of Markov chain iterations consumed by CONJPOMDP, and plot performance for all three algorithms in those terms, with error bars along both axes for CONJPOMDP.

The results are shown in Fig. 2 (left), averaged over 500 runs.

Fig. 2: Left: The POMDP of Fig. 1 (left) is easy to learn. CONJPOMDP converges faster but to asymptotically inferior solutions (see inset) than the two online algorithms. Right: SMDPOMDP outperforms OLPOMDP and CONJPOMDP on the difficult POMDP of Fig. 1 (right). Natural policy gradient has rapid early convergence but diverges asymptotically.

While early on CONJPOMDP on average reaches a given level of performance about three times faster than OLPOMDP, it does so at the price of far higher variance. Moreover, CONJPOMDP is the only algorithm that fails to asymptotically approach optimal performance (R = 0.8; Fig. 2 left, inset). Once its step size adaptation gets going, SMDPOMDP converges asymptotically to the optimal policy about three times faster than OLPOMDP in terms of Markov chain iterations, making the two algorithms roughly equal in terms of computational expense. CONJPOMDP on average performs less than two iterations of conjugate gradient in each run. While this is perfectly understandable — the controller only has two trainable parameters — it bears keeping in mind that the performance of CONJPOMDP here is almost entirely governed by the line search rather than the conjugation of search directions.

4.2 Modified Three-State POMDP

The three-state POMDP employed by Baxter et al. [2] has the property that greedy maximization of the instantaneous reward leads to the optimal policy. Non-trivial temporal credit assignment — the hallmark of reinforcement learning — is not needed. The best results are obtained with the eligibility trace turned off (β = 0). To create a more challenging problem, we rearranged the POMDP's state transitions and reward structure so that the instantaneous reward becomes deceptive (Fig. 1, right). We also multiplied one state feature by 18 to create an ill-conditioned input to the controller, while leaving the actions and relative transition probabilities (80% resp. 20%) unchanged. In our modified POMDP, the high-reward state can only be reached through an intermediate state with negative reward.

Fig. 2 (right) shows our experimental results for this harder POMDP, averaged over 100 runs. Free parameters were tuned to θ_1 ∈ [−0.1, 0.1], β = 0.6, γ_0 = 0.001; T = 10^5 for CONJPOMDP; μ = 0.002, λ = 1 for SMDPOMDP. CONJPOMDP now performs the worst, which is expected because conjugation of directions is known to collapse in the presence of noise [14]. SMDPOMDP converges about 20 times faster than OLPOMDP because its adjustable gains compensate for the ill-conditioned input. Kakade's natural gradient (using ϵ = 0.01) performs extremely well early on, taking 2–3 times fewer iterations than SMDPOMDP to reach optimal performance (R = 2.6).
It does, however, diverge asymptotically.

4.3 Puck World

We also implemented the Puck World benchmark of Baxter et al. [2], with the free parameter settings θ_1 ∈ [−0.1, 0.1], β = 0.95, γ_0 = 2 · 10^{−6}; T = 10^6 for CONJPOMDP; μ = 100, λ = 0.999 for SMDPOMDP; ϵ = 0.01 for natural policy gradient. To improve its stability, we modified SMD here to track instantaneous log-action gradients δ_t instead of noisy r_t e_t estimates of ∇_θ R. CONJPOMDP used a quadratic weight penalty of initially 0.5, with the adaptive reduction schedule described by Baxter et al. [2, page 369]; the online algorithms did not require a weight penalty.

Fig. 3: The action-gradient version of SMDPOMDP yields better asymptotic results on Puck World than OLPOMDP; CONJPOMDP is inefficient; natural policy gradient even more so.

Fig. 3 shows our results averaged over 100 runs, except for natural policy gradient, where only a single typical run is shown. This is because its O(n³) time complexity per iteration³ makes natural policy gradient intolerably slow for this task, where n = 88. Moreover, its convergence is quite poor here in terms of the number of iterations required as well. CONJPOMDP is again inferior to the best online algorithms by over an order of magnitude. Early on, SMDPOMDP matches OLPOMDP, but then reaches superior solutions with small variance. SMDPOMDP-trained controllers achieve a long-term average reward of −6.5, significantly above the optimum of −8 hypothesized by Baxter et al. [2, page 369] based on their experiments with CONJPOMDP.

³The Sherman-Morrison formula cannot be used here because of the diagonal term in (15).

5 Conclusion

On several non-trivial RL problems we find that our SMDPOMDP consistently outperforms OLPOMDP, which in turn outperforms CONJPOMDP.
Natural policy gradient can converge rapidly, but is too unstable and computationally expensive for all but very small controllers. Acknowledgements We are indebted to John Baxter for his code and helpful comments. National ICT Australia is funded by the Australian Government’s Backing Australia’s Ability initiative, in part through the Australian Research Council. This work is also supported by the IST Program of the European Community, under the Pascal Network of Excellence, IST-2002-506778. References [1] J. Baxter and P. L. Bartlett. Infinite-horizon policy-gradient estimation. Journal of Artificial Intelligence Research, 15:319–350, 2001. [2] J. Baxter, P. L. Bartlett, and L. Weaver. Experiments with infinite-horizon, policy-gradient estimation. Journal of Artificial Intelligence Research, 15:351–381, 2001. [3] N. N. Schraudolph. Local gain adaptation in stochastic gradient descent. In Proc. Intl. Conf. Artificial Neural Networks, pages 569–574, Edinburgh, Scotland, 1999. IEE, London. [4] N. N. Schraudolph. Fast curvature matrix-vector products for second-order gradient descent. Neural Computation, 14(7):1723–1738, 2002. [5] R. Jacobs. Increased rates of convergence through learning rate adaptation. Neural Networks, 1:295–307, 1988. [6] J. Kivinen and M. K. Warmuth. Additive versus exponentiated gradient updates for linear prediction. In Proc. 27th Annual ACM Symposium on Theory of Computing, pages 209–218. ACM Press, New York, NY, 1995. [7] T. Tollenaere. SuperSAB: Fast adaptive back propagation with good scaling properties. Neural Networks, 3:561–573, 1990. [8] F. M. Silva and L. B. Almeida. Acceleration techniques for the backpropagation algorithm. In L. B. Almeida and C. J. Wellekens, editors, Neural Networks: Proc. EURASIP Workshop, volume 412 of Lecture Notes in Computer Science, pages 110–119. Springer Verlag, 1990. [9] M. Riedmiller and H. Braun. A direct adaptive method for faster backpropagation learning: The RPROP algorithm. In Proc. Intl. Conf. 
Neural Networks, pages 586–591. IEEE, 1993. [10] L. B. Almeida, T. Langlois, J. D. Amaral, and A. Plakhov. Parameter adaptation in stochastic optimization. In D. Saad, editor, On-Line Learning in Neural Networks, Publications of the Newton Institute, chapter 6, pages 111–134. Cambridge University Press, 1999. [11] R. S. Sutton. Gain adaptation beats least squares? In Proceedings of the 7th Yale Workshop on Adaptive and Learning Systems, pages 161–166, 1992. [12] B. A. Pearlmutter. Fast exact multiplication by the Hessian. Neural Comput., 6(1):147–60, 1994. [13] A. Griewank. Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation. Frontiers in Applied Mathematics. SIAM, Philadelphia, 2000. [14] N. N. Schraudolph and T. Graepel. Combining conjugate direction methods with stochastic approximation of gradients. In C. M. Bishop and B. J. Frey, editors, Proc. 9th Intl. Workshop Artificial Intelligence and Statistics, pages 7–13, Key West, Florida, 2003. [15] S. Kakade. A natural policy gradient. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 1531–1538. MIT Press, 2002. [16] S. Amari. Natural gradient works efficiently in learning. Neural Comput., 10(2):251–276, 1998.
A Bayes Rule for Density Matrices

Manfred K. Warmuth∗
Computer Science Department
University of California at Santa Cruz
manfred@cse.ucsc.edu

Abstract

The classical Bayes rule computes the posterior model probability from the prior probability and the data likelihood. We generalize this rule to the case when the prior is a density matrix (symmetric positive definite and trace one) and the data likelihood a covariance matrix. The classical Bayes rule is retained as the special case when the matrices are diagonal. In the classical setting, the calculation of the probability of the data is an expected likelihood, where the expectation is over the prior distribution. In the generalized setting, this is replaced by an expected variance calculation where the variance is computed along the eigenvectors of the prior density matrix and the expectation is over the eigenvalues of the density matrix (which form a probability vector). The variance along any direction is determined by the covariance matrix. Curiously enough, this expected variance calculation is a quantum measurement where the covariance matrix specifies the instrument and the prior density matrix the mixture state of the particle. We motivate both the classical and the generalized Bayes rule with a minimum relative entropy principle, where the Kullback-Leibler version gives the classical Bayes rule and Umegaki's quantum relative entropy the new Bayes rule for density matrices.

1 Introduction

In [TRW05] various on-line updates were generalized from vector parameters to matrix parameters. Following [KW97], the updates were derived by minimizing the loss plus a divergence to the last parameter. In this paper we use the same method for deriving a Bayes rule for density matrices (symmetric positive definite matrices of trace one). When the parameters are probability vectors over the set of models, then the "classical" Bayes rule can be derived using the relative entropy as the divergence (e.g. [KW99, SWRL03]).
Analogously, we now use the quantum relative entropy, introduced by Umegaki, to derive the generalized Bayes rule.

∗Supported by NSF grant CCR 9821087. Some of this work was done while visiting National ICT Australia in Canberra.

Figure 1: We update the prior four times based on the same data likelihood vector P(y|M_i). The initial posteriors are close to the prior but eventually the posteriors focus their weight on argmax_i P(y|M_i). The classical Bayes rule may be seen as a soft maximum calculation.

Figure 2: We depict seven iterations of the generalized Bayes rule with the bold NW-SE ellipse as the prior density and the bold dashed SE-NW ellipse as the data covariance matrix. The posterior density matrices (dashed) gradually move from the prior to the longest axis of the covariance matrix.

The new rule uses matrix logarithms and exponentials to avoid the fact that symmetric positive definite matrices are not closed under the matrix product. The rule is strikingly similar to the classical Bayes rule and retains the latter as a special case when the matrices are diagonal. Various cancellations occur when the classical Bayes rule is applied iteratively, and similar cancellations happen with the new rule. We shall see that the classical Bayes rule may be seen as a soft maximum calculation and the new rule as a soft calculation of the eigenvector with the largest eigenvalue (see Figures 1 and 2).

The mathematics applied in this paper is most commonly used in quantum physics. For example, the data likelihood becomes a quantum measurement. It is tempting to call the new rule the "quantum Bayes rule". However, we have no physical interpretation of this rule. The measurement does not collapse our state and we don't use the unitary evolution of a state to model the rule. Also, the term "quantum Bayes rule" has been claimed before in [SBC01], where the classical Bayes rule is used to update probabilities that happen to arise in the context of quantum physics.
In contrast, in this paper our parameters are density matrices. Our work is most closely related to a paper by Cerf and Adam [CA99], who also give a formula for conditional densities that relies on the matrix exponential and logarithm. However, they are interested in the multivariate case (which requires the use of tensors) and their motivation is to obtain a generalization of a conditional quantum entropy. We hope to build on the great body of work done with the classical Bayes rule in the statistics community and therefore believe that this line of research holds great promise.

2 The Classical Bayes Rule

To establish a common notation we begin by introducing the familiar Bayes rule. Assume we have n models M_1, . . . , M_n. In the classical setup, model M_i is chosen with prior probability P(M_i) and then M_i generates a datum y with probability P(y|M_i). After observing y, the posterior probabilities of model M_i are calculated via the Bayes rule:

  P(M_i|y) = P(M_i) P(y|M_i) / Σ_j P(M_j) P(y|M_j) .   (1)

Figure 3: An ellipse S in R²: The eigenvectors are the directions of the axes and the eigenvalues their lengths. Ellipses are weighted combinations of the one-dimensional degenerate ellipses (dyads) corresponding to the axes. (For unit u, the dyad uu^⊤ is a degenerate one-dimensional ellipse with its single axis in direction u.) The solid curve of the ellipse is a plot of Su and the outer dashed figure eight is direction u times the variance u^⊤Su. At the eigenvectors, this variance equals the eigenvalues and touches the ellipse.

Figure 4: When the ellipses S and T don't have the same span, then S ⊙ T lies in the intersection of both spans and is a degenerate ellipse of dimension one (bold line). This generalizes the following intersection property of the matrix product when S and T are both diagonal (here of dimension four):

  diag(S)  diag(T)  diag(ST)
    0        0        0
    a        0        0
    0        b        0
    a        b        ab

See Figure 1 for a bar plot of the effect of the update on the posterior.
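As a minimal sketch (ours, not the paper's code) of the update (1); iterating it with the same likelihood vector reproduces the soft-maximum behaviour described in the Figure 1 caption:

```python
import numpy as np

def bayes_posterior(prior, likelihood):
    # eq. (1): pointwise product of prior and likelihood, renormalized
    joint = prior * likelihood
    return joint / joint.sum()

# Repeated updates with the same likelihood vector concentrate the
# posterior on argmax_i P(y|M_i): a "soft maximum" calculation.
p = np.full(3, 1.0 / 3.0)
lik = np.array([0.2, 0.5, 0.3])
for _ in range(50):
    p = bayes_posterior(p, lik)
```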
By the Theorem of Total Probability, the expected likelihood in the denominator equals P(y). In a moment we will replace this expected likelihood by an expected variance.

3 Density Matrices as Priors

We now let our prior D be an arbitrary symmetric positive¹ definite matrix of trace one. Such matrices are called density matrices in quantum physics. An outer product uu^⊤, where u has unit length, is called a dyad. Any mixture Σ_i α_i a_i a_i^⊤ of dyads a_i a_i^⊤ is a density matrix as long as the coefficients α_i are non-negative and sum to one. This is true even if the number of dyads is larger or smaller than the dimension of D. The trace of such a mixture is one because dyads have trace one and Σ_i α_i = 1. Of course any density matrix D can be decomposed based on an eigensystem: D = V δ V^⊤, where V is an orthogonal matrix of eigenvectors (V V^⊤ = I) and δ the diagonal matrix of eigenvalues. The vector of eigenvalues (δ_i) forms a probability vector whose dimension equals that of the density matrix. In quantum physics, the dyads are called pure states and density matrices are mixtures over such states. Note that in this paper we want to address the statistics community and use linear algebra notation instead of Dirac notation.

The probability vector (P(M_i)) can be represented as a diagonal matrix diag((P(M_i))) = Σ_i P(M_i) e_i e_i^⊤, where e_i denotes the ith standard basis vector. This means that probability vectors are special density matrices where the eigenvectors are fixed to the standard basis vectors.

¹We use the convention that positive definite matrices have non-negative eigenvalues and strictly positive definite matrices have positive eigenvalues.

4 Co-variance Matrices and Basic Notation

In this paper we replace the (conditional) data likelihoods P(y|M_i) by a data covariance matrix D(y|.) (a symmetric positive definite matrix). We now discuss such matrices in more detail.
A covariance matrix S can be depicted as an ellipse {Su : ||u||₂ ≤ 1} centered at the origin, where the eigenvectors form the principal axes and the eigenvalues are the lengths of the axes (see Figure 3). Assume S is the covariance matrix of some random cost vector c ∈ R^n, i.e.

  S = E[(c − E(c))(c − E(c))^⊤] .

Note that a covariance matrix S is diagonal if the components of the cost vector are independent. The variance of the cost vector c along a unit vector u has the form

  V(c^⊤u) = E[(c^⊤u − E(c^⊤u))²] = E[((c − E(c))^⊤ u)²] = u^⊤ S u ,

and the variance along an eigenvector is the corresponding eigenvalue (see Figure 3). Using this interpretation, the matrix S may be seen as a mapping S(.) from the unit ball to R_{≥0}, i.e. S(u) = u^⊤ S u. A second interpretation of the scalar u^⊤ S u is the square length of u w.r.t. the basis √S, that is u^⊤ S u = u^⊤ √S √S u = ||√S u||₂². Thirdly, u^⊤ S u is a quantum measurement of the pure state u with an instrument represented by S. Since the square length of u w.r.t. any orthonormal basis is one, the eigenbasis (s_i) of S turns the unit vector into an n-dimensional probability vector ((u^⊤ s_i)²). Now u^⊤ S u is the expected eigenvalue w.r.t. this probability vector: u^⊤ S u = Σ_i σ_i (u^⊤ s_i)².

The trace tr(A) of a square matrix A is the sum of its diagonal elements A_ii. Recall that tr(AB) = tr(BA) for any matrices A ∈ R^{n×m}, B ∈ R^{m×n}. The trace is unitarily invariant, i.e. for any orthogonal matrix U, tr(UAU^⊤) = tr(U^⊤UA) = tr(A). Also, tr(uu^⊤A) = tr(u^⊤Au) = u^⊤Au. Therefore the trace of a square matrix may be seen as the total variance along any set of orthogonal directions:

  tr(A) = tr(IA) = tr(Σ_i u_i u_i^⊤ A) = Σ_i u_i^⊤ A u_i .

In particular, the trace of a square matrix is the sum of its eigenvalues. The matrix exponential exp(S) of the symmetric matrix S = V σ V^⊤ is defined as V exp(σ) V^⊤, where exp(σ) is obtained by exponentiating the diagonal entries (eigenvalues). The matrix logarithm log(S) is defined similarly, but now S must be strictly positive definite.
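These spectral definitions translate directly into code; a sketch (ours) using the eigendecomposition of a symmetric matrix:

```python
import numpy as np

def sym_fun(S, f):
    # Apply f to the eigenvalues of the symmetric matrix S = V sigma V^T.
    w, V = np.linalg.eigh(S)
    return (V * f(w)) @ V.T

def sym_exp(S):
    return sym_fun(S, np.exp)

def sym_log(S):
    # Requires S strictly positive definite (all eigenvalues > 0).
    return sym_fun(S, np.log)
```

Since exp and log act only on the eigenvalues, the two helpers invert each other on strictly positive definite inputs.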
Clearly, the two functions are inverses of each other. It is important to remember that exp(S + T) = exp(S) exp(T) only holds iff the two symmetric matrices commute², i.e. ST = TS. However, the following trace inequality, known as the Golden-Thompson inequality [Bha97], always holds:

  tr(exp S exp T) ≥ tr(exp(S + T)) .   (2)

²This occurs iff the two symmetric matrices have the same eigensystem.

5 The Generalized Bayes Rule

The following experiment underlies the more general setup: if the prior is D(.) = Σ_i δ_i d_i d_i^⊤, then the dyad (or pure state) d_i d_i^⊤ is chosen with probability δ_i and a random variable c^⊤ d_i is observed, where c has covariance matrix D(y|.). In our generalization we replace the expected data likelihood P(y) = Σ_i P(M_i) P(y|M_i) by the following trace:

  tr(D(.) D(y|.)) = tr(Σ_i δ_i d_i d_i^⊤ D(y|.)) = Σ_i δ_i d_i^⊤ D(y|.) d_i .

Recall that d_i^⊤ D(y|.) d_i is the variance of c in direction d_i, i.e. V(c^⊤ d_i). Therefore the above trace is the expected variance along the eigenvectors of the density matrix, weighted by the eigenvalues. Curiously enough, this trace computation is a quantum measurement, where D(y|.) represents the instrument and D(.) the mixture state of the particle.

In the generalized Bayes rule we cannot simply multiply the prior density matrix with the covariance matrix that corresponds to the data likelihood. This is because a product of two symmetric positive definite matrices may be neither symmetric nor positive definite. Instead we define the operation ⊙ on the cone of symmetric positive definite matrices. We begin by defining this operation for the case when the matrices S and T are strictly positive definite (and symmetric):

  S ⊙ T := exp(log S + log T) .   (3)

The matrix log of both matrices produces symmetric matrices that sum to a symmetric matrix. Finally, the matrix exponential of the sum again produces a symmetric positive definite matrix. Note that the matrix log is not defined when the matrix has a zero eigenvalue.
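For strictly positive definite matrices, (3) is straightforward to realize numerically (a sketch, ours); for matrices with a common eigensystem it reduces to the ordinary matrix product, and the Golden-Thompson inequality (2) can be checked numerically as well:

```python
import numpy as np

def sym_fun(S, f):
    # Apply f to the eigenvalues of the symmetric matrix S.
    w, V = np.linalg.eigh(S)
    return (V * f(w)) @ V.T

def odot(S, T):
    # eq. (3), for strictly positive definite symmetric S and T
    return sym_fun(sym_fun(S, np.log) + sym_fun(T, np.log), np.exp)
```

For commuting inputs (e.g. two diagonal matrices) the result coincides with the matrix product ST, and S ⊙ S⁻¹ = I.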
However, for arbitrary symmetric positive definite matrices one can define the operation ⊙ as the following limit:

  S ⊙ T := lim_{n→∞} (S^{1/n} T^{1/n})^n .

This limit is the Lie Product Formula [Bha97] when S and T are both strictly positive definite, but it exists even if the matrices don't have full rank, and by Theorem 1.2 of [Sim79],

  range(S ⊙ T) = range(S) ∩ range(T) .

Assume that k is the dimension of range(S) ∩ range(T), that B is an orthonormal basis of range(S) ∩ range(T) (i.e. B ∈ R^{n×k}, B^⊤B = I_k, and range(B) = range(S) ∩ range(T)), and that log₊ denotes the modified matrix logarithm that takes logs of the non-zero eigenvalues but leaves zero eigenvalues unchanged. Then by the same theorem³,

  S ⊙ T = B exp(B^⊤(log₊ S + log₊ T) B) B^⊤ .   (4)

When both matrices have the same eigensystem, then ⊙ becomes the matrix product. One can show that ⊙ is associative, commutative, has the identity matrix I as its neutral element, and for any strictly positive definite and symmetric matrix S, S ⊙ S^{−1} = I. Finally, (cS) ⊙ T = c(S ⊙ T) for any non-negative scalar c. Using this new product operation, the generalized Bayes rule becomes:

  D(.|y) = D(.) ⊙ D(y|.) / tr(D(.) ⊙ D(y|.)) .   (5)

Normalizing by the trace assures that the trace of the posterior density matrix is one.

³The log₊ S term in the formula can be replaced by B̃ log(B̃^⊤ S B̃) B̃^⊤, where B̃ is an orthonormal basis of range(S), and similarly for log₊ T.

Figure 5: Assume the prior density matrix is the circle D(.) = [[1/2, 0], [0, 1/2]] and the data covariance matrix the degenerate NE-SW ellipse D(y|.) = (1/2) [[1, −1], [−1, 1]] = U [[0, 0], [0, 1]] U^⊤, where U = (1/√2) [[1, 1], [1, −1]]. Now for all diagonal density matrices S(.), tr(S(.) D(y|.)) = 1/2, i.e. the largest eigenvalue is not "visible" in basis I. But tr(U [[0, 0], [0, 1]] U^⊤ D(y|.)) = 1, where U [[0, 0], [0, 1]] U^⊤ is the posterior D(.|y) of the new rule.

As we see in Figure 2, this posterior moves toward the largest axis of the data covariance matrix, and the new rule can be interpreted as a soft calculation of the
eigenvector with maximum eigenvalue. When the matrices D(.) and D(y|.) have the same eigensystem, then ⊙ becomes matrix multiplication. In particular, when the prior is diag((P(M_i))) and the covariance matrix is diag((P(y|M_i))), then the new rule realizes the classical rule and computes diag((P(M_i|y))). Figure 5 gives an example that shows how the off-diagonal elements can be exploited by the new rule.

In the classical Bayes rule, the normalization factor is the expected data likelihood. In the case of the generalized Bayes rule, the expected variance only upper bounds the normalization factor via the Golden-Thompson inequality (2):

  tr(D(.) D(y|.)) ≥ tr(D(.) ⊙ D(y|.)) .   (6)

The classical Bayes rule can be applied iteratively to a sequence of data and various cancellations occur. For the sake of simplicity we only consider two data points y₁, y₂:

  P(M_i|y₂ y₁) = P(M_i|y₁) P(y₂|M_i, y₁) / P(y₂|y₁) = P(M_i) P(y₁|M_i) P(y₂|M_i, y₁) / P(y₂ y₁) ,

since, using (1),

  P(y₂|y₁) P(y₁) = (Σ_i P(M_i|y₁) P(y₂|M_i, y₁)) (Σ_i P(M_i) P(y₁|M_i)) = Σ_i P(M_i) P(y₁|M_i) P(y₂|M_i, y₁) = P(y₂ y₁) .   (7)

Analogously,

  D(.|y₂ y₁) = D(.|y₁) ⊙ D(y₂|., y₁) / tr(D(.|y₁) ⊙ D(y₂|., y₁)) = D(.) ⊙ D(y₁|.) ⊙ D(y₂|., y₁) / tr(D(.) ⊙ D(y₁|.) ⊙ D(y₂|., y₁)) .

Finally, the products of the expected variances for both trials combine in a similar way, except that in the generalized case the equality becomes an inequality: using (5),

  tr(D(.|y₁) D(y₂|., y₁)) tr(D(.) D(y₁|.)) ≥ tr(D(.|y₁) ⊙ D(y₂|., y₁)) tr(D(.) ⊙ D(y₁|.)) = tr(D(.) ⊙ D(y₁|.) ⊙ D(y₂|., y₁)) .

The above inequality is an instantiation of the Golden-Thompson inequality (2) and the above equality generalizes the middle equality in (7).

6 The Derivation of the Generalized Bayes Rule

The classical Bayes rule can be derived⁴ by minimizing a relative entropy to the prior plus a convex combination of the log losses of the models (see e.g. [KW99, SWRL03]):

  inf_{γ_i ≥ 0, Σ_i γ_i = 1}  Σ_i γ_i ln(γ_i / P(M_i)) − Σ_i γ_i log P(y|M_i) .
Without the relative entropy, the argument of the infimum is linear in the weights γ_i and is minimized when all weight is placed on the maximum likelihood models, i.e. the set of indices argmax_i P(y|M_i). The negative entropy ameliorates the maximum calculation and pulls the optimal solution towards the prior. Observe that the non-negativity constraints can be dropped since the entropy acts as a barrier. By introducing a Lagrange multiplier for the remaining constraint and differentiating, we obtain the solution

  γ*_i = P(M_i) P(y|M_i) / Σ_j P(M_j) P(y|M_j) ,

which is the classical Bayes rule (1). By plugging γ*_i into the argument of the infimum we obtain the optimum value −ln P(y). Notice that this is minus the logarithm of the normalization of the Bayes rule (1) and is also the log loss associated with the standard Bayesian setup.

To derive the new generalized Bayes rule in an analogous way, we use the quantum physics generalization of the relative entropy between two densities G and D (due to Umegaki): tr(G(log G − log D)). We also need to replace the mixture of negative log likelihoods by the trace −tr(G log D(y|.)). Now the matrix parameter G is constrained to be a density matrix and the minimization problem becomes⁵:

  inf_{G dens. matr.}  tr(G(log G − log D(.))) − tr(G log D(y|.)) .

Except for the quantum relative entropy term, the argument of the infimum is again linear in the variable G and is minimized when G is a single dyad uu^⊤, where u is the eigenvector belonging to the maximum eigenvalue of the matrix log D(y|.). The linear term pulls G toward a direction of high variance of this matrix, whereas the quantum relative entropy pulls G toward the prior density matrix. The density matrix constraint requires the eigenvalues of G to be non-negative and the trace of G to be one. The entropy works as a barrier for the non-negativity constraints and thus these constraints can be dropped.
Again by introducing a Lagrange multiplier for the remaining trace constraint and differentiating (following [TRW05]), we arrive at a formula for the optimum G* which coincides with the formula for D(.|y) given in the generalized Bayes rule (5), where ⊙ is defined⁶ as in (3). Since the quantum relative entropy is strictly convex [NC00] in G, the optimum G* is unique.

⁴For the sake of simplicity assume that for all i, P(M_i) and P(y|M_i) are non-negative.
⁵Assume here that D(.) and D(y|.) are both strictly positive definite.
⁶With some work, one can also derive the Bayes rule with the fancier ⊙ operation (4).

7 Conclusion

Our generalized Bayes rule suggests a definition of conditional density matrices and we are currently developing a calculus for such matrices. In particular, a common formalism is needed that includes the multivariate conditional density matrices defined in [CA99] based on tensors. In this paper we only considered real symmetric matrices. However, our methods immediately generalize to complex Hermitian matrices, i.e. square matrices in C^{n×n} for which S = S̄^⊤ = S*. Now both the prior density matrix and the data covariance matrix must be Hermitian instead of symmetric. The generalized Bayes rule for symmetric positive definite matrices relies on computing eigendecompositions (Ω(n³) time). Hopefully, there exist O(n²) versions of the update that approximate the generalized Bayes rule sufficiently well.

Extensive research has been done in the so-called "expert framework" (see e.g. [KW99] for a list of references), where a mixture over experts is maintained by the on-line algorithm for the purpose of performing as well as the best expert chosen in hindsight. In preliminary research we showed that one can maintain a density matrix over the base experts instead and derive updates similar to the generalized Bayes rule given in this paper. Most importantly, the bounds generalize to the case when mixtures over experts are replaced by density matrices.
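As a numerical illustration (ours, not the paper's code) of the rule (5) in the strictly positive definite case: for diagonal inputs it reproduces the classical posterior, and iterating it drives the posterior toward the top eigenvector of the data covariance, the "soft eigenvector calculation" of Figure 2. Each update costs one eigendecomposition, matching the Ω(n³) complexity noted above:

```python
import numpy as np

def sym_fun(S, f):
    # Apply f to the eigenvalues of the symmetric matrix S.
    w, V = np.linalg.eigh(S)
    return (V * f(w)) @ V.T

def generalized_bayes(D_prior, D_lik):
    # Rule (5), strictly positive definite case: exp(log D(.) + log D(y|.)),
    # normalized to trace one.
    post = sym_fun(sym_fun(D_prior, np.log) + sym_fun(D_lik, np.log), np.exp)
    return post / np.trace(post)

# Diagonal case: reduces to the classical Bayes rule (1).
post = generalized_bayes(np.diag([0.6, 0.4]), np.diag([0.8, 0.2]))

# Iterating with a fixed covariance pulls the posterior density toward
# the covariance's top eigenvector (here (1, 1)/sqrt(2)).
D = np.eye(2) / 2
cov = np.array([[2.0, 1.0], [1.0, 2.0]])
for _ in range(20):
    D = generalized_bayes(D, cov)
```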
Acknowledgment: We would like to thank Dima Kuzmin for his extensive help with all aspects of this paper. Thanks also to Torsten Ehrhardt who first proved to us the range intersection and projection properties of the ⊙ operation.
References
[Bha97] R. Bhatia. Matrix Analysis. Springer, Berlin, 1997.
[CA99] N. J. Cerf and C. Adam. Quantum extension of conditional probability. Physical Review A, 60(2):893–897, August 1999.
[KW97] J. Kivinen and M. K. Warmuth. Additive versus exponentiated gradient updates for linear prediction. Information and Computation, 132(1):1–64, January 1997.
[KW99] J. Kivinen and M. K. Warmuth. Averaging expert predictions. In Computational Learning Theory: 4th European Conference (EuroCOLT ’99), pages 153–167, Berlin, March 1999. Springer.
[NC00] M.A. Nielsen and I.L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2000.
[SBC01] R. Schack, T. A. Brun, and C. M. Caves. Quantum Bayes rule. Physical Review A, 64(014305), 2001.
[Sim79] Barry Simon. Functional Integration and Quantum Physics. Academic Press, New York, 1979.
[SWRL03] R. Singh, M. K. Warmuth, B. Raj, and P. Lamere. Classification with free energy at raised temperatures. In Proc. of EUROSPEECH 2003, pages 1773–1776, September 2003.
[TRW05] K. Tsuda, G. Rätsch, and M. K. Warmuth. Matrix exponentiated gradient updates for on-line learning and Bregman projections. Journal of Machine Learning Research, 6:995–1018, June 2005.
2005
Combining Graph Laplacians for Semi-Supervised Learning Andreas Argyriou, Mark Herbster, Massimiliano Pontil Department of Computer Science University College London Gower Street, London WC1E 6BT, England, UK {a.argyriou, m.herbster, m.pontil}@cs.ucl.ac.uk Abstract A foundational problem in semi-supervised learning is the construction of a graph underlying the data. We propose to use a method which optimally combines a number of differently constructed graphs. For each of these graphs we associate a basic graph kernel. We then compute an optimal combined kernel. This kernel solves an extended regularization problem which requires a joint minimization over both the data and the set of graph kernels. We present encouraging results on different OCR tasks where the optimal combined kernel is computed from graphs constructed with a variety of distance functions and values of ‘k’ in k-nearest neighbors. 1 Introduction Semi-supervised learning has received significant attention in machine learning in recent years, see, for example, [2, 3, 4, 8, 9, 16, 17, 18] and references therein. The defining insight of semi-supervised methods is that unlabeled data may be used to improve the performance of learners in a supervised task. One of the key semi-supervised learning methods builds on the assumption that the data is situated on a low-dimensional manifold within the ambient space of the data and that this manifold can be approximated by a weighted discrete graph whose vertices are identified with the empirical (labeled and unlabeled) data [3, 17]. Graph construction consists of two stages: first, the selection of a distance function, and then its application to determine the graph’s edges (or their weights). For example, in this paper we consider distances between images based on the Euclidean distance, Euclidean distance combined with image transformations, and the related tangent distance [6]; we determine the edge set of the graph with k-nearest neighbors.
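As a concrete sketch of this second stage (our own illustration, not the authors' code), a symmetric k-nearest-neighbor adjacency matrix under the Euclidean distance can be built as follows:

```python
import numpy as np

def knn_adjacency(X, k):
    """Symmetric adjacency of the k-nearest-neighbor graph under Euclidean
    distance: i ~ j if i is among j's k nearest neighbors or vice versa."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)           # no self-loops
    A = np.zeros_like(D)
    nn = np.argsort(D, axis=1)[:, :k]     # k nearest neighbors of each point
    rows = np.repeat(np.arange(len(X)), k)
    A[rows, nn.ravel()] = 1.0
    return np.maximum(A, A.T)             # symmetrize (undirected graph)

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 2))
A = knn_adjacency(X, 3)
assert (A == A.T).all() and A.diagonal().sum() == 0
assert (A.sum(axis=1) >= 3).all()         # every vertex has degree >= k
```

The same construction works for any distance matrix D, e.g. the transformation or tangent distances used later in the paper.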
Another common choice is to weight edges by a decreasing function of the distance d such as e^(−βd²). Although a surplus of unlabeled data may improve the quality of the empirical approximation of the manifold (via the graph), leading to improved performance, practical experience with these methods indicates that their performance significantly depends on how the graph is constructed. Hence, the model selection problem must consider both the selection of the distance function and the parameters k or β used in the graph building process described above. A diversity of methods has been proposed for graph construction; in this paper we do not advocate selecting a single graph but rather propose combining a number of graphs. Our solution implements a method based on regularization which builds upon the work in [1]. For a given dataset, each combination of a distance function and an edge set specification from that distance will lead to a specific graph. Each of these graphs may then be associated with a kernel. We then apply regularization to select the best convex combination of these kernels; the minimizing function will trade off its fit to the data against its norm. What is unique about this regularization is that the minimization is not over a single kernel space but rather over a space corresponding to all convex combinations of kernels. Thus all data (labeled vertices) may be conserved for training rather than reduced by cross-validation, which is not an appealing option when the number of labeled vertices per class is very small. Figure 3 in Section 4 illustrates our algorithm on a simple example. There, three different distances for 400 images of the digits ‘six’ and ‘nine’ are depicted, namely, the Euclidean distance, a distance invariant under small centered image rotations from [−10°, 10°] and a distance invariant under rotations from [−180°, 180°]. Clearly, the last distance is problematic as sixes become similar to nines.
The performance of our graph regularization learning algorithm discussed in Section 2.2 with these distances is reported below each plot; as expected, this performance is much lower in the case that the third distance is used. The paper is organized as follows. In Section 2 we discuss how regularization may be applied to single graphs. First, we review regularization in the context of reproducing kernel Hilbert spaces (Section 2.1); then in Section 2.2 we specialize our discussion to Hilbert spaces of functions defined over a graph. Here we review the (normalized) Laplacian of the graph and a kernel which is the pseudoinverse of the graph Laplacian. In Section 3 we detail our algorithm for learning an optimal convex combination of Laplacian kernels. Finally, in Section 4 we present experiments on the USPS dataset with our algorithm trained over different classes of Laplacian kernels.
2 Background on graph regularization
In this section we review graph regularization [2, 9, 14] from the perspective of reproducing kernel Hilbert spaces, see e.g. [12].
2.1 Reproducing kernel Hilbert spaces
Let X be a set and K : X × X → IR a kernel function. We say that H_K is a reproducing kernel Hilbert space (RKHS) of functions f : X → IR if (i): for every x ∈ X, K(x, ·) ∈ H_K and (ii): the reproducing kernel property f(x) = ⟨f, K(x, ·)⟩_K holds for every f ∈ H_K and x ∈ X, where ⟨·, ·⟩_K is the inner product on H_K. In particular, (ii) tells us that for x, t ∈ X, K(x, t) = ⟨K(x, ·), K(t, ·)⟩_K, implying that the p × p matrix (K(t_i, t_j) : i, j ∈ IN_p) is symmetric and positive semi-definite for any set of inputs {t_i : i ∈ IN_p} ⊆ X, p ∈ IN, where we use the notation IN_p := {1, . . . , p}. Regularization in an RKHS learns a function f ∈ H_K on the basis of available input/output examples {(x_i, y_i) : i ∈ IN_ℓ} by solving the variational problem

E_γ(K) := min { Σ_{i=1}^ℓ V(y_i, f(x_i)) + γ ‖f‖²_K : f ∈ H_K }    (2.1)

where V : IR × IR → [0, ∞) is a loss function and γ a positive parameter.
Moreover, if f is a solution to problem (2.1) then it has the form

f(x) = Σ_{i=1}^ℓ c_i K(x_i, x), x ∈ X    (2.2)

for some real vector of coefficients c = (c_i : i ∈ IN_ℓ)⊤, see, for example, [12], where “⊤” denotes transposition. This vector can be found by replacing f by the right hand side of equation (2.2) in equation (2.1) and then optimizing with respect to c. However, in many practical situations it is more convenient to compute c by solving the dual problem to (2.1), namely

−E_γ(K) := min { (1/4γ) c⊤K̃c + Σ_{i=1}^ℓ V*(y_i, c_i) : c ∈ IR^ℓ }    (2.3)

where K̃ = (K(x_i, x_j))_{i,j=1}^ℓ and the function V* : IR × IR → IR ∪ {+∞} is the conjugate of the loss function V, which is defined, for every z, α ∈ IR, as V*(z, α) := sup{λα − V(z, λ) : λ ∈ IR}, see, for example, [1] for a discussion. The choice of the loss function V leads to different learning methods, among which the most prominent are square loss regularization and support vector machines, see, for example, [15].
2.2 Graph regularization
Let G be an undirected graph with m vertices and an m × m adjacency matrix A such that A_ij = 1 if there is an edge connecting vertices i and j and zero otherwise (footnote 1). The graph Laplacian L is the m × m matrix defined as L := D − A, where D = diag(d_i : i ∈ IN_m) and d_i is the degree of vertex i, that is d_i = Σ_{j=1}^m A_ij. We identify the linear space of real-valued functions defined on the graph with IR^m and introduce on it the semi-inner product ⟨u, v⟩ := u⊤Lv, u, v ∈ IR^m. The induced semi-norm is ‖v‖ := √⟨v, v⟩, v ∈ IR^m. It is a semi-norm since ‖v‖ = 0 if v is a constant vector, as can be verified by noting that ‖v‖² = (1/2) Σ_{i,j=1}^m (v_i − v_j)² A_ij. We recall that G has r connected components if and only if L has r eigenvectors with zero eigenvalues. Those eigenvectors are piece-wise constant on the connected components of the graph. In particular, G is connected if and only if the constant vector is the only eigenvector of L with zero eigenvalue [5].
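These two facts about the graph Laplacian can be checked directly on a small graph (a sketch of ours, with numpy in place of any particular graph library):

```python
import numpy as np

# Graph Laplacian L = D - A for a small undirected graph: a triangle {0,1,2}
# plus an isolated vertex 3, so the graph has 2 connected components.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 0, 0, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# Semi-norm identity: v' L v = (1/2) * sum_ij (v_i - v_j)^2 A_ij
v = np.array([1.0, -2.0, 0.5, 3.0])
lhs = v @ L @ v
rhs = 0.5 * sum((v[i] - v[j])**2 * A[i, j] for i in range(4) for j in range(4))
assert np.isclose(lhs, rhs)

# r connected components <=> r zero eigenvalues of L.
eigvals = np.linalg.eigvalsh(L)
assert np.sum(np.isclose(eigvals, 0.0)) == 2
```

The same two checks carry over unchanged to weighted graphs, with A_ij holding the edge weights.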
We let {σ_i, u_i}_{i=1}^m be a system of eigenvalues/vectors of L where the eigenvalues are non-decreasing in order, σ_i = 0, i ∈ IN_r, and define the linear subspace H(G) of IR^m which is orthogonal to the eigenvectors with zero eigenvalue, that is, H(G) := {v : v⊤u_i = 0, i ∈ IN_r}. Within this framework, we wish to learn a function v ∈ H(G) on the basis of a set of labeled vertices. Without loss of generality we assume that the first ℓ ≤ m vertices are labeled and let y_1, ..., y_ℓ ∈ {−1, 1} be the corresponding labels. Following [2] we prescribe a loss function V and compute the function v by solving the optimization problem

min { Σ_{i=1}^ℓ V(y_i, v_i) + γ ‖v‖² : v ∈ H(G) }.    (2.4)

We note that a similar approach is presented in [17] where v is (essentially) obtained as the minimal norm interpolant in H(G) to the labeled vertices. The functional (2.4) balances the error on the labeled points with a smoothness term measuring the complexity of v on the graph. Note that this last term contains the information of both the labeled and unlabeled vertices via the graph Laplacian. (Footnote 1: The ideas we discuss below naturally extend to weighted graphs.) Method (2.4) is a special case of problem (2.1). Indeed, the restriction of the semi-norm ‖·‖ to H(G) is a norm. Moreover, the pseudoinverse of the Laplacian, L⁺, is the reproducing kernel of H(G), see, for example, [7] for a proof. This means that for every v ∈ H(G) and i ∈ IN_m there holds the reproducing kernel property v_i = ⟨L⁺_i, v⟩, where L⁺_i is the i-th column of L⁺. Hence, by setting X ≡ IN_m, f(i) = v_i and K(i, j) = L⁺_ij, i, j ∈ IN_m, we see that H_K ≡ H(G). We note that the above analysis naturally extends to the case that L is replaced by any positive semidefinite matrix. In particular, in our experiments below we will use the normalized Laplacian matrix given by D^{−1/2} L D^{−1/2}. Typically, problem (2.4) is solved by optimizing over v = (v_i : i ∈ IN_m).
In particular, for square loss regularization [2] and minimal norm interpolation [17] this requires solving a square linear system of m and m − ℓ equations, respectively. On the contrary, in this paper we use the representer theorem to express v as

v_i = Σ_{j=1}^ℓ L⁺_ij c_j, i ∈ IN_m.

This approach is advantageous if L⁺ can be computed off-line because, typically, ℓ ≪ m. A further advantage of this approach is that multiple problems may be solved with the same Laplacian kernel. The coefficients c_i are obtained by solving problem (2.3) with K̃ = (L⁺_ij)_{i,j=1}^ℓ. For example, for square loss regularization the computation of the parameter vector c = (c_i : i ∈ IN_ℓ) involves solving a linear system of ℓ equations, namely

(K̃ + γI)c = y.    (2.5)

3 Learning a convex combination of Laplacian kernels
We now describe our framework for learning with multiple graph Laplacians. We assume that we are given n graphs G^(q), q ∈ IN_n, all having m vertices, with corresponding Laplacians L^(q), kernels K^(q) = (L^(q))⁺, Hilbert spaces H^(q) := H(G^(q)) and norms ‖v‖²_q := v⊤L^(q)v, v ∈ H^(q). We propose to learn an optimal convex combination of graph kernels, that is, we solve the optimization problem

ρ = min { Σ_{i=1}^ℓ V(y_i, v_i) + γ ‖v‖²_{K(λ)} : λ ∈ Λ, v ∈ H_{K(λ)} }    (3.1)

where we have defined the set Λ := {λ ∈ IR^n : λ_q ≥ 0, Σ_{q=1}^n λ_q = 1} and, for each λ ∈ Λ, the kernel K(λ) := Σ_{q=1}^n λ_q K^(q). The above problem is motivated by observing that ρ ≤ min{E_γ(K^(q)) : q ∈ IN_n}. Hence an optimal convex combination of kernels attains an objective value no larger than that of any individual kernel, motivating the expectation of improved performance. Furthermore, large values of the components of the minimizing λ identify the most relevant kernels. Problem (3.1) is a special case of the problem of jointly minimizing functional (2.1) over v ∈ H_K and K ∈ co(K), the convex hull of kernels in a prescribed set K. This problem is discussed in detail in [1, 12], see also [10, 11] where the case that K is finite is considered.
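The single-kernel pipeline up to equation (2.5) can be sketched end to end (our own toy example, assuming a small weighted graph; this is not the authors' code):

```python
import numpy as np

def laplacian_kernel_predict(A, y, labeled, gamma=1e-5):
    """Square-loss graph regularization with the kernel K = L^+:
    solve (K~ + gamma I) c = y on the labeled vertices (Eq. 2.5),
    then extend to every vertex via v_i = sum_j K[i, labeled[j]] c_j."""
    L = np.diag(A.sum(axis=1)) - A
    K = np.linalg.pinv(L)                        # reproducing kernel of H(G)
    Ksub = K[np.ix_(labeled, labeled)]           # K~ restricted to the labels
    c = np.linalg.solve(Ksub + gamma * np.eye(len(labeled)), y)
    return K[:, labeled] @ c

# Two triangles joined by one weak weighted edge; label one vertex in each.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
A[2, 3] = A[3, 2] = 0.01                         # weak bridge between cliques
v = laplacian_kernel_predict(A, np.array([1.0, -1.0]), [0, 5])
assert (v[:3] > 0).all() and (v[3:] < 0).all()   # labels propagate per clique
```

Because ℓ = 2 here, the linear system in (2.5) is 2 × 2 even though the graph has 6 vertices, which is the computational point made above.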
Practical experience with this method [1, 10, 11] indicates that it can enhance the performance of the learning algorithm and, moreover, it is computationally efficient to solve. When solving problem (3.1) it is important to require that the kernels K^(q) satisfy a normalization condition, such as that they all have the same trace or the same Frobenius norm, see [10] for a discussion.

Initialization: Choose K^(1) ∈ co{K^(q) : q ∈ IN_n}
For t = 1 to T:
1. compute c^(t) to be the solution of problem (2.3) with K = K^(t);
2. find q ∈ IN_n : (c^(t), K^(q)c^(t)) > (c^(t), K^(t)c^(t)). If no such q exists, terminate;
3. compute p̂ = argmin { E_γ(pK^(q) + (1 − p)K^(t)) : p ∈ (0, 1] };
4. set K^(t+1) = p̂K^(q) + (1 − p̂)K^(t).
Figure 1: Algorithm to compute an optimal convex combination of kernels in the set co{K^(q) : q ∈ IN_n}.

Using the dual problem formulation discussed above (see equation (2.3)) in the inner minimum in (3.1), we can rewrite this problem as

−ρ = max { min { (1/4γ) c⊤K̃(λ)c + Σ_{i=1}^ℓ V*(y_i, c_i) : c ∈ IR^ℓ } : λ ∈ Λ }.    (3.2)

The variational problem (3.2) expresses the optimal convex combination of the kernels as the solution to a saddle point problem. This problem is simpler to solve than the original problem (3.1) since its objective function is linear in λ, see [1] for a discussion. Several algorithms can be used for computing a saddle point (ĉ, λ̂) ∈ IR^ℓ × Λ. Here we adapt an algorithm from [1] which alternately optimizes over c and λ. For reproducibility, the algorithm is reported in Figure 1. Note that once λ̂ is computed, ĉ is given by a minimizer of problem (2.3) for K = K(λ̂). In particular, for square loss regularization this requires solving equation (2.5) with K̃ = (K_ij(λ̂) : i, j ∈ IN_ℓ).
4 Experiments
In this section we present our experiments on optical character recognition. We observed the following. First, the optimal convex combination of kernels computed by our algorithm is competitive with the best base kernels.
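A simplified sketch of the Figure 1 scheme for the square loss (our own variant: it operates directly on the labeled kernel blocks and uses a grid line search, and it exploits the closed form E_γ(K̃) = γ y⊤(K̃ + γI)⁻¹y, which follows from substituting the solution of (2.5) into the objective):

```python
import numpy as np

def E_gamma(Ksub, y, gamma):
    """For the square loss, min_c ||y - K c||^2 + gamma c' K c is attained
    at c = (K + gamma I)^{-1} y, with optimal value gamma * y' c."""
    c = np.linalg.solve(Ksub + gamma * np.eye(len(y)), y)
    return gamma * (y @ c), c

def combine_kernels(Ks, y, gamma=0.1, T=100):
    """Greedy scheme of Figure 1 on the labeled kernel blocks Ks."""
    K = sum(Ks) / len(Ks)                       # start from the average kernel
    for _ in range(T):
        E, c = E_gamma(K, y, gamma)
        vals = [c @ Kq @ c for Kq in Ks]        # step 2: quadratic forms
        q = int(np.argmax(vals))
        if vals[q] <= c @ K @ c + 1e-12:
            break                               # no improving kernel: stop
        ps = np.linspace(1e-3, 1.0, 200)        # step 3: line search over p
        Es = [E_gamma(p * Ks[q] + (1 - p) * K, y, gamma)[0] for p in ps]
        if min(Es) >= E:
            break
        p_hat = ps[int(np.argmin(Es))]
        K = p_hat * Ks[q] + (1 - p_hat) * K     # step 4
    return K, E_gamma(K, y, gamma)[0]

rng = np.random.default_rng(2)
y = np.sign(rng.normal(size=8))
Ka = rng.normal(size=(8, 8)); Ka = Ka @ Ka.T
Kb = rng.normal(size=(8, 8)); Kb = Kb @ Kb.T
Ks = [Ka, Kb, Ka + Kb]       # the third kernel dominates in the PSD order
K_opt, E_opt = combine_kernels(Ks, y)
# The learned combination is never worse than the best single kernel.
assert E_opt <= min(E_gamma(Kq, y, 0.1)[0] for Kq in Ks) + 1e-9
```

In this toy family the third kernel dominates the others in the positive semidefinite order, so the optimal combination sits at that vertex and the assertion mirrors the bound ρ ≤ min{E_γ(K^(q))}.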
Second, by observing the ‘weights’ of the convex combination we can distinguish the strong from the weak candidate kernels. We proceed by discussing the details of the experimental design interleaved with our results. We used the USPS dataset (footnote 2: available at http://www-stat-class.stanford.edu/~tibs/ElemStatLearn/data.html) of 16×16 images of handwritten digits with pixel values ranging between -1 and 1. We present the results for 5 pairwise classification tasks of varying difficulty and for odd vs. even digit classification. For pairwise classification, the training set consisted of the first 200 images for each digit in the USPS training set and the number of labeled points was chosen to be 4, 8 or 12 (with equal numbers for each digit). For odd vs. even digit classification, the training set consisted of the first 80 images per digit in the USPS training set and the number of labeled points was 10, 20 or 30, with equal numbers for each digit. Performance was averaged over 30 random selections, each with the same number of labeled points. In each experiment, we constructed n = 30 graphs G^(q) (q ∈ IN_n) by combining k-nearest neighbors (k ∈ IN_10) with three different distances. Then, n corresponding Laplacians were computed together with their associated kernels. We chose as the loss function V the square loss. Since kernels obtained from different types of graphs can vary widely, it was necessary to renormalize them. Hence, we chose to normalize each kernel during the training process by the Frobenius norm of its submatrix corresponding to the labeled data. We also observed that similar results were obtained when normalizing with the trace of this submatrix. The regularization parameter was set to 10^−5 in all algorithms. For convex minimization, as the starting kernel in the algorithm in Figure 1 we always used the average of the n kernels, and as the maximum number of iterations T = 100. Table 1 shows the results obtained using three distances as combined with k-NN (k ∈ IN_10). The first distance is the Euclidean distance between images. The second method is transformation, where the distance between two images is given by the smallest Euclidean distance between any pair of transformed images as determined by applying a number of affine transformations and a thickness transformation (footnote 3), see [6] for more information. The third distance is tangent distance, as described in [6], which is a first-order approximation to the above transformations. For the first three columns in the table the Euclidean distance was used, for columns 4–6 the image transformation distance was used, for columns 7–9 the tangent distance was used.

Table 1: Misclassification error percentage (first row of each pair) and standard deviation (second row) for the best convex combination of kernels on different handwritten digit recognition tasks, using different distances. See text for description.

                 Euclidean (10 kernels)  Transf. (10 kernels)   Tangent dist. (10 kernels)  All (30 kernels)
Task \ Labels    1%     2%     3%        1%     2%     3%       1%     2%     3%            1%     2%     3%
1 vs. 7          1.55   1.53   1.50      1.45   1.45   1.38     1.01   1.00   1.00          1.28   1.24   1.20
                 0.08   0.05   0.15      0.10   0.11   0.12     0.00   0.09   0.11          0.28   0.27   0.22
2 vs. 3          3.08   3.34   3.38      0.80   0.85   0.82     0.73   0.19   0.03          0.79   0.25   0.10
                 0.85   1.21   1.29      0.40   0.38   0.32     0.93   0.51   0.09          0.93   0.61   0.21
2 vs. 7          4.46   4.04   3.56      3.27   2.92   2.96     2.95   2.30   2.14          3.51   2.54   2.41
                 1.17   1.21   0.82      1.16   1.26   1.08     1.79   0.76   0.53          1.92   0.97   0.89
3 vs. 8          7.33   7.30   7.03      6.98   6.87   6.50     4.43   4.22   3.96          4.80   4.32   4.20
                 1.67   1.49   1.43      1.57   1.77   1.78     1.21   1.36   1.25          1.57   1.46   1.53
4 vs. 7          2.90   2.64   2.25      1.81   1.82   1.69     0.88   0.90   0.90          1.04   1.14   1.13
                 0.77   0.78   0.77      0.26   0.42   0.45     0.17   0.20   0.20          0.37   0.42   0.39
Labels           10     20     30        10     20     30       10     20     30            10     20     30
Odd vs. Even     18.6   15.5   13.4      15.7   11.7   8.52     14.66  10.50  8.38          17.07  10.98  8.74
                 3.98   2.40   2.67      4.40   3.14   1.32     4.37   2.30   1.90          4.38   2.61   2.39
Finally, in the last three columns all three methods were jointly compared. As the results indicate, when combining different types of kernels the algorithm tends to select the most effective ones (in this case the tangent distance kernels and, to a lesser degree, the transformation distance kernels, which did not work very well because of the Matlab optimization routine we used). We also noted that within each of the methods the performance of the convex combination is comparable to that of the best kernels. Figure 2 reports the weight of each individual kernel learned by our algorithm when 2% labels are used in the pairwise tasks and 20 labels are used for odd vs. even. With the exception of the easy 1 vs. 7 task, the large weights are associated with the graphs/kernels built with the tangent distance. The effectiveness of our algorithm in selecting the good graphs/kernels is better demonstrated in Figure 3, where the Euclidean and the transformation kernels are combined with a “low-quality” kernel. This “low-quality” kernel is induced by considering distances invariant over rotation in the range [−180°, 180°], so that the image of a 6 can easily have a small distance from an image of a 9. That is, if x and t are two images and T_θ(x) is the image obtained by rotating x by θ degrees, we set

d(x, t) = min{ ‖T_θ(x) − T_θ′(t)‖ : θ, θ′ ∈ [−180°, 180°] }.

(Footnote 3: This distance was approximated using Matlab’s constrained minimization function.) The figure shows the distance matrix on the set of labeled and unlabeled data for the Euclidean, transformation and “low-quality” distance, respectively. The best error among 15 different values of k within each method, the error of the learned convex combination and the total learned weights for each method are shown below each plot. It is clear that the solution of the algorithm is dominated by the good kernels and is not influenced by the ones with low performance.
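A rotation-invariant distance of this kind can be approximated on a finite angle grid rather than by continuous optimization (our own sketch, using a crude nearest-neighbor image rotation instead of the paper's Matlab routine):

```python
import numpy as np

def rotate(img, theta):
    """Nearest-neighbor rotation of a square image about its center."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    xs_c, ys_c = xs - c, ys - c
    # inverse-rotate the target grid back into the source image
    xsrc = np.cos(theta) * xs_c + np.sin(theta) * ys_c + c
    ysrc = -np.sin(theta) * xs_c + np.cos(theta) * ys_c + c
    xi = np.clip(np.rint(xsrc).astype(int), 0, n - 1)
    yi = np.clip(np.rint(ysrc).astype(int), 0, n - 1)
    out = img[yi, xi]
    out[(xsrc < -0.5) | (xsrc > n - 0.5) | (ysrc < -0.5) | (ysrc > n - 0.5)] = 0.0
    return out

def rotation_invariant_distance(x, t, angles):
    """d(x,t) = min over theta, theta' of ||T_theta(x) - T_theta'(t)||,
    approximated on a finite grid of angles."""
    rx = [rotate(x, a) for a in angles]
    rt = [rotate(t, a) for a in angles]
    return min(np.linalg.norm(a - b) for a in rx for b in rt)

angles = np.deg2rad(np.arange(-180, 180, 15))
img = np.zeros((16, 16)); img[4:12, 7:9] = 1.0     # a vertical bar
bar90 = rotate(img, np.deg2rad(90.0))              # the same bar, rotated
assert np.linalg.norm(img - bar90) > 1.0           # plainly different images...
assert rotation_invariant_distance(img, bar90, angles) < 1e-6   # ...but d ~ 0
```

This makes concrete why the distance is "low-quality" for digit recognition: a 6 rotated onto a 9 receives a small d just as the rotated bar does here.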
As a result, the error of the convex combination is comparable to that of the Euclidean and transformation methods. The final experiment (see Figure 4) demonstrates that unlabeled data improves the performance of our method.
Figure 2: Kernel weights for Euclidean (first 10), Transformation (middle 10) and Tangent (last 10), for the tasks 1 vs. 7, 2 vs. 3, 2 vs. 7, 3 vs. 8, 4 vs. 7 and odd vs. even. See text for more information.
Figure 3: Similarity matrices and corresponding learned coefficients of the convex combination for the 6 vs. 9 task. Euclidean: error = 0.24%, Σ_{i=1}^{15} λ_i = 0.553; Transformation: error = 0.24%, Σ_{i=16}^{30} λ_i = 0.406; Low-quality distance: error = 17.47%, Σ_{i=31}^{45} λ_i = 0.041; convex combination: error = 0.26%. See text for description.
5 Conclusion
We have presented a method for computing an optimal kernel within the framework of regularization over graphs. The method consists of a minimax problem which can be efficiently solved by using an algorithm from [1]. When tested on optical character recognition tasks, the method exhibits competitive performance and is able to select good graph structures. Future work will focus on out-of-sample extensions of this algorithm and on continuous optimization versions of it. In particular, we may consider a continuous family of graphs each corresponding to a different weight matrix and study graph kernel combinations over this class.
Figure 4: Misclassification error vs. number of training points for odd vs. even classification. The number of labeled points is 10 on the left and 20 on the right.
References
[1] A. Argyriou, C.A. Micchelli and M. Pontil. Learning convex combinations of continuously parameterized basic kernels. Proc. 18-th Conf. on Learning Theory, 2005.
[2] M. Belkin, I. Matveeva and P. Niyogi. Regularization and semi-supervised learning on large graphs. Proc. of 17-th Conf. on Learning Theory (COLT), 2004.
[3] M. Belkin and P. Niyogi. Semi-supervised learning on Riemannian manifolds. Mach. Learn., 56: 209–239, 2004.
[4] A. Blum and S. Chawla. Learning from Labeled and Unlabeled Data using Graph Mincuts. Proc. of 18-th Int. Conf. on Machine Learning, 2001.
[5] F.R. Chung. Spectral Graph Theory. Regional Conference Series in Mathematics, Vol. 92, 1997.
[6] T. Hastie and P. Simard. Models and Metrics for Handwritten Character Recognition. Statistical Science, 13(1): 54–65, 1998.
[7] M. Herbster, M. Pontil, L. Wainer. Online learning over graphs. Proc. 22-nd Int. Conf. Machine Learning, 2005.
[8] T. Joachims. Transductive Learning via Spectral Graph Partitioning. Proc. of the Int. Conf. Machine Learning (ICML), 2003.
[9] R.I. Kondor and J. Lafferty. Diffusion kernels on graphs and other discrete input spaces. Proc. 19-th Int. Conf. Machine Learning, 2002.
[10] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, M. I. Jordan. Learning the kernel matrix with semidefinite programming. J. Machine Learning Research, 5: 27–72, 2004.
[11] Y. Lin and H.H. Zhang. Component selection and smoothing in smoothing spline analysis of variance models – COSSO. Institute of Statistics Mimeo Series 2556, NCSU, January 2003.
[12] C. A. Micchelli and M. Pontil. Learning the kernel function via regularization. J. Machine Learning Research, 6: 1099–1125, 2005.
[13] C.S. Ong, A.J. Smola, and R.C.
Williamson. Hyperkernels. Advances in Neural Information Processing Systems, 15, S. Becker et al. (Eds.), MIT Press, Cambridge, MA, 2003.
[14] A.J. Smola and R.I. Kondor. Kernels and regularization on graphs. Proc. of 16-th Conf. on Learning Theory (COLT), 2003.
[15] V.N. Vapnik. Statistical Learning Theory. Wiley, New York, 1998.
[16] D. Zhou, O. Bousquet, T.N. Lal, J. Weston and B. Schölkopf. Learning with local and global consistency. Advances in Neural Information Processing Systems, 16, S. Thrun et al. (Eds.), MIT Press, Cambridge, MA, 2004.
[17] X. Zhu, Z. Ghahramani and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. Proc. 20-th Int. Conf. Machine Learning, 2003.
[18] X. Zhu, J. Kandola, Z. Ghahramani, J. Lafferty. Nonparametric transforms of graph kernels for semi-supervised learning. Advances in Neural Information Processing Systems, 17, L.K. Saul et al. (Eds.), MIT Press, Cambridge, MA, 2005.
2005
Integrate-and-Fire models with adaptation are good enough: predicting spike times under random current injection Renaud Jolivet∗ Brain Mind Institute, EPFL CH-1015 Lausanne, Switzerland renaud.jolivet@epfl.ch Alexander Rauch MPI for Biological Cybernetics D-72012 Tübingen, Germany alexander.rauch@tuebingen.mpg.de Hans-Rudolf Lüscher Institute of Physiology CH-3012 Bern, Switzerland luescher@pyl.unibe.ch Wulfram Gerstner Brain Mind Institute, EPFL CH-1015 Lausanne, Switzerland wulfram.gerstner@epfl.ch Abstract Integrate-and-Fire-type models are usually criticized because of their simplicity. On the other hand, the Integrate-and-Fire model is the basis of most of the theoretical studies on spiking neuron models. Here, we develop a sequential procedure to quantitatively evaluate an equivalent Integrate-and-Fire-type model based on intracellular recordings of cortical pyramidal neurons. We find that the resulting effective model is sufficient to predict the spike train of the real pyramidal neuron with high accuracy. In in vivo-like regimes, predicted and recorded traces are almost indistinguishable and a significant part of the spikes can be predicted at the correct timing. Slow processes like spike-frequency adaptation are shown to be a key feature in this context since they are necessary for the model to connect between different driving regimes. 1 Introduction In a recent paper, Feng [1] questioned the “goodness” of the Integrate-and-Fire model (I&F). This is a question of importance since the I&F model is one of the most commonly used spiking neuron models in theoretical studies as well as in the machine learning community (see [2-3] for a review). The I&F model is usually criticized in the biological community because of its simplicity. It is believed to be much too simple to capture the firing dynamics of real neurons beyond a very rough and conceptual description of input integration and spike initiation.
Nevertheless, recent years have seen several groups reporting that this type of model yields quantitative predictions of the activity of real neurons. Rauch and colleagues have shown that I&F-type models (with adaptation) reliably predict the mean firing rate of cortical pyramidal cells [4]. Keat and colleagues have shown that a similar model is able to predict almost exactly the timing of spikes of neurons in the visual pathway [5]. However, the question of how the predictions of I&F-type models compare to the precise structure of spike trains in the cortex is still open. Indeed, cortical pyramidal neurons are known to produce spike trains whose reliability highly depends on the input scenario [6]. The aim of this paper is twofold. Firstly, we will show that there exists a systematic way to extract relevant parameters of an I&F-type model from intracellular recordings. To do so, we will follow the method exposed in [7], which is based on optimal filtering techniques. Alternative approaches like maximum-likelihood methods exist and have been explored recently by Paninski and colleagues [8]. Note that both approaches had already been mentioned by Brillinger and Segundo [9]. Secondly, we will show by a quantitative evaluation of the model performances that the quality of simple threshold models is surprisingly good and is close to the intrinsic reliability of real neurons. We will try to convince the reader that, given the addition of a slow process, the I&F model is in fact a model that can be considered good enough for pyramidal neurons of the neocortex under random current injection. (∗ homepage: http://icwww.epfl.ch/~rjolivet) 2 Model and Methods We started by collecting recordings. Layer 5 pyramidal neurons of the rat neocortex were recorded intracellularly in vitro while stimulated at the soma by a randomly fluctuating current generated by an Ornstein-Uhlenbeck (OU) process with a 1 ms autocorrelation time.
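Such a stimulus can be generated with a standard Euler-Maruyama discretization of the OU process (a sketch of ours; the paper's exact stimulus-generation code is not given):

```python
import numpy as np

def ou_current(mu, sigma, tau=1.0, dt=0.1, T=1000.0, seed=0):
    """Ornstein-Uhlenbeck input current with mean mu, stationary standard
    deviation sigma and autocorrelation time tau (all times in ms),
    sampled on a dt grid via Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    I = np.empty(n)
    I[0] = mu
    for k in range(1, n):
        I[k] = I[k-1] + dt * (mu - I[k-1]) / tau \
               + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal()
    return I

I = ou_current(mu=0.5, sigma=0.2, tau=1.0, dt=0.05, T=5000.0)
assert abs(I.mean() - 0.5) < 0.05   # sample mean near mu
assert abs(I.std() - 0.2) < 0.05    # sample std near sigma
```

The noise amplitude sigma * sqrt(2 dt / tau) is chosen so that the stationary variance of the discretized process is sigma² (up to O(dt) discretization error).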
Both the mean µ_I and the variance σ²_I of the OU process were varied in order to sample the response of the neurons to various levels of tonic and noisy inputs. Details of the experimental procedure can be found in [4]. A subset of these recordings was used to construct, separately for each recorded neuron, a generalized I&F-type model that we formulated in the framework of the Spike Response Model [3].
2.1 Definition of the model
The Spike Response Model (SRM) is written

u(t) = η(t − t̂) + ∫₀^∞ κ(s) I(t − s) ds    (1)

with u the membrane voltage of the neuron and I the external driving current. The kernel κ models the integrative properties of the membrane. The kernel η acts as a template for the shape of spikes (usually highly stereotyped). Like in the I&F model, the model neuron fires each time that the membrane voltage u crosses the threshold ϑ from below:

if u(t) ≥ ϑ(t) and (d/dt)u(t) ≥ (d/dt)ϑ(t), then t̂ = t.    (2)

Here, the threshold includes a mechanism of spike-frequency adaptation; ϑ is given by the following equation:

dϑ/dt = −(ϑ − ϑ₀)/τ_ϑ + A_ϑ Σ_k δ(t − t_k)    (3)

Each time that a spike is fired, the threshold ϑ is increased by a fixed amount A_ϑ. It then decays back to its resting value ϑ₀ with time constant τ_ϑ. The t_k denote the past firing times of the model neuron. During discharge at rate f, the threshold fluctuates around the average value

ϑ̄ ≈ ϑ₀ + α f    (4)

where α = A_ϑ τ_ϑ. This type of adaptation mechanism has been shown to constitute a universal model for spike-frequency adaptation [10] and has already been applied in a similar context [11]. During the model estimation, we use as a first step a traditional constant threshold denoted by ϑ(t) = ϑ_cst, which is then transformed into the adaptive threshold of Equation (3) by a procedure to be detailed below.
2.2 Mapping technique
The mapping technique itself is extensively described in [7,12-13] and we refer interested readers to these publications.
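The SRM dynamics of Equations (1)-(3) can be sketched in discrete time (our own minimal illustration with exponential kernels and arbitrary units; the parameter values are ours, not fitted to data):

```python
import numpy as np

def simulate_srm(I, dt=0.1, tau_m=10.0, R=0.04, eta0=-5.0, tau_eta=10.0,
                 theta0=1.0, A_theta=0.5, tau_theta=50.0):
    """Minimal discrete-time SRM (Eqs. 1-3) with exponential kernels:
    kappa(s) = (R/tau_m) exp(-s/tau_m) and eta(s) = eta0 exp(-s/tau_eta).
    Times in ms; current, voltage and threshold in arbitrary units."""
    u_kappa = 0.0        # running value of the convolution int kappa(s) I(t-s) ds
    th = theta0          # adaptive threshold, Eq. (3)
    t_hat = -1e9         # time of the last spike
    u_prev, th_prev = 0.0, theta0
    spikes = []
    for k in range(len(I)):
        t = k * dt
        u_kappa += dt * (-u_kappa + R * I[k]) / tau_m
        th += dt * (theta0 - th) / tau_theta
        u = u_kappa + eta0 * np.exp(-(t - t_hat) / tau_eta)   # Eq. (1)
        # threshold crossing from below, Eq. (2), with a short refractory guard
        if u >= th and (u - u_prev) >= (th - th_prev) and t - t_hat > 2.0:
            spikes.append(t)
            t_hat = t
            th += A_theta        # spike-triggered jump of the threshold
        u_prev, th_prev = u, th
    return np.array(spikes)

spikes = simulate_srm(np.full(5000, 50.0))   # 500 ms of constant drive
isis = np.diff(spikes)
assert len(spikes) > 5
assert isis[-1] > isis[0] + 1.0   # spike-frequency adaptation lengthens ISIs
```

Under constant drive the threshold accumulates A_ϑ faster than it relaxes, so the interspike intervals grow toward a steady state, which is exactly the adaptation behavior that Equation (4) summarizes.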
In short, it is a systematic step-by-step evaluation and optimization procedure based on intracellular recordings. It consists in sequentially evaluating the kernels (η and κ) and the parameters [A_ϑ, ϑ₀ and τ_ϑ in Equation (3)] that characterize a specific instance of the model. The consecutive steps of the procedure are as follows:
1. Extract the kernel η from a sample voltage recording by spike-triggered averaging. For the sake of simplicity, we assume that the mean drive µ_I = 0.
2. Subtract η from the voltage recording to isolate the subthreshold fluctuations.
3. Extract the kernel κ by the Wiener-Hopf optimal filtering technique [7,14]. This step involves a comparison between the subthreshold fluctuations and the corresponding input current.
4. Find the optimal constant threshold ϑ_cst. The optimal value of ϑ_cst is the one that maximizes the coefficient Γ (see subsection 2.3 below for the definition of Γ). The parameter ϑ_cst depends on the specific set of input parameters (mean µ_I and variance σ²_I) used during stimulation.
5. Plot the threshold ϑ_cst as a function of the firing frequency f of the neuron and run a linear regression. ϑ₀ is identified with the value of the fit at f = 0 and α with the slope [see Equation (4) and Figure 1C].
6. Optimize A_ϑ for the best performances (again measured with Γ); τ_ϑ is defined as τ_ϑ = α/A_ϑ.
Figure 1A and B show the kernels η (step 1) and κ (step 3) for a typical neuron. The double exponential shape of κ is due to the coupling between somatic and dendritic compartments [15]. Figure 1C shows the optimal constant ϑ_cst plotted versus f. It is very well fitted by a simple linear function and allows us to determine the parameters ϑ₀ and α (steps 4 and 5).
2.3 Evaluation of performances
The performances of the model are evaluated with the coincidence factor Γ [16].
It is defined by

Γ = (N_coinc − ⟨N_coinc⟩) / ( (1/2)(N_data + N_SRM) ) · (1/N)    (5)

where N_data is the number of spikes in the reference spike train, N_SRM is the number of spikes in the predicted spike train S_SRM, N_coinc is the number of coincidences with precision ∆ between the two spike trains, and ⟨N_coinc⟩ = 2ν∆N_data is the expected number of coincidences generated by a homogeneous Poisson process with the same rate ν as the spike train S_SRM. The factor N = 1 − 2ν∆ normalizes Γ to a maximum value Γ = 1, which is reached if and only if the spike train of the SRM reproduces exactly that of the cell. A homogeneous Poisson process with the same number of spikes as the SRM would yield Γ = 0. We compute the coincidence factor Γ by comparing the two complete spike trains as in [7]. Throughout the paper, we use ∆ = 2 ms. Results do depend on ∆, but the exact value of ∆ is not critical as long as it is chosen in a reasonable range 1 ≤ ∆ ≤ 4 ms [17]. The coincidence factor Γ is similar to the "reliability" as defined in [6].

Figure 1: Kernels η (A) and κ (B) as extracted by the method exposed in this paper. Raw data (symbols) and fit by double exponential functions (solid line). C. The optimal constant threshold ϑ_cst is plotted versus the output frequency f (symbols). It is very neatly fitted by a linear function (line; R² = 0.93, p < 0.0001).

All measures of Γ reported in this paper are given for new stimuli, independent of those used for parameter optimization during the model estimation procedure.

3 Results

Figure 2 shows a direct comparison between predicted and recorded spike trains for a typical neuron. Both spike trains are almost indistinguishable (A). Even when zooming in on the subthreshold regime, differences are in the range of a few millivolts only (B).
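The coincidence factor of Equation (5) is straightforward to compute from two spike trains. The sketch below uses a greedy one-to-one matching of spikes within ±∆; the matching rule is our implementation choice, not prescribed by [16].

```python
import numpy as np

def coincidence_factor(t_data, t_model, T, delta=2.0):
    """Coincidence factor Gamma of Eq. (5).

    t_data, t_model: spike times in ms; T: recording duration in ms;
    delta: coincidence window (2 ms in the paper).
    """
    t_data = np.asarray(t_data, dtype=float)
    n_data, n_model = len(t_data), len(t_model)
    used = np.zeros(n_data, dtype=bool)
    n_coinc = 0
    for tm in t_model:
        d = np.abs(t_data - tm)
        j = int(np.argmin(d))
        if d[j] <= delta and not used[j]:   # greedy one-to-one matching
            n_coinc += 1
            used[j] = True
    nu = n_model / T                         # rate of the predicted train
    expected = 2.0 * nu * delta * n_data     # <N_coinc> for a Poisson train
    norm = 1.0 - 2.0 * nu * delta            # factor N
    return (n_coinc - expected) / (0.5 * (n_data + n_model)) / norm
```

Identical trains give Γ = 1, and a train with no coincidences falls to Γ ≈ 0, matching the normalization described above.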
The spike dynamics are correctly predicted apart from a short period of time just after a spike is emitted (C). This is due to the fact that the kernel η was extracted for a mean drive µ_I = 0. Here, the mean is much larger than 0 and the neuron has already adapted to this new regime, producing slightly different after-spike effects. This can be corrected easily in our framework by taking a time-dependent time constant in the kernel κ, i.e. κ(s) → κ(t − t̂, s). This dependence is important to account for spike-to-spike interactions [18]. The mapping procedure discussed above allows us, in principle, to compute κ(t − t̂, s) for any t − t̂ (see [7] for further details). However, it requires longer recordings than the ones provided by our experiments and was therefore dropped here. Before moving to a quantitative estimate of the quality of the predictions of our model, we need to understand what limits are imposed on predictions by the modelled neurons themselves. It is well known that pyramidal neurons of the cortex respond with very different reliability depending on the type of stimulation they receive [6]. Neurons tend to fire regularly but without conserving the exact timing of spikes in response to constant or quasi-constant input current. On the other hand, they fire irregularly but reliably in terms of spike timing in response to fluctuating current. We do not expect our model to yield better predictions than the intrinsic reliability of the modelled neuron. To evaluate the intrinsic reliability of the pyramidal neurons, we repeated injection of the same OU process, i.e. injection of processes generated with the same seed, and computed Γ between the repeated spike trains obtained in response to this procedure. Figure 3A shows a surface plot of the intrinsic reliability Γ_n→n of a typical neuron (the subscript n→n stands for neuron to itself). It is plotted versus the parameters of the stimulation, the current mean drive µ_I and its standard deviation σ_I.
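The frozen-noise protocol used to measure Γ_n→n (repeated injection of the same OU realization) can be reproduced with an exact discretization of the OU process. The parameter values below are illustrative, not those of [4].

```python
import numpy as np

def ou_current(mu, sigma, tau, T, dt, rng):
    """Exact discretization of an OU input current.

    mu: mean drive (pA); sigma: stationary s.d. (pA); tau: correlation
    time (ms, an assumed value here); T, dt in ms; rng: a seeded NumPy
    Generator, so the same seed reproduces the same 'frozen' noise.
    """
    n = int(T / dt)
    I = np.empty(n)
    I[0] = mu
    a = np.exp(-dt / tau)                     # one-step decay factor
    b = sigma * np.sqrt(1.0 - a * a)          # matches stationary variance
    for t in range(1, n):
        I[t] = mu + (I[t - 1] - mu) * a + b * rng.normal()
    return I
```

Two calls with the same seed yield the identical trace, which is exactly the repeated-injection condition under which Γ_n→n is computed.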
We find that the mean drive µ_I has almost no impact on Γ_n→n (measured cross-correlation coefficient r = 0.04 with a p-value p = 0.81). On the other hand, σ_I has a strong impact on the reliability of the neuron (r = 0.93 with p < 10⁻⁴). When σ_I is large (σ_I ≳ 300 pA), Γ_n→n reaches a plateau at about 0.84 ± 0.05 (mean ± s.d.).

Figure 2: Performance of the SRM constructed by the method presented in this paper. A. The prediction of the model (black line) is compared to the spike train of the corresponding neuron (thick grey line). B. Zoom on the subthreshold regime. This panel corresponds to the first dotted zone in A (horizontal bar is 5 ms; vertical bar is 5 mV). C. Zoom on a correctly predicted spike. This panel corresponds to the second dotted zone in A (horizontal bar is 1 ms; vertical bar is 20 mV). The model slightly undershoots during about 4 ms after the spike (see text for further details).

When σ_I decreases to 100 ≤ σ_I ≤ 300 pA, Γ_n→n quickly drops to an intermediate value of 0.65 ± 0.1, and finally for σ_I ≤ 100 pA it drops down to 0.09 ± 0.05. These findings are stable across the different neurons that we recorded and reproduce the findings of Mainen and Sejnowski [6]. In order to connect model predictions to these findings, we evaluate the coincidence factor Γ between the predicted spike train and the recorded spike trains (this Γ is labelled m→n for model to neuron). Figure 3B shows a plot of Γ_m→n versus Γ_n→n. We find that the predictions of our minimal model are close to the natural upper bound set by the intrinsic reliability of the pyramidal neuron. On average, the minimal model achieves a quality Γ_m→n which is 65% (±3% s.e.m.) of the upper bound, i.e. Γ_m→n = 0.65 Γ_n→n. Furthermore, recall that due to the definition of the coincidence factor Γ, the threshold for statistical significance here is Γ_m→n = 0. All the points are well above this value, hence highly significant.
Finally, we compare the predictions of our minimal model in terms of two other indicators, the mean rate and the coefficient of variation of the interspike interval distribution (Cv). The mean rate is usually correctly predicted by our minimal model (see Figure 3C), in agreement with the findings of Rauch and colleagues [4]. The Cv is predicted in the correct range as well but may vary due to missed or extra spikes in the prediction (data not shown). It is also noteworthy that the available spike trains are not very long (a few seconds) and the number of spikes is sometimes too low to yield a reliable estimate of the Cv.

Figure 3: Quantitative performance of the model. A. Intrinsic reliability Γ_n→n of a typical pyramidal neuron as a function of the mean drive µ_I and its standard deviation σ_I. B. Performance of the SRM in correct spike-timing prediction: Γ_m→n is plotted versus the cell's intrinsic reliability Γ_n→n (symbols; R² = 0.68, p < 0.0002) for the very same stimulation parameters. The diagonal line (solid) denotes the "natural" upper bound imposed by the neuron's intrinsic reliability. C. Predicted frequency versus actual frequency (symbols; R² = 0.96, p < 0.0001). D. Same as in B but for a model without adaptation, where the threshold has been optimized separately for each set of stimulation parameters (R² = 0.81, p < 0.0001; see text for further details).

Previous model studies had shown that a model with a threshold simpler than the one used here is able to reliably predict the spike train of more detailed neuron models [7,12]. Here, we used a threshold including an adaptation mechanism. Without adaptation, i.e.
when the sum over all preceding spikes in Equation (3) is replaced by the contribution of the last emitted spike only, it is still possible to reach the same quality of predictions for each driving regime (Figure 3D), under the condition that the three threshold parameters (A_ϑ, ϑ0 and τ_ϑ) are chosen differently for each set of input parameters µ_I and σ_I. In contrast, our I&F model with adaptation achieves the same level of predictive quality (Figure 3B) with one single set of threshold parameters. This illustrates the importance of adding adaptation to I&F models or SRMs.

4 Discussion

Mapping real neurons to simplified neuronal models has benefited from many developments in recent years [4-5,7-8,11-13,19-22] and has been applied to both in vitro [4,9,13,22] and in vivo recordings [5]. We have shown here that a simple estimation procedure allows us to build an equivalent I&F-type model for a collection of cortical neurons. The model neuron is built sequentially from intracellular recordings. The resulting model is very efficient in the sense that it allows a quantitative and accurate prediction of the spike train of the real neuron. Most of the time, the predicted subthreshold membrane voltage differs from the recorded one by a few millivolts only. The mean firing rate of the minimal model corresponds to that of the real neuron. The statistical structure of the spike train is approximately conserved, since we observe that the coefficient of variation (Cv) of the interspike interval distribution is predicted in the correct range by our minimal model. Most importantly, our minimal model has the ability to predict spikes with the correct timing (±2 ms), and the level of prediction that is reached is close to the intrinsic reliability of the real neuron in terms of spike timing [6]. The adapting threshold has been found to play an important role.
It allows the model to tune to variable input characteristics and to extend its predictions beyond the input regimes used for model evaluation. This work suggests that L5 neocortical pyramidal neurons under random current injection behave very much like I&F neurons with a spike-frequency adaptation process. This is an important result. Indeed, I&F-type models are extremely popular in large-scale network studies. Our results can be viewed as a strong a posteriori justification for the use of this class of model neurons. They also indicate that the picture of a neuron combining linear summation in the subthreshold regime with a threshold criterion for spike initiation is good enough to account for much of the behavior in an in vivo-like laboratory setting. This should however be tempered, since several important aspects were neglected in this study. First, we used random current injection rather than a more realistic random conductance protocol [23]. In a previous report [12], we had checked the consequences of random conductance injection with simulated data. We found that random conductance injection mainly changes the effective membrane time constant of the neuron and can be accounted for by making the time course of the optimal linear filter (κ here) depend on the mean input to the neuron. The minimal model reached the same quality level of predictions when driven by random conductance injection [12] as when driven by random current injection [7]. Second, a largely fluctuating current generated by a random process can only be seen as a rough approximation to the input a neuron would receive in vivo. Our input has stationary statistics with a spectrum that is close to white (cut-off at 1 kHz), but a lower cut-off frequency could be used as well.
Whether random input is a reasonable model of the input a neuron would receive in vivo is highly controversial [24-26], but from a purely practical point of view random stimulation provides at least a well-defined experimental paradigm for in vitro experiments that mimics some aspects of synaptic bombardment [27]. Third, all transient effects have been excluded, since the neuronal data is analyzed in the adapted state. Finally, our experimental paradigm used somatic current injection. Thus, all dendritic non-linearities, including backpropagating action potentials and dendritic spikes, are excluded. In summary, simple threshold models will never be able to account for the full variety of neuronal responses that can be probed in an artificial laboratory setting. For example, effects of delayed spike initiation cannot be reproduced by simple threshold models that combine linear subthreshold behavior with a strict threshold criterion (but could be reproduced by quadratic or exponential I&F models). For this reason, we are currently studying exponential I&F models with adaptation that allow us to relate our approach to other known models [21,28]. However, for random current injection that mimics synaptic bombardment, the picture of a neuron that combines linear summation with a threshold criterion is not far off. Moreover, in contrast to more complicated neuron models, the simple threshold model allows rapid parameter extraction from experimental traces, efficient numerical simulation, and rigorous mathematical analysis. Our results also suggest that, if any elaborate computation is taking place in single neurons, it is likely to happen at the dendritic level rather than at the somatic level. In the absence of a clear understanding of dendritic computation, the I&F neuron with adaptation thus appears as a model that we consider "good enough".
Acknowledgments

This work was supported by Swiss National Science Foundation grants number FN 200020103530/1 to WG and number 3100-061335.00 to HRL.

References

[1] Feng J. Neural Net. 14: 955-975, 2001.
[2] Maass W & Bishop C. Pulsed Neural Networks. MIT Press, Cambridge, 1998.
[3] Gerstner W & Kistler W. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge Univ. Press, Cambridge, 2002.
[4] Rauch A, La Camera G, Lüscher H, Senn W & Fusi S. J. Neurophysiol. 90: 1598-1612, 2003.
[5] Keat J, Reinagel P, Reid R & Meister M. Neuron 30: 803-817, 2001.
[6] Mainen Z & Sejnowski T. Science 268: 1503-1506, 1995.
[7] Jolivet R, Lewis TJ & Gerstner W. J. Neurophysiol. 92: 959-976, 2004.
[8] Paninski L, Pillow J & Simoncelli E. Neural Comp. 16: 2533-2561, 2004.
[9] Brillinger D & Segundo J. Biol. Cyber. 35: 213-220, 1979.
[10] Benda J & Herz A. Neural Comp. 15: 2523-2564, 2003.
[11] La Camera G, Rauch A, Lüscher H, Senn W & Fusi S. Neural Comp. 16: 2101-2124, 2004.
[12] Jolivet R & Gerstner W. J. Physiol.-Paris 98: 442-451, 2004.
[13] Jolivet R, Rauch A, Lüscher H & Gerstner W. Accepted in J. Comp. Neuro.
[14] Wiener N. Nonlinear Problems in Random Theory. MIT Press, Cambridge, 1958.
[15] Roth A & Häusser M. J. Physiol. 535: 445-472, 2001.
[16] Kistler W, Gerstner W & van Hemmen J. Neural Comp. 9: 1015-1045, 1997.
[17] Jolivet R (2005). Effective minimal threshold models of neuronal activity. PhD thesis, EPFL, Lausanne.
[18] Arcas B & Fairhall A. Neural Comp. 15: 1789-1807, 2003.
[19] Brillinger D. Ann. Biomed. Engineer. 16: 3-16, 1988.
[20] Arcas B, Fairhall A & Bialek W. Neural Comp. 15: 1715-1749, 2003.
[21] Izhikevich E. IEEE Trans. Neural Net. 14: 1569-1572, 2003.
[22] Paninski L, Pillow J & Simoncelli E. Neurocomp. 65-66: 379-385, 2005.
[23] Robinson H & Kawai N. J. Neurosci. Meth. 49: 157-165, 1993.
[24] Arieli A, Sterkin A, Grinvald A & Aertsen A. Science 273: 1868-1871, 1996.
[25] De Weese M & Zador A. J. Neurosci. 23: 7940-7949, 2003.
[26] Stevens C & Zador A. In Proc. of the 5th Joint Symp. on Neural Comp., Inst. for Neural Comp., La Jolla, 1998.
[27] Destexhe A, Rudolph M & Paré D. Nat. Rev. Neurosci. 4: 739-751, 2003.
[28] Fourcaud-Trocmé N, Hansel D, van Vreeswijk C & Brunel N. J. Neurosci. 23: 11628-11640, 2003.
Large-Scale Multiclass Transduction

Thomas Gärtner, Fraunhofer AIS.KD, 53754 Sankt Augustin, Thomas.Gaertner@ais.fraunhofer.de
Quoc V. Le, Simon Burton, Alex J. Smola, Vishy Vishwanathan, Statistical Machine Learning Program, NICTA and ANU, Canberra, ACT, {Quoc.Le, Simon.Burton, Alex.Smola, SVN.Vishwanathan}@nicta.com.au

Abstract

We present a method for performing transductive inference on very large datasets. Our algorithm is based on multiclass Gaussian processes and is effective whenever the multiplication of the kernel matrix or its inverse with a vector can be computed sufficiently fast. This holds, for instance, for certain graph and string kernels. Transduction is achieved by variational inference over the unlabeled data subject to a balancing constraint.

1 Introduction

While obtaining labeled data remains a time and labor consuming task, acquisition and storage of unlabelled data is becoming increasingly cheap and easy. This development has driven machine learning research into exploring algorithms that make extensive use of unlabelled data at training time in order to obtain better generalization performance. A common problem of many transductive approaches is that they scale badly with the amount of unlabeled data, which prohibits the use of massive sets of unlabeled data. Our algorithm shows improved scaling behavior, both for standard Gaussian Process classification and transduction. We perform classification on a dataset consisting of a digraph with 75,888 vertices and 508,960 edges. To the best of our knowledge it has so far not been possible to perform transduction on graphs of this size in reasonable time (with standard hardware). On standard data our method shows competitive or better performance. Existing transductive approaches for SVMs use nonlinear programming [2] or EM-style iterations for binary classification [4].
Moreover, on graphs various methods for unsupervised learning have been proposed [12, 11], all of which are mainly concerned with computing the kernel matrix on training and test set jointly. Other formulations impose that the label assignment on the test set be consistent with the assumption of confident classification [8]. Yet others impose that training and test set have similar marginal distributions [4]. The present paper uses all three properties. It is particularly efficient whenever Kα or K⁻¹α can be computed in linear time, where K ∈ R^{m×m} is the kernel matrix and α ∈ R^m.

• We require consistency of training and test marginals. This avoids problems with overly large majority classes and small training sets.
• Kernels (or their inverses) are computed on training and test set simultaneously. On graphs this can lead to considerable computational savings.
• Self-consistency of the estimates is achieved by a variational approach. This allows us to make use of Gaussian Process multiclass formulations.

2 Multiclass Classification

We begin with a brief overview of Gaussian Process multiclass classification [10], recast in terms of exponential families. Denote by X × Y with Y = {1..n} the domain of observations and labels. Moreover let X := {x1, ..., xm} and Y := {y1, ..., ym} be the sets of observations and labels. It is our goal to estimate y|x via

p(y|x, θ) = exp(⟨φ(x, y), θ⟩ − g(θ|x))  where  g(θ|x) = log Σ_{y∈Y} exp(⟨φ(x, y), θ⟩).    (1)

Here φ(x, y) are the joint sufficient statistics of x and y, and g(θ|x) is the log-partition function which takes care of the normalization. We impose a normal prior on θ, leading to the following negative joint likelihood in θ and Y:

P := −log p(θ, Y|X) = Σ_{i=1}^m [g(θ|xi) − ⟨φ(xi, yi), θ⟩] + (1/2σ²)‖θ‖² + const.    (2)

For transduction purposes p(θ, Y|X) will prove more useful than p(θ|Y, X).
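For concreteness, the negative joint log-likelihood (2) can be written down directly in the t-parameterization (t = Kα), where the prior term becomes tr t⊤K⁻¹t / (2σ²). The sketch below uses a small dense K⁻¹ purely for illustration; in the applications of this paper K⁻¹ would be sparse, and the gradient formula is the one stated for ∂_tP below.

```python
import numpy as np

def neg_log_joint(t, mu, K_inv, sigma2):
    """Objective (2)/(6) in the t-parameterization, plus its gradient.

    t, mu: m x n arrays (mu one-hot over labels); K_inv: m x m symmetric
    inverse kernel; constant terms are dropped.
    """
    log_norm = np.log(np.exp(t).sum(axis=1)).sum()       # sum_i g(theta|x_i)
    prior = 0.5 / sigma2 * np.trace(t.T @ (K_inv @ t))
    P = log_norm - np.trace(mu.T @ t) + prior
    pi = np.exp(t) / np.exp(t).sum(axis=1, keepdims=True)  # pi_ij = p(y=j|x_i)
    grad = pi - mu + (1.0 / sigma2) * (K_inv @ t)
    return P, grad
```

A finite-difference check confirms that the analytic gradient matches the objective, which is the property the CG/Newton optimization of Section 5 relies on.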
Note that a normal prior on θ with variance σ²1 implies a Gaussian process on the random variable t(x, y) := ⟨φ(x, y), θ⟩ with covariance kernel

Cov[t(x, y), t(x′, y′)] = σ² ⟨φ(x, y), φ(x′, y′)⟩ =: σ² k((x, y), (x′, y′)).    (3)

Parametric Optimization Problem: In the following we assume isotropy among the class labels, that is, ⟨φ(x, y), φ(x′, y′)⟩ = δ_{y,y′} ⟨φ(x), φ(x′)⟩ (this is not a necessary requirement for the efficiency of our algorithm, but it greatly simplifies the presentation). This allows us to decompose θ into θ1, ..., θn such that

⟨φ(x, y), θ⟩ = ⟨φ(x), θ_y⟩ and ‖θ‖² = Σ_{y=1}^n ‖θ_y‖².    (4)

Applying the representer theorem allows us to expand θ in terms of φ(x_i, y_i) as θ = Σ_{i=1}^m Σ_{y=1}^n α_{iy} φ(x_i, y). In conjunction with (4) we have

θ_y = Σ_{i=1}^m α_{iy} φ(x_i) where α ∈ R^{m×n}.    (5)

Let µ ∈ R^{m×n} with µ_ij = 1 if y_i = j and µ_ij = 0 otherwise, and K ∈ R^{m×m} with K_ij = ⟨φ(x_i), φ(x_j)⟩. The joint log-likelihood (2) in terms of α and K then yields

Σ_{i=1}^m log Σ_{y=1}^n exp([Kα]_{iy}) − tr µ⊤Kα + (1/2σ²) tr α⊤Kα + const.    (6)

Equivalently we could expand (2) in terms of t := Kα. This is commonly done in the Gaussian process literature and we will use both formulations, depending on the problem we need to solve: if Kα can be computed efficiently, as is the case with string kernels [9], we use the α-parameterization. Conversely, if K⁻¹t is cheap, as for example with graph kernels [7], we use the t-parameterization.

Derivatives: Second order methods such as Conjugate Gradient require the computation of derivatives of −log p(θ, Y|X) with respect to θ in terms of α or t. Using the shorthand π ∈ R^{m×n} with π_ij := p(y = j|x_i, θ), we have

∂_α P = K(π − µ + σ⁻²α) and ∂_t P = π − µ + σ⁻²K⁻¹t.    (7)

To avoid spelling out fourth-order tensors for the second derivatives (since α ∈ R^{m×n}) we state their action as bilinear forms on vectors β, γ, u, v ∈ R^{m×n}. For convenience we use the "Matlab" notation '.∗' to denote element-wise multiplication of matrices:

∂²_α P[β, γ] = tr (Kγ)⊤(π .∗ (Kβ)) − tr (π .∗ (Kγ))⊤(π .∗ (Kβ)) + σ⁻² tr γ⊤Kβ    (8a)
∂²_t P[u, v] = tr u⊤(π .∗ v) − tr (π .∗ u)⊤(π .∗ v) + σ⁻² tr u⊤K⁻¹v    (8b)

Let L·n be the computational time required to compute Kα and K⁻¹t respectively. One may check that L = O(m) implies that each conjugate gradient (CG) descent step can be performed in O(m) time. Combining this with rates of convergence for Newton-type or nonlinear CG solver strategies yields overall time costs in the order of O(m log m) to O(m²) worst case, a significant improvement over conventional O(m³) methods.

3 Transductive Inference by Variational Methods

As we are interested in transduction, the labels Y (and analogously the data X) decompose as Y = Ytrain ∪ Ytest. To directly estimate p(Ytest|X, Ytrain) we would need to integrate out θ, which is usually intractable. Instead, we now aim at estimating the mode of p(θ|X, Ytrain) by variational means. With the KL-divergence D and an arbitrary distribution q, the well-known bound (see e.g. [5])

−log p(θ|X, Ytrain) ≤ −log p(θ|X, Ytrain) + D(q(Ytest) ‖ p(Ytest|X, Ytrain, θ))    (9)
= − Σ_{Ytest} (log p(Ytest, θ|X, Ytrain) − log q(Ytest)) q(Ytest)    (10)

holds. This bound (10) can be minimized with respect to θ and q in an iterative fashion. The key trick is that, while using a factorizing approximation for q, we restrict the latter to distributions which satisfy balancing constraints. That is, we require them to yield marginals on the unlabeled data which are comparable with the labeled observations.

Decomposing the Variational Bound: To simplify (10) observe that

p(Ytest, θ|X, Ytrain) = p(Ytrain, Ytest, θ|X) / p(Ytrain|X).    (11)

In other words, the first term in (10) equals (6) up to a constant independent of θ or Ytest. With q_ij := q(y_i = j) we define µ_ij(q) = q_ij for all i > mtrain, and µ_ij(q) = 1 if y_i = j and 0 otherwise for all i ≤ mtrain. In other words, we are taking the expectation in µ over all unobserved labels Ytest with respect to the distribution q(Ytest).
We have

Σ_{Ytest} q(Ytest) log p(Ytest, θ|X, Ytrain) = Σ_{i=1}^m log Σ_{j=1}^n exp([Kα]_{ij}) − tr µ(q)⊤Kα + (1/2σ²) tr α⊤Kα + const.    (12)

For fixed q the optimization over θ proceeds as in Section 2. Next we discuss q.

Optimization over q: The second term in (10) is the negative entropy of q. Since q factorizes we have

Σ_{Ytest} q(Ytest) log q(Ytest) = Σ_{i=mtrain+1}^m Σ_{j=1}^n q_ij log q_ij.    (13)

It is unreasonable to assume that q may be chosen freely from all factorizing distributions (the latter would lead to a straightforward EM algorithm for transductive inference): if we observe a certain distribution of labels on the training set, e.g., for binary classification we see 45% positive and 55% negative labels, then it is very unlikely that the label distribution on the test set deviates significantly. Hence we should make use of this information. If m ≫ mtrain, however, a naive application of the variational bound can lead to cases where q is concentrated on one class: the increase in likelihood for a resulting very simple classifier completely outweighs any balancing constraints implicit in the data. This is confirmed by experimental results. It is, incidentally, also the reason why SVM transduction optimization codes [4] impose a balancing constraint on the assignment of test labels. We impose the following conditions:

r⁻_j ≤ Σ_{i=mtrain+1}^m q_ij ≤ r⁺_j for all j ∈ Y, and Σ_{j=1}^n q_ij = 1 for all i ∈ {mtrain+1..m}.

Here the constraints r⁻_j = p_emp(y = j) − ε and r⁺_j = p_emp(y = j) + ε are chosen so as to correspond to confidence intervals given by finite sample size tail bounds. In other words, we set p_emp(y = j) = m_train⁻¹ Σ_{i=1}^{mtrain} {y_i = j} and choose ε such as to satisfy

Pr{ m_train⁻¹ Σ_{i=1}^{mtrain} ξ_i − m_test⁻¹ Σ_{i=1}^{mtest} ξ′_i > ε } ≤ δ    (14)

for iid {0, 1} random variables ξ_i and ξ′_i with mean p. This is a standard ghost-sample inequality. It follows directly from [3, Eq. (2.7)], after application of a union bound over the class labels, that ε ≤ √(log(2n/δ) m / (2 m_train m_test)).
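The closed-form slack ε from the ghost-sample bound is a one-line computation; this helper just evaluates √(log(2n/δ) · m / (2 · m_train · m_test)):

```python
import math

def marginal_slack(m_train, m_test, n_classes, delta=0.05):
    """Half-width eps of the balancing interval [p_emp - eps, p_emp + eps],
    from the ghost-sample bound with a union bound over the n classes."""
    m = m_train + m_test
    return math.sqrt(math.log(2 * n_classes / delta) * m
                     / (2.0 * m_train * m_test))
```

Since m / (m_train · m_test) = 1/m_train + 1/m_test, growing either set tightens the interval, but ε never shrinks below what the smaller of the two sets allows.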
4 Graphs, Strings and Vectors

We now discuss the two main applications where computational savings can be achieved: graphs and strings. In the case of graphs, the advantage arises from the fact that K⁻¹ is sparse, whereas for texts we can use fast string kernels [9] to compute Kα in linear time.

Graphs: Denote by G(V, E) the graph given by vertices V and edges E, where each edge is a set of two vertices. Then W ∈ R^{|V|×|V|} denotes the adjacency matrix of the graph, where W_ij > 0 only if edge {i, j} ∈ E. We assume that the graph G, and thus also the adjacency matrix W, is sparse. Now denote by 1 the identity matrix and by D the diagonal matrix of vertex degrees, i.e., D_ii = Σ_j W_ij. Then the graph Laplacian and the normalized graph Laplacian of G are given by

L := D − W and L̃ := 1 − D^{−1/2} W D^{−1/2},    (15)

respectively. Many kernels K (or their inverses) on G are given by low-degree polynomials of the Laplacian or the adjacency matrix of G, such as the following:

K = Σ_{i=1}^l c_i W^{2i},  K = Π_{i=1}^l (1 − c_i L̃),  or  K⁻¹ = L̃ + ε1.    (16)

In all three cases we assume c_i, ε ≥ 0 and l ∈ N. The first kernel arises from an l-step random walk; the third case is typically referred to as the regularized graph Laplacian. In these cases Kα or K⁻¹t can be computed using L = l(|V| + |E|) operations. This means that if the average degree of the graph does not increase with the number of observations, L = O(m), as m = |V| for inference on graphs.

From Graphs to Graphical Models: Graphs are one of the examples where transduction actually improves computational cost. Assume that we are given the inverse kernel matrix K⁻¹ on training and test set and we wish to perform induction only. In this case we need to compute the kernel matrix (or its inverse) restricted to the training set. Let K⁻¹ = (A B; B⊤ C); then the upper left hand corner of K (representing the training set part only) is given by the Schur complement (A − B C⁻¹ B⊤)⁻¹. Computing the latter is costly.
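The third kernel in (16) makes K⁻¹t a purely sparse operation. A sketch that applies K⁻¹ = L̃ + ε1 edge by edge in O(|V| + |E|), without materializing any matrix (the edge-list representation is an implementation choice, not prescribed by the paper):

```python
import numpy as np

def kinv_matvec(edges, weights, n, t, eps=0.01):
    """Compute K^{-1} t for K^{-1} = L_tilde + eps*1 (third case of Eq. 16).

    edges: undirected (i, j) pairs; weights: matching edge weights;
    t: n x c matrix (one column per class).  Runs in O(|V| + |E|) per class.
    """
    d = np.zeros(n)
    for (i, j), w in zip(edges, weights):
        d[i] += w
        d[j] += w
    d_is = 1.0 / np.sqrt(np.maximum(d, 1e-12))   # D^{-1/2} diagonal
    # L_tilde + eps*1 = (1 + eps)*1 - D^{-1/2} W D^{-1/2}
    out = (1.0 + eps) * t
    for (i, j), w in zip(edges, weights):
        c = w * d_is[i] * d_is[j]
        out[i] -= c * t[j]
        out[j] -= c * t[i]
    return out
```

This is exactly the mat-vec a CG iteration in the t-parameterization needs, so the per-iteration cost stays linear in the graph size.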
Moreover, neither the Schur complement nor its inverse is typically sparse. Here we have a nice connection between graphical models and graph kernels. Assume that t is a normal random variable with conditional independence properties. In this case the inverse covariance matrix has nonzero entries only for variables with a direct dependency structure. This follows directly from an application of the Clifford-Hammersley theorem to Gaussian random variables [6]. In other words, if we are given a graphical model of normal random variables, their conditional independence structure is reflected by K⁻¹. In the same way as marginalization may induce dependencies in graphical models, computing the kernel matrix on the training set only may lead to dense matrices, even when the inverse kernel on training and test data combined is sparse. The bottom line is that there are cases where it is computationally cheaper to take both training and test set into account and optimize over a larger set of variables, rather than dealing with a smaller dense matrix.

Strings: Efficient computation of string kernels using suffix trees was described in [9]. In particular, it was observed that expansions of the form Σ_{i=1}^m α_i k(x_i, x) can be evaluated in linear time in the length of x, provided some preprocessing of the coefficients α and observations x_i is performed. This preprocessing is independent of x and can be computed in O(Σ_i |x_i|) time. The efficient computation scheme covers all kernels of the type

k(x, x′) = Σ_s w_s #s(x) #s(x′)    (17)

for arbitrary w_s ≥ 0. Here, #s(x) denotes the number of occurrences of s in x and the sum is carried out over all substrings of x. This means that the computation time for evaluating Kα is again O(Σ_i |x_i|), as we need to evaluate the kernel expansion for all x ∈ X. Since the average string length is independent of m, this yields an O(m) algorithm for Kα.
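The preprocessing idea can be illustrated with a bounded-length spectrum kernel (the suffix-tree machinery of [9] handles the unbounded case): precompute β_s = Σ_i α_i w_s #s(x_i) once, then evaluate the expansion Σ_i α_i k(x_i, x) from the substring counts of x alone. The bound max_len and the default w_s = 1 are illustrative simplifications.

```python
from collections import Counter

def substr_counts(x, max_len=3):
    """Counts #s(x) for all substrings s of length <= max_len."""
    c = Counter()
    for k in range(1, max_len + 1):
        for i in range(len(x) - k + 1):
            c[x[i:i + k]] += 1
    return c

def preprocess(X, alpha, w=lambda s: 1.0, max_len=3):
    """One-time pass over the training strings: beta_s = sum_i alpha_i w_s #s(x_i).
    Cost O(sum_i |x_i|) up to the max_len factor; independent of the test x."""
    beta = Counter()
    for x, a in zip(X, alpha):
        for s, cnt in substr_counts(x, max_len).items():
            beta[s] += a * w(s) * cnt
    return beta

def kernel_expansion(beta, x, max_len=3):
    """Evaluate sum_i alpha_i k(x_i, x) from the counts of x alone."""
    return sum(beta[s] * cnt for s, cnt in substr_counts(x, max_len).items())
```

The point of the split is that `preprocess` runs once per CG iteration over all training strings, after which each evaluation depends only on |x|.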
Vectors: If k(x, x′) = φ(x)⊤φ(x′) and φ(x) ∈ R^d for d ≪ m, it is possible to carry out matrix-vector multiplications in O(md) time. This is useful for cases where we have a sparse matrix with a small number of low-rank updates (e.g. from low-rank dense fill-ins).

5 Optimization

Optimization in α and t: P is convex in α (and in t, since t = Kα). This means that a combination of Conjugate Gradient and Newton-Raphson steps can be used for optimization:

• Compute updates α ← α − η (∂²_α P)⁻¹ ∂_α P via
  – solving the linear system approximately by Conjugate Gradient iterations, and
  – finding the optimal η by line search.
• Repeat until the norm of the gradient is sufficiently small.

Key is the fact that the arising linear system is only solved approximately, which can be done using very few CG iterations. Since each of them is O(m) for fast kernel-vector computations, the overall cost is a sub-quadratic function of m.

Optimization in q is somewhat less straightforward: we need to find the optimal q in terms of KL-divergence subject to the marginal constraint. Denote by τ the part of Kα pertaining to test data, or more formally τ ∈ R^{mtest×n} with τ_ij = [Kα]_{i+mtrain,j}. We have:

minimize_q  tr q⊤τ + Σ_{i,j} q_ij log q_ij    (18)
subject to  q⁻_j ≤ Σ_i q_ij ≤ q⁺_j,  q_ij ≥ 0,  and  Σ_i q_li = 1  for all j ∈ Y, l ∈ {1..mtest}

Table 1: Error rates on some benchmark datasets (mostly from UCI). The last column gives the error rates reported in [1].

DATASET          #INST  #ATTR  IND. GP       TRANSD. GP    S3VMMIP
cancer             699      9  3.4%±4.1%     2.1%±4.7%     3.4%
cancer (progn.)    569     30  6.1%±3.7%     6.0%±3.7%     3.3%
heart (cleave.)    297     13  15.0%±5.6%    13.0%±6.3%    16.0%
housing            506     13  7.0%±1.0%     6.8%±0.9%     15.1%
ionosphere         351     34  8.6%±6.3%     6.1%±3.4%     10.6%
pima               769      8  19.6%±8.1%    17.6%±8.0%    22.2%
sonar              208     60  10.5%±5.1%    8.6%±3.4%     21.9%
glass              214     10  20.5%±1.6%    17.3%±4.5%    —
wine               178     13  19.4%±5.7%    15.6%±4.2%    —
tictactoe          958      9  3.9%±0.7%     3.3%±0.6%     —
cmc               1473     10  32.5%±7.1%    28.9%±7.5%    —
USPS              9298    256  5.9%          4.8%          — (1)

This is a convex optimization problem.
Using Lagrange multipliers one can show that q needs to satisfy q_ij = exp(−τ_ij) b_i c_j, where b_i, c_j ≥ 0. Solving for Σ_{j=1}^n q_ij = 1 yields

q_ij = exp(−τ_ij) c_j / Σ_{l=1}^n exp(−τ_il) c_l.

This means that instead of an optimization problem in mtest × n variables we now only need to optimize over n variables subject to 2n constraints. Note that the exact matching constraint, where q⁺_j = q⁻_j, amounts to a maximum likelihood problem for a shifted exponential family model where q_ij = exp(τ_ij) exp(γ_i − g_j(γ_i)). It can be shown that the approximate matching problem is equivalent to a maximum a posteriori optimization problem using the norm dual to the expectation constraints on q_ij. We are currently working on extending this setting. In summary, the optimization now only depends on n variables. It can be solved by standard second order methods. As initialization we choose γ_i such that the per-class averages match the marginal constraint while ignoring the per-sample balance. After that, a small number of Newton steps suffices for optimization.

6 Experiments

Unfortunately, we are not aware of other multiclass transductive learning algorithms. To still be able to compare our approach to other transductive learning algorithms, we performed experiments on some benchmark datasets. To investigate the performance of our algorithm in classifying the vertices of a graph, we chose the WebKB dataset.

Benchmark datasets: Table 1 reports results on some benchmark datasets. To be able to compare the error rates of the transductive multiclass Gaussian Process classifier proposed in this paper, we also report error rates from [2] and an inductive multiclass Gaussian Process classifier. The reported error rates are for 10-fold crossvalidation. Parameters were chosen by crossvalidation inside the training folds.

Graph Mining: To illustrate the effectiveness of our approach on graphs, we performed experiments on the well-known WebKB dataset. This dataset consists of 8275 webpages classified into 7 classes.
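Returning to the reduced problem over q from Section 5: in the exact-matching case (q⁺_j = q⁻_j) the scaling form q_ij = exp(−τ_ij) b_i c_j can be found by alternating normalization over rows and columns. This Sinkhorn-style iteration is our illustrative choice here; the paper instead solves for the n variables by Newton steps.

```python
import numpy as np

def balance_q(tau, r, iters=200):
    """Exact-matching case of (18): find q_ij = exp(-tau_ij) b_i c_j with
    row sums 1 and column sums r_j, by alternating normalization.

    tau: mtest x n matrix; r: target column marginals with sum(r) = mtest.
    The inequality-constrained version would clip the column scaling to
    [r_minus, r_plus] instead of matching r exactly.
    """
    q = np.exp(-tau)
    for _ in range(iters):
        q /= q.sum(axis=1, keepdims=True)    # rows: sum_j q_ij = 1
        q *= r / q.sum(axis=0)               # columns: sum_i q_ij = r_j
    return q / q.sum(axis=1, keepdims=True)
```

Because exp(−τ) is strictly positive, the alternating updates converge to the unique scaling satisfying both sets of marginals.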
Each webpage contains textual content and/or links to other webpages. As we are using this dataset to evaluate our graph mining algorithm, we ignore the text on each webpage and consider the dataset as a labelled directed graph. To keep the dataset as large as possible, we did not remove any webpages, as opposed to most other work.

¹In [2] only subsets of USPS were considered due to the size of this problem.

Table 2: Results on WebKB for ‘inverse’ 10-fold crossvalidation

DATASET        |V|     |E|    ERROR
Cornell         867    1793    10%
Texas           827    1683     8%
Washington     1205    2368    10%
Wisconsin      1263    3678    15%
Misc           4113    4462    66%
Universities   4162    9591    12%
all            8275   14370    53%

Table 2 reports the results of our algorithm on different subsets of the WebKB data as well as on the full data. We use the co-linkage graph and report results for ‘inverse’ 10-fold stratified crossvalidations, i.e., we use 1 fold as training data and 9 folds as test data. Parameters are the same for all reported experiments and were found by experimenting with a few parameter sets on the ‘Cornell’ subset only.

It turned out that the class membership probabilities are not well calibrated on this dataset. To overcome this, we predict on the test set as follows: for each class, the instances that are most likely to be in this class are picked (if they have not already been picked for a class with a lower index) such that the fraction of instances assigned to this class is the same on the training and test set. We will investigate the reason for this in future work.

The setting most similar to ours is probably the one described in [11]. Although a directed graph approach outperforms an undirected approach there, we resorted to kernels for undirected graphs, as those are computationally more attractive. We will investigate computationally attractive digraph kernels in future work and expect similar benefits as reported by [11].
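The recalibration heuristic described above (greedily assign the most likely instances to each class, in class-index order, until the test-set class fractions match the training fractions) can be sketched as follows. All names and the example scores are hypothetical, not from the paper:

```python
import numpy as np

def balanced_predict(probs, train_fracs):
    """Greedy per-class assignment matching training class fractions.
    probs: (m, n) class-membership scores; train_fracs: length-n fractions."""
    m, n = probs.shape
    quotas = np.round(train_fracs * m).astype(int)
    pred = -np.ones(m, dtype=int)
    for j in range(n):                        # classes in index order
        order = np.argsort(-probs[:, j])      # most likely instances first
        picked = 0
        for i in order:
            if picked >= quotas[j]:
                break
            if pred[i] == -1:                 # not taken by a lower-index class
                pred[i] = j
                picked += 1
    pred[pred == -1] = n - 1                  # any leftovers go to the last class
    return pred

# Hypothetical scores for 4 test instances and 2 classes, equal class fractions.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7], [0.2, 0.8]])
pred = balanced_predict(probs, np.array([0.5, 0.5]))
```

Because classes are processed in index order, an instance claimed by a lower-index class is unavailable to later classes, mirroring the procedure in the text.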
Though we use more training data than [11], we also consider a more difficult learning problem (multiclass, without removing various instances). To investigate the behaviour of our algorithm with less training data, we performed a 20-fold inverse crossvalidation on the ‘Wisconsin’ subset and observed an error rate of 17% there.

To further strengthen our results and show that the runtime performance of our algorithm is sufficient for classifying the vertices of massive graphs, we also performed initial experiments on the Epinions dataset collected by Mathew Richardson and Pedro Domingos. The dataset is a social network consisting of 75,888 people connected by 508,960 ‘trust’ edges. Additionally, the dataset comes with a list of 185 ‘top reviewers’ for 25 topic areas. We tried to predict these but only got 12% of the top reviewers correct. As we are not aware of any predictive results on this task, we suppose this low accuracy is inherent to the task. The experiments do show, however, that the algorithm can be run on very large graph datasets.

7 Discussion and Extensions

We presented an efficient method for performing transduction on multiclass estimation problems with Gaussian Processes. It performs particularly well whenever the kernel matrix has special numerical properties that allow fast matrix-vector multiplication. Even on standard dense problems, however, we observed good improvements (typically a 10% reduction of the training error) over standard induction.

Structured Labels and Conditional Random Fields are a natural direction in which to extend the transductive setting. The key obstacle to overcome in this context is to find a suitable marginal distribution: with increasing structure of the labels, the confidence bounds per subclass decrease dramatically. A promising strategy is to use only partial marginals on maximal cliques and to enforce them directly, similarly to an unconditional Markov network.
Applications to Document Analysis require efficient small-memory-footprint suffix tree implementations. We are currently working on this, which will allow GP classification to perform estimation on large document collections. We believe it will be possible to use out-of-core storage in conjunction with annotation to work on sequences of 10^8 characters.

Other Marginal Constraints than matching marginals are worth exploring. In particular, constraints derived from exchangeable distributions, such as those used by Latent Dirichlet Allocation, are a promising area to consider. This may also lead to connections between GP classification and clustering.

Sparse O(m^1.3) Solvers for Graphs have recently been proposed by the theoretical computer science community. It is worthwhile exploring their use for inference on graphs.

Acknowledgements
The authors thank Mathew Richardson and Pedro Domingos for collecting the Epinions data and Deepayan Chakrabarti and Christos Faloutsos for providing a preprocessed version. Parts of this work were carried out when TG was visiting NICTA. National ICT Australia is funded through the Australian Government's Backing Australia's Ability initiative, in part through the Australian Research Council. This work was supported by grants of the ARC and by the Pascal Network of Excellence.

References
[1] K. Bennett. Combining support vector and mathematical programming methods for classification. In Advances in Kernel Methods: Support Vector Learning, pages 307-326. MIT Press, 1998.
[2] K. Bennett. Combining support vector and mathematical programming methods for induction. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods: SV Learning, pages 307-326, Cambridge, MA, 1999. MIT Press.
[3] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58:13-30, 1963.
[4] T. Joachims.
Learning to Classify Text Using Support Vector Machines: Methods, Theory, and Algorithms. The Kluwer International Series in Engineering and Computer Science. Kluwer Academic Publishers, Boston, May 2002. ISBN 0-7923-7679-X.
[5] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183-233, 1999.
[6] S. L. Lauritzen. Graphical Models. Oxford University Press, 1996.
[7] A. J. Smola and I. R. Kondor. Kernels and regularization on graphs. In B. Schölkopf and M. K. Warmuth, editors, Proceedings of the Annual Conference on Computational Learning Theory, Lecture Notes in Computer Science. Springer, 2003.
[8] V. Vapnik. Statistical Learning Theory. John Wiley and Sons, New York, 1998.
[9] S. V. N. Vishwanathan and A. J. Smola. Fast kernels for string and tree matching. In K. Tsuda, B. Schölkopf, and J.-P. Vert, editors, Kernels and Bioinformatics, Cambridge, MA, 2004. MIT Press.
[10] C. K. I. Williams and D. Barber. Bayesian classification with Gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1342-1351, 1998.
[11] D. Zhou, J. Huang, and B. Schölkopf. Learning from labeled and unlabeled data on a directed graph. In International Conference on Machine Learning, 2005.
[12] X. Zhu, J. Lafferty, and Z. Ghahramani. Semi-supervised learning using Gaussian fields and harmonic functions. In International Conference on Machine Learning (ICML), 2003.
2005
Sparse Gaussian Processes using Pseudo-inputs

Edward Snelson, Zoubin Ghahramani
Gatsby Computational Neuroscience Unit, University College London
17 Queen Square, London WC1N 3AR, UK
{snelson,zoubin}@gatsby.ucl.ac.uk

Abstract
We present a new Gaussian process (GP) regression model whose covariance is parameterized by the locations of M pseudo-input points, which we learn by a gradient-based optimization. We take M ≪ N, where N is the number of real data points, and hence obtain a sparse regression method which has O(M^2 N) training cost and O(M^2) prediction cost per test case. We also find hyperparameters of the covariance function in the same joint optimization. The method can be viewed as a Bayesian regression model with a particular input-dependent noise. The method turns out to be closely related to several other sparse GP approaches, and we discuss the relation in detail. We finally demonstrate its performance on some large data sets, and make a direct comparison to other sparse GP methods. We show that our method can match full GP performance with small M, i.e. very sparse solutions, and that it significantly outperforms other approaches in this regime.

1 Introduction

The Gaussian process (GP) is a popular and elegant method for Bayesian non-linear nonparametric regression and classification. Unfortunately its non-parametric nature causes computational problems for large data sets, due to an unfavourable N^3 scaling for training, where N is the number of data points. In recent years there have been many attempts to make sparse approximations to the full GP in order to bring this scaling down to O(M^2 N), where M ≪ N [1, 2, 3, 4, 5, 6, 7, 8, 9]. Most of these methods involve selecting a subset of the training points of size M (the active set) on which to base computation. A typical way of choosing such a subset is through some sort of information criterion. For example, Seeger et al.
[7] employ a very fast approximate information gain criterion, which they use to greedily select points into the active set. A problem common to these methods is that they lack a reliable way of learning kernel hyperparameters, because the active set selection interferes with this learning procedure. Seeger et al. [7] construct an approximation to the full GP marginal likelihood, which they try to maximize to find the hyperparameters. However, as the authors state, they have persistent difficulty in doing this in practice through gradient ascent. The reason is that reselecting the active set causes non-smooth fluctuations in the marginal likelihood and its gradients, meaning that they cannot obtain smooth convergence. The speed of active set selection is therefore somewhat undermined by the difficulty of selecting hyperparameters. Inappropriately learned hyperparameters will adversely affect the quality of the solution, especially if one is trying to use them for automatic relevance determination (ARD) [10].

In this paper we circumvent this problem by constructing a GP regression model that enables us to find active set point locations and hyperparameters in one smooth joint optimization. The covariance function of our GP is parameterized by the locations of pseudo-inputs: an active set not constrained to be a subset of the data, found by a continuous optimization. This is a further major advantage, since we can improve the quality of the fit by fine-tuning the precise locations of these points. Our model is closely related to several sparse GP approximations, in particular Seeger's method of projected latent variables (PLV) [7, 8]. We discuss these relations in section 3. In principle we could also apply our technique of moving active set points off the data points to approximations such as PLV. However, we empirically demonstrate that a crucial difference between PLV and our method (SPGP) prevents this idea from working for PLV.
1.1 Gaussian processes for regression

We provide here a concise summary of GPs for regression, but see [11, 12, 13, 10] for more detailed reviews. We have a data set D consisting of N input vectors X = {x_n}_{n=1}^N of dimension D and corresponding real-valued targets y = {y_n}_{n=1}^N. We place a zero-mean Gaussian process prior on the underlying latent function f(x) that we are trying to model. We therefore have a multivariate Gaussian distribution on any finite subset of latent variables; in particular, at X:

  p(f|X) = N(f|0, K_N),

where N(f|m, V) denotes a Gaussian distribution with mean m and covariance V. In a Gaussian process the covariance matrix is constructed from a covariance function, or kernel, K, which expresses some prior notion of smoothness of the underlying function: [K_N]_{nn′} = K(x_n, x_{n′}). Usually the covariance function depends on a small number of hyperparameters θ, which control these smoothness properties. For our experiments later on we will use the standard Gaussian covariance with ARD hyperparameters:

  K(x_n, x_{n′}) = c exp[ −(1/2) Σ_{d=1}^D b_d (x_n^{(d)} − x_{n′}^{(d)})² ],   θ = {c, b}.   (1)

In standard GP regression we also assume a Gaussian noise model, or likelihood, p(y|f) = N(y|f, σ²I). Integrating out the latent function values we obtain the marginal likelihood:

  p(y|X, θ) = N(y|0, K_N + σ²I),   (2)

which is typically used to train the GP by finding a (local) maximum with respect to the hyperparameters θ and σ². Prediction is made by considering a new input point x and conditioning on the observed data and hyperparameters. The distribution of the target value at the new point is then:

  p(y|x, D, θ) = N( y | k_x⊤ (K_N + σ²I)^{−1} y,  K_xx − k_x⊤ (K_N + σ²I)^{−1} k_x + σ² ),   (3)

where [k_x]_n = K(x_n, x) and K_xx = K(x, x). The GP is a non-parametric model: the training data are explicitly required at test time in order to construct the predictive distribution, as is clear from the above expression.
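Equations (1)-(3) translate directly into a few lines of code. The sketch below uses hypothetical 1D data; the Cholesky solve of K_N + σ²I is the O(N³) step referred to in the text:

```python
import numpy as np

def ard_kernel(X1, X2, c, b):
    """Eq. (1): K(x, x') = c * exp(-0.5 * sum_d b_d (x_d - x'_d)^2)."""
    diff = X1[:, None, :] - X2[None, :, :]          # (N1, N2, D)
    return c * np.exp(-0.5 * np.sum(b * diff**2, axis=-1))

def gp_predict(X, y, Xstar, c, b, sigma2):
    """Full GP predictive mean and variance, eq. (3)."""
    KN = ard_kernel(X, X, c, b)
    kx = ard_kernel(X, Xstar, c, b)                 # (N, N*)
    L = np.linalg.cholesky(KN + sigma2 * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, kx)
    mu = kx.T @ alpha
    var = c - (v**2).sum(axis=0) + sigma2           # K_xx = c for this kernel
    return mu, var

# Hypothetical noisy observations of sin(x).
rng = np.random.default_rng(1)
X = np.linspace(-3, 3, 30)[:, None]
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(30)
mu, var = gp_predict(X, y, np.array([[0.0]]), c=1.0, b=np.array([1.0]), sigma2=0.01)
```

With 30 training points around the query, the predictive mean at x = 0 recovers sin(0) ≈ 0 closely, and the variance stays between σ² and c + σ².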
GPs are prohibitive for large data sets because training requires O(N^3) time due to the inversion of the covariance matrix. Once the inversion is done, prediction is O(N) for the predictive mean and O(N^2) for the predictive variance per new test case.

2 Sparse Pseudo-input Gaussian Processes (SPGPs)

In order to derive a sparse model that is computationally tractable for large data sets, and which still preserves the desirable properties of the full GP, we examine in detail the GP predictive distribution (3). Consider the mean and variance of this distribution as functions of x, the new input. Regarding the hyperparameters as known and fixed for now, these functions are effectively parameterized by the locations of the N training input and target pairs, X and y. In this paper we consider a model with likelihood given by the GP predictive distribution, and parameterized by a pseudo data set. The sparsity in the model arises because we will generally consider a pseudo data set D̄ of size M < N: pseudo-inputs X̄ = {x̄_m}_{m=1}^M and pseudo-targets f̄ = {f̄_m}_{m=1}^M. We have denoted the pseudo-targets f̄ instead of ȳ because, as they are not real observations, it does not make much sense to include a noise variance for them. They are therefore equivalent to the latent function values f. The actual observed target value will of course be assumed noisy as before. These assumptions lead to the following single data point likelihood:

  p(y|x, X̄, f̄) = N( y | k_x⊤ K_M^{−1} f̄,  K_xx − k_x⊤ K_M^{−1} k_x + σ² ),   (4)

where [K_M]_{mm′} = K(x̄_m, x̄_{m′}) and [k_x]_m = K(x̄_m, x), for m = 1, …, M. This can be viewed as a standard regression model with a particular form of parameterized mean function and input-dependent noise model. The target data are generated i.i.d. given the inputs, giving the complete data likelihood:

  p(y|X, X̄, f̄) = Π_{n=1}^N p(y_n|x_n, X̄, f̄) = N( y | K_NM K_M^{−1} f̄,  Λ + σ²I ),   (5)

where Λ = diag(λ), λ_n = K_nn − k_n⊤ K_M^{−1} k_n, and [K_NM]_{nm} = K(x_n, x̄_m).
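The diagonal Λ in (5) is cheap to form. A small sketch (assuming a unit-variance RBF kernel, with hypothetical inputs) also checks the two limiting behaviours used later in the paper: λ_n approaches K_nn = 1 when every pseudo-input is far from data point n, and shrinks to zero when a pseudo-input sits on it:

```python
import numpy as np

def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def spgp_lambda(X, Xbar, jitter=1e-9):
    """lambda_n = K_nn - k_n^T K_M^{-1} k_n  (K_nn = 1 for this kernel)."""
    KM = rbf(Xbar, Xbar) + jitter * np.eye(len(Xbar))
    KMN = rbf(Xbar, X)                               # (M, N)
    return 1.0 - np.sum(KMN * np.linalg.solve(KM, KMN), axis=0)

X = np.array([[-2.0], [0.0], [2.0]])                 # hypothetical data inputs
lam_far = spgp_lambda(X, np.array([[10.0]]))         # pseudo-input far away
lam_on = spgp_lambda(X, X)                           # pseudo-inputs on the data
```

The tiny jitter on K_M is a standard numerical safeguard, not part of the model.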
Learning in the model involves finding a suitable setting of the parameters: an appropriate pseudo data set that explains the real data well. However, rather than simply maximizing the likelihood with respect to X̄ and f̄, it turns out that we can integrate out the pseudo-targets f̄. We place a Gaussian prior on the pseudo-targets:

  p(f̄|X̄) = N(f̄|0, K_M).   (6)

This is a very reasonable prior because we expect the pseudo data to be distributed in a very similar manner to the real data, if they are to model them well. It is not easy to place a prior on the pseudo-inputs and still remain with a tractable model, so we will find these by maximum likelihood (ML). For the moment, though, consider the pseudo-inputs as known. We find the posterior distribution over pseudo-targets f̄ using Bayes' rule on (5) and (6):

  p(f̄|D, X̄) = N( f̄ | K_M Q_M^{−1} K_MN (Λ + σ²I)^{−1} y,  K_M Q_M^{−1} K_M ),   (7)

where Q_M = K_M + K_MN (Λ + σ²I)^{−1} K_NM. Given a new input x_∗, the predictive distribution is then obtained by integrating the likelihood (4) with the posterior (7):

  p(y_∗|x_∗, D, X̄) = ∫ df̄ p(y_∗|x_∗, X̄, f̄) p(f̄|D, X̄) = N(y_∗|μ_∗, σ_∗²),   (8)

where
  μ_∗ = k_∗⊤ Q_M^{−1} K_MN (Λ + σ²I)^{−1} y,
  σ_∗² = K_∗∗ − k_∗⊤ (K_M^{−1} − Q_M^{−1}) k_∗ + σ².

Note that inversion of the matrix Λ + σ²I is not a problem because it is diagonal. The computational cost is dominated by the matrix multiplication K_MN (Λ + σ²I)^{−1} K_NM in the calculation of Q_M, which is O(M^2 N). After various precomputations, prediction can then be made in O(M) for the mean and O(M^2) for the variance per test case.

Figure 1: Predictive distributions (mean and two standard deviation lines) for: (a) full GP, (b) SPGP trained using gradient ascent on (9), (c) SPGP trained using gradient ascent on (10). Initial pseudo-point positions are shown at the top as red crosses; final pseudo-point positions are shown at the bottom as blue crosses (the y location of these crosses on the plots is not meaningful).
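The predictive distribution (8) can be sketched directly from the formulas above. This is a clarity-oriented implementation, not the authors' code, assuming a unit-variance RBF kernel and hypothetical data; it also exploits the collapse property that with X̄ = X the SPGP reproduces the full GP:

```python
import numpy as np

def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def spgp_predict(X, y, Xbar, Xstar, sigma2=0.01, jitter=1e-6):
    """SPGP predictive mean and variance, eq. (8); cost O(M^2 N)."""
    M = len(Xbar)
    KM = rbf(Xbar, Xbar) + jitter * np.eye(M)
    KMN = rbf(Xbar, X)                               # (M, N)
    kst = rbf(Xbar, Xstar)                           # (M, N*)
    # lambda_n = K_nn - k_n^T KM^{-1} k_n  (K_nn = 1 here)
    lam = 1.0 - np.sum(KMN * np.linalg.solve(KM, KMN), axis=0)
    G = 1.0 / (lam + sigma2)                         # diag of (Lambda + s^2 I)^{-1}
    QM = KM + (KMN * G) @ KMN.T                      # eq. (7)'s Q_M
    mu = kst.T @ np.linalg.solve(QM, (KMN * G) @ y)
    var = 1.0 - np.sum(kst * (np.linalg.solve(KM, kst)
                              - np.linalg.solve(QM, kst)), axis=0) + sigma2
    return mu, var

# Hypothetical demo: placing the pseudo-inputs on the data (Xbar = X)
# should recover the full GP predictive distribution (3).
X = np.linspace(-4, 4, 6)[:, None]
y = np.sin(X[:, 0])
Xstar = np.array([[-1.0], [0.5], [2.0]])
mu, var = spgp_predict(X, y, X, Xstar, sigma2=0.05)
```

The small jitter on K_M is a numerical safeguard only; Λ + σ²I is inverted elementwise because it is diagonal, exactly as the text notes.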
We are left with the problem of finding the pseudo-input locations X̄ and hyperparameters Θ = {θ, σ²}. We can do this by computing the marginal likelihood from (5) and (6):

  p(y|X, X̄, Θ) = ∫ df̄ p(y|X, X̄, f̄) p(f̄|X̄) = N( y | 0, K_NM K_M^{−1} K_MN + Λ + σ²I ).   (9)

The marginal likelihood can then be maximized with respect to all these parameters {X̄, Θ} by gradient ascent. The details of the gradient calculations are long and tedious and therefore omitted here for brevity. They closely follow the derivations of the hyperparameter gradients of Seeger et al. [7] (see also section 3), and, as there, can be most efficiently coded with Cholesky factorizations. Note that K_M, K_MN and Λ are all functions of the M pseudo-inputs X̄ and θ. The exact form of the gradients will of course depend on the functional form of the covariance function chosen, but our method applies to any covariance that is differentiable with respect to the input points. It is worth saying that the SPGP can be viewed as a standard GP with a particular non-stationary covariance function parameterized by the pseudo-inputs.

Since we now have MD + |Θ| parameters to fit, instead of just |Θ| for the full GP, one may be worried about overfitting. However, consider the case where we let M = N and X̄ = X, i.e. the pseudo-inputs coincide with the real inputs. At this point the marginal likelihood is equal to that of a full GP (2), because then K_MN = K_M = K_N and Λ = 0. Moreover the predictive distribution (8) also collapses to the full GP predictive distribution (3). These are clearly desirable properties of the model, and they give confidence that a good solution will be found when M < N. However, hyperparameter learning does complicate matters, and we discuss this further in section 4.

3 Relation to other methods

It turns out that Seeger's method of PLV [7, 8] uses a very similar marginal likelihood approximation and predictive distribution.
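The marginal likelihood (9) is a Gaussian with a low-rank-plus-diagonal covariance. A direct (deliberately non-optimized) sketch of its log value follows, assuming a unit-variance RBF kernel and hypothetical data; it builds the full N × N matrix for clarity, whereas the O(M²N) implementation would instead apply the matrix inversion lemma:

```python
import numpy as np

def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def spgp_log_marginal(X, y, Xbar, sigma2, jitter=1e-6):
    """log N(y | 0, K_NM K_M^{-1} K_MN + Lambda + sigma^2 I), eq. (9)."""
    N = len(X)
    KM = rbf(Xbar, Xbar) + jitter * np.eye(len(Xbar))
    KNM = rbf(X, Xbar)
    Qnn = KNM @ np.linalg.solve(KM, KNM.T)           # low-rank part
    lam = 1.0 - np.diag(Qnn)                         # K_nn = 1 for this kernel
    C = Qnn + np.diag(lam) + sigma2 * np.eye(N)
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + y @ np.linalg.solve(C, y) + N * np.log(2 * np.pi))

# Hypothetical 1D demo: 3 pseudo-inputs summarizing 6 data points.
X = np.linspace(-4, 4, 6)[:, None]
y = np.sin(X[:, 0])
Xbar = np.linspace(-4, 4, 3)[:, None]
ll = spgp_log_marginal(X, y, Xbar, sigma2=0.1)
```

As a sanity check on the collapse argument above, setting Xbar = X makes this quantity equal to the full GP marginal likelihood (2), up to the jitter.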
If one removes Λ from all the SPGP equations, one obtains precisely their expressions. In particular, the marginal likelihood they use is:

  p(y|X, X̄, Θ) = N( y | 0, K_NM K_M^{−1} K_MN + σ²I ),   (10)

which has also been used elsewhere before [1, 4, 5]. They derived this expression via a somewhat different route, as a direct approximation to the full GP marginal likelihood.

Figure 2: Sample data drawn from the marginal likelihood of: (a) a full GP, (b) SPGP, (c) PLV. For (b) and (c), the blue crosses show the location of the 10 pseudo-input points.

As discussed earlier, the major difference between our method and these others is that they do not use this marginal likelihood to learn the locations of active set input points: only the hyperparameters are learnt from (10). This raises the question of what would happen if we tried to use their marginal likelihood approximation (10) instead of (9) to learn pseudo-input locations by gradient ascent. We show that the Λ that appears in the SPGP marginal likelihood (9) is crucial for finding pseudo-input points by gradients. Figure 1 shows what happens when we try to optimize these two likelihoods using gradient ascent with respect to the pseudo-inputs, on a simple 1D data set. Plotted are the predictive distributions and the initial and final locations of the pseudo-inputs. Hyperparameters were fixed to their true values for this example. The initial pseudo-input locations were chosen adversarially: all towards the left of the input space (red crosses). Using the SPGP likelihood, the pseudo-inputs spread themselves along the extent of the training data, and the predictive distribution matches the full GP very closely (Figure 1(b)). Using the PLV likelihood, the points begin to spread, but very quickly become stuck as the gradient pushing the points towards the right becomes tiny (Figure 1(c)).
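The noise behaviour that separates (9) from (10) is easy to check numerically. In the tiny sketch below (hypothetical pseudo-input layout, unit-variance RBF kernel), the low-rank term vanishes far from every pseudo-input, so the PLV prior marginal variance collapses to σ² there, while the Λ term in the SPGP restores it to K_nn + σ²:

```python
import numpy as np

def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

Xbar = np.linspace(-1, 1, 5)[:, None]     # hypothetical pseudo-inputs
x_far = np.array([[10.0]])                # a point far from every pseudo-input
sigma2 = 0.05
KM = rbf(Xbar, Xbar) + 1e-6 * np.eye(5)
k = rbf(x_far, Xbar)                      # essentially zero
q = float(k @ np.linalg.solve(KM, k.T))   # low-rank part of the prior variance
var_plv = q + sigma2                      # eq. (10): drops to sigma^2
var_spgp = q + (1.0 - q) + sigma2         # eq. (9): Lambda keeps it at 1 + sigma^2
```

This is exactly the mechanism discussed next: under (9), moving a pseudo-input towards unexplained data reduces a large apparent noise, which supplies a strong gradient that (10) lacks.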
Figure 2 compares data sampled from the marginal likelihoods (9) and (10), given a particular setting of the hyperparameters and a small number of pseudo-input points. The major difference between the two is that the SPGP likelihood has a constant marginal variance of K_nn + σ², whereas the PLV variance decreases to σ² away from the pseudo-inputs. Put differently, the noise component of the PLV likelihood is a constant σ², whereas the SPGP noise grows to K_nn + σ² away from the pseudo-inputs. If one is in the situation of Figure 1(c), then under the SPGP likelihood, moving the rightmost pseudo-input slightly to the right will immediately start to reduce the noise in this region from K_nn + σ² towards σ². Hence there will be a strong gradient pulling it to the right. With the PLV likelihood, the noise is fixed at σ² everywhere, and moving the point to the right does not improve the quality of fit of the mean function enough locally to provide a significant gradient. Therefore the points become stuck, and we believe this effect accounts for the failure of the PLV likelihood in Figure 1(c). It should be emphasised that the global optimum of the PLV likelihood (10) may well be a good solution, but it is going to be difficult to find with gradients. The SPGP likelihood (9) also suffers from local optima of course, but not so catastrophically. It may be interesting in the future to compare which performs better for hyperparameter optimization.

4 Experiments

In the previous section we showed our gradient method successfully learning the pseudo-inputs on a 1D example.
There the initial pseudo-input points were chosen adversarially, but on a real problem it is sensible to initialize by randomly placing them on real data points, and this is what we do for all of our experiments.

Figure 3: Our results have been added to plots reproduced with kind permission from [7]. The plots show mean square test error as a function of active/pseudo set size M. Top row: data set kin-40k; bottom row: pumadyn-32nm¹. We have added circles, which show SPGP with both hyperparameter and pseudo-input learning from random initialization. For kin-40k the squares show SPGP with hyperparameters obtained from a full GP and fixed. For pumadyn-32nm the squares show hyperparameters initialized from a full GP. random, info-gain and smo-bart are explained in the text. The horizontal lines are a full GP trained on a subset of the data.

To compare our results to other methods, we have run experiments on exactly the same data sets as in Seeger et al. [7], following precisely their preprocessing and testing methods. In Figure 3 we have reproduced their learning curves for two large data sets¹, superimposing our test error (mean squared). Seeger et al. compare three methods: random, info-gain and smo-bart. random involves picking an active set of size M randomly from among the training data. info-gain is their own greedy subset selection method, which is extremely cheap to train, barely more expensive than random. smo-bart is Smola and Bartlett's [1] more expensive greedy subset selection method.
Also shown, with horizontal lines, is the test error for a full GP trained on a subset of the data of size 2000 for data set kin-40k and 1024 for pumadyn-32nm. For these learning curves, they do not actually learn hyperparameters by maximizing their approximation to the marginal likelihood (10). Instead they fix them to those obtained from the full GP². For kin-40k we follow Seeger et al.'s procedure of setting the hyperparameters from the full GP on a subset. We then optimize the pseudo-input positions, and plot the results as red squares. We see the SPGP learning curve lying significantly below all three other methods in Figure 3. We rapidly approach the error of a full GP trained on 2000 points, using a pseudo set of only a few hundred points. We then try the harder task of also finding the hyperparameters at the same time as the pseudo-inputs. The results are plotted as blue circles. The method performs extremely well for small M, but we see some overfitting behaviour for large M, which seems to be caused by the noise hyperparameter being driven too small (the blue circles have higher likelihood than the red squares below them).

¹kin-40k: 10000 training, 30000 test, 9 attributes; see www.igi.tugraz.at/aschwaig/data.html. pumadyn-32nm: 7168 training, 1024 test, 33 attributes; see www.cs.toronto/ delve.
²Seeger et al. have a separate section testing their likelihood approximation (10) for learning hyperparameters, in conjunction with the active set selection methods. They show that it can be used to reliably learn hyperparameters with info-gain for active set sizes of 100 and above. They have more trouble reliably learning hyperparameters for very small active sets.

Figure 4: Regression on a data set with input-dependent noise. Left: standard GP. Right: SPGP. Predictive mean and two standard deviation lines are shown. Crosses show final locations of pseudo-inputs for the SPGP. Hyperparameters are also learnt.
For data set pumadyn-32nm, we again try to jointly find hyperparameters and pseudo-inputs. Again Figure 3 shows SPGP with extremely low error for small pseudo set size: with just 10 pseudo-inputs we are already close to the error of a full GP trained on 1024 points. However, in this case increasing the pseudo set size does not decrease our error. In this problem there is a large number of irrelevant attributes, and the relevant ones need to be singled out by ARD. Although the hyperparameters learnt by our method are reasonable (2 out of the 4 relevant dimensions are found), they are not good enough to get down to the error of the full GP. However, if we initialize our gradient algorithm with the hyperparameters of the full GP, we get the points plotted as squares (this time the red likelihoods exceed the blue, so it is a problem of local optima, not overfitting). Now with a pseudo set of size only 25 we reach the performance of the full GP, and significantly outperform the other methods (which also had their hyperparameters set from the full GP).

Another main difference between the methods lies in training time. Our method performs optimization over a potentially large parameter space, and hence is relatively expensive to train. On the face of it, methods such as info-gain and random are extremely cheap. However, all these methods must be combined with obtaining hyperparameters in some way, either by a full GP on a subset (generally expensive) or by gradient ascent on an approximation to the likelihood. When one considers this combined task, and that all methods involve some kind of gradient-based procedure, none of the methods are particularly cheap. We believe that the gain in accuracy achieved by our method can often be worth the extra training time associated with optimizing in a larger parameter space.

5 Conclusions, extensions and future work

Although GPs are very flexible regression models, they are still limited by the form of the covariance function.
For example, it is difficult to model non-stationary processes with a GP because it is hard to construct sensible non-stationary covariance functions. Although the SPGP is not specifically designed to model non-stationarity, the extra flexibility associated with moving pseudo-inputs around can actually achieve this to a certain extent. Figure 4 shows the SPGP fit to some data with an input-dependent noise variance. The SPGP achieves a much better fit to the data than the standard GP by moving almost all the pseudo-input points outside the region of data³. It will be interesting to test these capabilities further in the future. The extension to classification is also a natural avenue to explore.

We have demonstrated a significant decrease in test error over the other methods for a given small pseudo/active set size. Our method runs into problems when we consider much larger pseudo set sizes and/or high-dimensional input spaces, because the space in which we are optimizing becomes impractically big. However, we have currently only tried using an 'off the shelf' conjugate gradient minimizer, or L-BFGS, and there are certainly improvements that can be made in this area. For example, we could optimize subsets of variables iteratively (chunking), use stochastic gradient ascent, or make a hybrid by picking some points randomly and optimizing others. In general, though, we consider our method most useful when one wants a very sparse (hence fast at prediction) and accurate solution. One further way to deal with large D is to learn a low-dimensional projection of the input space. This has been considered for GPs before [14], and could easily be applied to our model.

³It should be said that there are local optima in this problem, and other solutions looked closer to the standard GP. We ran the method 5 times with random initializations. All runs had higher likelihood than the GP; the one with the highest likelihood is plotted.
In conclusion, we have presented a new method for sparse GP regression which shows a significant performance gain over other methods, especially when searching for an extremely sparse solution. We have shown that the added flexibility of moving pseudo-input points, which are not constrained to lie on the true data points, leads to better solutions, and that even some non-stationary effects can be modelled. Finally, we have shown that hyperparameters can be jointly learned with pseudo-input points with reasonable success.

Acknowledgements
Thanks to the authors of [7] for agreeing to make their results and plots available for reproduction. Thanks to all at the Sheffield GP workshop for helping to clarify this work.

References
[1] A. J. Smola and P. Bartlett. Sparse greedy Gaussian process regression. In Advances in Neural Information Processing Systems 13. MIT Press, 2000.
[2] C. K. I. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems 13. MIT Press, 2000.
[3] V. Tresp. A Bayesian committee machine. Neural Computation, 12:2719-2741, 2000.
[4] L. Csató. Sparse online Gaussian processes. Neural Computation, 14:641-668, 2002.
[5] L. Csató. Gaussian Processes: Iterative Sparse Approximations. PhD thesis, Aston University, UK, 2002.
[6] N. D. Lawrence, M. Seeger, and R. Herbrich. Fast sparse Gaussian process methods: the informative vector machine. In Advances in Neural Information Processing Systems 15. MIT Press, 2002.
[7] M. Seeger, C. K. I. Williams, and N. D. Lawrence. Fast forward selection to speed up sparse Gaussian process regression. In C. M. Bishop and B. J. Frey, editors, Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, 2003.
[8] M. Seeger. Bayesian Gaussian Process Models: PAC-Bayesian Generalisation Error Bounds and Sparse Approximations. PhD thesis, University of Edinburgh, 2003.
[9] J. Quiñonero Candela.
Learning with Uncertainty — Gaussian Processes and Relevance Vector Machines. PhD thesis, Technical University of Denmark, 2004. [10] D. J. C. MacKay. Introduction to Gaussian processes. In C. M. Bishop, editor, Neural Networks and Machine Learning, NATO ASI Series, pages 133–166. Kluwer Academic Press, 1998. [11] C. K. I. Williams and C. E. Rasmussen. Gaussian processes for regression. In Advances in Neural Information Processing Systems 8. MIT Press, 1996. [12] C. E. Rasmussen. Evaluation of Gaussian Processes and Other Methods for Non-Linear Regression. PhD thesis, University of Toronto, 1996. [13] M. N. Gibbs. Bayesian Gaussian Processes for Regression and Classification. PhD thesis, Cambridge University, 1997. [14] F. Vivarelli and C. K. I. Williams. Discovering hidden features with Gaussian processes regression. In Advances in Neural Information Processing Systems 11. MIT Press, 1998.
2005
Fast biped walking with a reflexive controller and real-time policy searching

Tao Geng(1), Bernd Porr(2) and Florentin Wörgötter(1,3)
(1) Dept. Psychology, University of Stirling, UK. runbot05@gmail.com
(2) Dept. Electronics & Electrical Eng., University of Glasgow, UK. b.porr@elec.gla.ac.uk
(3) Bernstein Centre for Computational Neuroscience, University of Göttingen. worgott@chaos.gwdg.de

Abstract

In this paper, we present the design of, and experiments with, a planar biped robot ("RunBot") under pure reflexive neuronal control. The goal of this study is to combine neuronal mechanisms with biomechanics to obtain a very fast walking speed and on-line learning of the circuit parameters. Our controller is built with biologically inspired sensor- and motor-neuron models, including local reflexes, and does not employ any kind of position or trajectory-tracking control algorithm. Instead, this reflexive controller allows RunBot to exploit its own natural dynamics during critical stages of its walking gait cycle. To our knowledge, this is the first time that dynamic biped walking has been achieved using only a pure reflexive controller. In addition, this structure allows a policy gradient reinforcement learning algorithm to tune the parameters of the reflexive controller in real time during walking. In this way RunBot can reach a relative speed of 3.5 leg-lengths per second after a few minutes of online learning, which is faster than that of any other biped robot and is comparable to the fastest relative speed of human walking. Moreover, the domain of stable walking is quite large, supporting this design strategy.

1 Introduction

Building and controlling fast biped robots demands a deeper understanding of biped walking than for slow robots. While slow robots may walk statically, fast biped walking has to be dynamically balanced and more robust, as less time is available to recover from disturbances [1].
Although many biped robots have been developed using various technologies over the past 20 years, their walking speeds are still not comparable to those of their counterparts in nature, humans. Most successful biped robots have used the ZMP (Zero Moment Point, [2]) as the criterion for stability control and motion generation. The ZMP is the point on the ground where the total moment generated by gravity and inertia equals zero. This measure has two deficiencies in the case of high-speed walking. First, the ZMP must always reside in the convex hull of the stance foot, and the stability margin is measured by the minimal distance between the ZMP and the edge of the foot. To ensure an appropriate stability margin, the foot has to be flat and large, which deteriorates the robot's performance and poses great difficulty during fast walking. This difficulty shows clearly when humans try to walk with skis or swimming fins. Second, the ZMP criterion does not permit rotation of the stance foot at the heel or the toe, which, however, can amount to up to eighty percent of a normal human walking gait, and is important and inevitable in fast biped walking. On the other hand, dynamic biped walking can sometimes be achieved without considering any stability criterion such as the ZMP. For example, passive biped robots can walk down a shallow slope without sensing or control. Some researchers have proposed approaches to equip a passive biped with actuators to improve its performance and drive it to walk on flat ground [3] [4]. Nevertheless, these passive bipeds depend excessively on their natural dynamics for gait generation, which, while making their gaits energy-efficient, also limits their walking rate to very slow speeds.
In this study, we show that, with a properly designed mechanical structure, a novel, purely reflexive controller, and an online policy gradient reinforcement learning algorithm, our biped robot can attain a fast walking speed of 3.5 leg-lengths per second. This makes it faster than any other biped robot we know of. Though not a passive biped, it exploits its own natural dynamics during some stages of its walking gait, greatly simplifying the necessary control structures.

2 The robot

RunBot (Fig. 1) is 23 cm high, foot to hip joint axis. It has four joints: left hip, right hip, left knee, and right knee. Each joint is driven by a modified RC servo motor. A hard mechanical stop is installed on the knee joints, preventing them from going into hyperextension. Each foot is equipped with a modified piezo transducer to sense ground contact events. Similar to other approaches [1], we constrain the robot to the sagittal plane by a boom of one meter length rotating freely in its joints (planar robot). This still allows RunBot to trip and fall very easily in the sagittal plane.

Figure 1: (A) The robot, RunBot, and its boom structure. All three orthogonal axes of the boom can rotate freely. (B) Illustration of a walking step of RunBot. (C) A series of sequential frames of a walking gait cycle. The interval between every two adjacent frames is 33 ms. Note that, during the time between frame (8) and frame (13), which is nearly one third of the duration of a step, the motor voltage of all four joints remains zero, and the whole robot moves passively. At the time of frame (13), the swing leg touches the floor and the next step begins.

Since we intended to exploit RunBot's natural dynamics during some stages of its gait cycle, similar to passive bipeds, its foot bottom is curved with a radius equal to half the leg-length (with a too large radius, the tip of the foot may strike the ground during its swing phase).
During the stance phase of such a curved foot, only one point touches the ground at any time, allowing the robot to roll passively around the contact point, similar to the rolling action of human feet that facilitates fast walking. The most important consideration in the mechanical design of our robot is the location of its center of mass. About seventy percent of the robot's weight is concentrated in its trunk. The parts of the trunk are assembled in such a way that its center of mass is located in front of the hip axis (Fig. 1A). The effect of this design is illustrated in Fig. 1B. As shown, one walking step includes two stages, the first from (1) to (2), the second from (2) to (3). During the first stage, the robot has to use its own momentum to rise up on the stance leg. When walking at a low speed, the robot may not have enough momentum to do this, so the distance the center of mass has to cover in this stage should be as short as possible, which is achieved by locating the center of mass of the trunk forward. In the second stage, the robot simply falls forward naturally and catches itself on the next stance leg. Then the walking cycle is repeated. The figure also clearly shows the rolling movement of the curved foot of the stance leg. A stance phase begins with the heel touching the ground and terminates with the toe leaving the ground. In summary, our mechanical design of RunBot has the following special features that distinguish it from other powered biped robots and facilitate high-speed walking and the exploitation of natural dynamics: (a) small curved feet allowing for a rolling action; (b) unactuated, hence light, ankles; (c) light-weight structure; (d) light and fast motors; (e) proper mass distribution of the limbs; (f) properly positioned mass center of the trunk.

3 The neural structure of our reflexive controller

The reflexive walking controller of RunBot follows a hierarchical structure (Fig. 2).
The bottom level is the reflex circuit local to the joints, including the motor-neurons and angle sensor neurons involved in the joint reflexes. The top level is a distributed neural network consisting of hip stretch receptors and ground contact sensor neurons, which modulate the local reflexes of the bottom level. The neurons are modelled as non-spiking neurons simulated on a Linux PC and communicate with the robot via a DA/AD board. Though somewhat simplified, they still retain some of the prominent neuronal characteristics.

3.1 Model neuron circuit of the top level

The joint coordination mechanism in the top level is implemented with the neuron circuit illustrated in Fig. 2. While other biologically inspired locomotive models and robots use two stretch receptors on each leg to signal the attainment of the leg's AEP (Anterior Extreme Position) and PEP (Posterior Extreme Position) respectively, our robot has only one stretch receptor on each leg, signalling the AEA (Anterior Extreme Angle) of its hip joint. Furthermore, the function of the stretch receptor on our robot is only to trigger the extensor reflex on the knee joint of the same leg, rather than to implicitly reset the phase relations between different legs as in the case of Cruse's model. As the hip joint approaches the AEA, the outputs of the stretch receptors for the left (AL) and the right (AR) hip increase as:

ρ_AL = (1 + e^{α_AL(Θ_AL − φ)})^{−1}   (1)
ρ_AR = (1 + e^{α_AR(Θ_AR − φ)})^{−1}   (2)

where φ is the real-time angular position of the hip joint, Θ_AL and Θ_AR are the hip anterior extreme angles, whose values are tuned by hand, and α_AL and α_AR are positive constants. This model is inspired by a sensor neuron model presented in [5] that is thought capable of emulating the response characteristics of populations of sensor neurons in animals.

Figure 2: The neuron model of the reflexive controller on RunBot.
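All sensor neurons in this controller share one sigmoidal form (Eqs. 1-6), and the motor-neurons of Section 3.2 add a leaky integrator in front of the same squashing function (Eqs. 7-8 below). A minimal numpy sketch of these building blocks; the threshold and slope values used in any example are illustrative, not the tuned RunBot values:

```python
import numpy as np

def sensor_output(phi, theta, alpha):
    """Shared sigmoidal sensor-neuron form (Eqs. 1-5): output rises
    from 0 to 1 as the joint angle phi crosses the threshold theta."""
    return 1.0 / (1.0 + np.exp(alpha * (theta - phi)))

def flexor_output(phi, theta, alpha):
    """Flexor variant (Eq. 6): active while phi is below the threshold."""
    return 1.0 / (1.0 + np.exp(alpha * (phi - theta)))

def motor_neuron_step(y, drive, tau, dt):
    """One Euler step of the leaky-integrator state (Eq. 7),
    tau * dy/dt = -y + drive, where drive = sum_X omega_X * rho_X."""
    return y + (dt / tau) * (-y + drive)

def motor_neuron_output(y, theta_m):
    """Short-term average firing frequency (Eq. 8)."""
    return 1.0 / (1.0 + np.exp(theta_m - y))
```

The receptor and angle-sensor outputs are summed through the excitatory and inhibitory weights into `drive`, integrated by Eq. 7, squashed by Eq. 8, and finally scaled and sent to the servo amplifier (Eq. 9).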
Another kind of sensor neuron incorporated in the top level is the ground contact sensor neuron, which is active when the foot is in contact with the ground. Its output, similar to that of the stretch receptors, changes according to:

ρ_GL = (1 + e^{α_GL(Θ_GL − V_L + V_R)})^{−1}   (3)
ρ_GR = (1 + e^{α_GR(Θ_GR − V_R + V_L)})^{−1}   (4)

where V_L and V_R are the output voltage signals from the piezo sensors of the left and right foot respectively, Θ_GL and Θ_GR work as thresholds, and α_GL and α_GR are positive constants.

3.2 Neural circuit of the bottom level

The bottom-level reflex system of our robot consists of reflexes local to each joint (Fig. 2). The neuron module for one reflex is composed of one angle sensor neuron and the motor-neuron it contacts. Each joint is equipped with two reflexes, an extensor reflex and a flexor reflex, both modelled as monosynaptic reflexes; that is, whenever its threshold is exceeded, the angle sensor neuron directly excites the corresponding motor-neuron. This direct connection between angle sensor neuron and motor-neuron is inspired by a reflex described in cockroach locomotion [6]. In addition, the motor-neurons of the local reflexes also receive an excitatory synapse and an inhibitory synapse from the neurons of the top level, by which the top level can modulate the bottom-level reflexes. Each joint has two angle sensor neurons, one for the extensor reflex and the other for the flexor reflex (Fig. 2). Their models are similar to that of the stretch receptors described above. The extensor angle sensor neuron changes its output according to:

ρ_ES = (1 + e^{α_ES(Θ_ES − φ)})^{−1}   (5)

where φ is the real-time angular position obtained from the potentiometer of the joint, Θ_ES is the threshold of the extensor reflex, and α_ES a positive constant. Likewise, the output of the flexor sensor neuron is modelled as:

ρ_FS = (1 + e^{α_FS(φ − Θ_FS)})^{−1}   (6)

with Θ_FS and α_FS defined as above. The direction of the extensors on both hip and knee joints is forward, while that of the flexors is backward. It should be particularly noted that the thresholds of the sensor neurons in the reflex modules do not work as desired positions for joint control, because our reflexive controller does not involve any exact position control algorithm that would ensure that the joint positions converge to a desired value.

Table 1: Parameters of neurons for hip and knee joints. For the meaning of the subscripts, see Fig. 2.

              Θ_EM   Θ_FM   α_ES   α_FS
Hip joints      5      5      2      2
Knee joints     5      5      2      2

Table 2: Parameters of stretch receptors and ground contact sensor neurons.

Θ_GL (V)   Θ_GR (V)   Θ_AL (deg)   Θ_AR (deg)   α_GL   α_GR   α_AL   α_AR
   2          2         = Θ_ES       = Θ_ES       2      2      2      2

The motor-neuron model is adapted from one used in the neural controller of a hexapod simulating insect locomotion [7]. The state and output of each extensor motor-neuron are governed by equations (7) and (8) [8] (those of the flexor motor-neurons are similar):

τ dy/dt = −y + Σ_X ω_X ρ_X   (7)
u_EM = (1 + e^{Θ_EM − y})^{−1}   (8)

where y represents the mean membrane potential of the neuron. Equation (8) is a sigmoidal function that can be interpreted as the neuron's short-term average firing frequency, Θ_EM is a bias constant that controls the firing threshold, τ is a time constant associated with the passive properties of the cell membrane [8], ω_X represents the connection strength from the sensor neurons and stretch receptors to the motor-neuron (Fig. 2), and ρ_X represents the output of the sensor neurons and stretch receptors that contact this motor-neuron (e.g., ρ_ES, ρ_AL, ρ_GL, etc.). Note that, on RunBot, the output value of the motor-neurons, after multiplication by a gain coefficient, is sent to the servo amplifier to directly drive the joint motor. The voltage of the joint motor is determined by

MotorVoltage = M_AMP G_M (s_EM u_EM + s_FM u_FM),   (9)

where M_AMP represents the magnitude of the servo amplifier, which is 3 on RunBot. G_M stands for the output gain of the motor-neurons.
s_EM and s_FM are signs for the motor voltage of the flexor and extensor, being +1 or −1 depending on the hardware of the robot, and u_EM and u_FM are the outputs of the motor-neurons.

4 Robot walking experiments

The model neuron parameters chosen jointly for all experiments are listed in Tables 1 and 2. The time constants τ_i of all neurons take the same value of 3 ms. The weights of all the inhibitory connections are set to −10, except those between sensor neurons and motor-neurons, which are −30, and those between stretch receptors and flexor motor-neurons, which are −15. The weights of all excitatory connections are 10, except those between stretch receptors and extensor motor-neurons, which are 15. Because the movements of the knee joints are needed mainly for timely ground clearance, without big contributions to the walking speed, we set their neuron parameters to fixed values (see Table 3).

Table 3: Fixed parameters of the knee joints.

              Θ_ES,k (deg)   Θ_FS,k (deg)   G_M,k
Knee joints       175            110        0.9 G_M,h

We also fix the threshold of the flexor sensor neurons of the hips (Θ_FS,h) to 85°. So, in the experiments described below, we only need to tune the two parameters of the hip joints, the threshold of the extensor sensor neurons (Θ_ES,h) and the gain of the motor-neurons (G_M,h), which work together to determine the walking speed and the important gait properties of RunBot. In RunBot, Θ_ES,h roughly determines the stride length (not exactly, because the hip joint moves passively after passing Θ_ES,h), while G_M,h is proportional to the angular velocity of the motor on the hip joint. In experiments of walking on a flat floor we have found, surprisingly, that stable gaits appear over a considerably large range of the parameters Θ_ES,h and G_M,h (Fig. 3A).

Figure 3: (A) The range of the two parameters, G_M,h and Θ_ES,h, in which stable gaits appear. The maximum permitted value of G_M,h is 3.5 (higher values will destroy the motor of the hip joint). See text for more information.
(B) Phase diagrams of the hip joint position and knee joint position of one leg during the whole learning process. The smallest orbit is the fastest walking gait. (C) The walking speed of RunBot during the learning process.

In RunBot, passive movements appear on two levels: at the single-joint level and at the whole-robot level. Due to the high gear ratio of the joint motors, the passive movement of each joint is not very large, whereas the effects of passive movements at the whole-robot level can be clearly seen, especially when RunBot is walking at a medium or slow speed (Fig. 1C).

4.1 Policy gradient searching for fast walking gaits

In order to reach a fast walking speed, a biped robot should have a long stride length, a short swing time, and a short double support phase [1]. In RunBot, because the phase-switching of its legs is triggered immediately by ground contact signals, its double support phase is so short (usually less than 30 ms) that it is negligible. A long stride length and a short swing time are mutually exclusive. Because there is no position or trajectory-tracking control in RunBot, it is impossible to control its walking speed directly or explicitly. However, knowing that RunBot's walking gait is determined by only two parameters, Θ_ES,h and G_M,h (Fig. 3A), we formulate RunBot's fast walking control as a policy gradient reinforcement learning problem by considering each point in the parameter space (Fig. 3A) as an open-loop policy that can be executed by RunBot in real time. Our approach is modified from [9]. It starts from an initial parameter vector π = (θ1, θ2) (here θ1 and θ2 represent G_M,h and Θ_ES,h, respectively) and proceeds to evaluate the following five policies near π: (θ1, θ2), (θ1, θ2 + ϵ2), (θ1 − ϵ1, θ2), (θ1, θ2 − ϵ2), (θ1 + ϵ1, θ2), where each ϵj is an adaptive value that is small relative to θj. The evaluation of each policy generates a score that is a measure of the speed of the gait described by that policy.
We use these scores to construct an adjustment vector A [9]. Then A is normalized and multiplied by an adaptive step size. Finally, we add A to π and begin the next iteration. If A = 0, a possible local minimum has been encountered; in this case, we replace A with a stochastically generated vector. Although this is a very simple strategy, our experiments show that it can effectively prevent the real-time learning from becoming trapped in local minima. The result of one experiment is shown in Fig. 3. RunBot starts walking with the parameters corresponding to point S in Fig. 3A, whose speed is 41 cm/s (see Fig. 3C). After 240 seconds of continuous walking with the learning algorithm and without any human intervention, RunBot attains a walking speed of about 80 cm/s (see Fig. 3C, corresponding to point F in Fig. 3A), which is equivalent to 3.5 leg-lengths per second. To compare the walking speeds of various biped robots, whose sizes differ considerably, we use the relative speed: speed divided by leg-length. We know of no other biped robot attaining such a fast relative speed. The world record for human race walking is equivalent to about 4.0–4.5 leg-lengths per second, so RunBot's highest walking speed is comparable to that of humans. To get a feeling for how fast RunBot can walk, we strongly encourage readers to watch the videos of the experiment at http://www.cn.stir.ac.uk/~tgeng/nips. Although there is no specifically designed controller in charge of sensing and control during the transient stages of policy changes (speed changes), the natural dynamics of the robot itself ensures stability during the changes. By exploiting the natural dynamics, the reflexive controller is robust to parameter variation, as shown in Fig. 3A.
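The learning loop just described can be sketched as a finite-difference policy-gradient search. The sketch below is ours, not the authors' code: the score function is a stand-in (on the robot, the score is the measured walking speed of each executed gait), and, for brevity, each axis's adjustment is taken as the score difference of its plus/minus perturbations rather than the grouped-average rule of [9]:

```python
import numpy as np

def policy_gradient_search(score, pi, eps, step, iters, rng):
    """Finite-difference policy gradient in the style of [9]: score the
    +/- perturbation of each parameter, form an adjustment vector A,
    normalize it, and take a step; if A vanishes, restart with a random
    direction to escape a possible local minimum."""
    pi = np.asarray(pi, dtype=float)
    for _ in range(iters):
        A = np.zeros_like(pi)
        for j in range(len(pi)):
            e = np.zeros_like(pi)
            e[j] = eps[j]
            A[j] = score(pi + e) - score(pi - e)   # per-axis ascent estimate
        if np.allclose(A, 0.0):
            A = rng.standard_normal(len(pi))       # stochastic escape
        pi = pi + step * A / np.linalg.norm(A)     # normalized adjustment
    return pi
```

Normalizing A and using a fixed step mirrors the paper's adaptive-step variant only loosely; the random replacement of a vanishing A implements the stochastic escape from local minima described above.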
5 Discussion

Cruse developed a completely decentralized reflexive controller model to understand the locomotion control of walking in stick insects (Carausius morosus, [10]), which immensely decreases the computational burden of the locomotion controller and has been applied in many hexapod robots. To date, however, no real biped robot has depended exclusively on a reflexive controller. This may be because of the intrinsic instability specific to biped walking, which makes the dynamic stability of biped robots much more difficult to control than that of multi-legged robots. To our knowledge, RunBot is the first dynamic biped exclusively controlled by a pure reflexive controller. Although such a pure reflexive controller involves no explicit mechanism for the global stability control of the biped, its coupling with the properly designed mechanics of RunBot substantially ensures the considerably large stable domain of the dynamic biped gaits. Our reflexive controller has some evident differences from Cruse's model. Cruse's model depends on PEP, AEP, and GC (Ground Contact) signals to generate the movement pattern of the individual legs, whereas the reflexive controller presented here uses only GC and AEA signals to coordinate the movements of the joints. Moreover, the AEA signal of one hip in RunBot acts only on the knee joint belonging to the same leg, not functioning on the leg level as the AEP and PEP do in Cruse's model. The use of fewer phasic feedback signals further simplifies the controller structure of RunBot. In order to achieve real-time walking gaits in the real world, even biologically inspired robots often have to depend on some kind of position- or trajectory-tracking control on their joints [6, 11, 12]. In RunBot, however, there is no exact position control implemented. The neural structure of our reflexive controller does not depend on, or ensure the tracking of, any desired position.
Indeed, it is this approximate nature of our reflexive controller that allows the physical properties of the robot itself to contribute implicitly to the generation of the overall gait trajectories. The effectiveness of this hybrid neuro-mechanical system is also reflected in the fact that real-time learning of parameters was possible, with the speed of the robot sometimes changing quite strongly (see movie) without tripping it.

References
[1] J. Pratt. Exploiting Inherent Robustness and Natural Dynamics in the Control of Bipedal Walking Robots. PhD thesis, Massachusetts Institute of Technology, 2000.
[2] M. Vukobratovic, B. Borovac, D. Surla, and D. Stokic. Biped Locomotion: Dynamics, Stability, Control and Application. Springer-Verlag, 1990.
[3] R. Q. V. Van der Linde. Active leg compliance for passive walking. In Proceedings of the IEEE International Conference on Robotics and Automation, Orlando, Florida, 1998.
[4] S. Collins and A. Ruina. Efficient bipedal robots based on passive-dynamic walkers. Science, 307:1082–1085, 2005.
[5] T. Wadden and O. Ekeberg. A neuro-mechanical model of legged locomotion: single leg control. Biological Cybernetics, 79:161–173, 1998.
[6] R. D. Beer, R. D. Quinn, H. J. Chiel, and R. E. Ritzmann. Biologically inspired approaches to robotics. Communications of the ACM, 40(3):30–38, 1997.
[7] R. D. Beer and H. J. Chiel. A distributed neural network for hexapod robot locomotion. Neural Computation, 4:356–365, 1992.
[8] J. C. Gallagher, R. D. Beer, K. S. Espenschied, and R. D. Quinn. Application of evolved locomotion controllers to a hexapod robot. Robotics and Autonomous Systems, 19:95–103, 1996.
[9] N. Kohl and P. Stone. Policy gradient reinforcement learning for fast quadrupedal locomotion. In Proceedings of the IEEE International Conference on Robotics and Automation, volume 3, pages 2619–2624, May 2004.
[10] H. Cruse, T. Kindermann, M. Schumm, et al. Walknet - a biologically inspired network to control six-legged walking. Neural Networks, 11(7-8):1435–1447, 1998.
[11] Y. Fukuoka, H. Kimura, and A. H. Cohen. Adaptive dynamic walking of a quadruped robot on irregular terrain based on biological concepts. International Journal of Robotics Research, 22:187–202, 2003.
[12] M. A. Lewis. Certain principles of biomorphic robots. Autonomous Robots, 11:221–226, 2001.
2005
Saliency Based on Information Maximization

Neil D. B. Bruce and John K. Tsotsos
Department of Computer Science and Centre for Vision Research
York University, Toronto, ON, M2N 5X8
{neil,tsotsos}@cs.yorku.ca

Abstract

A model of bottom-up overt attention is proposed based on the principle of maximizing information sampled from a scene. The proposed operation is based on Shannon's self-information measure and is achieved in a neural circuit, which is demonstrated as having close ties with the circuitry existent in the primate visual cortex. It is further shown that the proposed saliency measure may be extended to address issues that currently elude explanation in the domain of saliency-based models. Results on natural images are compared with experimental eye tracking data, revealing the efficacy of the model in predicting the deployment of overt attention as compared with existing efforts.

1 Introduction

There has long been interest in the nature of eye movements and fixation behavior, following early studies by Buswell [1] and Yarbus [2]. However, a complete description of the mechanisms underlying these peculiar fixation patterns remains elusive. This is further complicated by the fact that task demands and contextual knowledge factor heavily in how the sampling of visual content proceeds. Current bottom-up models of attention posit that saliency is the impetus for the selection of fixation points. Each model differs in its definition of saliency. In perhaps the most popular model of bottom-up attention, saliency is based on the centre-surround contrast of units modeled on known properties of primary visual cortical cells [3]. In other efforts, saliency is defined by more ad hoc quantities having less connection to biology [4]. In this paper, we explore the notion that information is the driving force behind attentive sampling. The application of information theory in this context is not in itself novel.
There exist several previous efforts that define saliency based on the Shannon entropy of image content over a local neighborhood [5, 6, 7, 8]. The model presented in this work is based on the closely related quantity of self-information [9]. In section 2.2 we discuss differences between entropy and self-information in this context, including why self-information may present a more appropriate metric than entropy in this domain. That said, the contributions of this paper are as follows:

1. A bottom-up model of overt attention with selection based on the self-information of local image content.
2. A qualitative and quantitative comparison of the predictions of the model with human eye tracking data, contrasted against the model of Itti and Koch [3].
3. A demonstration that the model is neurally plausible, via an implementation based on a neural circuit resembling circuitry involved in early visual processing in primates.
4. A discussion of how the proposal generalizes to address issues that deny explanation by existing saliency-based attention models.

2 The Proposed Saliency Measure

There exists much evidence indicating that the primate visual system is built on the principle of establishing a sparse representation of image statistics. In the most prominent of such studies, it was demonstrated that learning a sparse code for natural image statistics results in the emergence of simple-cell receptive fields similar to those appearing in the primary visual cortex of primates [10, 11]. The apparent benefit of such a representation comes from the fact that a sparse representation allows certain independence assumptions with regard to neural firing. This issue becomes important in evaluating the likelihood of a set of local image statistics, and is elaborated on later in this section. In this paper, saliency is determined by quantifying the self-information of each local image patch. Even for a very small image patch, the probability distribution resides in a very high-dimensional space.
There is insufficient data in a single image to produce a reasonable estimate of this probability distribution. For this reason, a representation based on independent components is employed, for the independence assumption it affords. ICA is performed on a large sample of 7x7 RGB patches drawn from natural images to determine a suitable basis. For a given image, an estimate of the distribution of each basis coefficient is learned across the entire image through non-parametric density estimation. The probability of observing the RGB values corresponding to a patch centred at any image location may then be evaluated by independently considering the likelihood of each corresponding basis coefficient. The product of such likelihoods yields the joint likelihood of the entire set of basis coefficients. Given the basis determined by ICA, the preceding computation may be realized entirely in the context of a biologically plausible neural circuit. The overall architecture is depicted in figure 1. The details of each of the aforesaid model components, including the neural circuit, are as follows. Projection into the independent component space provides, for each local neighborhood of the image, a vector w consisting of N variables w_i with values v_i. Each w_i specifies the contribution of a particular basis function to the representation of the local neighborhood. As mentioned, these basis functions, learned from statistical regularities observed in a large set of natural images, show remarkable similarity to V1 cells [10, 11]. The ICA projection then allows a representation w in which the components w_i are as independent as possible. For further details on the ICA projection of local image statistics see [12]. In this paper, we propose that salience may be defined based on a strategy of maximum information sampling.
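The pipeline just described (project each patch onto the basis, estimate each coefficient's density over the whole image, then take the negative log of the product of per-coefficient likelihoods) can be sketched in a few lines of numpy. This is a hedged illustration rather than the authors' code: histograms stand in for the non-parametric density estimate, and the coefficient maps are supplied as arrays instead of being computed from a learned ICA basis:

```python
import numpy as np

def saliency_map(coeffs, bins=64):
    """Self-information saliency. `coeffs` has shape (n_basis, H, W), one
    coefficient per basis function and image location. Each channel's
    density is estimated over the whole image (the context), and the
    saliency at each location is -sum_i log p(w_i = v_i)."""
    sal = np.zeros(coeffs.shape[1:])
    for c in coeffs:
        hist, edges = np.histogram(c, bins=bins, density=True)
        idx = np.digitize(c, edges[1:-1])                       # bin index per pixel
        p = np.maximum(hist[idx] * np.diff(edges)[idx], 1e-12)  # bin probability
        sal -= np.log(p)                                        # add self-information
    return sal
```

By the independence assumption, the per-channel negative log likelihoods simply sum; locations whose coefficients are rare in the context of the whole image receive high saliency.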
In particular, Shannon's self-information measure [9], −log(p(x)), applied to the joint likelihood of the statistics in a local neighborhood described by w, provides an appropriate transformation between probability and the degree of information inherent in the local statistics. It is in computing the observation likelihood that a sparse representation is instrumental. Consider the probability density function p(w_1 = v_1, w_2 = v_2, ..., w_n = v_n), which quantifies the likelihood of observing the local statistics with values v_1, ..., v_n within a particular context. An appropriate context may include a larger area encompassing the local neighbourhood described by w, or the entire scene in question. The presumed independence of the ICA decomposition means that

p(w_1 = v_1, w_2 = v_2, ..., w_n = v_n) = ∏_{i=1}^{n} p(w_i = v_i).

Thus, a sparse representation allows the estimation of the n-dimensional space described by w to be derived from n one-dimensional probability density functions. Evaluating p(w_1 = v_1, w_2 = v_2, ..., w_n = v_n) requires considering the distribution of values taken on by each w_i in a more global context. In practice, this might be derived on the basis of a non-parametric or histogram density estimate. In the section that follows, we demonstrate that an operation equivalent to a non-parametric density estimate may be achieved using a suitable neural circuit.

2.1 Likelihood Estimation in a Neural Circuit

In the following formulation, we assume an estimate of the likelihood of the components of w based on a Gaussian kernel density estimate. Any other choice of kernel may be substituted, a Gaussian window being chosen only for its common use in density estimation and without loss of generality. Let w_{i,j,k} denote the set of independent coefficients based on the neighborhood centered at j, k. An estimate of p(w_{i,j,k} = v_{i,j,k}) based on a Gaussian window is given by:

p(w_{i,j,k} = v_{i,j,k}) = Σ_{s,t ∈ Ψ} ω(s,t) K(v_{i,j,k} − v_{i,s,t})   (1)

with Σ_{s,t} ω(s,t) = 1, where Ψ
is the context on which the probability estimate of the coefficients of w is based, and ω(s,t) describes the degree to which the coefficient w at coordinates s, t contributes to the probability estimate. On the basis of the form given in equation (1), it is evident that this operation may equivalently be implemented by the neural circuit depicted in figure 2. Figure 2 shows only coefficients derived from a horizontal cross-section; the two-dimensional case is analogous, with parameters varying in the i, j, and k dimensions. K is the kernel function employed for density estimation, in our case a Gaussian of the form (1/(σ√(2π))) e^{−x²/(2σ²)}. ω(s,t) is encoded in the weights of the connections to K. As x = v_{i,j,k} − v_{i,s,t}, the output of this operation encodes the impact of the kernel function with mean v_{i,s,t} on the value of p(w_{i,j,k} = v_{i,j,k}). Coefficients at the input layer correspond to the coefficients of v. The logarithmic operator at the final stage might also be placed before the product on each incoming connection, with the product then becoming a summation. It is interesting to note that the structure of this circuit, at the level of within-feature spatial competition, is remarkably similar to the standard feedforward model of lateral inhibition, a ubiquitous operation along the visual pathways thought to play a chief role in attentional processing [14]. The similarity between independent components and V1 cells, in conjunction with the aforementioned consideration, lends credibility to the proposal that information may contribute to driving overt attentional selection. One aspect lacking from the preceding description is that the saliency map fails to take into account the drop-off in visual acuity moving peripherally from the fovea. In some instances the maximum information accommodating for visual acuity may correspond to the center of a cluster of salient items, rather than being centered on one such item.
For this reason, the resulting saliency map is convolved with a Gaussian with parameters chosen to correspond approximately to the drop-off in visual acuity observed in the human visual system. 2.2 Self-Information versus Entropy It is important to distinguish between self-information and entropy, since these terms are often confused. The difference is subtle but important on two fronts: the first consideration lies in the expected behavior in popout paradigms, and the second in the neural circuitry involved. Let $X = [x_1, x_2, \ldots, x_n]$ denote a vector of RGB values corresponding to image patch $X$, and $D$ a probability density function describing the distribution of some feature set over $X$. For example, $D$ might correspond to a histogram estimate of intensity values within $X$, or the relative contribution of different orientations within a local neighborhood situated on the boundary of an object silhouette [6]. Figure 1: The framework that achieves the desired information measure. Shown is the computation corresponding to three horizontally adjacent neighbourhoods, with flow through the network indicated by the orange, purple, and cyan windows and connections. The connections shown facilitate computation of the information measure corresponding to the pixel centered in the purple window. The network architecture produces this measure on the basis of evaluating the probability of these coefficients with consideration to the values of such coefficients in neighbouring regions. Assuming an estimate of $D$ based on $N$ bins, the entropy of $D$ is given by $-\sum_{i=1}^{N} D_i \log(D_i)$.
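A small illustration of this entropy definition, using hypothetical histogram values: a uniformly spread feature distribution maximizes entropy, while a single dominant bin drives it to zero.

```python
import numpy as np

def entropy(D):
    """Entropy of an N-bin histogram D: -sum_i D_i log D_i (after normalizing)."""
    D = np.asarray(D, dtype=float)
    D = D / D.sum()
    nz = D[D > 0]              # 0 log 0 is taken as 0
    return float(-(nz * np.log(nz)).sum())

uniform = entropy([1, 1, 1, 1])   # features evenly spread: maximal entropy
peaked = entropy([4, 0, 0, 0])    # one dominant feature: zero entropy
```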
In this example, entropy characterizes the extent to which the feature(s) characterized by $D$ are uniformly distributed on $X$. Self-information in the proposed saliency measure is given by $-\log(p(X))$. That is, self-information characterizes the raw likelihood of the specific $n$-dimensional vector of RGB values given by $X$. $p(X)$ in this case is based on observing a number of $n$-dimensional feature vectors from patches drawn from the area surrounding $X$. Thus, $p(X)$ characterizes the raw likelihood of observing $X$ based on its surround, and $-\log(p(X))$ becomes closer to a measure of local contrast, whereas entropy as defined in the usual manner is closer to a measure of local activity. The importance of this distinction is evident in considering figure 3, which depicts a variety of candles of varying orientation and color. There is a tendency to fixate the empty region on the left, which is the location of lowest entropy in the image. In contrast, this region receives the highest confidence from the algorithm proposed in this paper, as it is highly informative in the context of this image. In classic popout experiments, a vertical line among horizontal lines presents a highly salient target. The same vertical line among many lines of random orientations does not, although the entropy associated with the second scenario is much greater. With regard to the neural circuitry involved, we have demonstrated that self-information may be computed using a neural circuit in the absence of a representation of the entire probability distribution. Whether an equivalent operation may be achieved in a biologically plausible manner for the computation of entropy remains to be established. Figure 2: A 1D depiction of the neural architecture that computes the self-information of a set of local statistics. The operation is equivalent to a kernel density estimate.
Coefficients correspond to subscripts of $v_{i,j,k}$. The small black circles indicate an inhibitory relationship and the small white circles an excitatory relationship. Figure 3: An image that highlights the difference between entropy and self-information. Fixation invariably falls on the empty patch, the locus of minimum entropy in orientation and color but maximum in self-information when the surrounding context is considered. 3 Experimental Validation The following section evaluates the output of the proposed algorithm as compared with the bottom-up model of Itti and Koch [3]. The model of Itti and Koch is perhaps the most popular model of saliency-based attention and currently appears to be the yardstick against which other models are measured. 3.1 Experimental eye tracking data The data that forms the basis for performance evaluation is derived from eye tracking experiments performed while subjects observed 120 different color images. Images were presented in random order for 4 seconds each, with a mask between each pair of images. Subjects were positioned 0.75 m from a 21-inch CRT monitor and given no particular instructions except to observe the images. Images consist of a variety of indoor and outdoor scenes, some with very salient items, others with no particular regions of interest. The eye tracking apparatus consisted of a standard non-head-mounted device. The parameters of the setup are intended to quantify salience in a general sense, based on stimuli that one might expect to encounter in a typical urban environment. Data was collected from 20 different subjects for the full set of 120 images. The issue of comparing the output of a particular algorithm with the eye tracking data is non-trivial. Previous efforts have selected a number of fixation points based on the saliency map and compared these with the experimental fixation points derived from a small number of subjects and images (7 subjects and 15 images in a recent effort [4]).
There are a variety of methodological issues associated with such a representation. The most important such consideration is that the representation of perceptual importance is typically based on a saliency map. Observing the output of an algorithm that selects fixation points based on the underlying saliency map obscures observation of the degree to which the saliency maps predict important and unimportant content and in particular, ignores confidence away from highly salient regions. Secondly, it is not clear how many fixation points should be selected. Choosing this value based on the experimental data will bias output based on information pertaining to the content of the image and may produce artificially good results. The preceding discussion is intended to motivate the fact that selecting discrete fixation coordinates based on the saliency map for comparison may not present the most appropriate representation to use for performance evaluation. In this effort, we consider two different measures of performance. Qualitative comparison is based on the representation proposed in [16]. In this representation, a fixation density map is produced for each image based on all fixation points, and subjects. Given a fixation point, one might consider how the image under consideration is sampled by the human visual system as photoreceptor density drops steeply moving peripherally from the centre of the fovea. This dropoff may be modeled based on a 2D Gaussian distribution with appropriately chosen parameters, and centred on the measured fixation point. A continuous fixation density map may be derived for a particular image based on the sum of all 2D Gaussians corresponding to each fixation point, from each subject. The density map then comprises a measure of the extent to which each pixel of the image is sampled on average by a human observer based on observed fixations. This affords a representation for which similarity to a saliency map may be considered at a glance. 
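A minimal sketch of this fixation density map construction: one 2D Gaussian per measured fixation point, summed over all points and subjects. The Gaussian width here is an assumed placeholder for the acuity drop-off, and the function name is illustrative.

```python
import numpy as np

def fixation_density_map(fixations, height, width, sigma=25.0):
    """Sum of 2D Gaussians, one centred on each (row, col) fixation point,
    approximating how densely each pixel is sampled by observers on average."""
    rows, cols = np.mgrid[0:height, 0:width]
    density = np.zeros((height, width))
    for (r, c) in fixations:
        density += np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
    return density
```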
Quantitative performance evaluation is achieved based on the measure proposed in [15]. The saliency maps produced by each algorithm are treated as binary classifiers for fixation versus non-fixation points. The choice of several different thresholds, and assessment of performance in predicting fixated versus non-fixated pixel locations, allows an ROC curve to be produced for each algorithm. 3.2 Experimental Results Figure 4 affords a qualitative comparison of the output of the proposed model with the experimental eye tracking data for a variety of images; also depicted is the output of the Itti and Koch algorithm for comparison. In the implementation results shown, the ICA basis set was learned from a set of 360,000 7x7x3 image patches from 3600 natural images using the extended infomax algorithm of Lee et al. [17]. Processed images are 340 by 255 pixels. $\Psi$ consists of the entire extent of the image, and $\omega(s,t) = \frac{1}{p}\ \forall s,t$, with $p$ the number of pixels in the image. One might make a variety of selections for these variables based on arguments related to the human visual system, or based on performance. In our case, the values have been chosen on the basis of simplicity and do not appear to dramatically affect the predictive capacity of the model in the simulation results; in particular, we wished to avoid tuning these parameters to the available data set. Future work may include a closer look at some of the parameters involved in order to determine the most appropriate choices. The ROC curves appearing in figure 5 give some sense of the efficacy of the model in predicting which regions of a scene human observers tend to fixate. As may be observed, the predictive capacity of the model is on par with the approach of Itti and Koch. Encouraging is the fact that similar performance is achieved using a method derived from first principles, and with no parameter tuning or ad hoc design choices. Figure 4: Results for qualitative comparison.
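The thresholding construction of the ROC curve described above can be sketched as follows; this is an illustrative implementation, not the authors' code.

```python
import numpy as np

def roc_curve(saliency, fixated, thresholds=None):
    """saliency: flat array of per-pixel saliency values.
    fixated: boolean array, True where a human fixation landed.
    Returns (false_alarm_rates, hit_rates), one point per threshold."""
    saliency = np.asarray(saliency, dtype=float)
    fixated = np.asarray(fixated, dtype=bool)
    if thresholds is None:
        thresholds = np.unique(saliency)
    fa, hit = [], []
    for t in thresholds:
        pred = saliency >= t  # classify each pixel as "fixated"
        hit.append((pred & fixated).sum() / max(fixated.sum(), 1))
        fa.append((pred & ~fixated).sum() / max((~fixated).sum(), 1))
    return np.array(fa), np.array(hit)
```

Sweeping the threshold from the maximum saliency value down to the minimum traces the curve from (0, 0) to (1, 1); the area under it is the scalar score reported in figure 5.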
Within each boxed region defined by solid lines: (Top Left) Original image. (Top Right) Saliency map produced by the Itti and Koch algorithm. (Bottom Left) Saliency map based on information maximization. (Bottom Right) Fixation density map based on experimental human eye tracking data. 4 On Biological Plausibility Although the proposed approach, along with the model of Itti and Koch, describes saliency on the basis of a single topographical saliency map, there is mounting evidence that saliency in the primate brain is represented at several levels based on a hierarchical representation [18] of visual content. The proposed approach may accommodate such a configuration, with the single necessary condition being a sparse representation at each layer. As we have described in section 2, there is evidence suggesting that the primate visual system may consist of a multi-layer sparse coding architecture [10, 11]. The proposed algorithm quantifies information on the basis of a neural circuit, on units with response properties corresponding to neurons appearing in the primary visual cortex. However, given an analogous representation corresponding to higher visual areas that encode form, depth, convexity etc., the proposed method may be employed without any modification. Since the popout of features can occur on the basis of more complex properties, such as a convex surface among concave surfaces [19], this is perhaps the next stage in a system that encodes saliency in the same manner as primates. Given a multi-layer architecture, the mechanism for selecting the locus of attention becomes less clear. In the model of Itti and Koch, a multi-layer winner-take-all network acts directly on the saliency map and there is no hierarchical representation of image content. There are, however, attention models that subscribe to a distributed representation of saliency (e.g.
[20]), that may implement attentional selection with the proposed neural circuit encoding saliency at each layer. Figure 5: ROC curves for self-information (blue) and Itti and Koch (red) saliency maps. Area under curves is 0.7288 and 0.7277 respectively. 5 Conclusion We have described a strategy that predicts human attentional deployment on the principle of maximizing information sampled from a scene. Although no computational machinery is included strictly on the basis of biological plausibility, the formulation nevertheless results in an implementation based on a neurally plausible circuit acting on units that resemble those that facilitate early visual processing in primates. Comparison with an existing attention model reveals the efficacy of the proposed model in predicting salient image content. Finally, we demonstrate that the proposal might be generalized to facilitate selection based on high-level features, provided an appropriate sparse representation is available. References [1] G.T. Buswell, How people look at pictures. Chicago: The University of Chicago Press. [2] A. Yarbus, Eye movements and vision. New York: Plenum Press. [3] L. Itti, C. Koch, E. Niebur, IEEE T PAMI 20(11):1254-1259, 1998. [4] C.M. Privitera and L.W. Stark, IEEE T PAMI 22:970-981, 2000. [5] F. Fritz, C. Seifert, L. Paletta, H. Bischof, Proc. WAPCV, Graz, Austria, 2004. [6] L.W. Renninger, J. Coughlan, P. Verghese, J. Malik, Proceedings NIPS 17, Vancouver, 2004. [7] T. Kadir, M. Brady, IJCV 45(2):83-105, 2001. [8] T.S. Lee, S. Yu, Advances in NIPS 12:834-840, Ed. S.A. Solla, T.K. Leen, K. Muller, MIT Press. [9] C.E. Shannon, The Bell Systems Technical Journal, 27:93-154, 1948. [10] D.J. Field and B.A. Olshausen, Nature 381:607-609, 1996. [11] A.J. Bell, T.J. Sejnowski, Vision Research 37:3327-3338, 1997. [12] N. Bruce, Neurocomputing, 65-66:125-133, 2005. [13] P. Comon, Signal Processing 36(3):287-314, 1994.
[14] M.W. Cannon and S.C. Fullenkamp, Vision Research 36(8):1115-1125, 1996. [15] B.W. Tatler, R.J. Baddeley, J.D. Gilchrist, Vision Research 45(5):643-659, 2005. [16] H. Koesling, E. Carbone, H. Ritter, University of Bielefeld, Technical Report, 2002. [17] T.W. Lee, M. Girolami, T.J. Sejnowski, Neural Computation 11:417-441, 1999. [18] J. Braun, C. Koch, D.K. Lee, L. Itti, In: Visual Attention and Cortical Circuits (J. Braun, C. Koch, J. Davis, Ed.), 215-242, Cambridge, MA: MIT Press, 2001. [19] J. Hulleman, W. te Winkel, F. Boselie, Perception and Psychophysics 62:162-174, 2000. [20] J.K. Tsotsos, S. Culhane, W. Wai, Y. Lai, N. Davis, F. Nuflo, Artificial Intelligence 78(1-2):507-547, 1995.
Learning from Data of Variable Quality Koby Crammer, Michael Kearns, Jennifer Wortman Computer and Information Science University of Pennsylvania Philadelphia, PA 19103 {crammer,mkearns,wortmanj}@cis.upenn.edu Abstract We initiate the study of learning from multiple sources of limited data, each of which may be corrupted at a different rate. We develop a complete theory of which data sources should be used for two fundamental problems: estimating the bias of a coin, and learning a classifier in the presence of label noise. In both cases, efficient algorithms are provided for computing the optimal subset of data. 1 Introduction In many natural machine learning settings, one is not only faced with data that may be corrupted or deficient in some way (classification noise or other label errors, missing attributes, and so on), but with data that is not uniformly corrupted. In other words, we might be presented with data of variable quality — perhaps some small amount of entirely “clean” data, another amount of slightly corrupted data, yet more that is significantly corrupted, and so on. Furthermore, in such circumstances we may often know at least an upper bound on the rate and type of corruption in each pile of data. An extreme example is the recent interest in settings where one has a very limited set of correctly labeled examples, and an effectively unlimited set of entirely unlabeled examples, as naturally arises in problems such as classifying web pages [1]. Another general category of problems that falls within our interest is when multiple piles of data are drawn from processes that differ perhaps slightly and in varying amounts from the process we wish to estimate. For example, we might wish to estimate a conditional distribution P(X|Y = y) but have only a small number of observations in which Y = y, but a larger number of observations in which Y = y′ for values of y′ “near” to y. 
In such circumstances it might make sense to base our model on a larger number of observations, at least for those $y'$ closest to $y$. While there is a large body of learning theory both for uncorrupted data and for data that is uniformly corrupted in some way [2, 3], there is no general framework and theory for learning from data of variable quality. In this paper we introduce such a framework, and develop its theory, for two basic problems: estimating a bias from corrupted coins, and learning a classifier in the presence of varying amounts of label noise. For the corrupted coins case we provide an upper bound on the error that is expressed as a trade-off between weighted approximation errors and larger amounts of data. This bound provides a building block for the classification noise setting, in which we are able to give a bound on the generalization error of empirical risk minimization that specifies the optimal subset of the data to use. Both bounds can be computed by simple and efficient algorithms. We illustrate both problems and our algorithms with numerical simulations. 2 Estimating the Bias from Corrupted Coins We begin by considering perhaps the simplest possible instance of the general class of problems in which we are interested, namely the problem of estimating the unknown bias of a coin. In this version of the variable quality model, we will have access to different amounts of data from "corrupted" coins whose bias differs from the one we wish to estimate. We use our solution for this simple problem as a building block for the classification noise setting in Section 3. 2.1 Problem Description Suppose we wish to estimate the bias $\beta$ of a coin given $K$ piles of training observations $N_1, \ldots, N_K$. Each pile $N_i$ contains $n_i$ outcomes of flips of a coin with bias $\beta_i$, where the only information we are provided is that $\beta_i \in [\beta - \epsilon_i, \beta + \epsilon_i]$, and $0 \le \epsilon_1 \le \epsilon_2 \le \ldots \le \epsilon_K$. We refer to the $\epsilon_i$ as bounds on the approximation errors of the corrupted coins.
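To make the coin setting concrete, here is an illustrative sketch (function and variable names are assumptions, not the authors' code) that merges each prefix of piles into the maximum-likelihood estimate and selects the prefix minimizing the approximation-plus-estimation bound derived in this section:

```python
import math

def best_prefix(heads, n, eps, delta=0.05):
    """heads[i], n[i]: observed heads and flips in pile i (eps sorted ascending).
    Returns (k*, estimate) minimizing the bound
    sum_i (n_i / n_{1,k}) * eps_i + sqrt(log(2K/delta) / (2 n_{1,k}))."""
    K = len(n)
    best = None
    for k in range(1, K + 1):
        n1k = sum(n[:k])
        approx = sum(n[i] * eps[i] for i in range(k)) / n1k  # weighted bias term
        estim = math.sqrt(math.log(2 * K / delta) / (2 * n1k))  # Hoeffding term
        bound = approx + estim
        est = sum(heads[:k]) / n1k  # ML estimate: fraction of heads in merged pile
        if best is None or bound < best[0]:
            best = (bound, k, est)
    return best[1], best[2]
```

With a large clean pile available, the bound correctly prefers dropping a heavily corrupted pile even when that pile is much bigger.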
We denote by $h_i$ the number of heads observed in the $i$th pile. Our immediate goal is to determine which piles should be considered in order to obtain the best estimate of the true bias $\beta$. We consider estimates for $\beta$ obtained by merging some subset of the data into a single unified pile, and computing the maximum likelihood estimate for $\beta$, which is simply the fraction of times heads appears as an outcome in the unified pile. Although one can consider using any subset of the data, it can be proved (and is intuitively obvious) that an optimal estimate (in the sense that will be defined shortly) always uses a prefix of the data, i.e. all data from the piles indexed 1 to $k$ for some $k \le K$, and possibly a subset of the data from pile $k+1$. In fact, it will be shown that only complete piles need to be considered. Therefore, from this point on we restrict ourselves to estimates of this form, and identify them by the maximal index $k$ of the piles used. The associated estimate is then simply $$\hat\beta_k = \frac{h_1 + \cdots + h_k}{n_1 + \cdots + n_k}.$$ We denote the expectation of this estimate by $$\bar\beta_k = E\left[\hat\beta_k\right] = \frac{n_1\beta_1 + \cdots + n_k\beta_k}{n_1 + \cdots + n_k}.$$ To simplify the presentation we denote by $n_{i,j}$ the number of outcomes in piles $N_i, \ldots, N_j$, that is, $n_{i,j} = \sum_{m=i}^{j} n_m$. We now bound the deviation of the estimate $\hat\beta_k$ from the true bias of the coin $\beta$ using the expectation $\bar\beta_k$: $$|\beta - \hat\beta_k| = |\beta - \bar\beta_k + \bar\beta_k - \hat\beta_k| \le |\beta - \bar\beta_k| + |\bar\beta_k - \hat\beta_k| \le \sum_{i=1}^{k} \frac{n_i}{n_{1,k}} \epsilon_i + |\bar\beta_k - \hat\beta_k|.$$ The first inequality follows from the triangle inequality and the second from our assumptions. Using the Hoeffding inequality we can bound the second term and find that with high probability, for an appropriate choice of $\delta$, we have $$|\beta - \hat\beta_k| \le \sum_{i=1}^{k} \frac{n_i}{n_{1,k}} \epsilon_i + \sqrt{\frac{\log(2K/\delta)}{2n_{1,k}}}. \quad (1)$$ To summarize, we have proved the following theorem. Theorem 1 Let $\hat\beta_k$ be the estimate obtained by using only the data from the first $k$ piles.
Then for any $\delta > 0$, with probability $\ge 1 - \delta$, we have $$|\beta - \hat\beta_k| \le \sum_{i=1}^{k} \frac{n_i}{n_{1,k}} \epsilon_i + \sqrt{\frac{\log(2K/\delta)}{2n_{1,k}}}$$ simultaneously for all $k = 1, \ldots, K$. Two remarks are in place here. First, the theorem is data-independent, since it does not take into account the actual outcomes of the experiments $h_1, \ldots, h_K$. Second, the two terms in the bound reflect the well-known trade-off between bias (approximation error) and variance (estimation error). The first term bounds the approximation error of replacing the true coin $\beta$ with the average $\bar\beta_k$. The second term corresponds to the estimation error which arises as a result of our finite sample size. This theorem implies a natural algorithm: choose the number of piles $k^*$ as the minimizer of the bound over the number of piles used: $$k^* = \mathop{\mathrm{argmin}}_{k \in \{1,\ldots,K\}} \left\{ \sum_{i=1}^{k} \frac{n_i}{n_{1,k}} \epsilon_i + \sqrt{\frac{\log(2K/\delta)}{2n_{1,k}}} \right\}.$$ To conclude this section we argue that our choice of using a prefix of piles is optimal. First, note that by adding a new pile with a corruption level $\epsilon$ smaller than the current corruption level, we can always reduce the bound. Thus it is optimal to use a prefix of the piles and not to ignore piles with low corruption levels. Second, we need to show that if we decide to use a pile, it will be optimal to use all of it. Note that we can choose to view each coin toss as a separate pile with a single observation, thus yielding $n_{1,K}$ piles of size 1. The following technical lemma states that, under this view of singleton piles, once we decide to add a pile with some corruption level, it will be optimal to use all singleton piles with the same corruption level. The proof of this lemma is omitted due to lack of space. Lemma 1 Assume that all the piles are of size $n_i = 1$ and that $\epsilon_k \le \epsilon_{k+p} = \epsilon_{k+p+1}$.
Then the following two inequalities cannot hold simultaneously: $$\sum_{i=1}^{k} \frac{n_i}{n_{1,k}} \epsilon_i + \sqrt{\frac{\log(2n_{1,K}/\delta)}{2n_{1,k}}} > \sum_{i=1}^{k+p} \frac{n_i}{n_{1,k+p}} \epsilon_i + \sqrt{\frac{\log(2n_{1,K}/\delta)}{2n_{1,k+p}}}$$ $$\sum_{i=1}^{k+p+1} \frac{n_i}{n_{1,k+p+1}} \epsilon_i + \sqrt{\frac{\log(2n_{1,K}/\delta)}{2n_{1,k+p+1}}} \ge \sum_{i=1}^{k+p} \frac{n_i}{n_{1,k+p}} \epsilon_i + \sqrt{\frac{\log(2n_{1,K}/\delta)}{2n_{1,k+p}}}.$$ In other words, if the bound on $|\beta - \hat\beta_{k+p}|$ is smaller than the bound on $|\beta - \hat\beta_k|$, then the bound on $|\beta - \hat\beta_{k+p+1}|$ must be smaller than both, unless $\epsilon_{k+p+1} > \epsilon_{k+p}$. Thus if the $p$th and $(p+1)$th samples are from the same original pile (and $\epsilon_{k+p+1} = \epsilon_{k+p}$), then once we decide to use samples through $p$, we will always want to include sample $p+1$. It follows that we must only consider using complete piles of data. 2.2 Corrupted Coins Simulations The theory developed so far can be nicely illustrated via some simple simulations. We briefly describe just one such experiment, in which there were $K = 8$ piles. The target coin was fair: $\beta = 0.5$. Figure 1: Left: Illustration of the actual error and our error bounds for estimating the bias of a coin. The error bars show one standard deviation. Center: Illustration of the interval construction. Right: Illustration of actual error of a 20-dimensional classification problem and the error bounds found using our methods. The approximation errors of the corrupted coins were $\vec\epsilon = (0.001, 0.01, 0.02, 0.03, 0.04, 0.2, 0.3, 0.5)$, and the numbers of outcomes in the corresponding piles were $\vec n = (10, 50, 100, 500, 1500, 2000, 3000, 10000)$. The following process was repeated 1,000 times. We set the probability of the $i$th coin to be $\beta_i = \beta + \epsilon_i$ and sampled $n_i$ times from it. We then used all possible prefixes $1, \ldots, k$ of piles to estimate $\beta$. For each $k$, we computed the bound for the estimate using piles 1, . . .
, $k$ using the theory developed in the previous section. To illustrate Lemma 1 we also computed the bound using partial piles. This bound is slightly higher than the suggested bound, since we effectively use more piles ($n_{1,K}$ instead of $K$). As the lemma predicts, it is not valuable to use subsets of piles. Simulations with other values of $K$, $\vec\epsilon$ and $\vec n$ yield similar qualitative behavior. We note that a strength of the theory developed is its generality, as it provides bounds for any model parameters. The leftmost panel of Figure 1 summarizes the simulation results. Empirically, the best estimate of the target coin uses the first four piles, while our algorithm suggests using the first five piles. However, the empirical difference in quality between the two estimates is negligible, so the theory has given near-optimal guidance in this case. We note that while our bounds have essentially the right shape (which is what matters for the computation of $k^*$), numerically they are quite loose compared to the true behavior. There are various limits to the numerical precision we should expect without increasing the complexity of the theory; for example, the precision is limited by the accuracy of constants in the Hoeffding inequality and the use of the union bound. 3 Classification with Label Noise We next explore the problem of classification in the presence of multiple data sets with varying amounts of label noise. The setting is as follows. We assume there is a fixed and unknown binary function $f : X \to \{0,1\}$ and a fixed and unknown distribution $P$ on the inputs $X$ to $f$. We are presented again with $K$ piles of data, $N_1, \ldots, N_K$. Now each pile $N_i$ contains $n_i$ labeled examples $(x, y)$ that are generated from the target function $f$ with label noise at rate $\eta_i$, where $0 \le \eta_1 < \eta_2 < \ldots < \eta_K$. In other words, for each example $(x, y)$ in pile $N_i$, $y = f(x)$ with probability $1 - \eta_i$ and $y = \neg f(x)$ with probability $\eta_i$.
The goal is to decide which piles of data to use in order to choose a function $h$ from a set of hypothesis functions $H$ with minimal generalization (true) error $e(h)$ with respect to $f$ and $P$. As before, for any prefix of piles $N_1, \ldots, N_k$, we examine the most basic estimator based on this data, namely the hypothesis minimizing the observed or training error: $$\hat h_k = \mathop{\mathrm{argmin}}_{h \in H} \{\hat e_k(h)\}$$ where $\hat e_k(h)$ is the fraction of times $h(x) \ne y$ over all $(x, y) \in N_1 \cup \cdots \cup N_k$. Thus we examine the standard empirical risk minimization framework [2]. Generalizing from the biased coin setting, we are interested in three primary questions: what can we say about the deviation $|e(\hat h_k) - \hat e_k(\hat h_k)|$, the gap between the true and observed error of the estimator $\hat h_k$; what is the optimal value of $k$; and how can we compute the corresponding bounds? We note that the classification noise setting can naturally be viewed as a special case of a more general and challenging "agnostic" classification setting that we discuss briefly in Section 4. Here we provide a more specialized solution that exploits particular properties of class label noise. We begin by observing that for any fixed function $h$, the question of how $\hat e_k(h)$ is related to $e(h)$ bears great similarity to the biased coin setting. More precisely, the expected classification error of $h$ on pile $N_i$ only is $$(1 - \eta_i)e(h) + \eta_i(1 - e(h)) = e(h) + \eta_i(1 - 2e(h)).$$ Thus if we set $$\beta = e(h), \qquad \epsilon_i = \eta_i\,|1 - 2e(h)| \quad (2)$$ and if we were only concerned with making the best use of the data in estimating $e(h)$, we could attempt to apply the theory developed in Section 2 using the reduction above. There are two distinct and obvious difficulties. The first difficulty is that, even restricting attention to estimating $e(h)$ for a fixed $h$, the values for $\epsilon_i$ above (and thus the bounds computed by the methods of Section 2) depend on $e(h)$, which is exactly the unknown quantity we would like to estimate.
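A quick numeric check of this reduction, with illustrative helper names: for $e(h) < 1/2$ the expected observed error equals $e(h) + \epsilon_i$, where $\epsilon_i = \eta_i|1 - 2e(h)|$ is the corruption level in the biased-coin reduction.

```python
def expected_observed_error(true_err, eta):
    """Expected training error of h on a pile with label-noise rate eta."""
    return (1 - eta) * true_err + eta * (1 - true_err)

def coin_epsilon(true_err, eta):
    """Corruption level in the biased-coin reduction: eps = eta * |1 - 2 e(h)|."""
    return eta * abs(1 - 2 * true_err)
```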
The second difficulty is that in order to bound the performance of empirical error minimization within $H$, we must say something about the probability of any $h \in H$ being selected. We address each of these difficulties in turn. 3.1 Computing the Error Bound Matrix For now we assume that $\{e(h) : h \in H\}$ is a finite set containing $M$ values $e_1 < \ldots < e_M$. This assumption clearly holds if $|H|$ is finite, and can be removed entirely by discretizing the values in $\{e(h) : h \in H\}$. For convenience we assume that for all levels $e_i$ there exists a function $h \in H$ such that $e(h) = e_i$. This assumption can also be removed (details of both omitted due to space considerations). We define a matrix $B$ of estimation errors as follows. Each row $i$ of $B$ represents one possible value of $e(h) = e_i$, while each column $k$ represents the use of only piles $N_1, \ldots, N_k$ of noisy labeled examples of the target $f$. The entry $B(i,k)$ will contain a bound on $|e(h) - \hat e_k(h)|$ that is valid simultaneously for all $h \in H$ with $e(h) = e_i$. In other words, for any such $h$, with high probability $\hat e_k(h)$ falls in the range $[e_i - B(i,k),\, e_i + B(i,k)]$. It is crucial to note that we do not need to know which functions $h \in H$ satisfy $e(h) = e_i$ in order to either compute or use the bound $B(i,k)$, as we shall see shortly. Rather, it is enough to know that for each $h \in H$, some row of $B$ will provide estimation error bounds for each $k$. The values in $B$ can now be calculated using the settings provided by Eq. (2) and the bound in Eq. (1). However, since Eq. (1) applies to the case of a single biased coin, and here we have many (essentially one for each function at a given generalization error $e_i$), we must modify it slightly. We can (pessimistically) bound the VC dimension of all functions with error rate $e(h) = e_i$ by the VC dimension $d$ of the entire class $H$. Formally, we replace the square root term in Eq. (1) with the following expression, which is a simple application of VC theory [2, 3]: $$O\left( \sqrt{ \frac{1}{n_{1,k}} \left( d \log \frac{n_{1,k}}{d} + \log \frac{KM}{\delta} \right) } \right)$$
(3) We note that in cases where we have more information on the structure of the generalization errors in $H$, an accordingly modified equation can be used, which may yield considerably improved bounds. For example, in the statistical physics theory of learning curves [4] it is common to posit knowledge of the density or number of functions in $H$ at a given generalization error $e_i$. In such a case we could clearly substitute the VC dimension $d$ with the (potentially much smaller) VC dimension $d_i$ of just this subclass. In a moment we describe how the matrix $B$ can be used to choose the number $k$ of piles to use, and to compute a bound on the generalization error of $\hat h_k$. We first formalize the development above as an intermediate result. Lemma 2 Suppose $H$ is a set of binary functions with VC dimension $d$. Let $M$ be the number of noise levels and $K$ be the number of piles. Then for all $\delta > 0$, with probability at least $1 - \delta$, for all $i \in \{1, \ldots, M\}$, for all $h \in H$ with $e(h) = e_i$, and for all $k \in \{1, \ldots, K\}$, we have $|e(h) - \hat e_k(h)| \le B(i,k)$. The matrix $B$ can be computed in time linear in its size, $O(KM)$. 3.2 Putting It All Together By Lemma 2, the matrix $B$ gives, for each possible generalization error $e_i$ and each $k$, an upper bound on the deviation between observed and true errors for functions of true error $e_i$ when using piles $N_1, \ldots, N_k$. It is thus natural to try to use column $k$ of $B$ to bound the error of $\hat h_k$, the function minimizing the observed error on these piles. Suppose we fix the number of piles used to be $k$. The observed error of any function with true generalization error $e_i$ must, with high probability, lie in the interval $I_{i,k} = [e_i - B(i,k),\, e_i + B(i,k)]$. By simultaneously considering these intervals for all values of $e_i$, we can put a bound on the generalization error of the best function in the hypothesis class. This process is best illustrated by an example.
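Before the worked example, the construction of $B$ can be sketched as follows. This is an illustrative implementation only: the unspecified constant hidden in the big-O of the VC term is replaced by an assumed placeholder `c`, and the logarithm is floored to keep the term well-defined for very small piles.

```python
import math

def error_bound_matrix(errors, etas, n, delta, d, c=1.0):
    """B[i][k-1] bounds |e(h) - e_hat_k(h)| for all h with e(h) = errors[i].
    Combines the weighted approximation term eps_i = eta_i * |1 - 2 e| with a
    VC-style estimation term (constant c is an assumed placeholder)."""
    M, K = len(errors), len(n)
    B = [[0.0] * K for _ in range(M)]
    for i, e in enumerate(errors):
        for k in range(1, K + 1):
            n1k = sum(n[:k])
            approx = sum(n[j] * etas[j] * abs(1 - 2 * e) for j in range(k)) / n1k
            estim = c * math.sqrt((d * math.log(max(n1k / d, math.e))
                                   + math.log(K * M / delta)) / n1k)
            B[i][k - 1] = approx + estim
    return B
```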
Consider a hypothesis space in which the generalization error of the available functions can take on the discrete values 0, 0.1, 0.2, 0.3, 0.4, and 0.5. Suppose the matrix $B$ has been calculated as above and the $k$th column is $(0.16, 0.05, 0.08, 0.14, 0.07, 0.1)$. We know, for example, that all functions with true generalization error $e_2 = 0.1$ will show an error in the range $I_{2,k} = [0.05, 0.15]$, and that all functions with true generalization error $e_4 = 0.3$ will show an error in the range $I_{4,k} = [0.16, 0.44]$. The center panel of Figure 1 illustrates the span of each interval. Examining this diagram, it becomes clear that the function $\hat h_k$ minimizing the error on $N_1 \cup \cdots \cup N_k$ could not possibly be a function with true error $e_4$ or higher, as long as $H$ contains at least one function with true error $e_2$, since the observed error of the latter would necessarily be lower (with high probability). Likewise, it would not be possible for a function with true error $e_5$ or $e_6$ to be chosen. However, a function with true error $e_3$ could produce a lower observed error than one with true error $e_1$ or $e_2$ (since $e_3 - B(3,k) < e_2 + B(2,k)$ and $e_3 - B(3,k) < e_1 + B(1,k)$), and thus could be chosen as $\hat h_k$. Therefore, the smallest bound we can place on the true error of $\hat h_k$ in this example is $e_3 = 0.2$. In general, we know that $\hat h_k$ will have true error corresponding to the midpoint of an interval which overlaps with the interval with the least upper bound ($I_{2,k}$ in this example). This leads to an intuitive procedure for calculating a bound on the true error of $\hat h_k$. First, we determine the interval with the smallest upper bound, $i^*_k = \mathrm{argmin}_i\{e_i + B(i,k)\}$. Consider the set of intervals which overlap with $i^*_k$, namely $J_k = \{i : e_i - B(i,k) \le e_{i^*_k} + B(i^*_k, k)\}$. It is possible for the smallest observed error to come from a function corresponding to any of the intervals in $J_k$. Thus, a bound on the true error of $\hat h_k$ can be obtained by taking the maximum $e(h)$ value for any function in $J_k$, i.e.
$C(k) \stackrel{\text{def}}{=} \max_{i \in J_k} \{e_i\}$. Our overall algorithm for bounding $e(\hat{h}_k)$ and choosing $k^*$ can thus be summarized:

1. Compute the matrix B as described in Section 3.1.
2. Compute the vector C described above.
3. Output $k^* = \arg\min_k \{C(k)\}$.

We have established the following theorem.

Theorem 2. Suppose H is a set of binary functions with VC dimension d. Let M be the number of noise levels and K be the number of piles. For all $k = 1, \dots, K$, let $\hat{h}_k = \arg\min_h \{\hat{e}_k(h)\}$ be the function in H with the lowest empirical error evaluated using the first k piles of data. Then for all $\delta > 0$, with probability at least $1 - \delta$,
$$e(\hat{h}_k) \le C(k).$$
The suggested choice of k is thus $k^* = \arg\min_k \{C(k)\}$.

3.3 Classification Noise Simulations

In order to illustrate the methodology described in this section, simulations were run on a classification problem in which samples $\vec{x} \in \{0,1\}^{20}$ were chosen uniformly at random, and the target function $f(\vec{x})$ was 1 if and only if $\sum_{i=1}^{20} x_i > 10$. Classification models were created for $k = 1, \dots, K$ by training using the first k piles of data using logistic regression with a learning rate of 0.0005 for a maximum of 5,000 iterations. The generalization error for each model was determined by testing on a noise-free sample of 500 examples drawn from the same uniform distribution. Bounds were calculated using the algorithm described above with functions binned into 101 evenly spaced error values $\vec{e} = (0, 0.01, 0.02, \dots, 1)$ with $\delta = 0.001$. The right panel of Figure 1 shows an example of the bounds found with K = 12 piles, noise levels $\vec{\eta} = (0.001, 0.002, 0.01, 0.02, 0.03, 0.04, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5)$, and sample sizes $\vec{n} = (20, 150, 300, 400, 500, 600, 700, 1000, 1500, 2000, 3000, 5000)$. The algorithm described above correctly predicts that the eighth pile should be chosen as the cutoff, yielding an optimal error value of 0.018.
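The interval-overlap argument in the worked example above, and steps 1–3, can be checked mechanically. A minimal pure-Python sketch (the error grid and the column of B are the illustrative values from the example, not values computed from data):

```python
# Error grid and the k-th column of B: the illustrative values from the example.
errors = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
b_col  = [0.16, 0.05, 0.08, 0.14, 0.07, 0.10]

# Interval I_{i,k} = [e_i - B(i,k), e_i + B(i,k)] for each true-error level.
lower = [e - b for e, b in zip(errors, b_col)]
upper = [e + b for e, b in zip(errors, b_col)]

# Step 1 of the bound: the level whose interval has the least upper bound.
i_star = min(range(len(errors)), key=lambda i: upper[i])

# J_k: levels whose intervals overlap that least upper bound from below; any
# of them could yield the smallest observed error, so the bound is their max.
J = [i for i in range(len(errors)) if lower[i] <= upper[i_star]]
C_k = max(errors[i] for i in J)
print(i_star, sorted(J), C_k)  # the bound C(k) = 0.2, as in the text
```

Running this reproduces the conclusion of the example: the least upper bound belongs to $e_2$, the overlap set contains $e_1, e_2, e_3$, and the bound C(k) is 0.2.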
It is interesting to note that although the error bounds shown are significantly higher than the actual error, the shapes of the curves are similar. This phenomenon is common to many uniform convergence bounds. Further experimentation has shown that the algorithm described here works well in general when there are small piles of low-noise data and large piles of high-noise data. Its predictions are more useful in higher-dimensional spaces, since it is relatively easy to get good predictions without much available data in lower dimensions.

4 Further Research

In research subsequent to the results presented here [5], we examine a considerably more general "agnostic" classification setting [6]. As before, we assume there is a fixed and unknown binary function $f : X \to \{0,1\}$ and a fixed and unknown distribution P on the inputs X to f. We are presented again with K piles of data, $N_1, \dots, N_K$. Now each pile $N_i$ contains $n_i$ labeled examples $(x, y)$ that are generated from an unknown function $h_i$ such that $e(h_i) = e(h_i, f) = \Pr_P[h_i(x) \ne f(x)] \le \epsilon_i$ for given values $\epsilon_1 \le \dots \le \epsilon_K$. Thus we are provided piles of labeled examples of unknown functions "nearby" the unknown target f, where "nearby" is quantified by the sequence of $\epsilon_i$. In forthcoming work [5] we show that with high probability, for any $k \le K$,
$$e(\hat{h}_k, f) \le \min_{h \in H}\{e(f, h)\} + 2\sum_{i=1}^{k} \frac{n_i}{n_{1,k}}\,\epsilon_i + O\left(\sqrt{\frac{1}{n_{1,k}}\left(d \log \frac{n_{1,k}}{d} + \log \frac{K}{\delta}\right)}\right).$$
This result again allows us to express the optimal number of piles as a trade-off between weighted approximation errors and increasing sample size. We suspect the result can be extended to a wider class of loss functions than just classification.

References
[1] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 92–100, 1998.
[2] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
[3] M. J. Kearns and U. V. Vazirani. An Introduction to Computational Learning Theory.
MIT Press, 1994.
[4] D. Haussler, M. Kearns, H. S. Seung, and N. Tishby. Rigorous learning curve bounds from statistical mechanics. In Proceedings of the Seventh Annual ACM Conference on Computational Learning Theory, pages 76–87, 1994.
[5] K. Crammer, M. Kearns, and J. Wortman. Forthcoming. 2006.
[6] M. Kearns, R. Schapire, and L. Sellie. Towards efficient agnostic learning. Machine Learning, 17:115–141, 1994.
|
2005
|
60
|
2,878
|
Two view learning: SVM-2K, Theory and Practice
Jason D.R. Farquhar jdrf99r@ecs.soton.ac.uk, David R. Hardoon drh@ecs.soton.ac.uk, Hongying Meng hongying@cs.york.ac.uk, John Shawe-Taylor jst@ecs.soton.ac.uk, Sandor Szedmak ss03v@ecs.soton.ac.uk
School of Electronics and Computer Science, University of Southampton, Southampton, England

Abstract

Kernel methods make it relatively easy to define complex high-dimensional feature spaces. This raises the question of how we can identify the relevant subspaces for a particular learning task. When two views of the same phenomenon are available, kernel Canonical Correlation Analysis (KCCA) has been shown to be an effective preprocessing step that can improve the performance of classification algorithms such as the Support Vector Machine (SVM). This paper takes this observation to its logical conclusion and proposes a method that combines this two-stage learning (KCCA followed by SVM) into a single optimisation termed SVM-2K. We present both experimental and theoretical analysis of the approach, showing encouraging results and insights.

1 Introduction

Kernel methods enable us to work with high-dimensional feature spaces by defining weight vectors implicitly as linear combinations of the training examples. This even makes it practical to learn in infinite-dimensional spaces, as for example when using the Gaussian kernel. The Gaussian kernel is an extreme example, but techniques have been developed to define kernels for a range of different datatypes, in many cases characterised by very high dimensionality. Examples are string kernels for text, graph kernels for graphs, marginal kernels, kernels for image data, etc. With this plethora of high-dimensional representations it is frequently helpful to assist learning algorithms by preprocessing the feature space, projecting the data into a low-dimensional subspace that contains the relevant information for the learning task.
Methods of performing this include principal components analysis (PCA) [7], partial least squares [8], kernel independent component analysis (KICA) [1] and kernel canonical correlation analysis (KCCA) [5]. The last method requires two views of the data, both of which contain all of the relevant information for the learning task, but which individually contain representation-specific details that are different and irrelevant. Perhaps the simplest example of this situation is a paired document corpus in which we have the same information in two languages. KCCA attempts to isolate feature space directions that correlate between the two views and hence might be expected to represent the common relevant information. Hence, one can view this preprocessing as a denoising of the individual representations through cross-correlating them. Experiments have shown how using this as a preprocessing step can improve subsequent analysis in, for example, classification experiments using a support vector machine (SVM) [6]. This is explained by the fact that the signal-to-noise ratio has improved in the identified subspace. Though the combination of KCCA and SVM seems effective, there appears to be no guarantee that the directions identified by KCCA will be best suited to the classification task. This paper therefore looks at the possibility of combining the two distinct stages of KCCA and SVM into a single optimisation that will be termed SVM-2K. The next section introduces the new algorithm and discusses its structure. Experiments are then given showing the performance of the algorithm on an image classification task. Though the performance is encouraging, it is in many ways counter-intuitive, leading to speculation about why an improvement is seen. To investigate this question, an analysis of its generalisation properties is given in the following two sections, before drawing conclusions.
2 SVM-2K Algorithm

We assume that we are given two views of the same data, one expressed through a feature projection $\phi_A$ with corresponding kernel $\kappa_A$ and the other through a feature projection $\phi_B$ with kernel $\kappa_B$. A paired data set is then given by a set $S = \{(\phi_A(x_1), \phi_B(x_1)), \dots, (\phi_A(x_\ell), \phi_B(x_\ell))\}$, where for example $\phi_A$ could be the feature vector associated with one language and $\phi_B$ that associated with a second language. For a classification task, each data item would also include a label. The KCCA algorithm looks for directions in the two feature spaces such that, when the training data is projected onto those directions, the two vectors (one for each view) of values obtained are maximally correlated. One can also characterise these directions as those that minimise the 2-norm between the two vectors under the constraint that they both have norm 1 [5]. We can think of this as constraining the choice of weight vectors in the two spaces. KCCA would typically find a sequence of projection directions of dimension anywhere between 50 and 500 that can then be used as the feature space for training an SVM [6]. An SVM can be thought of as a 1-dimensional projection followed by thresholding, so SVM-2K combines the two steps by introducing the constraint of similarity between two 1-dimensional projections identifying two distinct SVMs, one in each of the two feature spaces. The extra constraint is chosen slightly differently from the 2-norm that characterises KCCA. We rather take an $\epsilon$-insensitive 1-norm, using slack variables to measure the amount by which points fail to meet $\epsilon$ similarity:
$$|\langle w_A, \phi_A(x_i)\rangle + b_A - \langle w_B, \phi_B(x_i)\rangle - b_B| \le \eta_i + \epsilon,$$
where $w_A, b_A$ ($w_B, b_B$) are the weight and threshold of the first (second) SVM.
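The $\epsilon$-insensitive constraint has a direct operational reading: $\eta_i$ is the amount by which the two views' outputs fail to agree to within $\epsilon$. A minimal sketch, with made-up output values for $f_A$ and $f_B$ (not a solved SVM-2K):

```python
def similarity_slacks(fA_vals, fB_vals, eps):
    """eta_i = max(0, |f_A(x_i) - f_B(x_i)| - eps): the slack needed for
    |<w_A, phi_A(x_i)> + b_A - <w_B, phi_B(x_i)> - b_B| <= eta_i + eps
    to hold at each training point."""
    return [max(0.0, abs(a - b) - eps) for a, b in zip(fA_vals, fB_vals)]

# Illustrative (made-up) outputs of the two view-specific functions on 4 points:
fA = [1.2, -0.8, 0.3, 2.0]
fB = [1.0, -0.9, 0.9, 0.5]
slacks = similarity_slacks(fA, fB, eps=0.25)
print([round(s, 2) for s in slacks])  # -> [0.0, 0.0, 0.35, 1.25]
```

Points whose views already agree to within $\epsilon$ incur zero slack; only disagreements beyond $\epsilon$ are penalised via the $D \sum_i \eta_i$ term in the objective that follows.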
Combining this constraint with the usual 1-norm SVM constraints and allowing different regularisation constants gives the following optimisation:
$$\min L = \frac{1}{2}\|w_A\|^2 + \frac{1}{2}\|w_B\|^2 + C^A \sum_{i=1}^{\ell} \xi_i^A + C^B \sum_{i=1}^{\ell} \xi_i^B + D \sum_{i=1}^{\ell} \eta_i \quad (1)$$
such that
$$|\langle w_A, \phi_A(x_i)\rangle + b_A - \langle w_B, \phi_B(x_i)\rangle - b_B| \le \eta_i + \epsilon,$$
$$y_i\left(\langle w_A, \phi_A(x_i)\rangle + b_A\right) \ge 1 - \xi_i^A,$$
$$y_i\left(\langle w_B, \phi_B(x_i)\rangle + b_B\right) \ge 1 - \xi_i^B,$$
$$\xi_i^A \ge 0, \quad \xi_i^B \ge 0, \quad \eta_i \ge 0,$$
all for $1 \le i \le \ell$. Let $\hat{w}_A, \hat{w}_B, \hat{b}_A, \hat{b}_B$ be the solution to this optimisation problem. The final SVM-2K decision function is then $h(x) = \mathrm{sign}(f(x))$, where
$$f(x) = 0.5\left(\langle \hat{w}_A, \phi_A(x)\rangle + \hat{b}_A + \langle \hat{w}_B, \phi_B(x)\rangle + \hat{b}_B\right) = 0.5\left(f_A(x) + f_B(x)\right).$$
Applying the usual Lagrange multiplier techniques we arrive at the following dual problem:
$$\max W = -\frac{1}{2}\sum_{i,j=1}^{\ell}\left(g_i^A g_j^A \kappa_A(x_i, x_j) + g_i^B g_j^B \kappa_B(x_i, x_j)\right) + \sum_{i=1}^{\ell}\left(\alpha_i^A + \alpha_i^B\right)$$
such that
$$g_i^A = \alpha_i^A y_i - \beta_i^+ + \beta_i^-, \qquad g_i^B = \alpha_i^B y_i + \beta_i^+ - \beta_i^-,$$
$$\sum_{i=1}^{\ell} g_i^A = 0 = \sum_{i=1}^{\ell} g_i^B, \qquad 0 \le \alpha_i^{A/B} \le C^{A/B}, \qquad 0 \le \beta_i^{+/-}, \qquad \beta_i^+ + \beta_i^- \le D,$$
with the functions
$$f_{A/B}(x) = \sum_{i=1}^{\ell} g_i^{A/B} \kappa_{A/B}(x_i, x) + b_{A/B}.$$

3 Experimental results

Figure 1: Typical example images from the PASCAL VOC challenge database. Classes are: Bikes (top-left), People (top-right), Cars (bottom-left) and Motorbikes (bottom-right).

The performance of the algorithms developed in this paper was evaluated on the PASCAL Visual Object Classes (VOC) challenge dataset test1¹. This is a new dataset consisting of four object classes in realistic scenes. The object classes are motorbikes (M), bicycles (B), people (P) and cars (C), with the dataset containing 684 training set images consisting of (214, 114, 84, 272) images in each class and 689 test set images with (216, 114, 84, 275) for each class. As can be seen in Figure 1, this is a very challenging dataset with objects of widely varying type, pose, illumination, occlusion, background, etc. The task is to classify the image according to whether it contains a given object type. We tested the images containing the object (i.e.
categories M, B, C and P) against non-object images from the database (i.e. category N). The training set contained 100 positive and 100 negative images. The tests are carried out on 100 new images, half belonging to the learned class and half not. Like many other successful methods [3, 4], we take a "set-of-patches" approach to this problem. These methods represent an image in terms of the features of a set of small image patches. By carefully choosing the patches and their features, this representation can be made largely robust to the common types of image transformation, e.g. scale, rotation, perspective, occlusion. Two views were provided of each image through the use of different patch types. One was from affine-invariant interest point detectors, with a moment-invariant descriptor calculated for each interest point. The second was key-point features from SIFT detectors. For each image, several hundred characteristic patches were detected, depending on the complexity of the image. These were then clustered around K = 400 centres for each feature space. Each image is then represented as a histogram over these centres. So finally, for one image there are two feature vectors of length 400 that provide the two views.

              Motorbike  Bicycle  People   Car
  SVM 1       94.05      91.58    91.58    87.95
  SVM 2       91.15      91.15    90.57    86.21
  KCCA + SVM  94.19      90.28    90.57    88.68
  SVM-2K      94.34      93.47    92.74    90.13

Table 1: Results for 4 datasets showing test accuracy of the individual SVMs and SVM-2K.

Table 1 shows the test accuracies obtained for the different categories for the individual SVMs and the SVM-2K. There is a clear improvement in performance of the SVM-2K over the two individual SVMs in all four categories. If we examine the structure of the optimisation, the restriction that the output of the two linear functions be similar seems to be an arbitrary restriction, particularly for points that are far from the margin or are misclassified.
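The "histogram over centres" representation described above is straightforward to sketch. The toy version below uses 3 centres instead of the paper's K = 400, and the centres and patch descriptors are illustrative numbers:

```python
def nearest_centre(desc, centres):
    # squared Euclidean distance from one patch descriptor to each centre
    d2 = [sum((a - b) ** 2 for a, b in zip(desc, c)) for c in centres]
    return d2.index(min(d2))

def patch_histogram(descriptors, centres):
    """Represent an image as a normalised histogram of patch-to-centre
    assignments (K = 400 in the paper; a toy K = 3 is used here)."""
    hist = [0.0] * len(centres)
    for d in descriptors:
        hist[nearest_centre(d, centres)] += 1.0
    n = sum(hist)
    return [h / n for h in hist] if n else hist

centres = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]          # illustrative centres
patches = [[0.1, 0.0], [0.9, 0.1], [0.1, 0.9], [0.0, 0.1]]
print(patch_histogram(patches, centres))  # -> [0.5, 0.25, 0.25]
```

In the paper this is done once per feature space, yielding the two length-400 vectors that serve as the two views.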
Intuitively, it would appear better to take advantage of the abilities of the different representations to better fit the data. In order to understand this apparent contradiction, we now consider a theoretical analysis of the generalisation of the SVM-2K using the framework provided by Rademacher complexity bounds.

4 Background theory

We begin with the definitions required for Rademacher complexity; see for example Bartlett and Mendelson [2] (see also [9] for an introductory exposition).

¹Available from http://www.pascal-network.org/challenges/VOC/voc/ 160305 VOCdata.tar.gz

Definition 1. For a sample $S = \{x_1, \dots, x_\ell\}$ generated by a distribution D on a set X and a real-valued function class F with a domain X, the empirical Rademacher complexity of F is the random variable
$$\hat{R}_\ell(F) = E_\sigma\left[\sup_{f \in F} \frac{2}{\ell}\sum_{i=1}^{\ell} \sigma_i f(x_i) \,\middle|\, x_1, \dots, x_\ell\right],$$
where $\sigma = \{\sigma_1, \dots, \sigma_\ell\}$ are independent uniform $\{\pm 1\}$-valued Rademacher random variables. The Rademacher complexity of F is
$$R_\ell(F) = E_S\left[\hat{R}_\ell(F)\right] = E_{S\sigma}\left[\sup_{f \in F} \frac{2}{\ell}\sum_{i=1}^{\ell} \sigma_i f(x_i)\right].$$
We use $E_D$ to denote expectation with respect to a distribution D and $E_S$ when the distribution is the uniform (empirical) distribution on a sample S.

Theorem 1. Fix $\delta \in (0, 1)$ and let F be a class of functions mapping from S to [0, 1]. Let $(x_i)_{i=1}^{\ell}$ be drawn independently according to a probability distribution D. Then with probability at least $1 - \delta$ over random draws of samples of size $\ell$, every $f \in F$ satisfies
$$E_D[f(x)] \le E_S[f(x)] + R_\ell(F) + 3\sqrt{\frac{\ln(2/\delta)}{2\ell}} \le E_S[f(x)] + \hat{R}_\ell(F) + 3\sqrt{\frac{\ln(2/\delta)}{2\ell}}.$$

Given a training set S, the class of functions that we will primarily be considering are linear functions with bounded norm:
$$\left\{x \mapsto \sum_{i=1}^{\ell} \alpha_i \kappa(x_i, x) : \alpha' K \alpha \le B^2\right\} \subseteq \{x \mapsto \langle w, \phi(x)\rangle : \|w\| \le B\} = F_B,$$
where $\phi$ is the feature mapping corresponding to the kernel $\kappa$ and K is the corresponding kernel matrix for the sample S. The following result bounds the Rademacher complexity of linear function classes.

Theorem 2.
[2] If $\kappa : X \times X \to \mathbb{R}$ is a kernel, and $S = \{x_1, \dots, x_\ell\}$ is a sample of points from X, then the empirical Rademacher complexity of the class $F_B$ satisfies
$$\hat{R}_\ell(F_B) \le \frac{2B}{\ell}\sqrt{\sum_{i=1}^{\ell} \kappa(x_i, x_i)} = \frac{2B}{\ell}\sqrt{\mathrm{tr}(K)}.$$

4.1 Analysing SVM-2K

For SVM-2K, the two feature sets from the same objects are $(\phi_A(x_i))_{i=1}^{\ell}$ and $(\phi_B(x_i))_{i=1}^{\ell}$ respectively. We assume the notation and optimisation of SVM-2K given in section 2, equation (1). First observe that an application of Theorem 1 shows that
$$E_D[|f_A(x) - f_B(x)|] \le E_S\left[|\langle \hat{w}_A, \phi_A(x)\rangle + \hat{b}_A - \langle \hat{w}_B, \phi_B(x)\rangle - \hat{b}_B|\right] + \frac{2C}{\ell}\sqrt{\mathrm{tr}(K_A) + \mathrm{tr}(K_B)} + 3\sqrt{\frac{\ln(2/\delta)}{2\ell}} \le \epsilon + \frac{1}{\ell}\sum_{i=1}^{\ell}\eta_i + \frac{2C}{\ell}\sqrt{\mathrm{tr}(K_A) + \mathrm{tr}(K_B)} + 3\sqrt{\frac{\ln(2/\delta)}{2\ell}} =: D$$
with probability at least $1 - \delta$. We have assumed that $\|w_A\|^2 + b_A^2 \le C^2$ and $\|w_B\|^2 + b_B^2 \le C^2$ for some prefixed C. Hence, the class of functions we are considering when applying SVM-2K to this problem can be restricted to
$$F_{C,D} = \left\{ f : x \mapsto 0.5\left(\sum_{i=1}^{\ell} g_i^A \kappa_A(x_i, x) + g_i^B \kappa_B(x_i, x) + b_A + b_B\right) \,:\, g^{A\prime} K_A g^A + b_A^2 \le C^2,\; g^{B\prime} K_B g^B + b_B^2 \le C^2,\; E_D[|f_A(x) - f_B(x)|] \le D \right\}.$$
The class $F_{C,D}$ is clearly closed under negation. Applying the usual Rademacher techniques for margin bounds on generalisation, we obtain the following result.

Theorem 3. Fix $\delta \in (0, 1)$ and let $F_{C,D}$ be the class of functions described above. Let $(x_i)_{i=1}^{\ell}$ be drawn independently according to a probability distribution D. Then with probability at least $1 - \delta$ over random draws of samples of size $\ell$, every $f \in F_{C,D}$ satisfies
$$P_{(x,y)\sim D}\left(\mathrm{sign}(f(x)) \ne y\right) \le \frac{0.5}{\ell}\sum_{i=1}^{\ell}\left(\xi_i^A + \xi_i^B\right) + \hat{R}_\ell(F_{C,D}) + 3\sqrt{\frac{\ln(2/\delta)}{2\ell}}.$$
It therefore remains to compute the empirical Rademacher complexity of $F_{C,D}$, which is the critical discriminator between the bounds for the individual SVMs and that of the SVM-2K.

4.2 Empirical Rademacher complexity of $F_{C,D}$

We now define an auxiliary function of two weight vectors $w_A$ and $w_B$,
$$D(w_A, w_B) := E_D\left[|\langle w_A, \phi_A(x)\rangle + b_A - \langle w_B, \phi_B(x)\rangle - b_B|\right].$$
With this notation we can consider computing the Rademacher complexity of the class $F_{C,D}$.
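As an aside, the Theorem 2 bound used in defining D above depends on the data only through the trace of the kernel matrix, so it is trivial to evaluate. A small sketch (the Gram matrix here is an illustrative normalised kernel, for which tr(K) = ℓ and the bound reduces to 2B/√ℓ):

```python
import math

def rademacher_trace_bound(K, B):
    """Theorem 2: hat{R}_l(F_B) <= (2B / l) * sqrt(tr(K)), where K is the
    l x l kernel (Gram) matrix of the sample and B bounds the weight norm."""
    ell = len(K)
    trace = sum(K[i][i] for i in range(ell))
    return 2.0 * B / ell * math.sqrt(trace)

# For a normalised kernel (kappa(x, x) = 1, e.g. an RBF kernel), tr(K) = l,
# so the bound decays as 2B / sqrt(l):
ell, B = 100, 1.0
K = [[1.0 if i == j else 0.0 for j in range(ell)] for i in range(ell)]
print(rademacher_trace_bound(K, B))  # -> 0.2
```

The same trace quantities, tr(K_A) + tr(K_B), are what enter the definition of D above.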
$$\hat{R}_\ell(F_{C,D}) = E_\sigma\left[\sup_{f \in F_{C,D}} \frac{2}{\ell}\sum_{i=1}^{\ell}\sigma_i f(x_i)\right] = E_\sigma\left[\sup_{\substack{\|w_A\| \le C,\, \|w_B\| \le C \\ D(w_A, w_B) \le D}} \frac{1}{\ell}\sum_{i=1}^{\ell}\sigma_i\left[\langle w_A, \phi_A(x_i)\rangle + b_A + \langle w_B, \phi_B(x_i)\rangle + b_B\right]\right].$$
Our next observation follows from a reversed version of the basic Rademacher complexity theorem, reworked to reverse the roles of the empirical and true expectations:

Theorem 4. Fix $\delta \in (0, 1)$ and let F be a class of functions mapping from S to [0, 1]. Let $(x_i)_{i=1}^{\ell}$ be drawn independently according to a probability distribution D. Then with probability at least $1 - \delta$ over random draws of samples of size $\ell$, every $f \in F$ satisfies
$$E_S[f(x)] \le E_D[f(x)] + R_\ell(F) + 3\sqrt{\frac{\ln(2/\delta)}{2\ell}} \le E_D[f(x)] + \hat{R}_\ell(F) + 3\sqrt{\frac{\ln(2/\delta)}{2\ell}}.$$
The proof tracks that of Theorem 1 but is omitted through lack of space. For weight vectors $w_A$ and $w_B$ satisfying $D(w_A, w_B) \le D$, an application of Theorem 4 shows that with probability at least $1 - \delta$ we have
$$\hat{D}(w_A, w_B) := E_S\left[|\langle w_A, \phi_A(x)\rangle + b_A - \langle w_B, \phi_B(x)\rangle - b_B|\right] \le D + \frac{2C}{\ell}\sqrt{\mathrm{tr}(K_A) + \mathrm{tr}(K_B)} + 3\sqrt{\frac{\ln(2/\delta)}{2\ell}} \le \epsilon + \frac{1}{\ell}\sum_{i=1}^{\ell}\eta_i + \frac{4C}{\ell}\sqrt{\mathrm{tr}(K_A) + \mathrm{tr}(K_B)} + 6\sqrt{\frac{\ln(2/\delta)}{2\ell}} =: \hat{D}.$$
We now return to bounding the Rademacher complexity of $F_{C,D}$. The above result shows that with probability greater than $1 - \delta$,
$$\hat{R}_\ell(F_{C,D}) \le E_\sigma\left[\sup_{\substack{\|w_A\| \le C,\, \|w_B\| \le C \\ \hat{D}(w_A, w_B) \le \hat{D}}} \frac{1}{\ell}\sum_{i=1}^{\ell}\sigma_i\left[\langle w_A, \phi_A(x_i)\rangle + b_A + \langle w_B, \phi_B(x_i)\rangle + b_B\right]\right].$$
First note that the expression in square brackets is concentrated under the uniform distribution of Rademacher variables. Hence, we can estimate the complexity for a fixed instantiation $\hat{\sigma}$ of the Rademacher variables $\sigma$. We must now find the values of $w_A$ and $w_B$ that maximise the expression
$$\frac{1}{\ell}\left[\left\langle w_A, \sum_{i=1}^{\ell}\hat{\sigma}_i\phi_A(x_i)\right\rangle + b_A\sum_{i=1}^{\ell}\hat{\sigma}_i + \left\langle w_B, \sum_{i=1}^{\ell}\hat{\sigma}_i\phi_B(x_i)\right\rangle + b_B\sum_{i=1}^{\ell}\hat{\sigma}_i\right] = \frac{1}{\ell}\left(\hat{\sigma}' K_A g^A + \hat{\sigma}' K_B g^B + (b_A + b_B)\hat{\sigma}'\mathbf{1}\right)$$
subject to the constraints
$$g^{A\prime} K_A g^A \le C^2, \qquad g^{B\prime} K_B g^B \le C^2, \qquad \frac{1}{\ell}\mathbf{1}'\,\mathrm{abs}\left(K_A g^A - K_B g^B + (b_A - b_B)\mathbf{1}\right) \le \hat{D},$$
where $\mathbf{1}$ is the all-ones vector and $\mathrm{abs}(u)$ is the vector obtained by applying the abs function to u component-wise. The resulting value of the objective function is the estimate of the Rademacher complexity.
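The quantity being estimated, $E_\sigma[\sup_f \frac{2}{\ell}\sum_i \sigma_i f(x_i)]$, can be approximated for a small finite function class by straightforward Monte Carlo over the Rademacher variables, in the same spirit as the sampling used in the experiments. A toy sketch (the two-function class is a made-up stand-in; no kernel constraint set is solved here):

```python
import random

def empirical_rademacher(F_values, n_samples=2000, seed=0):
    """Monte Carlo estimate of hat{R}_l(F) = E_sigma[ sup_f (2/l) sum_i sigma_i f(x_i) ]
    for a small finite class; F_values[m] holds (f_m(x_1), ..., f_m(x_l))."""
    rng = random.Random(seed)
    ell = len(F_values[0])
    total = 0.0
    for _ in range(n_samples):
        sigma = [rng.choice((-1, 1)) for _ in range(ell)]
        total += max(2.0 / ell * sum(s * v for s, v in zip(sigma, f))
                     for f in F_values)
    return total / n_samples

# Toy class of two functions on l = 4 points: the zero function and the
# all-ones function (the exact value of the expectation here is 0.375).
F = [(0.0, 0.0, 0.0, 0.0), (1.0, 1.0, 1.0, 1.0)]
print(empirical_rademacher(F))
```

For the kernel classes above, each inner sup is of course a constrained optimisation rather than a max over a finite list, but the outer sampling over $\hat{\sigma}$ is the same.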
This is the optimisation solved in the brief experiments described below.

4.3 Experiments with Rademacher complexity

We computed the Rademacher complexity for the problems considered in the experimental section above. We wished to verify that the Rademacher complexity of the space $F_{C,D}$, where C and D are determined by applying the SVM-2K, is indeed significantly lower than that obtained for the SVMs in each space individually.

           Motorbike  Bicycle  People   Car
  SVM 1    94.05      91.58    91.58    87.95
  Rad 1     1.65       0.93     0.91     1.60
  SVM 2    91.15      91.15    90.57    86.21
  Rad 2     1.72       1.48     0.87     1.64
  SVM-2K   94.34      93.47    92.74    90.13
  Rad 2K    1.26       1.28     0.82     1.26

Table 2: Results for 4 datasets showing test accuracy and Rademacher complexity (Rad) of the individual SVMs and SVM-2K.

Table 2 shows the results for the motorbike, bicycle, people and car datasets. We show the Rademacher complexities for the individual SVMs and for the SVM-2K, along with the generalisation results already given in Table 1. In the case of SVM-2K we sampled the Rademacher variables 10 times and give the corresponding standard deviation. As predicted, the Rademacher complexity is significantly smaller for SVM-2K, hence confirming the intuition that led to the introduction of the approach, namely that the complexity of the class is reduced by restricting the weight vectors to align on the training data. Provided both representations contain the necessary data, we can therefore expect an improvement in generalisation, as observed in the reported experiments.

5 Conclusions

With the plethora of data now being collected in a wide range of fields, there is frequently the luxury of having two views of the same phenomenon. The simplest example is paired corpora of documents in different languages, but equally we can think of examples from bioinformatics, machine vision, etc. Frequently it is also reasonable to assume that both views contain all of the relevant information required for a classification task.
We have demonstrated that in such cases it can be possible to leverage the correlation between the two views to improve classification accuracy. This has been demonstrated in experiments with a machine vision task. Furthermore, we have undertaken a theoretical analysis to illuminate the source and extent of the advantage that can be obtained, showing in the cases considered a significant reduction in the Rademacher complexity of the corresponding function classes.

References
[1] Francis R. Bach and Michael I. Jordan. Kernel independent component analysis. Journal of Machine Learning Research, 3:1–48, 2002.
[2] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
[3] G. Csurka, C. Bray, C. Dance, and L. Fan. Visual categorization with bags of keypoints. In XRCE Research Reports, XEROX. The 8th European Conference on Computer Vision - ECCV, Prague, 2004.
[4] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2003.
[5] David Hardoon, Sandor Szedmak, and John Shawe-Taylor. Canonical correlation analysis: an overview with application to learning methods. Neural Computation, 16:2639–2664, 2004.
[6] Yaoyong Li and John Shawe-Taylor. Using KCCA for Japanese-English cross-language information retrieval and classification. To appear in Journal of Intelligent Information Systems, 2005.
[7] S. Mika, B. Schölkopf, A. Smola, K.-R. Müller, M. Scholz, and G. Rätsch. Kernel PCA and de-noising in feature spaces. In Advances in Neural Information Processing Systems 11, 1998.
[8] R. Rosipal and L. J. Trejo. Kernel partial least squares regression in reproducing kernel Hilbert space. Journal of Machine Learning Research, 2:97–123, 2001.
[9] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis.
Cambridge University Press, Cambridge, UK, 2004.
|
2005
|
61
|
2,879
|
Extracting Dynamical Structure Embedded in Neural Activity
Byron M. Yu1, Afsheen Afshar1,2, Gopal Santhanam1, Stephen I. Ryu1,3, Krishna V. Shenoy1,4
1Department of Electrical Engineering, 2School of Medicine, 3Department of Neurosurgery, 4Neurosciences Program, Stanford University, Stanford, CA 94305 {byronyu,afsheen,gopals,seoulman,shenoy}@stanford.edu
Maneesh Sahani, Gatsby Computational Neuroscience Unit, UCL, London, WC1N 3AR, UK maneesh@gatsby.ucl.ac.uk

Abstract

Spiking activity from neurophysiological experiments often exhibits dynamics beyond that driven by external stimulation, presumably reflecting the extensive recurrence of neural circuitry. Characterizing these dynamics may reveal important features of neural computation, particularly during internally-driven cognitive operations. For example, the activity of premotor cortex (PMd) neurons during an instructed delay period separating movement-target specification and a movement-initiation cue is believed to be involved in motor planning. We show that the dynamics underlying this activity can be captured by a low-dimensional non-linear dynamical systems model, with underlying recurrent structure and stochastic point-process output. We present and validate latent variable methods that simultaneously estimate the system parameters and the trial-by-trial dynamical trajectories. These methods are applied to characterize the dynamics in PMd data recorded from a chronically-implanted 96-electrode array while monkeys perform delayed-reach tasks.

1 Introduction

At present, the best view of the activity of a neural circuit is provided by multiple-electrode extracellular recording technologies, which allow us to simultaneously measure spike trains from up to a few hundred neurons in one or more brain areas during each trial. While the resulting data provide an extensive picture of neural spiking, their use in characterizing the fine-timescale dynamics of a neural circuit is complicated by at least two factors.
First, extracellularly captured action potentials provide only an occasional view of the process from which they are generated, forcing us to interpolate the evolution of the circuit between the spikes. Second, the circuit activity may evolve quite differently on different trials that are otherwise experimentally identical. The usual approach to handling both problems is to average responses from different trials, and study the evolution of the peri-stimulus time histogram (PSTH). There is little alternative to this approach when recordings are made one neuron at a time, even when the dynamics of the system are the subject of study. Unfortunately, such averaging can obscure important internal features of the response. In many experiments, stimulus events provide the trigger for activity, but the resulting time-course of the response is internally regulated and may not be identical on each trial. This is especially important during cognitive processing such as decision making or motor planning. In this case, the PSTH may not reflect the true trial-by-trial dynamics. For example, a sharp change in firing rate that occurs with varying latency might appear as a slow smooth transition in the average response. An alternative approach is to adopt latent variable methods and to identify a hidden dynamical system that can summarize and explain the simultaneously-recorded spike trains. The central idea is that the responses of different neurons reflect different views of a common dynamical process in the network, whose effective dimensionality is much smaller than the total number of neurons in the network. While the underlying state trajectory may be slightly different on each trial, the commonalities among these trajectories can be captured by the network’s parameters, which are shared across trials. These parameters define how the network evolves over time, as well as how the observed spike trains relate to the network’s state at each time point. 
Dimensionality reduction in a latent dynamical model is crucial and yields benefits beyond simple noise elimination. Some of these benefits can be illustrated by a simple physical example. Consider a set of noisy video sequences of a bouncing ball. The trajectory of the ball may not be identical in each sequence, and so simply averaging the sequences together would provide little information about the dynamics. Independently smoothing the dynamics of each pixel might identify a dynamical process; however, correctly rejecting noise might be difficult, and in any case this would yield an inefficient and opaque representation of the underlying physical process. By contrast, a hidden dynamical system account could capture the video sequence data using a low-dimensional latent variable that represented only the ball’s position and momentum over time, with dynamical rules that captured the physics of ballistics and elastic collision. This representation would exploit shared information from all pixels, vastly simplifying the problem of noise rejection, and would provide a scientifically useful depiction of the process. The example also serves to illustrate the two broad benefits of this type of model. The first is to obtain a low dimensional summary of the dynamical trajectory in any one trial. Besides the obvious benefits of denoising, such a trajectory can provide an invaluable representation for prediction of associated phenomena. In the video sequence example, predicting the loudness of the sound on impact might be easy given the estimate of the ball’s trajectory (and thus its speed), but would be difficult from the raw pixel trajectories, even if denoised. In the neural case, behavioral variables such as reaction time might similarly be most easily predicted from the reconstructed trajectory. The second broad goal is systems identification: learning the rules that govern the dynamics. 
In the video example this would involve discovery of various laws of physics, as well as parameters describing the ball such as its coefficient of elasticity. In the neural case this would involve identifying the structure of dynamics available to the circuit: the number and relationship of attractors, the appearance of oscillatory limit cycles, and so on. The use of latent variable models with hidden dynamics for neural data has, thus far, been limited. In [1], [2], small groups of neurons in the frontal cortex were modeled using hidden Markov models, in which the latent dynamical system is assumed to transition between a set of discrete states. In [3], a state space model with linear hidden dynamics and point-process outputs was applied to simulated data. However, these restricted latent models cannot capture the richness of dynamics that recurrent networks exhibit. In particular, systems that converge toward point or line attractors, exhibit limit cycle oscillations, or even transition into chaotic regimes have long been of interest in neural modeling. If such systems are relevant to real neural data, we must seek to identify hidden models capable of reflecting this range of behaviors. In this work, we consider a latent variable model having (1) hidden underlying recurrent structure with continuous-valued states, and (2) Poisson-distributed output spike counts (conditioned on the state), as described in Section 2. Inference and learning for this non-linear model are detailed in Section 3. The methods developed are applied to a delayed-reach task described in Section 4. Evidence of motor preparation in PMd is given in Section 5. In Section 6, we characterize the neural dynamics of motor preparation on a trial-by-trial basis.
2 Hidden non-linear dynamical system

A useful dynamical system model capable of expressing the rich behavior expected of neural systems is the recurrent neural network (RNN) with Gaussian perturbations
$$x_t \mid x_{t-1} \sim \mathcal{N}\left(\psi(x_{t-1}), Q\right) \quad (1)$$
$$\psi(x) = (1 - k)x + kWg(x), \quad (2)$$
where $x_t \in \mathbb{R}^{p \times 1}$ is the vector of the node values in the recurrent network at time $t \in \{1, \dots, T\}$, $W \in \mathbb{R}^{p \times p}$ is the connection weight matrix, g is a non-linear activation function which acts element-by-element on its vector argument, $k \in \mathbb{R}$ is a parameter related to the time constant of the network, and $Q \in \mathbb{R}^{p \times p}$ is a covariance matrix. The initial state is Gaussian-distributed,
$$x_0 \sim \mathcal{N}(p_0, V_0), \quad (3)$$
where $p_0 \in \mathbb{R}^{p \times 1}$ and $V_0 \in \mathbb{R}^{p \times p}$ are the mean vector and covariance matrix, respectively. Models of this class have long been used, albeit generally without stochastic perturbation, to describe the dynamics of neuronal responses (e.g., [4]). In this classical view, each node of the network represents a neuron or a column of neurons. Our use is more abstract. The RNN is chosen for the range of dynamics it can exhibit, including convergence to point or surface attractors, oscillatory limit cycles, or chaotic evolution; but each node is simply an abstract dimension of latent space which may couple to many or all of the observed neurons. The output distribution is given by a generalized linear model that describes the relationship between all nodes in the state $x_t$ and the spike count $y_t^i \in \mathbb{R}$ of neuron $i \in \{1, \dots, q\}$ in the tth time bin,
$$y_t^i \mid x_t \sim \mathrm{Poisson}\left(h\left(c_i \cdot x_t + d_i\right)\Delta\right), \quad (4)$$
where $c_i \in \mathbb{R}^{p \times 1}$ and $d_i \in \mathbb{R}$ are constants, h is a link function mapping $\mathbb{R} \to \mathbb{R}_+$, and $\Delta \in \mathbb{R}$ is the time bin width. We collect the spike counts from all q simultaneously-recorded physical neurons into a vector $y_t \in \mathbb{R}^{q \times 1}$, whose ith element is $y_t^i$. The choice of the link functions g and h is discussed in Section 3.
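The generative model of equations (1)-(4) is easy to simulate forward. A pure-Python sketch of one trial, using the error-function activation and the log(1 + e^z) link introduced in Section 3; W, C, d and the noise scale are illustrative values, Q is taken diagonal, and x_0 = 0 for simplicity:

```python
import math, random

def g(z):
    # error-function activation, eq. (6) in the text
    return math.erf(z)

def h(z):
    # soft-rectifying link h(z) = log(1 + e^z), eq. (7) in the text
    return math.log1p(math.exp(z))

def poisson(lam, rng):
    # Knuth's method; fine for the small rates used in this sketch
    L, n, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return n
        n += 1

def simulate(W, C, d, k, q_sd, delta, T, rng):
    """One trial: x_t = (1 - k) x_{t-1} + k W g(x_{t-1}) + noise,
    y_t^i ~ Poisson( h(c_i . x_t + d_i) * delta )."""
    p = len(W)
    x = [0.0] * p
    X, Y = [], []
    for _ in range(T):
        gx = [g(xj) for xj in x]
        x = [(1 - k) * x[i] + k * sum(W[i][j] * gx[j] for j in range(p))
             + rng.gauss(0.0, q_sd) for i in range(p)]
        X.append(x)
        Y.append([poisson(h(sum(ci[j] * x[j] for j in range(p)) + di) * delta, rng)
                  for ci, di in zip(C, d)])
    return X, Y

rng = random.Random(1)
W = [[0.5, -0.3], [0.3, 0.5]]                 # illustrative 2-d latent dynamics
C = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]      # three observed "neurons"
d = [0.0, 0.0, 0.5]
X, Y = simulate(W, C, d, k=0.2, q_sd=0.1, delta=0.05, T=50, rng=rng)
print(len(X), len(Y[0]))  # 50 time bins, 3 neurons per bin
```

The learning problem described next is the inverse of this sketch: given only sequences like Y, recover both the trajectories X and the parameters.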
3 Inference and Learning

The Expectation-Maximization (EM) algorithm [5] was used to iteratively (1) infer the underlying hidden state trajectories (i.e., recover a distribution over the hidden sequence $\{x\}_1^T$ corresponding to the observations $\{y\}_1^T$), and (2) learn the model parameters (i.e., estimate $\theta = \{W, Q, k, p_0, V_0, \{c_i\}, \{d_i\}\}$), given only a set of observation sequences. Inference (the E-step) involves computing or approximating $P\left(\{x\}_1^T \mid \{y\}_1^T, \theta_k\right)$ for each sequence, where $\theta_k$ are the parameter estimates at the $k$th EM iteration. A variant of the Extended Kalman Smoother (EKS) was used to approximate these joint smoothed state posteriors. As in the EKS, the non-linear time-invariant state system (1)-(2) was transformed into a linear time-variant system using local linearization. The difference from EKS arises in the measurement update step of the forward pass

$$P\left(x_t \mid \{y\}_1^t\right) \propto P(y_t \mid x_t)\, P\left(x_t \mid \{y\}_1^{t-1}\right). \quad (5)$$

Because $P(y_t \mid x_t)$ is a product of Poissons rather than a Gaussian, the filtered state posterior $P(x_t \mid \{y\}_1^t)$ cannot be easily computed. Instead, as in [3], we approximated this posterior with a Gaussian centered at the mode of $\log P(x_t \mid \{y\}_1^t)$ and whose covariance is given by the negative inverse Hessian of the log posterior at that mode. Certain choices of $h$, including $e^z$ and $\log(1 + e^z)$, lead to a log posterior that is strictly concave in $x_t$. In these cases, the unique mode can easily be found by Newton's method. Learning (the M-step) requires finding the $\theta$ that maximizes $E\left[\log P\left(\{x\}_1^T, \{y\}_1^T \mid \theta\right)\right]$, where the expectation is taken over the posterior state distributions found in the E-step. Note that, for multiple sequences that are independent conditioned on $\theta$, we use the sum of expectations over all sequences. Because the posterior state distributions are approximated as Gaussians in the E-step, the above expectation is a Gaussian integral that involves non-linear functions $g$ and $h$ and cannot be computed analytically in general.
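The Gaussian (Laplace-style) measurement update described for the E-step can be sketched as follows. For simplicity the sketch uses the analytically convenient link $h(z) = e^z$, one of the concave choices noted above; the function name, damping scheme, and small example are ours, not the paper's.

```python
import numpy as np

def laplace_update(m, P, y, C, d, delta=0.02, iters=50):
    """Gaussian approximation to the filtered posterior P(x_t | y_{1:t}):
    damped Newton's method climbs the strictly concave log posterior to its
    unique mode; the covariance is the negative inverse Hessian there.
    (m, P) is the Gaussian prediction from the time update; y holds counts."""
    Pinv = np.linalg.inv(P)

    def logpost(x):
        z = C @ x + d
        return y @ z - delta * np.exp(z).sum() - 0.5 * (x - m) @ Pinv @ (x - m)

    x = m.astype(float).copy()
    for _ in range(iters):
        lam = np.exp(C @ x + d)                        # rates h(c_i . x_t + d_i)
        grad = C.T @ (y - delta * lam) - Pinv @ (x - m)
        hess = -C.T @ ((delta * lam)[:, None] * C) - Pinv
        step = -np.linalg.solve(hess, grad)            # Newton ascent direction
        t = 1.0
        while logpost(x + t * step) < logpost(x):      # backtrack to guarantee ascent
            t *= 0.5
            if t < 1e-10:
                break
        x = x + t * step
    lam = np.exp(C @ x + d)
    hess = -C.T @ ((delta * lam)[:, None] * C) - Pinv
    return x, np.linalg.inv(-hess)                     # mode, Laplace covariance

# Tiny check on made-up numbers: the returned mode should zero the gradient.
rng = np.random.default_rng(1)
C = rng.normal(size=(3, 2)); d = np.zeros(3)
m = np.zeros(2); P = np.eye(2); y = np.array([1.0, 0.0, 2.0])
mode, cov = laplace_update(m, P, y, C, d)
grad_norm = np.linalg.norm(C.T @ (y - 0.02 * np.exp(C @ mode + d)) - (mode - m))
```

Because the prior term contributes $-P^{-1}$ everywhere, the Hessian is negative definite and each Newton step is well-posed.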
Fortunately, this high-dimensional integral can be reduced to many one-dimensional Gaussian integrals, which can be accurately and reasonably efficiently approximated using Gaussian quadrature [6], [7]. We found that setting $g$ to be the error function

$$g(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2}\, dt \quad (6)$$

made many of the one-dimensional Gaussian integrals involving $g$ analytically tractable. Those that were not analytically tractable were approximated using Gaussian quadrature. The error function is one of a family of sigmoid activation functions that yield similar behavior in an RNN. If $h$ were chosen to be a simple exponential, all the Gaussian integrals involving $h$ could be computed exactly. Unfortunately, this exponential mapping would distort the relationship between perturbations in the latent state (whose size is set by the covariance matrix $Q$) and the resulting fluctuations in firing rates. In particular, the size of firing-rate fluctuations would grow exponentially with the mean, an effect that would then add to the usual linear increase in spike-count variance that comes from the Poisson output distribution. Since neural firing does not show such a severe scaling in variability, such a model would fit poorly. Therefore, to maintain more even firing-rate fluctuations, we instead take

$$h(z) = \log(1 + e^z). \quad (7)$$

The corresponding Gaussian integrals must then be approximated by quadrature methods. Regardless of the forms of $g$ and $h$ chosen, numerical Newton methods are needed for maximization with respect to $\{c_i\}$ and $\{d_i\}$. The main drawback of these various approximations is that the overall observation likelihood is no longer guaranteed to increase after each EM iteration. However, in our simulations, we found that sensible results were often produced. As long as the variances of the posterior state distribution did not diverge, the output distributions described by the learned model closely approximated those of the actual model that generated the simulated data.
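The one-dimensional Gaussian integrals above can be handled by Gauss–Hermite quadrature; a minimal sketch (the helper name is ours; NumPy's `hermgauss` supplies the nodes and weights):

```python
import numpy as np

def gauss_expect(h, mu, sigma, n=40):
    """Approximate E[h(z)] for z ~ N(mu, sigma^2) with n-point Gauss-Hermite
    quadrature: E[h(z)] = (1/sqrt(pi)) * sum_i w_i * h(mu + sqrt(2)*sigma*t_i)."""
    t, w = np.polynomial.hermite.hermgauss(n)
    return float((w * h(mu + np.sqrt(2.0) * sigma * t)).sum() / np.sqrt(np.pi))

softplus = lambda z: np.logaddexp(0.0, z)             # h(z) = log(1 + e^z), eq. (7)

e_sq = gauss_expect(lambda z: z ** 2, 1.0, 2.0)       # known: mu^2 + sigma^2 = 5
e_exp = gauss_expect(np.exp, 0.0, 1.0)                # known: e^{1/2} ≈ 1.6487
e_sp = gauss_expect(softplus, 0.0, 1.0)               # no closed form; quadrature needed
```

The first two cases have known answers and serve as sanity checks; the softplus case is exactly the kind of integral that forces the quadrature approximation in the M-step.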
4 Task and recordings

We trained a rhesus macaque monkey to perform delayed center-out reaches to visual targets presented on a fronto-parallel screen. On a given trial, the peripheral target was presented at one of eight radial locations (30, 70, 110, 150, 190, 230, 310, 350°) 10 cm away, as shown in Figure 1. After a pseudo-randomly chosen delay period of 200, 750, or 1000 ms, the target increased in size as the go cue and the monkey reached to the target. A 96-channel silicon electrode array (Cyberkinetics, Inc.) was implanted straddling PMd and motor cortex (M1). Spike sorting was performed offline to isolate 22 single-neuron and 109 multi-neuron units.

Figure 1: Delayed reach task and average action potential (spike) emission rate from one representative unit. Activity is arranged by target location. Vertical dashed lines indicate peripheral reach target onset (left) and movement onset (right).

5 Motor preparation in PMd

Motor preparation is often studied using the "instructed delay" behavioral paradigm, as described in Section 4, where a variable-length "planning" period temporally separates an instruction stimulus from a go cue [8]–[13]. Longer delay periods typically lead to shorter reaction times (RT, defined as time between go cue and movement onset), and this has been interpreted as evidence for a motor preparation process that takes time [11], [12], [14], [15]. In this view, the delay period allows for motor preparation to complete prior to the go cue, thus shortening the RT. Evidence for motor preparation at the neural level is taken from PMd (and, to a lesser degree, M1), where neurons show sustained activity during the delay period (Figure 1, delay activity) [8]–[10]. A number of findings support the hypothesis that such activity is related to motor preparation.
First, delay period activity typically shows tuning for the instruction (i.e., location of reach target; note that the PMd neuron in Figure 1 has greater delay activity before leftward than before rightward reaches), consistent with the idea that something specific is being prepared [8], [9], [11], [13]. Second, in the absence of a delay period, a brief burst of similarly-tuned activity is observed during the RT interval, consistent with the idea that motor preparation is taking place at that time [12]. Third, we have recently reported that firing rates across trials to the same reach target become more consistent as the delay period progresses [16]. The variance of firing rate, measured across trials, divided by mean firing rate (similar to the Fano factor) was computed for each unit and each time point. Averaged across 14 single- and 33 multi-neuron units, we found that this Normalized Variance (NV) declined 24% (t-test, p <10−10) from 200 ms before target onset to the median time of the go cue. This decline spanned ∼119 ms just after target onset and appears to, at least roughly, track the time-course of motor preparation. The NV may be interpreted as a signature of the approximate degree of motor preparation yet to be accomplished. Shortly after target onset, firing rates are frequently far from their mean. If the go cue arrives then, it will take time to correct these “errors” and RTs will therefore be longer. By the time the NV has completed its decline, firing rates are consistently near their mean (which we presume is near an “optimal” configuration for the impending reach), and RTs will be shorter if the go cue arrives then. This interpretation assumes that there is a limit on how quickly firing rates can converge to their ideal values (a limit on how quickly the NV can drop) such that a decline during the delay period saves time later. The NV was found to be lower at the time of the go cue for trials with shorter RTs than those with longer RTs [16]. 
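The Normalized Variance computation described above can be sketched in a few lines. The array layout (trials × units × time bins) and function name are ours, not the paper's, and the Poisson check is purely illustrative:

```python
import numpy as np

def normalized_variance(counts):
    """NV time course from spike counts of shape (trials, units, time_bins):
    per-unit across-trial variance of firing divided by the mean firing rate
    (a Fano-factor-like quantity), then averaged across units."""
    mean = counts.mean(axis=0)                        # (units, time_bins)
    var = counts.var(axis=0, ddof=1)                  # across-trial variance
    safe_mean = np.where(mean > 0, mean, np.nan)      # silent units contribute NaN
    return np.nanmean(var / safe_mean, axis=0)        # average across units

# Sanity check on synthetic data: Poisson counts give NV close to 1, so a
# measured decline below 1 reflects genuine across-trial convergence.
rng = np.random.default_rng(0)
counts = rng.poisson(5.0, size=(500, 30, 20)).astype(float)
nv = normalized_variance(counts)
```

Against stationary Poisson counts the NV stays near 1; the reported ~24% decline during the delay period is therefore a departure from what rate-matched independent trials would produce.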
The above data strongly suggest that the network underlying motor preparation exhibits rich dynamics. Activity is initially variable across trials, but appears to settle during the delay period. Because the RNN (1)-(2) is capable of exhibiting such dynamics and may underlie motor preparation, we sought to identify such a dynamical system in delay activity.

6 Results and discussion

The NV reveals an average process of settling by measuring the convergence of firing across different trials. However, it provides little insight into the course of motor planning on a single trial. A gradual fall in trial-to-trial variance might reflect a gradual convergence on each trial, or might reflect rapid transitions that occur at different times on different trials. Similarly, all the NV tells us about the dynamic properties of the underlying network is the basic fact of convergence from uncontrolled initial conditions to a consistent premovement preparatory state. The structure of any underlying attractors and corresponding basins of attraction is unobserved. Furthermore, the NV is first computed per-unit and averaged across units, thus ignoring any structure that may be present in the correlated firing of units on a given trial. The methods presented here are well-suited to extending the characterization of this settling process. We fit the dynamical system model (1)–(4) with three latent dimensions (p = 3) to training data, consisting of delay activity preceding 70 reaches to the same target (30°). Spike counts were taken in non-overlapping Δ = 20 ms bins at 20 ms time steps from 50 ms after target onset to 50 ms after the go cue. Then, the fitted model parameters were used to infer the latent space trajectories for 146 test trials, which are plotted in Figure 2. Despite the trial-to-trial variability in the delay period neural responses, the state evolves along a characteristic path on each trial.
It could have been that the neural variability across trials would cause the state trajectory to evolve in markedly different ways on different trials. Even with the characteristic structure, the state trajectories are not all identical, however. This presumably reflects the fact that the motor planning process is internally regulated, and its timecourse may differ from trial to trial, even when the presented stimulus (in this case, the reach target) is identical. How these timecourses differ from trial to trial would have been obscured had we combined the neural data across trials, as with the NV in Section 5. Is this low-dimensional description of the system dynamics adequate to describe the firing of all 131 recorded units? We transformed the inferred latent trajectories into trial-by-trial inhomogeneous firing rates using the output relationship from (4)

$$\lambda_t^i = h\left( c_i \cdot x_t + d_i \right), \quad (8)$$

where $\lambda_t^i$ is the imputed firing rate of the $i$th unit at the $t$th time bin.

Figure 2: Inferred modal state trajectories in latent (x) space for 146 test trials. Dots indicate 50 ms after target onset (blue) and 50 ms after the go cue (green). The radius of the green dots is logarithmically-related to delay period length (200, 750, or 1000 ms).

Figure 3 shows the imputed firing rates for 15 representative units overlaid with empirical firing rates obtained by directly averaging raw spike counts across the same test trials. If the imputed firing rates truly reflect the rate functions underlying the observed spikes, then the mean behavior of the imputed firing rates should track the empirical firing rates. On the other hand, if the latent system were inadequate to describe the activity, we should expect to see dynamical features in the empirical firing that could not be captured by the imputed firing rates.
The strong agreement observed in Figure 3 and across all 131 units suggests that this simple dynamical system is indeed capable of capturing significant components of the dynamics of this neural circuit. We can view the dynamical system approach adopted in this work as a form of non-linear dynamical embedding of point-process data. This is in contrast to most current embedding algorithms that rely on continuous data. Figure 2 effectively represents a three-dimensional manifold in the space of firing rates along which the dynamics unfold. Beyond the agreement of imputed means demonstrated by Figure 3, we would like to directly test the fit of the model to the neural spike data. Unfortunately, current goodness-of-fit methods for spike trains, such as those based on time-rescaling [17], cannot be applied directly to latent variable models. The difficulty arises because the average trajectory obtained from marginalizing over the latent variables in the system (by which we might hope to rescale the inter-spike intervals) is not designed to provide an accurate estimate of the trial-by-trial firing rate functions. Instead, each trial must be described by a distinct trajectory in latent space, which can only be inferred after observing the spike trains themselves. This could lead to overfitting. We are currently exploring extensions to the standard methods which infer latent trajectories using a subset of recorded neurons, and then test the quality of firing-rate predictions for the remaining neurons. In addition, we plan to compare models of different latent dimensionalities; here, the latent space was arbitrarily chosen to be three-dimensional. To validate the learned latent space and inferred trajectories, we would also like to relate these results to trial-by-trial behavior. In particular, given the evidence from Section 5, how "settled" the activity is at the time of the go cue should be predictive of RT.
Figure 3: Imputed trial-by-trial firing rates (blue) and empirical firing rates (red). Gray vertical line indicates the time of the go cue. Each panel corresponds to one unit. For clarity, only test trials with delay periods of 1000 ms (44 trials) are plotted for each unit.

Acknowledgments

This work was supported by NIH-NINDS-CRCNS-R01, NSF, NDSEGF, Gatsby, MSTP, CRPF, BWF, ONR, Sloan, and Whitaker. We would like to thank Dr. Mark Churchland for valuable discussions and Missy Howard for expert surgical assistance and veterinary care.

References

[1] M. Abeles, H. Bergman, I. Gat, I. Meilijson, E. Seidemann, N. Tishby, and E. Vaadia. Proc Natl Acad Sci USA, 92:8616–8620, 1995.
[2] I. Gat, N. Tishby, and M. Abeles. Network, 8(3):297–322, 1997.
[3] A. Smith and E. Brown. Neural Comput, 15(5):965–991, 2003.
[4] S. Amari. Biol Cybern, 27(2):77–87, 1977.
[5] A. Dempster, N. Laird, and D. Rubin. J R Stat Soc Ser B, 39:1–38, 1977.
[6] S. Julier and J. Uhlmann. In Proc. AeroSense: 11th Int. Symp. Aerospace/Defense Sensing, Simulation and Controls, pp. 182–193, 1997.
[7] U. Lerner. Hybrid Bayesian networks for reasoning about complex systems. PhD thesis, Stanford University, Stanford, CA, 2002.
[8] J. Tanji and E. Evarts. J Neurophysiol, 39:1062–1068, 1976.
[9] M. Weinrich, S. Wise, and K. Mauritz. Brain, 107:385–414, 1984.
[10] M. Godschalk, R. Lemon, H. Kuypers, and J. van der Steen. Behav Brain Res, 18:143–157, 1985.
[11] A. Riehle and J. Requin. J Neurophysiol, 61:534–549, 1989.
[12] D. Crammond and J. Kalaska. J Neurophysiol, 84:986–1005, 2000.
[13] J. Messier and J. Kalaska. J Neurophysiol, 84:152–165, 2000.
[14] D. Rosenbaum. J Exp Psychol Gen, 109:444–474, 1980.
[15] A. Riehle and J. Requin. Behav Brain Res, 53:35–49, 1993.
[16] M. Churchland, B. Yu, S. Ryu, G. Santhanam, and K. Shenoy. Soc. for Neurosci. Abstr., 2004.
[17] E. Brown, R. Barbieri, V. Ventura, R. Kass, and L. Frank. Neural Comput, 14(2):325–346, 2002.
Large-scale biophysical parameter estimation in single neurons via constrained linear regression

Misha B. Ahrens∗, Quentin J.M. Huys∗, Liam Paninski
Gatsby Computational Neuroscience Unit, University College London
{ahrens, qhuys, liam}@gatsby.ucl.ac.uk

Abstract

Our understanding of the input-output function of single cells has been substantially advanced by biophysically accurate multi-compartmental models. The large number of parameters needing hand tuning in these models has, however, somewhat hampered their applicability and interpretability. Here we propose a simple and well-founded method for automatic estimation of many of these key parameters: 1) the spatial distribution of channel densities on the cell's membrane; 2) the spatiotemporal pattern of synaptic input; 3) the channels' reversal potentials; 4) the intercompartmental conductances; and 5) the noise level in each compartment. We assume experimental access to: a) the spatiotemporal voltage signal in the dendrite (or some contiguous subpart thereof, e.g. via voltage sensitive imaging techniques), b) an approximate kinetic description of the channels and synapses present in each compartment, and c) the morphology of the part of the neuron under investigation. The key observation is that, given data a)-c), all of the parameters 1)-4) may be simultaneously inferred by a version of constrained linear regression; this regression, in turn, is efficiently solved using standard algorithms, without any "local minima" problems despite the large number of parameters and complex dynamics. The noise level 5) may also be estimated by standard techniques. We demonstrate the method's accuracy on several model datasets, and describe techniques for quantifying the uncertainty in our estimates.

1 Introduction

The usual tradeoff in parameter estimation for single neuron models is between realism and tractability.
Typically, the more biophysical accuracy one tries to inject into the model, the harder the computational problem of fitting the model's parameters becomes, as the number of (nonlinearly interacting) parameters increases (sometimes even into the thousands, in the case of complex multicompartmental models).

∗These authors contributed equally. Support contributed by the Gatsby Charitable Foundation (LP, MA), a Royal Society International Fellowship (LP), the BIBA consortium and the UCL School of Medicine (QH). We are indebted to P. Dayan, M. Häusser, M. London, A. Roth, and S. Roweis for helpful and interesting discussions, and to R. Wood for channel definitions.

Previous authors have noted the difficulties of this large-scale, simultaneous parameter estimation problem, which are due both to the highly nonlinear nature of the "cost functions" minimized (e.g., the percentage of correctly-predicted spike times [1]) and the abundance of local minima on the very large-dimensional allowed parameter space [2, 3]. Here we present a method that is both computationally tractable and biophysically detailed. Our goal is to simultaneously infer the following dendritic parameters: 1) the spatial distribution of channel densities on the cell's membrane; 2) the spatiotemporal pattern of synaptic input; 3) the channels' reversal potentials; 4) the intercompartmental conductances; and 5) the noise level in each compartment. Achieving this somewhat ambitious goal comes at a price: our method assumes that the experimenter a) knows the geometry of the cell, b) has a good understanding of the kinetics of the channels present in each compartment, and c) most importantly, is able to observe the spatiotemporal voltage signal on the dendritic tree, or at least a fraction thereof (e.g. by voltage-sensitive imaging methods; in electrotonically compact cells, single electrode recordings can be used).
The key to the proposed method is to recognise that, when we condition on data a)-c), the dynamics governing this observed spatiotemporal voltage signal become linear in the parameters we are seeking to estimate (even though the system itself may behave highly nonlinearly), so that the parameter estimation can be recast into a simple constrained linear regression problem (see also [4, 5]). This implies, somewhat counterintuitively, that optimizing the likelihood of the parameters in this setting is a convex problem, with no non-global local extrema. Moreover, linearly constrained quadratic optimization is an extremely well-studied problem, with many efficient algorithms available. We give examples of the resulting methods successfully applied to several types of model data below. In addition, we discuss methods for incorporating prior knowledge and analyzing uncertainty in our estimates, again basing our techniques on the well-founded probabilistic regression framework.

2 Methods

Biophysically accurate models of single cells are typically formulated compartmentally – a set of first-order coupled differential equations that form a spatially discrete approximation to the cable equations. Modeling the cell under investigation in this discretized manner, a typical equation describing the voltage in compartment x is

$$C_x\, dV_x(t) = \left( \sum_i a_{i,x} J_{i,x}(t) + I_x(t) \right) dt + \sigma_x\, dN_{x,t}. \quad (1)$$

Here $\sigma_x N_{x,t}$ is evolution (current) noise and $I_x(t)$ is externally injected current. Dropping the subscript x where possible, the terms $a_i \cdot J_i(t)$ represent currents due to:

1. voltage mismatch in neighbouring compartments, $f_{x,y}(V_y(t) - V_x(t))$,
2. synaptic input, $g_s(t)(E_s - V(t))$,
3. membrane channels, active (voltage-dependent) or passive, $\bar{g}_j g_j(t)(E_j - V(t))$.

Here $a_i$ are parameters to be inferred:

1. the intercompartmental conductances $f_{x,y}$,
2.
the spatiotemporal input from synapse s, $u_s(t)$, from which $g_s(t)$ is obtained by

$$dg_s(t)/dt = -g_s(t)/\tau_s + u_s(t), \quad (2)$$

a linear convolution operation (the synaptic kinetic parameter $\tau_s$ is assumed known) which may be written in matrix notation $g_s = Ku$.

3. the ion channel concentrations $\bar{g}_j$. The open probabilities of channel j, $g_j(t)$, are obtained from the channel kinetics, which are assumed to evolve deterministically, with a known dependence on V, as in the Hodgkin-Huxley model,

$$g_{Na} = m^3 h, \qquad \tau_m\, dm(t)/dt = m_\infty(V) - m, \quad (3)$$

and similarly for h. Again, we emphasize that the kinetic parameters $\tau_m$ and $m_\infty(V)$ are assumed known; only the inhomogeneous concentrations are unknown. (For passive channels $g_j$ is taken constant and independent of voltage.) The parameters 1-3 are relative to membrane capacitance $C_x$.¹

When modeling the dynamics of a single neuron according to (1), the voltage $V(t)$ and channel kinetics $g_j(t)$ are typically evolved in parallel, according to the injected current $I(t)$ and synaptic inputs $u_s(t)$. Suppose, on the other hand, that we have observed the voltage $V_x(t)$ in each compartment. Since we have assumed we also know the channel kinetics (equation 3), the synaptic kinetics (equation 2) and the reversal potentials $E_j$ of the channels present in each compartment, we may decouple the equations and determine the open probabilities $g_{j,x}(t)$ for $t \in [0, T]$. This, in turn, implies that the currents $J_{i,x}(t)$ and voltage differentials $\dot{V}_x(t)$ are all known, and we may interpret equation 1 as a regression equation, linear in the unknown parameters $a_i$, instead of an evolution equation. This is the key observation of this work. Thus we can use linear regression methods to simultaneously infer optimal values of the parameters $\{\bar{g}_{j,x}, u_{s,x}(t), f_{x,y}\}$.²
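This key observation can be sketched on a toy problem: once the voltage is observed, each candidate current trace is a fully known regressor, and the non-negative coefficients fall out of constrained least squares. The current waveforms and true weights below are invented for the sketch, and SciPy's generic NNLS routine stands in for the quadratic-programming tools the text discusses.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Build a made-up design: two currents present in the "true" cell plus one
# candidate channel that is absent (true weight 0).
T, dt = 2000, 1e-4
t = np.arange(T) * dt
J = np.column_stack([np.sin(40 * np.pi * t) ** 2,     # "channel" current traces
                     np.exp(-t / 0.05),
                     np.cos(60 * np.pi * t) ** 2])    # channel absent from true cell
a_true = np.array([3.0, 0.5, 0.0])
dVdt = J @ a_true + 0.05 * rng.normal(size=T)         # noisy observed voltage derivative

# Non-negative least squares recovers the concentrations, and drives the
# absent channel's coefficient toward zero.
a_hat, _ = nnls(J, dVdt)
```

Because the objective is convex and the constraint set $a_i \ge 0$ is convex, the recovered `a_hat` is the global optimum regardless of starting point, which is the practical content of the "no local minima" claim.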
More precisely, rewrite equation (1) in matrix form, $\dot{V} = Ma + \sigma\eta$, where each column of the matrix M is composed of one of the known currents $\{J_i(t), t \in [0, T]\}$ (with T the length of the experiment) and the column vectors $\dot{V}$, $a$, and $\eta$ are defined in the obvious way. Then

$$\hat{a}_{opt} = \arg\min_a \|\dot{V} - Ma\|_2^2. \quad (4)$$

In addition, since on physical grounds the channel concentrations, synaptic input, and conductances must be non-negative, we require our solution $a_i \ge 0$. The resulting linearly-constrained quadratic optimization problem has no local minima (due to the convexity of the objective function and of the domain $g_i \ge 0$), and allows quadratic programming (QP) tools (e.g., quadprog.m in Matlab) to be employed for highly efficient optimization.

Quadratic programming tactics: As emphasized above, the dimension d of the parameter space to be optimized over in this application is quite large ($d \sim N_{comp}(TN_{syn} + N_{chan})$, with N denoting the number of compartments, synapse types, and membrane channel types respectively). While our problem is convex, and therefore tractable in the sense of having no nonglobal local optima, the time-complexity of QP, implemented naively, is $O(d^3)$, which is too slow for our purposes. Fortunately, the correlational structure of the parameters allows us to perform this optimization more efficiently, by several natural decompositions: in particular, given the spatiotemporal voltage signal $V_x(t)$, parameters which are distant in space (e.g., the densities of channels in widely-separated compartments) and time (i.e., the synaptic input $u_{s,x}(t)$ for $t = t_i$ and $t_j$ with $|t_i - t_j|$ large) may be optimized independently. This amounts to a kind of "coordinate descent" algorithm, in which we decompose our parameter set into a set of (not necessarily disjoint) subsets, and iteratively optimize the parameters in each subset

¹Note that $C_x$ is the proportionality constant between the externally injected electrode current and $dV/dt$.
It is linear in the data and can be included with the other parameters $a_i$ in the joint estimation.

²In the case that the reversal potentials $E_j$ are unknown as well, we may estimate these terms by separating the term $\bar{g}_j g_j(t)(V(t) - E_j)$ into $\bar{g}_j g_j(t)V(t)$ and $(\bar{g}_j E_j)g_j(t)$, thereby increasing the number of parameters in the regression by one per channel; $E_j$ is then set to $(\bar{g}_j E_j)/\bar{g}_j$.

while holding all the other parameters fixed. (The quadratic nature of the original problem guarantees that each of these subset problems will be quadratic, with no local minima.) Empirically, we found that this decomposition / sequential optimization approach reduced the computation time from $O(d^3)$ to near $O(d)$.

2.1 The probabilistic framework

If we assume the noise $N_{x,t}$ is Gaussian and white, then the mean-square regression solution for $a$ described above coincides exactly with the (constrained) maximum likelihood estimate, $\hat{a}_{ML} = \arg\min_a \|\dot{V} - Ma\|_2^2 / 2\sigma^2$. (The noise scale $\sigma$ may also be estimated via maximum likelihood.) This suggests several straightforward likelihood-based techniques for representing the uncertainty in our estimates.

Posterior confidence intervals: The assumption of Gaussian noise implies that the posterior distribution of the parameters $a$ is of the form $p(a|V) = \frac{1}{Z} p(a) G_{\mu,\Sigma}(a)$, with $Z$ a normalizing constant, the prior $p(a)$ supported on $a_i \ge 0$, and the mean and covariance of the likelihood Gaussian $G(a)$ given by $\mu = (M^T M)^{-1} M^T \dot{V}$ and $\Sigma^{-1} = M^T M / \sigma^2$. We will assume a flat prior distribution $p(a)$ (that is, no prior knowledge) on the non-synaptic parameters $\{\bar{g}_{j,x}, f_{x,y}\}$ (although clearly non-flat priors can be easily incorporated here [6]); for the synaptic parameters $u_{s,x}(t)$ it will be convenient to use a product-of-exponentials prior, $p(u) = \prod_i \lambda_i \exp(-\lambda_i u_i)$. In each case, computing confidence intervals for $a_i$ reduces to computing moments of multidimensional Gaussian distributions, truncated to $a_i \ge 0$.
We use importance sampling methods [7] to compute these moments for the channel parameters. Sampling from high-dimensional truncated Gaussians via sample-reject is inefficient (since samples from the non-truncated Gaussian – call this distribution $p^*(a|V)$ – may violate the constraint $a_i \ge 0$ with high probability). Therefore we sample instead from a proposal density $q(a)$ with support on $a_i \ge 0$ (specifically, a product of univariate truncated Gaussians with mean $a_i$ and appropriate variance) and evaluate the second moments around $a_{ML}$ by

$$E\left[(a_i - a_{ML,i})^2 \mid V\right] \approx \frac{1}{Z} \sum_{n=1}^{N} \frac{p^*(a^n|V)}{q(a^n)}\, (a_i^n - a_{ML,i})^2 \quad \text{where} \quad Z = \sum_{n=1}^{N} \frac{p^*(a^n|V)}{q(a^n)}. \quad (5)$$

Hessian Principal Components Analysis: The procedure described above allows us to quantify the uncertainty of individual estimated parameters $a_i$. We are also interested in the uncertainty of our estimates in a joint sense (e.g., in the posterior covariance instead of just the individual variances). The negative Hessian of the loglikelihood function, $A \sim M^T M$, contains a great deal of this information, which may be extracted via a kind of principal components analysis: the eigenvectors of A corresponding to the greatest eigenvalues tell us in which directions the model is most strongly constrained by the data, while low eigenvalues correspond to directions in which the likelihood changes relatively slowly, e.g. channels whose corresponding currents are highly correlated (and therefore approximately interchangeable). These ideas will be illustrated in section 3.4.

3 Results

To test the validity, efficiency and accuracy of the proposed method we apply it to model data of varying complexity.
3.1 Inferring channel conductances in a multicompartmental model

We take a simple 14-compartment model neuron, described by

$$C_x \frac{dV_x}{dt} = \sum_{c=1}^{N_{chan}} \bar{g}_c g_c(V_x, t)(E_c - V_x(t)) + \sum_y f_{x,y} \cdot (V_y(t) - V_x(t)) + I_x(t) + \sigma_x\, dN_{x,t};$$

recall $f_{x,y}$ are the intercompartmental conductances, $g_c(V, t)$ is channel c's conductance state given the voltage history up to time t, and $\bar{g}_c$ is the channel concentration. We minimize a vectorized expression as above (equation 4). On biophysical grounds we require $f_{x,y} = f_{y,x}$; we enforce this (linear) constraint by only including one parameter for each connected pair of compartments (x, y). In this case the true channel kinetics were of standard Hodgkin-Huxley form (Na+, K+ and leak), with inhomogeneous densities (figure 1). To test the selectivity of the estimation procedure, we fitted $N_{chan} = 8$ candidate channels from [8, 9, 10] (five of which were absent in the true model cell). Figure 1 shows the performance of the inference; despite the fact that we used only 20 ms of model data, the last 7 ms of which were used for the actual fitting (the first 13 ms were used to evolve the random initial conditions to an approximately correct value), the fit is near perfect in the σ = 0 case, with vanishingly small errorbars. The concentrations of the five channels that were not present when generating the data were set to approximately zero, as desired (data not shown). The lower panels demonstrate the robustness of the methods on highly noisy (large σ) data, in which case the estimated errorbars become significant, but the performance degrades only slightly.

Figure 1: Top panels: σ = 0. 14 compartment model neuron, Na+ channel concentration indicated by grey scale; estimated Na+ channel concentrations in the noiseless case; observed voltage traces (one per compartment); estimated concentrations. Bottom panels: σ large.
Na+ channel concentration legend, values relative to $C_m$ (e.g. in mS/cm² if $C_m = 1\,\mu$F/cm²); estimated Na+ concentrations in the noisy case; noisy voltage traces; estimated channel concentrations. K+ channel concentrations and intercompartmental conductances $f_{x,y}$ not shown (similar performance).

3.2 Inferring synaptic input in a passive model

Next we simulated a single-compartment, leaky neuron (i.e., no voltage-sensitive membrane channels) with synaptic input from three synapses, two excitatory (glutamatergic; τ = 3 ms, E = 0 mV) and one inhibitory (GABAA; τ = 5 ms, E = −75 mV). When we attempted to estimate the synaptic input $u_s(t)$ via the ML estimator described above (figure 2, left), we observe an overfitting phenomenon: the current noise due to $N_t$ is being "explained" by competing balanced excitatory and inhibitory synaptic inputs. This overfitting is unsurprising, given that we are modeling a T-dimensional observation, $\dot{V}$, with 2T regressor variables, $u_-(t)$ and $u_+(t)$, $0 < t < T$ (indeed, overfitting is much less apparent in the case that only one synapse is modeled, where no balance of excitation and inhibition is possible; data not shown). Once again, we may make use of well-known techniques from the regression literature to solve this problem: in this case, we need to regularize our estimated synaptic parameters. Instead of maximizing the likelihood, $u_{ML}$, we maximize the posterior likelihood

$$\hat{u}_{MAP} = \arg\min_u \frac{1}{2\sigma^2} \|\dot{V} - MKu\|_2^2 + \lambda\, u \cdot n \quad \text{with } u_t \ge 0\ \forall t, \quad (6)$$

where $n$ is a vector of ones and $\lambda$ is the Lagrange multiplier for the regularizer, or equivalently parametrizes the exponential prior distribution over $u(t)$. As mentioned above, this maximum a posteriori (MAP) estimate corresponds to a product exponential prior on the synaptic input $u_t$; the multiplier $\lambda$ may be chosen as the expected synaptic input per unit time. It is well known that this type of prior has a sparsening effect, shrinking small values of $u_{ML}(t)$ to zero.
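The sparsening effect of the exponential prior in eq. (6) can be demonstrated on a toy version of the problem. The projected-gradient solver below is a generic stand-in for the QP routines mentioned in the text, and the kernel, noise level, and λ are made-up values for the sketch:

```python
import numpy as np

def map_synaptic_input(A, b, sigma, lam, iters=5000):
    """MAP estimate of non-negative inputs u under the exponential prior of
    eq. (6): minimize ||b - A u||^2 / (2 sigma^2) + lam * sum(u), u >= 0,
    by projected gradient descent (lam = 0 gives the constrained ML fit)."""
    L = np.linalg.norm(A, 2) ** 2 / sigma ** 2        # Lipschitz constant of the gradient
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ u - b) / sigma ** 2 + lam
        u = np.maximum(0.0, u - grad / L)             # gradient step, project onto u >= 0
    return u

# Toy data: a sparse input filtered by an exponential synaptic kernel (the
# matrix K of eq. (2)) plus current noise.
rng = np.random.default_rng(0)
T, tau = 200, 5.0
K = np.tril(np.exp(-(np.arange(T)[:, None] - np.arange(T)[None, :]) / tau))
u_true = np.zeros(T); u_true[[40, 120]] = 1.0
b = K @ u_true + 0.05 * rng.normal(size=T)

u_ml = map_synaptic_input(K, b, sigma=0.05, lam=0.0)    # unregularized: fits noise
u_map = map_synaptic_input(K, b, sigma=0.05, lam=50.0)  # exponential prior: sparser
```

The unregularized fit scatters small noise-matching inputs across many time bins, while the MAP fit suppresses them and retains the two true input events, mirroring the left/right contrast of figure 2.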
This sparsening is visible in figure 2 (right); we see that the small, noise-matching synaptic activity is effectively suppressed, permitting much more accurate detection of the true input spike timing.

Figure 2: Inferring synaptic inputs to a passive membrane (conductances in mS/cm², voltage in mV, time in seconds). Top traces: excitatory inputs; bottom: inhibitory inputs; middle: the resulting voltage trace. Left panels: synaptic inputs inferred by ML, without regularisation; right: MAP estimates under the exponential (shrinkage) prior, with regularisation. Note the overfitting by the ML estimate (left) and the higher accuracy under the MAP estimate (right); in particular, note that the two excitatory synapses of differing magnitudes may easily be distinguished.

3.3 Inferring synaptic input and channel distribution in an active model

The optimization is, as mentioned earlier, jointly convex in both channel densities and synaptic input. We illustrate the simultaneous inference of channel densities and synaptic inputs in a single compartment, writing the model as

$$\frac{dV}{dt} = \sum_{c=1}^{N_{\rm chan}} \bar{g}_c\, g_c(V, t)\,(V_c - V(t)) + \sum_{s=1}^{S} g_s(t)\,(V_s - V(t)) + \sigma\, dN(t), \tag{7}$$

with the same channels and synapse types as above. The combination of leak conductance and inhibitory synaptic input leads to very small eigenvalues in A and slow convergence when applying the above decomposition; thus, to speed convergence here we coarsened the time resolution of the synaptic input from 0.1 ms to 0.2 ms. Figure 3 demonstrates the accuracy of the results.
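At its core, the channel-density estimation of sections 3.1 and 3.3 is a nonnegative linear regression of the observed voltage derivative onto the candidate channels' driving terms. A minimal synthetic sketch (not the paper's code), assuming SciPy's nonnegative least-squares solver as a stand-in for the paper's solver; every variable name here is illustrative:

```python
import numpy as np
from scipy.optimize import nnls

def infer_channel_densities(G, dVdt):
    """Each column of G holds one candidate channel's driving term
    g_c(V, t) * (E_c - V(t)) sampled over time; dVdt holds the observed
    voltage derivative. Densities are physically nonnegative, so we
    solve a nonnegative least-squares problem."""
    densities, _residual = nnls(G, dVdt)
    return densities

# Synthetic check: three candidate channels, the third absent (density 0).
rng = np.random.default_rng(1)
G = rng.standard_normal((500, 3))
true_densities = np.array([30.0, 10.0, 0.0])
dVdt = G @ true_densities + 0.01 * rng.standard_normal(500)
est = infer_channel_densities(G, dVdt)
```

As in figure 1, the absent channel's density is driven to (approximately) zero rather than to a spurious negative value.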
Figure 3: Joint inference of synaptic input and channel densities. The true parameters are in blue, the inferred parameters in red. The top left panel shows the excitatory synaptic input, the middle left panel the voltage trace (the only data), and the bottom left traces the inhibitory synaptic input. The right panel shows the true and inferred maximal conductances (in mS/cm²) of the candidate channels (HHNa, HHK, Leak, MNa, MK, SNa, SKA, SKDR); the channels are the same as in 3.1.

3.4 Eigenvector analysis for a single-compartment model

Finally, as discussed above, the eigenvectors ("principal components") of the loglikelihood Hessian A carry significant information about the dependence and redundancy of the parameters under study here. An example is given in figure 4; for simplicity, we restrict our attention again to the single-compartment case. In the leftmost panels, we see that the direction a_most most highly constrained by the data – the eigenvector corresponding to the largest eigenvalue of A – turns out to have the intuitive form of the balance between Na+ and K+ channels. When we perturb this balance slightly (that is, when we shift the model parameters slightly along this direction in parameter space, a_ML → a_ML + εa_most), the cell's behavior changes dramatically. Conversely, the least-sensitive direction, a_least, corresponds roughly to the balance between the concentrations of two Na+ channels with similar kinetics, and moving in this direction in parameter space (a_ML → a_ML + εa_least) has a negligible effect on the model's dynamical behavior.
Figure 4: Eigenvectors of A corresponding to the largest (a_most, left) and smallest (a_least, right) eigenvalues, and voltage traces of the model neuron after equal-sized perturbations along both (solid line: perturbed model; dotted line: original model). The first four parameters are the concentrations of four Na+ channels (the first two of which are in fact the same Hodgkin-Huxley channel, but with slightly different kinetic parameters); the next four of K+ channels; the next of the leak channel; the last of 1/C.

4 Discussion and future work

We have developed a probabilistic regression framework for estimation of biophysical single neuron properties and synaptic input. This framework leads directly to efficient, globally-convergent algorithms for determining these parameters, and also to well-founded methods for analyzing the uncertainty of the estimates. We believe this is a key first step towards applying these techniques in detailed, quantitative studies of dendritic input and processing in vitro and in vivo. However, some important caveats – and directions for necessary future work – should be emphasized.

Observation noise: While we have explicitly allowed current noise in our main evolution equation (1) (and experimented with a variety of other current- and conductance-noise terms; data not shown), we have assumed that the resulting voltage V(t) is observed noiselessly, with sufficiently high sampling rates. This is a reasonable assumption when voltage is recorded directly, via patch-clamp methods. However, while voltage-sensitive imaging techniques have seen dramatic improvements over the last few years (and will continue to do so in the near future), currently these methods still suffer from relatively low signal-to-noise ratios and spatiotemporal sampling rates.
While the procedure proved to be robust to low-level noise of various forms (data not shown), it will be important to relax the noiseless-observation assumption, most likely by adapting standard techniques from the hidden Markov model signal processing literature [11].

Hidden branches: Current imaging and dye technologies allow for the monitoring of only a fraction of a dendritic tree; therefore our focus will be on estimating the properties of these sub-structures. Furthermore, these dyes diffuse very slowly and may miss small branches of dendrites, thereby effectively creating unobserved current sources.

Misspecified channel kinetics and channels with chemical dependence: Channels dependent on unobserved variables (e.g., Ca++-dependent K+ channels) have not been included in the model. The techniques described here may thus be applied unmodified to experimental data for which such channels have been blocked pharmacologically. However, we should note that our methods extend directly to the case where simultaneous access to voltage and calcium signals is possible; more generally, one could develop a semi-realistic model of calcium concentration and optimize over the parameters of this model as well. We have discussed in some detail (e.g. figure 1) the effect of misspecifications of voltage-dependent channel kinetics, and how the most relevant channels may be selected by supplying sufficiently rich "channel libraries". Such libraries can also contain several "copies" of the same channel, with one or more systematically varying parameters, thus allowing for a limited search in the nonlinear space of channel kinetics. Finally, in our discussion of "equivalence classes" of channels (figure 4), we illustrated how eigenvector analysis of our objective function allows for insights into the joint behaviour of channels.

References
[1] Jolivet, Lewis, and Gerstner, 2004. J. Neurophysiol., 92, 959-976.
[2] Vanier and Bower, 1999. J. Comput. Neurosci., 7(2), 149-171.
[3] Goldman, Golowasch, Marder, and Abbott, 2001. J. Neurosci., 21(14), 5229-5238.
[4] Wood, Gurney, and Wilson, 2004. Neurocomputing, 58-60, 1109-1116.
[5] Morse, Davison, and Hines, 2001. Soc. Neurosci. Abs., 606.5.
[6] Baldi, Vanier, and Bower, 1998. J. Comput. Neurosci., 5(3), 285-314.
[7] Press et al., 1992. Numerical Recipes in C, CUP.
[8] Hodgkin and Huxley, 1952. J. Physiol., 117.
[9] Poirazi, Brannon, and Mel, 2003. Neuron, 37(6), 977-987.
[10] Mainen, Joerges, Huguenard, and Sejnowski, 1995. Neuron, 15(6), 1427-1439.
[11] Rabiner, 1989. Proc. IEEE, 77(2), 257-286.
Data-Driven Online to Batch Conversions

Ofer Dekel and Yoram Singer
School of Computer Science and Engineering
The Hebrew University, Jerusalem 91904, Israel
{oferd,singer}@cs.huji.ac.il

Abstract

Online learning algorithms are typically fast, memory efficient, and simple to implement. However, many common learning problems fit more naturally in the batch learning setting. The power of online learning algorithms can be exploited in batch settings by using online-to-batch conversion techniques, which build a new batch algorithm from an existing online algorithm. We first give a unified overview of three existing online-to-batch conversion techniques which do not use training data in the conversion process. We then build upon these data-independent conversions to derive and analyze data-driven conversions. Our conversions find hypotheses with a small risk by explicitly minimizing data-dependent generalization bounds. We experimentally demonstrate the usefulness of our approach and, in particular, show that the data-driven conversions consistently outperform the data-independent conversions.

1 Introduction

Batch learning is probably the most common supervised machine-learning setting. In the batch setting, instances are drawn from a domain X and are associated with target values from a target set Y. The learning algorithm is given a training set of examples, where each example is an instance-target pair, and attempts to identify an underlying rule that can be used to predict the target values of new unseen examples. In other words, we would like the algorithm to generalize from the training set to the entire domain of examples. The target space Y can be either discrete, as in the case of classification, or continuous, as in the case of regression. Concretely, the learning algorithm is confined to a predetermined set of candidate hypotheses H, where each hypothesis h ∈ H is a mapping from X to Y, and the algorithm must select a "good" hypothesis from H.
The quality of different hypotheses in H is evaluated with respect to a loss function ℓ, where ℓ(y, y′) is interpreted as the penalty for predicting the target value y′ when the correct target is y. Therefore, ℓ(y, h(x)) indicates how well hypothesis h performs with respect to the example (x, y). When Y is a discrete set, we often use the 0-1 loss, defined by ℓ(y, y′) = 1_{y ≠ y′}. We also assume that there exists a probability distribution D over the product space X × Y, and that the training set was sampled i.i.d. from this distribution. Moreover, the existence of D enables us to reason about the average performance of an hypothesis over its entire domain. Formally, the risk of an hypothesis h is defined to be

$$\text{Risk}_D(h) = \mathbb{E}_{(x,y)\sim D}\left[\,\ell(y, h(x))\,\right]. \tag{1}$$

The goal of a batch learning algorithm is to use the training set to find a hypothesis that does well on average, or more formally, to find h ∈ H with a small risk.

In contrast to the batch learning setting, online learning takes place in a sequence of rounds. On any given round t, the learning algorithm receives a single instance x_t ∈ X and predicts its target value using an hypothesis h_{t-1}, which was generated on the previous round. On the first round, the algorithm uses a default hypothesis h_0. Immediately after the prediction is made, the correct target value y_t is revealed and the algorithm suffers an instantaneous loss of ℓ(y_t, h_{t-1}(x_t)). Finally, the online algorithm may use the newly obtained example (x_t, y_t) to improve its prediction strategy, namely to replace h_{t-1} with a new hypothesis h_t. Alternatively, the algorithm may choose to stick with its current hypothesis and set h_t = h_{t-1}. An online algorithm is therefore defined by its default hypothesis h_0 and the update rule it uses to define new hypotheses. The cumulative loss suffered on a sequence of rounds is the sum of the instantaneous losses suffered on each one of the rounds in the sequence.
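To make the protocol concrete, here is a minimal sketch of the round loop, with a plain Perceptron standing in for the generic online algorithm; this example is illustrative and not part of the paper:

```python
import numpy as np

def run_online_perceptron(examples):
    """Sketch of the online protocol: on round t the current hypothesis
    h_{t-1} (here a weight vector w) predicts, suffers a 0-1 loss, and may
    be updated. Returns the whole hypothesis sequence w_0, ..., w_m and
    the cumulative loss."""
    d = len(examples[0][0])
    w = np.zeros(d)                  # default hypothesis h_0
    hypotheses = [w.copy()]
    cumulative_loss = 0
    for x, y in examples:            # labels y in {-1, +1}
        y_hat = 1 if w @ x >= 0 else -1
        if y_hat != y:
            cumulative_loss += 1     # instantaneous 0-1 loss
            w = w + y * x            # Perceptron update rule
        hypotheses.append(w.copy())  # h_t (equals h_{t-1} if no update)
    return hypotheses, cumulative_loss
```

The returned list h_0, ..., h_m is exactly the hypothesis sequence that the conversion techniques operate on.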
In the online setting there is typically no need for any statistical assumptions, since there is no notion of generalization. The goal of the online algorithm is simply to suffer a small cumulative loss on the sequence of examples it is given; examples that are not in this sequence are entirely irrelevant.

Throughout this paper, we assume that we have access to a good online learning algorithm A for the task at hand. Moreover, A is computationally efficient and easy to implement. However, the learning problem we face fits much more naturally within the batch learning setting. We would like to develop a batch algorithm B that exhibits the desirable characteristics of A but also has good generalization properties. A simple and powerful way to achieve this is to use an online-to-batch conversion technique. This is a general name for any technique which uses A as a building block in the construction of B.

Several different online-to-batch conversion techniques have been developed over the years. Littlestone and Warmuth [11] introduced an explicit relation between compression and learnability, which immediately lent itself to a conversion technique for classification algorithms. Gallant [7] presented the Pocket algorithm, a conversion of Rosenblatt's online Perceptron to the batch setting. Littlestone [10] presented the Cross-Validation conversion, which was further developed by Cesa-Bianchi, Conconi and Gentile [2]. All of these techniques begin by presenting the training set (x_1, y_1), ..., (x_m, y_m) to A in some arbitrary order. As A performs the m online rounds, it generates a sequence of online hypotheses which it uses to make predictions on each round. This sequence includes the default hypothesis h_0 and the m hypotheses h_1, ..., h_m generated by the update rule. The aforementioned techniques all share a common property: they all choose h, the output of the batch algorithm B, to be one of the online hypotheses h_0, ..., h_m.
In this paper, we focus on a second family of conversions, which evolved somewhat later and is due to the work of Helmbold and Warmuth [8], Freund and Schapire [6], and Cesa-Bianchi, Conconi and Gentile [2]. The conversion strategies in this family also begin by using A to generate the sequence of online hypotheses. However, instead of relying on a single hypothesis from the sequence, they set h to be some combination of the entire sequence. Another characteristic shared by these three conversions is that the training data does not play a part in determining how the online hypotheses are combined. That is, the training data is not used in any way other than to generate the sequence h_0, ..., h_m. In this sense, these conversion techniques are data-independent. In this paper, we build on the foundations of these data-independent conversions and define conversion techniques that explicitly use the training data to derive the batch algorithm from the online algorithm. By doing so, we effectively define the data-driven counterparts of the algorithms in [8, 6, 2].

This paper is organized as follows. In Sec. 2 we review the data-independent conversion techniques from [8, 6, 2] and give a simple unified analysis for all three conversions. At the same time, we present a general framework which serves as a building block for our data-driven conversions. Then, in Sec. 3, we derive three special cases of the general framework and demonstrate some useful properties of the data-driven conversions. Finally, in Sec. 4, we compare the different conversion techniques on several benchmark datasets and show that our data-driven approach outperforms the existing data-independent approach.

2 Voting, Averaging, and Sampling

The first conversion we discuss is the voting conversion [6], which applies to problems where the target space Y is discrete (and relatively small), such as classification problems. The conversion presents the training set (x_1, y_1), ...
, (x_m, y_m) to the online algorithm A, which generates the sequence of online hypotheses h_0, ..., h_m. The conversion then outputs the hypothesis h^V, which is defined as follows: given an input x ∈ X, each online hypothesis casts a vote of h_i(x), and h^V outputs the target value that receives the highest number of votes. For simplicity, assume that ties are broken arbitrarily.

The second conversion is the averaging conversion [2], which applies to problems where Y is a convex set. For example, this conversion is applicable to margin-based online classifiers or to regression problems where, in both cases, Y = R. This conversion also begins by using A to generate h_0, ..., h_m. Then the batch hypothesis h^A is defined to be

$$h^{\rm A}(x) = \frac{1}{m+1}\sum_{i=0}^{m} h_i(x).$$

The third and last conversion discussed here is the sampling conversion [8]. This conversion is the most general and is applicable to any learning problem; however, this generality comes at a price. The resulting hypothesis, h^S, is a stochastic function and not a deterministic one. In other words, if applied twice to the same instance, h^S may output different target values. Again, this conversion begins by applying A to the training set and obtaining the sequence of online hypotheses. Every time h^S is evaluated, it randomly selects one of h_0, ..., h_m and uses it to make the prediction. Since h^S is a stochastic function, the definition of Risk_D(h^S) changes slightly: the expectation in Eq. (1) is taken also over the random function h^S. Simple data-dependent bounds on the risk of h^V, h^A and h^S can be derived; these bounds are special cases of the more general analysis given below.

We now describe a simple generalization of these three conversion techniques. It is reasonable to assume that some of the online hypotheses generated by A are better than others. For instance, the default hypothesis h_0 is determined without observing even a single training example.
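As an illustrative sketch (not the authors' code), the three conversions are just higher-order functions over the hypothesis sequence. Each takes an explicit index set I, so that I = [m] recovers the data-independent versions, while a proper subset anticipates the generalization discussed next:

```python
import random
from collections import Counter

def voting_hypothesis(hypotheses, I):
    """h^V_I: each selected online hypothesis casts one vote; ties are
    broken arbitrarily (here by Counter's internal ordering)."""
    def h_V(x):
        votes = Counter(hypotheses[i](x) for i in I)
        return votes.most_common(1)[0][0]
    return h_V

def averaging_hypothesis(hypotheses, I):
    """h^A_I: the average of the selected hypotheses' real-valued outputs."""
    def h_A(x):
        return sum(hypotheses[i](x) for i in I) / len(I)
    return h_A

def sampling_hypothesis(hypotheses, I, rng=random):
    """h^S_I: a stochastic hypothesis; every evaluation picks one of the
    selected online hypotheses uniformly at random."""
    def h_S(x):
        i = I[rng.randrange(len(I))]
        return hypotheses[i](x)
    return h_S
```

Here each hypothesis is an arbitrary callable, standing in for h_0, ..., h_m.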
This surfaces the question of whether it is possible to isolate the "best" online hypotheses and use only them to define the batch hypothesis. Formally, let [m] denote the set {0, ..., m} and let I be some non-empty subset of [m]. Now define h^V_I(x) to be the hypothesis which performs voting as described above, with the single difference that only the members of {h_i : i ∈ I} participate in the vote. Similarly, define h^A_I(x) = (1/|I|) Σ_{i∈I} h_i(x), and let h^S_I be the stochastic function that randomly chooses a function from the set {h_i : i ∈ I} every time it is evaluated, and predicts according to it. The data-independent conversions presented in the beginning of this section are obtained by setting I = [m]. Our idea is to use the training data to find a set I which induces the batch hypotheses h^V_I, h^A_I, and h^S_I with the smallest risk. Since there is an exponential number of potential subsets of [m], we need to restrict ourselves to a smaller set of candidate sets. Formally, let I be a family of subsets of [m], and we restrict our search for I to the family I. Following in the footsteps of [2], we make the simplifying assumption that none of the sets in I include the largest index m. This is a technical assumption which can be relaxed at the price of a slightly less elegant analysis.

We use two intuitive concepts to guide our search for I. First, for any set J ⊆ [m−1], define L(J) = (1/|J|) Σ_{j∈J} ℓ(y_{j+1}, h_j(x_{j+1})). L(J) is the empirical evaluation of the loss of the hypotheses indexed by J. We would like to find a set J for which L(J) is small, since we expect that a good empirical loss of the online hypotheses indicates a low risk of the batch hypothesis. Second, we would like |J| to be large, so that the presence of a few bad online hypotheses in J will not have a devastating effect on the performance of the batch hypothesis. The trade-off between these two competing concepts can be formalized as follows.
Let C be a non-negative constant and define

$$\beta(J) = L(J) + C\,|J|^{-1/2}. \tag{2}$$

The function β decreases as the average empirical loss L(J) decreases, and also as |J| increases. It therefore captures the intuition described above. The function β serves as our yardstick when evaluating the candidates in I. Specifically, we set I = arg min_{J∈I} β(J). Below we formally justify our choice of β, and specifically show that β(J) is a rather tight upper bound on the risk of h^A_J, h^V_J and h^S_J. The first lemma relates the risk of these functions with the average risk of the hypotheses indexed by J.

Lemma 1. Let (x_1, y_1), ..., (x_m, y_m) be a sequence of examples which is presented to the online algorithm A and let h_0, ..., h_m be the resulting sequence of online hypotheses. Let J be a non-empty subset of [m−1] and let ℓ : Y × Y → R_+ be a loss function. (1) If ℓ is the 0-1 loss then Risk_D(h^V_J) ≤ (2/|J|) Σ_{i∈J} Risk_D(h_i). (2) If ℓ is convex in its second argument then Risk_D(h^A_J) ≤ (1/|J|) Σ_{i∈J} Risk_D(h_i). (3) For any loss function ℓ it holds that Risk_D(h^S_J) = (1/|J|) Σ_{i∈J} Risk_D(h_i).

Proof. Beginning with the voting conversion, recall that the loss function being used is the 0-1 loss, namely there is a single correct prediction which incurs a loss of 0 and every other prediction incurs a loss of 1. For any example (x, y), if more than half of the hypotheses in {h_i}_{i∈J} predict the correct outcome then clearly h^V_J also predicts this outcome and ℓ(y, h^V_J(x)) = 0. Therefore, if ℓ(y, h^V_J(x)) = 1 then at least half of the hypotheses in {h_i}_{i∈J} make incorrect predictions, and |J|/2 ≤ Σ_{i∈J} ℓ(y, h_i(x)). We therefore get

$$\ell\left(y, h^{\rm V}_J(x)\right) \;\le\; \frac{2}{|J|} \sum_{i\in J} \ell(y, h_i(x)).$$

The above holds for any example (x, y) and therefore also holds after taking expectations on both sides of the inequality. The bound now follows from the linearity of expectation and the definition of the risk function in Eq. (1).
Moving on to the second claim of the lemma, we assume that ℓ is convex in its second argument. The claim now follows from a direct application of Jensen's inequality. Finally, h^S_J chooses its outcome by randomly choosing an hypothesis in {h_i : i ∈ J}, where the probability of choosing each hypothesis in this set equals 1/|J|. Therefore, the expected loss suffered by h^S_J on an example (x, y) is (1/|J|) Σ_{i∈J} ℓ(y, h_i(x)). The risk of h^S_J is simply the expected value of this term with respect to the random selection of (x, y). Again using the linearity of expectation, we obtain the third claim of the lemma.

The next lemma relates the average risk of the hypotheses indexed by J with the empirical performance of these hypotheses, L(J). In the following lemma, we use capital letters to emphasize that we are dealing with random variables.

Lemma 2. Let (X_1, Y_1), ..., (X_m, Y_m) be a sequence of examples independently sampled according to D. Let H_0, ..., H_m be the sequence of online hypotheses generated by A while observing this sequence of examples. Assume that the loss function ℓ is upper-bounded by R. Then for any J ⊆ [m−1],

$$\Pr\left[\; \frac{1}{|J|} \sum_{i\in J} \text{Risk}_D(H_i) \;>\; \beta(J) \;\right] \;<\; \exp\!\left(\frac{-C^2}{2R^2}\right),$$

where C is the constant used in the definition of β (Eq. (2)).

The proof of this lemma is a direct application of Azuma's bound on the concentration of Lipschitz martingales [1], and is identical to that of Proposition 1 in [2]. For concreteness, we now focus on the averaging conversion, and note that the analyses of the other two conversion strategies are virtually identical. By combining the second claim of Lemma 1 with Lemma 2, we get that for any J ∈ I it holds that Risk_D(h^A_J) ≤ β(J) with probability at least 1 − exp(−C²/(2R²)). Using the union bound, Risk_D(h^A_J) ≤ β(J) holds for all J ∈ I simultaneously with probability at least 1 − |I| exp(−C²/(2R²)). The greater the value of C, the more β is influenced by the term |J|.
On the other hand, a large value of C increases the probability that β indeed upper-bounds Risk_D(h^A_J) for all J ∈ I. In conclusion, we have theoretically justified our choice of β in Eq. (2).

3 Concrete Data-Driven Conversions

In this section we build on the ideas of the previous section and derive three concrete data-driven conversion techniques.

Suffix Conversion: An intuitive argument against selecting I = [m], as done by the data-independent conversions, is that many online algorithms tend to generate bad hypotheses during the first few rounds of learning. As previously noted, the default hypothesis h_0 is determined without observing any training data, and we should expect the first few online hypotheses to be inferior to those that are generated further along. This argument motivates us to consider subsets J of the form {a, a+1, ..., m−1}, where a is a positive integer less than or equal to m−1. Li [9] proposed this idea in the context of the voting conversion and gave a heuristic criterion for choosing a. Our formal setting gives a different criterion for choosing a. In this conversion we define I to be the set of all suffixes of [m−1]. After the algorithm generates h_0, ..., h_m, we set I = arg min_{J∈I} β(J).

Interval Conversion: Kernel-based hypotheses are functions that take the form h(x) = Σ_{j=1}^{n} α_j K(z_j, x), where K is a Mercer kernel, z_1, ..., z_n are instances, often referred to as support patterns, and α_1, ..., α_n are real weights. A variety of different batch algorithms produce kernel-based hypotheses, including the Support Vector Machine [12]. An important learning problem, which is currently addressed by only a handful of algorithms, is to learn a kernel-based hypothesis h which is defined by at most B support patterns. The parameter B is a predefined constant often referred to as the budget of support patterns.
Naturally, kernel-based hypotheses which are represented by a few support patterns are memory-efficient and faster to calculate. A similar problem arises in the online learning setting, where the goal is to construct online algorithms in which each online hypothesis h_i is a kernel-based function defined by at most B vectors. Several online algorithms have been proposed for this problem [4, 13, 5]. First note that the data-independent conversions, with I = [m], are inadequate for this setting. Although each individual online hypothesis is defined by at most B vectors, h^A is defined by the union of these sets, which can be much larger than B. To convert a budget-constrained online algorithm A into a budget-constrained batch algorithm, we make an additional assumption on the update strategy employed by A. We assume that whenever A updates its online hypothesis, it adds a single new support pattern into the set used to represent the kernel hypothesis, and possibly removes some other pattern from this set. The algorithms in [4, 13, 5] all fall into this category. Therefore, if we choose I to be the set {a, a+1, ..., b} for some integers 0 ≤ a < b < m, and A updates its hypothesis k times during rounds a+1 through b, then h^A_I is defined by at most B + k support patterns. Concretely, define I to be the set of all non-empty intervals in [m−1]. With C set properly, β(J) bounds Risk_D(h^A_J) for every J ∈ I with high probability. Next, generate h_0, ..., h_m by running A with a budget parameter of B/2. Finally, choose I to be the set in I which contains at most B/2 updates and also minimizes the β function. By construction, the resulting hypothesis, h^A_I, is defined using at most B support patterns.

Figure 1: An illustration of the tree-based conversion: the sets J_{a,b} are paired recursively over the hypotheses h_0, h_1, ..., h_12, ... (e.g. J_{0,1} covers h_0, h_1; J_{0,3} covers h_0, ..., h_3; J_{0,7} covers h_0, ..., h_7).
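In code, the β criterion of Eq. (2) and the suffix search reduce to a few lines. This is an illustrative sketch, not the authors' implementation; `losses[j]` plays the role of ℓ(y_{j+1}, h_j(x_{j+1})):

```python
import math

def beta(losses, J, C):
    """Eq. (2): beta(J) = L(J) + C * |J|**(-1/2), where L(J) is the
    average online loss over the indices in J."""
    L = sum(losses[j] for j in J) / len(J)
    return L + C / math.sqrt(len(J))

def best_suffix(losses, C):
    """Suffix conversion: search over all suffixes {a, ..., m-1} of [m-1]
    and return the one minimizing beta. `losses` has length m."""
    m = len(losses)
    candidates = [range(a, m) for a in range(m)]
    return min(candidates, key=lambda J: beta(losses, J, C))
```

When the losses are concentrated in the early rounds, the search discards the corresponding prefix, as intended.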
Tree-Based Conversion: A drawback of the suffix conversion is that it must be performed in two consecutive stages. First h_0, ..., h_m are generated and stored in memory. Only then can we calculate β(J) for every J ∈ I and perform the conversion. Therefore, the memory requirements of this conversion grow linearly with m. We now present a conversion that can sidestep this problem by interleaving the conversion with the online hypothesis generation. This conversion slightly deviates from the general framework described in the previous section: instead of predefining a set of candidates I, we construct the optimal subset I in a recursive manner. As a consequence, the analysis in the previous section does not directly provide a generalization bound for this conversion. Assume for a moment that m is a power of 2. For all 0 ≤ a ≤ m−1 define J_{a,a} = {a}. Now, assume that we have already constructed the sets J_{a,b} and J_{c,d}, where a, b, c, d are integers such that a < d, b = (a+d−1)/2, and c = b+1. Given these sets, define J_{a,d} as follows:

$$J_{a,d} = \begin{cases} J_{a,b} & \text{if } \beta(J_{a,b}) \le \beta(J_{c,d}) \,\wedge\, \beta(J_{a,b}) \le \beta(J_{a,b} \cup J_{c,d}) \\ J_{c,d} & \text{if } \beta(J_{c,d}) \le \beta(J_{a,b}) \,\wedge\, \beta(J_{c,d}) \le \beta(J_{a,b} \cup J_{c,d}) \\ J_{a,b} \cup J_{c,d} & \text{otherwise.} \end{cases} \tag{3}$$

Finally, define I = J_{0,m−1} and output the batch hypothesis h^A_I. An illustration of this process is given in Fig. 1. Note that the definition of I requires only m−1 recursive evaluations of Eq. (3). When m is not a power of 2, we can pad the sequence of online hypotheses with virtual hypotheses, each of which attains an infinite loss. This conversion can be performed in parallel with the online rounds, since on round t we already have all of the information required to calculate J_{a,b} for all b < t. In the special case where the instances are vectors in R^n, h_0, ..., h_m are linear hypotheses, and we use the averaging technique, the implementation of the tree-based conversion becomes memory efficient. Specifically, assume that each h_i takes the form h_i(x) = w_i · x, where w_i is a vector of weights in R^n.
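The recursion of Eq. (3) can be sketched as a bottom-up merge over the per-round losses. The code below is illustrative, not the paper's implementation: it summarizes each node only by its index set and loss sum; in the linear case one would additionally carry the summed weight vector Σ_{j∈J} w_j per surviving node.

```python
import math

def beta_node(loss_sum, size, C):
    """beta over a node summarized by (sum of losses, size) -- Eq. (2)."""
    return loss_sum / size + C / math.sqrt(size)

def tree_conversion(losses, C):
    """Sketch of the tree-based conversion, Eq. (3). `losses` must have
    length a power of 2; each leaf is J_{a,a} = {a}. Merging two sibling
    nodes keeps one child or their union, whichever has the smallest beta."""
    nodes = [([a], losses[a]) for a in range(len(losses))]
    while len(nodes) > 1:
        merged = []
        for (J1, s1), (J2, s2) in zip(nodes[::2], nodes[1::2]):
            candidates = [(J1, s1), (J2, s2), (J1 + J2, s1 + s2)]
            merged.append(min(candidates,
                              key=lambda n: beta_node(n[1], len(n[0]), C)))
        nodes = merged
    return nodes[0][0]   # the index set I = J_{0, m-1}
```

Since each surviving node needs only its loss sum, its size, and (in the linear case) one summed weight vector, the recursion is what yields the logarithmic memory footprint described next.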
In this case, storing an online hypothesis h_i is equivalent to storing its weight vector w_i. For any J ⊆ [m−1], storing Σ_{j∈J} h_j requires storing the single n-dimensional vector Σ_{j∈J} w_j. Hence, once we calculate J_{a,b} we can discard the original online hypotheses h_a, ..., h_b and instead merely keep h^A_{J_{a,b}}. Moreover, in order to calculate β we do not need to keep the set J_{a,b} itself but rather the values L(J_{a,b}) and |J_{a,b}|. Overall, storing h^A_{J_{a,b}}, L(J_{a,b}), and |J_{a,b}| requires only a constant amount of memory. It can be verified using an inductive argument that the overall memory utilization of this conversion is O(log(m)), which is significantly less than the O(m) space required by the suffix conversion.

4 Experiments

We now turn to an empirical evaluation of the averaging and voting conversions. We chose multiclass classification as the underlying task and used the multiclass version of the Passive-Aggressive (PA) algorithm [3] as the online algorithm.

Figure 2: Comparison of the three data-driven averaging conversions with the data-independent averaging conversion, for different datasets (Y-axis) and different training-set sizes (X-axis; 3-fold through 10-fold). Each bar shows the difference between the error percentages of a data-driven conversion (suffix (S), interval (I) or tree-based (T)) and of the data-independent conversion. Error bars show standard deviation over the k folds.

The PA algorithm is a kernel-based large-margin online classifier. To apply the voting conversion, Y should be a finite set. Indeed, in multiclass categorization problems the set Y consists of all possible labels. To apply the averaging conversion, Y must be a convex set. To achieve this, we use the fact that PA associates a margin value with each class, and define Y = R^s (where s is the number of classes).
In our experiments, we used the datasets LETTER, MNIST, USPS (training set only), and ISOLET. These datasets are of size 20000, 70000, 7291 and 7797, respectively. MNIST and USPS both contain images of handwritten digits and thus induce 10-class problems. The other datasets contain images (LETTER) and utterances (ISOLET) of the English alphabet. We did not use the standard splits into training set and test set, and instead performed cross-validation in all of our experiments. For various values of k, we split each dataset into k parts, trained each algorithm using each of these parts, and tested on the k−1 remaining parts. Specifically, we ran this experiment for k = 3, ..., 10. The reason for doing this is that the experiment is most interesting when the training sets are small and the learning task becomes difficult.

We applied the data-independent averaging and voting conversions, as well as the three data-driven variants of these conversions (6 data-driven conversions in all). The interval conversion was set to choose an interval containing 500 updates. The parameter C was arbitrarily set to 3. Additionally, we evaluated the test error of the last hypothesis generated by the online algorithm, h_m. It is common malpractice amongst practitioners to use h_m as if it were a batch hypothesis, instead of using an online-to-batch conversion. As a byproduct of our experiments, we show that h_m performs significantly worse than any of the conversion techniques discussed in this paper. The kernel used in all of the experiments is the Gaussian kernel with default kernel parameters. We would like to emphasize that our goal was not to achieve state-of-the-art results on these datasets, but rather to compare the different conversion strategies on the same sequence of hypotheses. To achieve the best results, one would have to tune C and the various kernel parameters. The results for the different variants of the averaging conversion are depicted in Fig. 2.
Table 1: Percent of errors averaged over the k folds, with standard deviation. Results are given for the last online hypothesis ($h_m$), the data-independent averaging and voting conversions, and their suffix variants. The lowest error on each row is shown in bold.

                  last         average      average-sfx  voting       voting-sfx
LETTER 5-fold     29.9 ± 1.8   21.2 ± 0.5   20.5 ± 0.6   23.4 ± 0.8   21.5 ± 0.8
LETTER 10-fold    37.3 ± 2.1   26.9 ± 0.7   26.5 ± 0.6   30.2 ± 1.0   27.9 ± 0.6
MNIST 5-fold       7.2 ± 0.5    5.9 ± 0.4    5.3 ± 0.6    7.0 ± 0.5    6.5 ± 0.5
MNIST 10-fold     13.8 ± 2.3    9.5 ± 0.8    9.1 ± 0.8    8.7 ± 0.5    8.0 ± 0.5
USPS 5-fold        9.7 ± 1.0    7.5 ± 0.4    7.1 ± 0.4    9.4 ± 0.4    8.8 ± 0.3
USPS 10-fold      12.7 ± 4.7   10.1 ± 0.7    9.5 ± 0.8   12.5 ± 1.0   11.3 ± 0.6
ISOLET 5-fold     20.1 ± 3.8   17.6 ± 4.1   16.7 ± 3.3   20.6 ± 3.4   18.3 ± 3.9
ISOLET 10-fold    28.6 ± 3.6   25.8 ± 2.8   22.7 ± 3.3   29.3 ± 3.1   26.7 ± 4.0

For each dataset and each training-set size, we present a bar plot which represents by how much each of the data-driven averaging conversions improves over the data-independent averaging conversion. For instance, the left bar in each plot shows the difference between the test errors of the suffix conversion and the data-independent conversion. A negative value means that the data-driven technique outperforms the data-independent one. The results clearly indicate that the suffix and tree-based conversions consistently improve over the data-independent conversion. The interval conversion does not improve as much and occasionally even loses to the data-independent conversion. However, this is a small price to pay in situations where it is important to generate a compact kernel-based hypothesis. Due to lack of space, we omit a similar figure for the voting conversion and merely note that the plots are very similar to the ones in Fig. 2. In Table 1 we give some concrete values of test error, and compare data-independent and data-driven versions of averaging and voting, using the suffix conversion.
As a reference, we also give the results obtained by the last hypothesis generated by the online algorithm. In all of the experiments, the data-driven conversion outperforms the data-independent conversion. In general, averaging exhibits better results than voting, while the last online hypothesis is almost always inferior to all of the online-to-batch conversions.

References
[1] K. Azuma. Weighted sums of certain dependent random variables. Tohoku Mathematical Journal, 68:357–367, 1967.
[2] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. IEEE Transactions on Information Theory, 2004.
[3] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive aggressive algorithms. Journal of Machine Learning Research, 2006.
[4] K. Crammer, J. Kandola, and Y. Singer. Online classification on a budget. NIPS 16, 2003.
[5] O. Dekel, S. Shalev-Shwartz, and Y. Singer. The Forgetron: A kernel-based perceptron on a fixed budget. NIPS 18, 2005.
[6] Y. Freund and R. E. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277–296, 1999.
[7] S. I. Gallant. Optimal linear discriminants. ICPR 8, pages 849–852. IEEE, 1986.
[8] D. P. Helmbold and M. K. Warmuth. On weak learning. Journal of Computer and System Sciences, 50:551–573, 1995.
[9] Y. Li. Selective voting for perceptron-like on-line learning. In ICML 17, 2000.
[10] N. Littlestone. From on-line to batch learning. COLT 2, pages 269–284, July 1989.
[11] N. Littlestone and M. Warmuth. Relating data compression and learnability. Unpublished manuscript, November 1986.
[12] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
[13] J. Weston, A. Bordes, and L. Bottou. Online (and offline) on a tighter budget. AISTAT 10, 2005.
| 2005 | 64 | 2,882 |
Learning Rankings via Convex Hull Separation Glenn Fung, Rómer Rosales, Balaji Krishnapuram Computer Aided Diagnosis, Siemens Medical Solutions USA, Malvern, PA 19355 {glenn.fung, romer.rosales, balaji.krishnapuram}@siemens.com

Abstract

We propose efficient algorithms for learning ranking functions from order constraints between sets—i.e. classes—of training samples. Our algorithms may be used for maximizing the generalized Wilcoxon-Mann-Whitney statistic that accounts for the partial ordering of the classes: special cases include maximizing the area under the ROC curve for binary classification and its generalization for ordinal regression. Experiments on public benchmarks indicate that: (a) the proposed algorithm is at least as accurate as the current state-of-the-art; (b) computationally, it is several orders of magnitude faster and—unlike current methods—it is easily able to handle even large datasets with over 20,000 samples.

1 Introduction

Many machine learning applications depend on accurately ordering the elements of a set based on the known ordering of only some of its elements. In the literature, variants of this problem have been referred to as ordinal regression, ranking, and learning of preference relations. Formally, we want to find a function $f : \mathbb{R}^n \to \mathbb{R}$ such that, for a set of test samples $\{x_k \in \mathbb{R}^n\}$, the outputs of the function $f(x_k)$ can be sorted to obtain a ranking. In order to learn such a function we are provided with training data, $A$, containing $S$ sets (or classes) of training samples: $A = \bigcup_{j=1}^{S} A_j$, where the $j$-th set $A_j = \{x_i^j\}_{i=1}^{m_j}$ contains $m_j$ samples, so that we have a total of $m = \sum_{j=1}^{S} m_j$ samples in $A$. Further, we are also provided with a directed order graph $G = (S, E)$, each of whose vertices corresponds to a class $A_j$, and the existence of a directed edge $E_{PQ}$—corresponding to $A_P \to A_Q$—means that all training samples $x_p \in A_P$ should be ranked higher than any sample $x_q \in A_Q$: i.e. $\forall (x_p \in A_P, x_q \in A_Q),\ f(x_q) \le f(x_p)$.
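To make the setting concrete: a learned ranking function is applied at test time simply by sorting the samples by their scores. A minimal sketch with a linear f and a made-up weight vector (both invented here for illustration):

```python
import numpy as np

# A linear ranking function f(x) = w'x; the weight vector is a
# made-up stand-in for a learned one.
w = np.array([1.0, -2.0])
X_test = np.array([[0.0, 1.0],   # score -2.0
                   [1.0, 0.0],   # score  1.0
                   [2.0, 1.0]])  # score  0.0
scores = X_test @ w
ranking = np.argsort(-scores)    # indices from highest- to lowest-ranked
print(ranking)  # [1 2 0]
```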
In general the number of constraints on the ranking function grows as O(m²), so that naive solutions are computationally infeasible even for moderately sized training sets with a few thousand samples. Hence, we propose a more stringent problem with a larger (infinite) set of constraints, which is nevertheless much more tractably solved. In particular, we modify the constraints to: $\forall (x_p \in CH(A_P), x_q \in CH(A_Q)),\ f(x_q) \le f(x_p)$, where $CH(A_j)$ denotes the set of all points in the convex hull of $A_j$. We show how this leads to: (a) a family of approximations to the original problem; and (b) considerably more efficient solutions that still enforce all of the original inter-group order constraints. Notice that this formulation subsumes the standard ranking problem (e.g. [4]) as a special case when each set $A_j$ is reduced to a singleton and the order graph is equal to a full graph. However, as illustrated in Figure 1, the formulation is more general and does not require a total ordering of the sets of training samples $A_j$, i.e. it allows any order graph G to be incorporated into the problem.

[Figure 1: Various instances of the proposed ranking problem consistent with the training set {v, w, x, y, z} satisfying v > w > x > y > z. Each problem instance is defined by an order graph. (a-d) A succession of order graphs with an increasing number of constraints. (e-f) Two order graphs defining the same partial ordering but different problem instances.]

1.1 Generalized Wilcoxon-Mann-Whitney Statistics

A distinction is usually made between classification and ordinal regression methods on one hand, and ranking on the other. In particular, the loss functions used for classification and ordinal regression evaluate whether each test sample is correctly classified: in other words, the loss functions that are used to evaluate these algorithms—e.g.
the 0–1 loss for binary classification—are computed for every sample individually, and then averaged over the training or test set. By contrast, bipartite ranking solutions are evaluated using the Wilcoxon-Mann-Whitney (WMW) statistic, which measures the (sample-averaged) probability that any pair of samples is ordered correctly; intuitively, the WMW statistic may be interpreted as the area under the ROC curve (AUC). We define a slight generalization of the WMW statistic that accounts for our notion of class ordering:

$$\mathrm{WMW}(f, A) = \sum_{E_{ij}} \frac{\sum_{k=1}^{m_i} \sum_{l=1}^{m_j} \delta\!\left( f(x_l^j) < f(x_k^i) \right)}{\sum_{k=1}^{m_i} \sum_{l=1}^{m_j} 1}.$$

Hence, if a sample is individually misclassified because it falls on the wrong side of the decision boundary between classes, it incurs a penalty in ordinal regression; whereas, in ranking, it may be possible that it is still correctly ordered with respect to every other test sample, and thus it may incur no penalty in the WMW statistic.

1.2 Previous Work

Ordinal regression and methods for handling structured output classes: For a classic description of generalized linear models for ordinal regression, see [11]. A non-parametric Bayesian model for ordinal regression based on Gaussian processes (GP) was defined in [1]. Several recent machine learning papers consider structured output classes: e.g. [13] presents SVM-based algorithms for handling structured and interdependent output spaces, and [5] discusses automatic document categorization into pre-defined hierarchies or taxonomies of topics.

Learning Rankings: The problem of learning rankings was first treated as a classification problem on pairs of objects by Herbrich [4] and subsequently used on a web page ranking task by Joachims [6]; a variety of authors have investigated this approach recently.
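The generalized WMW statistic above can be computed directly from its definition. The sketch below is ours (not the paper's code) and assumes the convention that an edge (i, j) means class i should receive larger scores than class j; with a single edge it reduces to the usual AUC.

```python
import numpy as np

def generalized_wmw(scores, edges):
    """scores: dict mapping class id -> 1-D array of f(x) values.
    edges: iterable of (i, j) pairs, class i to be ranked above class j.
    Sums, over the edges, the fraction of correctly ordered pairs."""
    total = 0.0
    for i, j in edges:
        hi = np.asarray(scores[i])[:, None]
        lo = np.asarray(scores[j])[None, :]
        total += (hi > lo).mean()   # per-edge fraction of correct pairs
    return total

# Binary case, one edge: the statistic is the area under the ROC curve.
pos = np.array([2.0, 1.5, 0.9])
neg = np.array([0.1, 1.0])
print(generalized_wmw({0: pos, 1: neg}, [(0, 1)]))  # 5/6 of pairs correct
```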
The major advantage of this approach is that it considers a more explicit notion of ordering. However, the naive optimization strategy proposed there suffers from the O(m²) growth in the number of constraints mentioned in the previous section. This computational burden renders these methods impractical even for medium-sized datasets with a few thousand samples. In other related work, boosting methods have been proposed for learning preferences [3], and a combinatorial structure called the ranking poset was used for conditional modeling of partially ranked data [8], in the context of combining ranked sets of web pages produced by various web-page search engines. Another, less related, approach is [2].

Relationship to the proposed work: Our algorithm penalizes wrong ordering of pairs of training instances in order to learn ranking functions (similar to [4]), but in addition, it can also utilize the notion of a structured class order graph. Nevertheless, using a formulation based on constraints over convex hulls of the training classes, our method avoids the prohibitive computational complexity of the previous algorithms for ranking.

1.3 Notation and Background

In the following, vectors will be assumed to be column vectors unless transposed to a row vector by a prime superscript ′. The cardinality of a set A will be denoted by #(A). The scalar (inner) product of two vectors x and y in the n-dimensional real space $\mathbb{R}^n$ will be denoted by x′y, and the 2-norm of x will be denoted by ∥x∥. For a matrix $A \in \mathbb{R}^{m \times n}$, $A_i$ is the ith row of A, which is a row vector in $\mathbb{R}^n$, while $A_{\cdot j}$ is the jth column of A. A column vector of ones of arbitrary dimension will be denoted by e. For $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{n \times k}$, the kernel K(A, B) maps $\mathbb{R}^{m \times n} \times \mathbb{R}^{n \times k}$ into $\mathbb{R}^{m \times k}$. In particular, if x and y are column vectors in $\mathbb{R}^n$ then K(x′, y) is a real number, K(x′, A′) is a row vector in $\mathbb{R}^m$, and K(A, A′) is an m × m matrix.
The identity matrix of arbitrary dimension will be denoted by I.

2 Convex Hull Formulation

We are interested in learning a ranking function $f : \mathbb{R}^n \to \mathbb{R}$ given known ranking relationships between some training instances $A_i, A_j \subset A$. Let the ranking relationships be specified by a set $E = \{(i, j) \mid A_i \prec A_j\}$. To begin with, let us consider the linearly separable binary ranking case, which is equivalent to the problem of classifying m points in the n-dimensional real space $\mathbb{R}^n$, represented by the m × n matrix A, according to membership of each point $x = A_i$ in the class $A^+$ or $A^-$ as specified by a given vector of labels d. In other words, for binary classifiers, we want a linear ranking function $f_w(x) = w'x$ that satisfies the following constraints:

$$\forall (x^+ \in A^+, x^- \in A^-),\ f(x^-) \le f(x^+) \ \Rightarrow\ f(x^-) - f(x^+) = w'x^- - w'x^+ \le -1 \le 0. \qquad (1)$$

Clearly, the number of constraints grows as $O(m_+ m_-)$, which is roughly quadratic in the number of training samples (unless we have severe class imbalance). While easily overcome (based on additional insights) in the separable problem, in the non-separable case the quadratic growth in the number of constraints poses a huge computational burden on the optimization algorithm; indeed, direct optimization with these constraints is infeasible even for moderately sized problems. We overcome this computational problem based on three key insights that are explained below.

First, notice that (by negation) the feasibility constraints in (1) can also be defined as:

$$\forall (x^+ \in A^+, x^- \in A^-),\ w'x^- - w'x^+ \le -1 \quad \Leftrightarrow \quad \nexists (x^+ \in A^+, x^- \in A^-),\ w'x^- - w'x^+ > -1.$$

In other words, a solution w is feasible iff there exists no pair of samples from the two classes such that $f_w(\cdot)$ orders them incorrectly.

Second, we will make the constraints in (1) more stringent: instead of requiring that equation (1) be satisfied for each possible pair $(x^+ \in A^+, x^- \in A^-)$ in the training set, we will require (1) to be satisfied for each pair $(x^+ \in CH(A^+), x^- \in CH(A^-))$, where $CH(A_i)$ denotes the convex hull of the set $A_i$ [12]. Thus, our constraints become:

$$\forall (\lambda^+, \lambda^-) \text{ such that } 0 \le \lambda^+ \le 1,\ \textstyle\sum \lambda^+ = 1,\ 0 \le \lambda^- \le 1,\ \sum \lambda^- = 1:\quad w'A^{-\prime}\lambda^- - w'A^{+\prime}\lambda^+ \le -1. \qquad (2)$$

[Figure 2: Example binary problem where points belonging to the $A^+$ and $A^-$ sets are represented by blue circles and red triangles respectively. Two elements $x_i$ and $x_j$ of the set $A^-$ are not correctly ordered and hence generate positive values of the corresponding slack variables $y_i$ and $y_j$. The point $x_k$ (hollow triangle) is in the convex hull of the set $A^-$, so the corresponding error $y_k$ can be written as a convex combination ($y_k = \lambda^k_i y_i + \lambda^k_j y_j$) of the two nonzero errors corresponding to points of $A^-$.]

Next, notice that all the linear inequality and equality constraints on $(\lambda^+, \lambda^-)$ may be conveniently grouped together as $B\lambda \le b$, where

$$\lambda = \begin{bmatrix} \lambda^- \\ \lambda^+ \end{bmatrix}_{m \times 1}, \qquad b^- = \begin{bmatrix} 0_{m_- \times 1} \\ 1 \\ -1 \end{bmatrix}_{(m_-+2) \times 1}, \qquad b^+ = \begin{bmatrix} 0_{m_+ \times 1} \\ 1 \\ -1 \end{bmatrix}_{(m_++2) \times 1}, \qquad b = \begin{bmatrix} b^- \\ b^+ \end{bmatrix} \qquad (3)$$

$$B^- = \begin{bmatrix} -I_{m_-} & 0 \\ e' & 0 \\ -e' & 0 \end{bmatrix}_{(m_-+2) \times m}, \qquad B^+ = \begin{bmatrix} 0 & -I_{m_+} \\ 0 & e' \\ 0 & -e' \end{bmatrix}_{(m_++2) \times m}, \qquad B = \begin{bmatrix} B^- \\ B^+ \end{bmatrix}_{(m+4) \times m}. \qquad (4)$$

Thus, our constraints on w can be written as:

$$\forall \lambda \text{ s.t. } B\lambda \le b:\quad w'A^{-\prime}\lambda^- - w'A^{+\prime}\lambda^+ \le -1 \qquad (5)$$
$$\Leftrightarrow\ \nexists \lambda \text{ s.t. } B\lambda \le b,\ w'A^{-\prime}\lambda^- - w'A^{+\prime}\lambda^+ > -1 \qquad (6)$$
$$\Leftrightarrow\ \exists u \text{ s.t. } B'u - \begin{bmatrix} A^- w \\ -A^+ w \end{bmatrix} = 0,\quad b'u \le -1,\quad u \ge 0, \qquad (7)$$

where the second equivalent form of the constraints was obtained by negation (as before), and the third equivalent form results from our third key insight: the application of Farkas' theorem of alternatives [9]. The resulting linear system of m equalities and m + 5 inequalities in m + n + 4 variables can be used while minimizing any regularizer (such as ∥w∥²) to obtain the linear ranking function that satisfies (1); notice, however, that we avoid the O(m²) scaling in the number of constraints.

2.1 The binary non-separable case

In the non-separable case, $CH(A^+) \cap CH(A^-) \neq \emptyset$, so the requirements have to be relaxed by introducing slack variables. To this end, we allow one slack variable $y_i \ge 0$ for each training sample $x_i$, and consider the slack for any point inside the convex hull $CH(A_j)$ to also be a convex combination of y (see Fig. 2).
For example, this implies that if only a subset of training samples have non-zero slacks $y_i > 0$ (i.e., they are possibly misclassified), then the slacks of any points inside the convex hull also depend only on those $y_i$. Thus, our constraints now become:

$$\forall \lambda \text{ s.t. } B\lambda \le b:\quad w'A^{-\prime}\lambda^- - w'A^{+\prime}\lambda^+ \le -1 + (\lambda^{-\prime} y^- + \lambda^{+\prime} y^+),\quad y^+ \ge 0,\ y^- \ge 0. \qquad (8)$$

Applying Farkas' theorem of alternatives, we get:

$$(8)\ \Leftrightarrow\ \exists u \text{ s.t. } B'u - \begin{bmatrix} A^- w \\ -A^+ w \end{bmatrix} + \begin{bmatrix} y^- \\ y^+ \end{bmatrix} = 0,\quad b'u \le -1,\quad u \ge 0. \qquad (9)$$

Replacing B from equation (4) and defining $u' = [u^{-\prime}\ u^{+\prime}] \ge 0$, we get the constraints:

$$B^{+\prime} u^+ + A^+ w + y^+ = 0, \qquad (10)$$
$$B^{-\prime} u^- - A^- w + y^- = 0, \qquad (11)$$
$$b^{-\prime} u^- + b^{+\prime} u^+ \le -1,\quad u \ge 0. \qquad (12)$$

2.2 The general ranking problem

We can now extend the idea presented in the previous section to any given arbitrary directed order graph $G = (S, E)$: as stated in the introduction, each of its vertices corresponds to a class $A_j$, and the existence of a directed edge $E_{ij}$ means that all training samples $x_i \in A_i$ should be ranked higher than any sample $x_j \in A_j$, that is:

$$f(x_j) \le f(x_i) \ \Rightarrow\ f(x_j) - f(x_i) = w'x_j - w'x_i \le -1 \le 0. \qquad (13)$$

Analogously, we obtain the following set of equations that enforce the ordering between sets $A_i$ and $A_j$:

$$B_i' u_{ij} + A_i w + y_i = 0 \qquad (14)$$
$$B_j' \hat{u}_{ij} - A_j w + y_j = 0 \qquad (15)$$
$$b_i' u_{ij} + b_j' \hat{u}_{ij} \le -1 \qquad (16)$$
$$u_{ij},\ \hat{u}_{ij} \ge 0. \qquad (17)$$

It can be shown that, using the definitions of $B_i, B_j, b_i, b_j$ and the fact that $u_{ij}, \hat{u}_{ij} \ge 0$, equations (14)–(17) can be rewritten in the following way:

$$\gamma_{ij} + A_i w + y_i \ge 0 \qquad (18)$$
$$\hat{\gamma}_{ij} - A_j w + y_j \ge 0 \qquad (19)$$
$$\gamma_{ij} + \hat{\gamma}_{ij} \le -1 \qquad (20)$$
$$y_i,\ y_j \ge 0 \qquad (21)$$

where $\gamma_{ij} = b_i' u_{ij}$ and $\hat{\gamma}_{ij} = b_j' \hat{u}_{ij}$. Note that enforcing the constraints defined above indeed implies the desired ordering, since we have:

$$A_i w + y_i \ \ge\ -\gamma_{ij} \ \ge\ \hat{\gamma}_{ij} + 1 \ \ge\ \hat{\gamma}_{ij} \ \ge\ A_j w - y_j.$$

It is also important to note the connection with the Support Vector Machine (SVM) formulation [10, 14] for the binary case: if we impose the extra constraints $-\gamma_{ij} = \gamma + 1$ and $\hat{\gamma}_{ij} = \gamma - 1$, then equations (18)–(21) imply the constraints included in the standard primal SVM formulation.
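The block matrices of Eqs. (3)-(4) are easy to materialize and sanity-check numerically. The sketch below is ours, not the paper's code, and commits to one consistent stacking order for the blocks, so that Bλ ≤ b encodes exactly "λ ≥ 0 and each of λ⁻, λ⁺ sums to one":

```python
import numpy as np

def simplex_constraints(m_minus, m_plus):
    """Build B ((m+4) x m) and b so that B @ lam <= b encodes
    lam >= 0, sum(lam_minus) = 1, sum(lam_plus) = 1, where
    lam = [lam_minus; lam_plus]. Block layout follows Eqs. (3)-(4)."""
    m = m_minus + m_plus
    e_m, e_p = np.ones(m_minus), np.ones(m_plus)
    B_minus = np.vstack([
        np.hstack([-np.eye(m_minus), np.zeros((m_minus, m_plus))]),
        np.hstack([e_m, np.zeros(m_plus)]),     # sum(lam_minus) <= 1
        np.hstack([-e_m, np.zeros(m_plus)]),    # -sum(lam_minus) <= -1
    ])
    B_plus = np.vstack([
        np.hstack([np.zeros((m_plus, m_minus)), -np.eye(m_plus)]),
        np.hstack([np.zeros(m_minus), e_p]),
        np.hstack([np.zeros(m_minus), -e_p]),
    ])
    B = np.vstack([B_minus, B_plus])
    b = np.concatenate([np.zeros(m_minus), [1.0, -1.0],
                        np.zeros(m_plus), [1.0, -1.0]])
    return B, b

B, b = simplex_constraints(3, 2)
# A valid pair of convex-combination weights satisfies B @ lam <= b.
lam = np.concatenate([[0.2, 0.3, 0.5], [0.6, 0.4]])
print(B.shape, np.all(B @ lam <= b + 1e-12))
```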
To obtain a more general formulation, we can "kernelize" equations (14)–(17) by making a transformation of the variable w as $w = A'v$, where v can be interpreted as an arbitrary variable in $\mathbb{R}^m$. This transformation can be motivated by duality theory [10]. Equations (14)–(17) then become:

$$\gamma_{ij} + A_i A' v + y_i \ge 0 \qquad (22)$$
$$\hat{\gamma}_{ij} - A_j A' v + y_j \ge 0 \qquad (23)$$
$$\gamma_{ij} + \hat{\gamma}_{ij} \le -1 \qquad (24)$$
$$y_i,\ y_j \ge 0. \qquad (25)$$

If we now replace the linear kernels $A_i A'$ and $A_j A'$ by more general kernels $K(A_i, A')$ and $K(A_j, A')$, we obtain a "kernelized" version of equations (14)–(17):

$$E_{ij} \equiv \begin{cases} \gamma_{ij} + K(A_i, A')v + y_i \ge 0 \\ \hat{\gamma}_{ij} - K(A_j, A')v + y_j \ge 0 \\ \gamma_{ij} + \hat{\gamma}_{ij} \le -1 \\ y_i,\ y_j \ge 0 \end{cases} \qquad (26)$$

Given a graph $G = (S, E)$ representing the ordering of the training data and using equations (26), we next present a general mathematical programming formulation of the ranking problem:

$$\min_{\{v, y_i, \gamma_{ij} \mid (i,j) \in E\}} \ \nu\, \epsilon(y) + R(v) \quad \text{s.t. } E_{ij}\ \forall (i, j) \in E \qquad (27)$$

where ε is a given loss function for the slack variables $y_i$ and R(v) represents a regularizer on the normal to the hyperplane, v. For an arbitrary kernel K(x, x′), the number of variables of formulation (27) is 2m + 2#(E), and the number of linear constraints (excluding the nonnegativity constraints) is m#(E) + #(E) = #(E)(m + 1). For a linear kernel, i.e. K(x, x′) = xx′, the number of variables of formulation (27) becomes m + n + 2#(E), and the number of linear constraints remains the same. When using a linear kernel and $\epsilon(x) = R(x) = \|x\|_2^2$, the optimization problem (27) becomes a linearly constrained quadratic optimization problem for which a unique solution exists due to the convexity of the objective function:

$$\min_{\{w, y_i, \gamma_{ij} \mid (i,j) \in E\}} \ \nu \|y\|_2^2 + \tfrac{1}{2} w'w \quad \text{s.t. } E_{ij}\ \forall (i, j) \in E. \qquad (28)$$

Unlike other SVM-like methods for ranking, which need O(m²) slack variables y, our formulation requires only one slack variable per training sample, i.e. only m slack variables in total, giving it a computational advantage over those ranking methods. Next, we demonstrate the effectiveness of our algorithm by comparing it to two state-of-the-art algorithms.
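The variable and constraint counts quoted above are simple to tabulate; a small helper (ours, for illustration) makes the scaling concrete:

```python
def problem_size(m, n, n_edges, linear_kernel=False):
    """Number of variables and linear constraints (excluding the
    nonnegativity constraints) in formulation (27), as counted in the
    text: #(E)*(m+1) constraints in either case."""
    if linear_kernel:
        n_vars = m + n + 2 * n_edges       # m + n + 2#(E)
    else:
        n_vars = 2 * m + 2 * n_edges       # 2m + 2#(E)
    n_constraints = n_edges * (m + 1)      # #(E)(m + 1)
    return n_vars, n_constraints

# A chain graph over S classes has S-1 edges; a full DAG has S(S-1)/2.
print(problem_size(m=1000, n=10, n_edges=4, linear_kernel=True))  # (1018, 4004)
print(problem_size(m=1000, n=10, n_edges=10))                     # (2020, 10010)
```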
3 Experimental Evaluation

We tested our approach on a set of nine publicly available datasets¹ shown in Table 1 (several larger datasets are not reported since only the algorithm presented in this paper was able to run on them). These datasets have been frequently used as a benchmark for ordinal regression methods (e.g. [1]). Here we use them for evaluating ranking performance. We compare our method against SVM for ranking (e.g. [4, 6]) using the SVM-light package² and an efficient Gaussian process method (the informative vector machine, IVM)³ [7]. These datasets were originally designed for regression, so the continuous target values for each dataset were discretized into five equal-size bins. We use these bins to define our ranking constraints: all the datapoints with target values falling in the same bin were grouped together. Each dataset was divided into 10% for testing and 90% for training. Thus, the input to all of the algorithms tested was, for each point in the training set: (1) a vector in $\mathbb{R}^n$ (where n is different for each set) and (2) a value from 1 to 5 denoting the rank of the group to which it belongs. Performance is defined in terms of the Wilcoxon statistic. Since we do not employ information about the ranking of the elements within each group, order constraints within a group cannot be verified. Letting b(m) = m(m − 1)/2, the total number of order constraints is equal to $b(m) - \sum_i b(m_i)$, where $m_i$ is the number of instances in group i.

¹Available at http://www.liacc.up.pt/~ltorgo/Regression/DataSets.html
²http://www.cs.cornell.edu/People/tj/svm_light/
³http://www.dcs.shef.ac.uk/~neil/ivm/

Table 1: Benchmark Datasets
   Name              m      n       Name               m     n
 1 Abalone           4177   9     6 Machine-CPU        209   7
 2 Airplane Comp.    950    10    7 Pyrimidines        74    28
 3 Auto-MPG          392    8     8 Triazines          186   61
 4 CA Housing        20640  9     9 WI Breast Cancer   194   33
 5 Housing-Boston    506    14

[Figure 3: Experimental comparison of the ranking SVM, IVM, and the proposed method on nine benchmark datasets. Along with the mean values in 10-fold cross-validation, the entire range of variation is indicated in the error bars. (a) Generalized Wilcoxon statistic (AUC): the overall accuracy of all three methods is comparable. (b) Run time (log scale): the proposed method has a much lower run time than the other methods, even for the full-graph case, for medium to large datasets. NOTE: Both SVM-light and IVM ran out of memory and crashed on dataset 4; on dataset 1, SVM-light failed to complete even one fold after more than 24 hours of run time, so its results could not be compiled in time for submission.]

The results for all of the algorithms are shown in Fig. 3. Our formulation was tested employing two order graphs: the full directed acyclic graph and the chain graph. The performance on all datasets is generally comparable or significantly better for our algorithm (when using a chain order graph). Note that the performance for the full graph is consistently lower than that for the chain graph. Thus, interestingly, enforcing more order constraints does not necessarily imply better performance. We suspect that this is due to the role that the slack variables play in both formulations, since the number of slack variables remains the same while the number of constraints increases.
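The preprocessing described above (equal-frequency binning into five rank groups) and the resulting constraint count $b(m) - \sum_i b(m_i)$ can be sketched together; this is our illustration on toy targets, not the experimental code:

```python
import numpy as np

def rank_groups(targets, n_bins=5):
    """Discretize continuous regression targets into n_bins equal-size
    (equal-frequency) groups; returns a rank label 1..n_bins per sample."""
    order = np.argsort(np.asarray(targets), kind="stable")
    labels = np.empty(len(targets), dtype=int)
    for r, chunk in enumerate(np.array_split(order, n_bins), start=1):
        labels[chunk] = r
    return labels

def n_order_constraints(group_sizes):
    """b(m) - sum_i b(m_i) with b(m) = m(m-1)/2: pairs within a group
    carry no ordering information."""
    b = lambda m: m * (m - 1) // 2
    return b(sum(group_sizes)) - sum(b(mi) for mi in group_sizes)

y = np.arange(100) / 10.0            # toy regression targets
labels = rank_groups(y)
sizes = np.bincount(labels)[1:]      # 20 samples in each of the 5 groups
print(sizes, n_order_constraints(sizes))  # [20 20 20 20 20] 4000
```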
Adding more slack variables may positively affect performance in the full graph, but this comes at a computational cost; an interesting problem is to find the right compromise. A different but potentially related problem is that of finding a good order graph for a given dataset. Note also that the chain graph is much more stable in terms of performance overall. Regarding run time, our algorithm runs an order of magnitude faster than current implementations of state-of-the-art methods, even approximate ones (like IVM).

4 Discussions and future work

We propose a general method for learning a ranking function from structured order constraints on sets of training samples. The proposed algorithm was illustrated on benchmark ranking problems with two different constraint graphs: (a) a chain graph; and (b) a full ordering graph. Although a chain graph was more accurate in the experiments shown in Figure 3, with either type of graph structure, the proposed method is at least as accurate (in terms of the WMW statistic for ordinal regression) as state-of-the-art algorithms such as the ranking-SVM and Gaussian processes for ordinal regression. Besides being accurate, the computational requirements of our algorithm scale much more favorably with the number of training samples as compared to other state-of-the-art methods. Indeed, it was the only algorithm capable of handling several large datasets, while the other methods either crashed due to lack of memory or ran for so long that they were not practically feasible. While our experiments illustrate only specific order graphs, we stress that the method is general enough to handle arbitrary constraint relationships. While the proposed formulation reduces the computational complexity of enforcing order constraints, it is entirely independent of the regularizer that is minimized (under these constraints) while learning the optimal ranking function.
Though we have used a simple margin regularization (via ∥w∥² in (28), and RKHS regularization in (27)) in order to learn in a supervised setting, we can just as easily use a graph-Laplacian-based regularizer that exploits unlabeled data, in order to learn in a semi-supervised setting. We plan to explore this in future work.

References
[1] W. Chu and Z. Ghahramani, Gaussian processes for ordinal regression, Tech. report, University College London, 2004.
[2] K. Crammer and Y. Singer, Pranking with ranking, Neural Info. Proc. Systems, 2002.
[3] Y. Freund, R. Iyer, and R. Schapire, An efficient boosting algorithm for combining preferences, Journal of Machine Learning Research 4 (2003), 933–969.
[4] R. Herbrich, T. Graepel, and K. Obermayer, Large margin rank boundaries for ordinal regression, Advances in Large Margin Classifiers (2000), 115–132.
[5] T. Hofmann, L. Cai, and M. Ciaramita, Learning with taxonomies: Classifying documents and words, NIPS Workshop on Syntax, Semantics, and Statistics, 2003.
[6] T. Joachims, Optimizing search engines using clickthrough data, Proc. ACM Conference on Knowledge Discovery and Data Mining (KDD), 2002.
[7] N. Lawrence, M. Seeger, and R. Herbrich, Fast sparse Gaussian process methods: The informative vector machine, Neural Info. Proc. Systems, 2002.
[8] G. Lebanon and J. Lafferty, Conditional models on the ranking poset, Neural Info. Proc. Systems, 2002.
[9] O. L. Mangasarian, Nonlinear programming, McGraw-Hill, New York, 1969. Reprint: SIAM Classics in Applied Mathematics 10, SIAM, Philadelphia, 1994.
[10] O. L. Mangasarian, Generalized support vector machines, Advances in Large Margin Classifiers, 2000, pp. 135–146.
[11] P. McCullagh and J. Nelder, Generalized linear models, Chapman & Hall, 1983.
[12] R. T. Rockafellar, Convex analysis, Princeton University Press, Princeton, New Jersey, 1970.
[13] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun, Support vector machine learning for interdependent and structured output spaces, Int. Conf.
on Machine Learning, 2004. [14] V. N. Vapnik, The nature of statistical learning theory, second ed., Springer, New York, 2000.
| 2005 | 65 | 2,883 |
Interpolating Between Types and Tokens by Estimating Power-Law Generators∗ Sharon Goldwater Thomas L. Griffiths Mark Johnson Department of Cognitive and Linguistic Sciences Brown University, Providence RI 02912, USA {sharon goldwater,tom griffiths,mark johnson}@brown.edu

Abstract

Standard statistical models of language fail to capture one of the most striking properties of natural languages: the power-law distribution in the frequencies of word tokens. We present a framework for developing statistical models that generically produce power-laws, augmenting standard generative models with an adaptor that produces the appropriate pattern of token frequencies. We show that taking a particular stochastic process – the Pitman-Yor process – as an adaptor justifies the appearance of type frequencies in formal analyses of natural language, and improves the performance of a model for unsupervised learning of morphology.

1 Introduction

In general it is important for models used in unsupervised learning to be able to describe the gross statistical properties of the data they are intended to learn from; otherwise these properties may distort inferences about the parameters of the model. One of the most striking statistical properties of natural languages is that the distribution of word frequencies is closely approximated by a power-law. That is, the probability that a word $w$ will occur with frequency $n_w$ in a sufficiently large corpus is proportional to $n_w^{-g}$. This observation, which is usually attributed to Zipf [1] but enjoys a long and detailed history [2], stimulated intense research in the 1950s (e.g., [3]) but has largely been ignored in modern computational linguistics. By developing models that generically exhibit power-laws, it may be possible to improve methods for unsupervised learning of linguistic structure. In this paper, we introduce a framework for developing generative models for language that produce power-law distributions.
Our framework is based upon the idea of specifying language models in terms of two components: a generator, an underlying generative model for words which need not (and usually does not) produce a power-law distribution, and an adaptor, which transforms the stream of words produced by the generator into one whose frequencies obey a power-law distribution. This framework is extremely general: any generative model for language can be used as a generator, with the power-law distribution being produced as the result of making an appropriate choice for the adaptor. In our framework, estimation of the parameters of the generator will be affected by assumptions about the form of the adaptor. We show that use of a particular adaptor, the Pitman-Yor process [4, 5, 6], sheds light on a tension exhibited by formal approaches to natural language: whether explanations should be based upon the types of words that languages exhibit, or the frequencies with which tokens of those words occur. One place where this tension manifests is in accounts of morphology, where formal linguists develop accounts of why particular words appear in the lexicon (e.g., [7]), while computational linguists focus on statistical models of the frequencies of tokens of those words (e.g., [8]). The tension between types and tokens also appears within computational linguistics. For example, one of the most successful forms of smoothing used in statistical language models, Kneser-Ney smoothing, explicitly interpolates between type and token frequencies [9, 10, 11]. The plan of the paper is as follows. Section 2 discusses stochastic processes that can produce power-law distributions, including the Pitman-Yor process.

∗This work was partially supported by NSF awards IGERT 9870676 and ITR 0085940 and NIMH award 1R0-IMH60922-01A2.
Section 3 specifies a two-stage language model that uses the Pitman-Yor process as an adaptor, and examines some properties of this model: Section 3.1 shows that estimation based on type and token frequencies are special cases of this two-stage language model, and Section 3.2 uses these results to provide a novel justification for the use of Kneser-Ney smoothing. Section 4 describes a model for unsupervised learning of the morphological structure of words that uses our framework, and demonstrates that its performance improves as we move from estimation based upon tokens to types. Section 5 concludes the paper.

2 Producing power-law distributions

Assume we want to generate a sequence of $N$ outcomes, $\mathbf{z} = \{z_1, \dots, z_N\}$, with each outcome $z_i$ being drawn from a set of (possibly unbounded) size $Z$. Many of the stochastic processes that produce power-laws are based upon the principle of preferential attachment, where the probability that the $i$th outcome, $z_i$, takes on a particular value $k$ depends upon the frequency of $k$ in $\mathbf{z}_{-i} = \{z_1, \dots, z_{i-1}\}$ [2]. For example, one of the earliest and most widely used preferential attachment schemes [3] chooses $z_i$ according to the distribution

$$P(z_i = k \mid \mathbf{z}_{-i}) = a\,\frac{1}{Z} + (1 - a)\,\frac{n_k^{(\mathbf{z}_{-i})}}{i - 1} \qquad (1)$$

where $n_k^{(\mathbf{z}_{-i})}$ is the number of times $k$ occurs in $\mathbf{z}_{-i}$. This "rich-get-richer" process means that a few outcomes appear with very high frequency in $\mathbf{z}$ – the key attribute of a power-law distribution. In this case, the power-law has parameter $g = 1/(1 - a)$. One problem with these classical models is that they assume a fixed ordering on the outcomes $\mathbf{z}$. While this may be appropriate for some settings, the assumption of a temporal ordering restricts the contexts in which such models can be applied. In particular, it is much more restrictive than the assumption of independent sampling that underlies most statistical language models.
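The scheme of Equation 1 is easy to simulate: with probability a, pick uniformly from the Z outcomes; otherwise copy a previously generated outcome, which selects k with probability proportional to its count. A small sketch (ours, with arbitrary parameter values; drawing the very first outcome uniformly is our assumption, since the count term is undefined at i = 1):

```python
import random
from collections import Counter

def preferential_sample(N, Z, a, seed=0):
    """Sample N outcomes from Eq. (1). Copying a random earlier outcome
    implements the n_k/(i-1) "rich-get-richer" term."""
    rng = random.Random(seed)
    z = [rng.randrange(Z)]            # first outcome: uniform over Z (assumed)
    for i in range(2, N + 1):
        if rng.random() < a:
            z.append(rng.randrange(Z))
        else:
            z.append(rng.choice(z))   # preferential attachment
    return z

z = preferential_sample(5000, Z=1000, a=0.3)
counts = sorted(Counter(z).values(), reverse=True)
print(counts[:3])  # a few outcomes dominate (heavy-tailed counts)
```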
Consequently, we will focus on a different preferential attachment scheme, based upon the two-parameter species sampling model [4, 5] known as the Pitman-Yor process [6]. Under this scheme outcomes follow a power-law distribution, but remain exchangeable: the probability of a set of outcomes is not affected by their ordering. The Pitman-Yor process can be viewed as a generalization of the Chinese restaurant process [6]. Assume that N customers enter a restaurant with infinitely many tables, each with infinite seating capacity. Let z_i denote the table chosen by the ith customer. The first customer sits at the first table, z_1 = 1. The ith customer chooses table k with probability

$$P(z_i = k \mid \mathbf{z}_{-i}) = \begin{cases} \dfrac{n_k^{(\mathbf{z}_{-i})} - a}{i-1+b} & k \le K(\mathbf{z}_{-i}) \\[4pt] \dfrac{K(\mathbf{z}_{-i})\,a + b}{i-1+b} & k = K(\mathbf{z}_{-i}) + 1 \end{cases} \qquad (2)$$

where a and b are the two parameters of the process and K(z_{-i}) is the number of tables that are currently occupied. The Pitman-Yor process satisfies our need for a process that produces power laws while retaining exchangeability. Equation 2 is clearly a preferential attachment scheme. When a = 0 and b > 0, it reduces to the standard Chinese restaurant process [12, 4] used in Dirichlet process mixture models [13]. When 0 < a < 1, the number of people seated at each table follows a power-law distribution with g = 1 + a [5]. It is straightforward to show that the customers are exchangeable: the probability of a partition of customers into sets seated at different tables is unaffected by the order in which the customers were seated.

Figure 1: Graphical models showing dependencies among variables in (a) the simple two-stage model, and (b) the morphology model. Shading of the node containing w reflects the fact that this variable is observed. Dotted lines delimit the generator and adaptor.

3 A two-stage language model

We can use the Pitman-Yor process as the foundation for a language model that generically produces power-law distributions.
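The seating rule of Equation 2 can be simulated directly; a minimal sketch (parameter values are illustrative):

```python
import random

def pitman_yor_tables(n_customers, a, b, seed=1):
    """Seat customers at tables following the Pitman-Yor process (Equation 2);
    returns the list of table occupancies."""
    rng = random.Random(seed)
    tables = []                              # tables[k] = occupancy of table k
    for i in range(n_customers):             # i customers already seated
        if not tables:
            tables.append(1)                 # first customer opens the first table
            continue
        k_tables = len(tables)
        # unnormalized weights: (n_k - a) per occupied table, K*a + b for a new one;
        # they sum to i + b, the denominator in Equation 2
        weights = [n_k - a for n_k in tables] + [k_tables * a + b]
        r = rng.random() * (i + b)
        acc, k = 0.0, k_tables
        for j, w in enumerate(weights):
            acc += w
            if r < acc:
                k = j
                break
        if k < k_tables:
            tables[k] += 1
        else:
            tables.append(1)
    return tables

occ = pitman_yor_tables(5000, a=0.5, b=0.0)
```

With 0 < a < 1, the sorted occupancies display the heavy tail described in the text: many singleton tables alongside a few very large ones.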
We will define a two-stage model by extending the restaurant metaphor introduced above. Imagine that each table k is labelled with a word ℓ_k from a vocabulary of (possibly unbounded) size W. The first stage is to generate these labels, sampling ℓ_k from a generative model for words that we will refer to as the generator. For example, we could choose to draw the labels from a multinomial distribution θ. The second stage is to generate the actual sequence of words itself. This is done by allowing a sequence of customers to enter the restaurant. Each customer chooses a table, producing a seating arrangement, z, and says the word used to label that table, producing a sequence of words, w. The process by which customers choose tables, which we will refer to as the adaptor, defines a probability distribution over the sequence of words w produced by the customers, determining the frequency with which tokens of the different types occur. The statistical dependencies among the variables in one such model are shown in Figure 1 (a). Given the discussion in the previous section, the Pitman-Yor process is a natural choice for an adaptor. The result is technically a Pitman-Yor mixture model, with z_i indicating the "class" responsible for generating the ith word, and ℓ_k determining the multinomial distribution over words associated with class k, with P(w_i = w | z_i = k, ℓ_k) = 1 if ℓ_k = w, and 0 otherwise. Under this model the probability that the ith customer produces word w, given previously produced words w_{-i} and current seating arrangement z_{-i}, is

$$P(w_i = w \mid \mathbf{w}_{-i}, \mathbf{z}_{-i}, \theta) = \sum_k \sum_{\ell_k} P(w_i = w \mid z_i = k, \ell_k)\, P(\ell_k \mid \mathbf{w}_{-i}, \mathbf{z}_{-i}, \theta)\, P(z_i = k \mid \mathbf{z}_{-i}) = \sum_{k=1}^{K(\mathbf{z}_{-i})} \frac{n_k^{(\mathbf{z}_{-i})} - a}{i-1+b}\, I(\ell_k = w) + \frac{K(\mathbf{z}_{-i})\,a + b}{i-1+b}\, \theta_w \qquad (3)$$

where I(·) is an indicator function, being 1 when its argument is true and 0 otherwise. If θ is uniform over all W words, then the distribution over w reduces to the Pitman-Yor process as W → ∞.
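Putting the two stages together, a multinomial generator labelling tables and a Pitman-Yor adaptor seating customers, can be sketched as follows (the vocabulary and probabilities are made up for illustration):

```python
import random

def two_stage_words(n_words, a, b, vocab, theta, seed=2):
    """Two-stage model: the adaptor (Pitman-Yor seating) picks a table for each
    customer; the generator (a multinomial over `vocab`) labels new tables."""
    rng = random.Random(seed)
    tables, labels, words = [], [], []
    for i in range(n_words):
        r = rng.random() * (i + b) if i > 0 else 0.0
        acc, k = 0.0, len(tables)
        for j, n_j in enumerate(tables):
            acc += n_j - a                   # weight of joining occupied table j
            if r < acc:
                k = j
                break
        if k == len(tables):                 # open a new table:
            tables.append(1)
            labels.append(rng.choices(vocab, weights=theta)[0])  # generator draw
        else:
            tables[k] += 1
        words.append(labels[k])              # the customer "says" the table's label
    return words

words = two_stage_words(2000, a=0.8, b=0.0,
                        vocab=["walk", "jump", "run"], theta=[0.5, 0.3, 0.2])
```

With a close to 1 the word frequencies track the generator probabilities; with a close to 0 a few tables (hence a few types) absorb most tokens.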
Otherwise, multiple tables can receive the same label, increasing the frequency of the corresponding word and producing a distribution with g < 1 + a. Again, it is straightforward to show that words are exchangeable under this distribution.

3.1 Types and tokens

The use of the Pitman-Yor process as an adaptor provides a justification for the role of word types in formal analyses of natural language. This can be seen by considering the question of how to estimate the parameters of the multinomial distribution used as a generator, θ.1 In general, the parameters of generators can be estimated using Markov chain Monte Carlo methods, as we demonstrate in Section 4. In this section, we will show that estimation schemes based upon type and token frequencies are special cases of our language model, corresponding to the extreme values of the parameter a. Values of a between these extremes identify estimation methods that interpolate between types and tokens. Taking a multinomial distribution with parameters θ as a generator and the Pitman-Yor process as an adaptor, the probability of a sequence of words w given θ is

$$P(\mathbf{w} \mid \theta) = \sum_{\mathbf{z},\boldsymbol{\ell}} P(\mathbf{w}, \mathbf{z}, \boldsymbol{\ell} \mid \theta) = \sum_{\mathbf{z},\boldsymbol{\ell}} \frac{\Gamma(b)}{\Gamma(N+b)} \prod_{k=1}^{K(\mathbf{z})} \left(\theta_{\ell_k}\,\big((k-1)a + b\big)\,\frac{\Gamma(n_k^{(\mathbf{z})} - a)}{\Gamma(1-a)}\right)$$

where in the last sum z and ℓ are constrained such that ℓ_{z_i} = w_i for all i. In the case where b = 0, this simplifies to

$$P(\mathbf{w} \mid \theta) = \sum_{\mathbf{z},\boldsymbol{\ell}} \prod_{k=1}^{K(\mathbf{z})} \theta_{\ell_k} \cdot \frac{\Gamma(K(\mathbf{z}))}{\Gamma(N)} \cdot a^{K(\mathbf{z})-1} \cdot \prod_{k=1}^{K(\mathbf{z})} \frac{\Gamma(n_k^{(\mathbf{z})} - a)}{\Gamma(1-a)} \qquad (4)$$

The distribution P(w | θ) determines how the data w influence estimates of θ, so we will consider how P(w | θ) changes under different limits of a. In the limit as a approaches 1, estimation of θ is based upon word tokens. When a → 1, Γ(n_k^{(z)} - a)/Γ(1-a) is 1 for n_k^{(z)} = 1 but approaches 0 for n_k^{(z)} > 1. Consequently, all terms in the sum over (z, ℓ) go to zero, except that in which every word token has its own table. In this case, K(z) = N and ℓ_k = w_k. It follows that lim_{a→1} P(w | θ) = ∏_{k=1}^{N} θ_{w_k}.
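The token-limit argument can be checked numerically: the per-table factor Γ(n_k − a)/Γ(1 − a) appearing in Equation 4 stays at 1 for singleton tables but vanishes as a → 1 whenever n_k > 1. A small check:

```python
import math

def table_factor(n, a):
    """Gamma(n - a) / Gamma(1 - a), the weight contributed by a table with
    n customers in Equation 4."""
    return math.gamma(n - a) / math.gamma(1 - a)

for a in (0.5, 0.9, 0.99, 0.999):
    print(a, table_factor(1, a), table_factor(5, a))
```

For n = 1 the factor is exactly 1 for every a; for n = 5 it shrinks toward 0 as a → 1, which is what forces every token onto its own table in the limit.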
Any form of estimation using P(w | θ) will thus be based upon the frequencies of word tokens in w. In the limit as a approaches 0, estimation of θ is based upon word types. The appearance of a^{K(z)-1} in Equation 4 means that as a → 0, the sum over z is dominated by the seating arrangement that minimizes the total number of tables. Under the constraint that ℓ_{z_i} = w_i for all i, this minimal configuration is the one in which every word type receives a single table. Consequently, lim_{a→0} P(w | θ) is dominated by a term in which there is a single instance of θ_w for each word w that appears in w.2 Any form of estimation using P(w | θ) will thus be based upon a single instance of each word type in w.

3.2 Predictions and smoothing

In addition to providing a justification for the role of types in formal analyses of language in general, use of the Pitman-Yor process as an adaptor can be used to explain the assumptions behind a specific scheme for combining token and type frequencies: Kneser-Ney smoothing. Smoothing methods are schemes for regularizing empirical estimates of the probabilities of words, with the goal of improving the predictive performance of language models. The Kneser-Ney smoother estimates the probability of a word by combining type and token frequencies, and has proven particularly effective for n-gram models [9, 10, 11]. To use an n-gram language model, we need to estimate the probability distribution over words given their history, i.e. the preceding n-1 words.

1 Under the interpretation of this model as a Pitman-Yor process mixture model, this is analogous to estimating the base measure G_0 in a Dirichlet process mixture model (e.g. [13]).
2 Despite the fact that P(w | θ) approaches 0 in this limit, a^{K(z)-1} will be constant across all choices of θ. Consequently, estimation schemes that depend only on the non-constant terms in P(w | θ), such as maximum-likelihood or Bayesian inference, will remain well defined.
Assume we are given a vector of N words w that all share a common history, and want to predict the next word, w_{N+1}, that will occur with that history. Assume that we also have vectors of words from H other histories, w^{(1)}, ..., w^{(H)}. The interpolated Kneser-Ney smoother [11] makes the prediction

$$P(w_{N+1} = w \mid \mathbf{w}) = \frac{n_w^{(\mathbf{w})} - I(n_w^{(\mathbf{w})} > D)\,D}{N} + \frac{\sum_w I(n_w^{(\mathbf{w})} > D)\,D}{N} \cdot \frac{\sum_h I(w \in \mathbf{w}^{(h)})}{\sum_w \sum_h I(w \in \mathbf{w}^{(h)})} \qquad (5)$$

where we have suppressed the dependence on w^{(1)}, ..., w^{(H)}, D is a "discount factor" specified as a parameter of the model, and the sum over h includes w. We can define a two-stage model appropriate for this setting by assuming that the sets of words for all histories are produced by the same adaptor and generator. Under this model, the probability of word w_{N+1} given w and θ is

$$P(w_{N+1} = w \mid \mathbf{w}, \theta) = \sum_{\mathbf{z}} P(w_{N+1} = w \mid \mathbf{w}, \mathbf{z}, \theta)\, P(\mathbf{z} \mid \mathbf{w}, \theta)$$

where P(w_{N+1} = w | w, z, θ) is given by Equation 3. Assuming b = 0, this becomes

$$P(w_{N+1} = w \mid \mathbf{w}, \theta) = \frac{n_w^{(\mathbf{w})} - E_{\mathbf{z}}[K_w(\mathbf{z})]\,a}{N} + \frac{\sum_w E_{\mathbf{z}}[K_w(\mathbf{z})]\,a}{N}\, \theta_w \qquad (6)$$

where E_z[K_w(z)] = Σ_z K_w(z) P(z | w, θ), and K_w(z) is the number of tables with label w under the seating assignment z. The other histories enter into this expression via θ. Since the words associated with each history are assumed to be produced from a single set of parameters θ, the maximum-likelihood estimate of θ_w will approach

$$\theta_w = \frac{\sum_h I(w \in \mathbf{w}^{(h)})}{\sum_w \sum_h I(w \in \mathbf{w}^{(h)})}$$

as a approaches 0, since only a single instance of each word type in each context will contribute to the estimate of θ. Substituting this value of θ_w into Equation 6 reveals the correspondence to the Kneser-Ney smoother (Equation 5). The only difference is that the constant discount factor D is replaced by a·E_z[K_w(z)], which will increase slowly as n_w increases. This difference might actually lead to an improved smoother: the Kneser-Ney smoother seems to produce better performance when D increases as a function of n_w [11].
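Equation 5 can be implemented directly from counts; a minimal sketch with a toy corpus (the word lists are invented for illustration):

```python
from collections import Counter

def kneser_ney_prob(w, history_words, all_histories, D=0.75):
    """Interpolated Kneser-Ney estimate (Equation 5). `history_words`: tokens
    observed with the current history; `all_histories`: one token list per
    history (including the current one)."""
    counts = Counter(history_words)
    N = len(history_words)
    n_w = counts[w]
    # discounted token-frequency term
    p_token = (n_w - (D if n_w > D else 0)) / N
    # total discount mass, redistributed over the type distribution
    mass = sum(D for c in counts.values() if c > D) / N
    # type counts: in how many histories does each word appear at least once?
    type_count = sum(1 for h in all_histories if w in h)
    total_types = sum(len(set(h)) for h in all_histories)
    return p_token + mass * type_count / total_types

hists = [["walk", "walk", "ran"], ["walk", "sat"], ["ran", "ran", "walk"]]
p = kneser_ney_prob("walk", hists[0], hists, D=0.5)
```

Because the subtracted discounts are exactly the mass handed to the type term, the estimates sum to one over the words that appear in some history.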
4 Types and tokens in modeling morphology

Our attempt to develop statistical models of language that generically produce power-law distributions was motivated by the possibility that models that account for this statistical regularity might be able to learn linguistic information better than those that do not. Our two-stage language modeling framework allows us to create exactly these sorts of models, with the generator producing individual lexical items, and the adaptor producing the power-law distribution over words. In this section, we show that taking a generative model for morphology as the generator and varying the parameters of the adaptor results in an improvement in unsupervised learning of the morphological structure of English.

4.1 A generative model for morphology

Many languages contain words built up of smaller units of meaning, or morphemes. These units can contain lexical information (as stems) or grammatical information (as affixes). For example, the English word walked can be parsed into the stem walk and the past-tense suffix ed. Knowledge of morphological structure enables language learners to understand and produce novel wordforms, and facilitates tasks such as stemming (e.g., [14]). As a basic model of morphology, we assume that each word consists of a single stem and suffix, and belongs to some inflectional class. Each class is associated with a stem distribution and a suffix distribution. We assume that stems and suffixes are independent given the class, so we have

$$P(\ell_k = w) = \sum_{c,t,f} I(w = t.f)\, P(c_k = c)\, P(t_k = t \mid c_k = c)\, P(f_k = f \mid c_k = c) \qquad (7)$$

where c_k, t_k, and f_k are the class, stem, and suffix associated with ℓ_k, and t.f indicates the concatenation of t and f. In other words, we generate a label by first drawing a class, then drawing a stem and a suffix conditioned on the class.
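The generative story of Equation 7 can be sketched as a sampler (the classes, stems, suffixes, and probabilities below are invented for illustration):

```python
import random

def sample_label(classes, rng=None):
    """Generate one label per Equation 7: draw a class, then a stem and a
    suffix conditioned on it, and concatenate. `classes` is a list of
    (class_prob, stem_dist, suffix_dist) with the distributions as dicts."""
    rng = rng or random.Random(3)
    probs = [c[0] for c in classes]
    cls = rng.choices(range(len(classes)), weights=probs)[0]
    _, stems, suffixes = classes[cls]
    t = rng.choices(list(stems), weights=list(stems.values()))[0]
    f = rng.choices(list(suffixes), weights=list(suffixes.values()))[0]
    return t + f

classes = [
    (0.6, {"walk": 0.5, "jump": 0.5}, {"": 0.4, "ed": 0.3, "ing": 0.3}),
    (0.4, {"cat": 0.7, "dog": 0.3}, {"": 0.6, "s": 0.4}),
]
word = sample_label(classes)
```

Each class couples a stem inventory with a suffix inventory, so verbs and nouns in this toy setup pick up different endings.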
Each of these draws is from a multinomial distribution, and we will assume that these multinomials are in turn generated from symmetric Dirichlet priors, with parameters κ, τ, and φ respectively. The resulting generative model can be used as the generator in a two-stage language model, providing a more structured replacement for the multinomial distribution, θ. As before, we will use the Pitman-Yor process as an adaptor, setting b = 0. Figure 1 (b) illustrates the dependencies between the variables in this model. Our morphology model is similar to that used by Goldsmith in his unsupervised morphological learning system [8], with two important differences. First, Goldsmith's model is recursive, i.e. a word stem can be further split into a smaller stem plus suffix. Second, Goldsmith's model assumes that all occurrences of each word type have the same analysis, whereas our model allows different tokens of the same type to have different analyses.

4.2 Inference by Gibbs sampling

Our goal in defining this morphology model is to be able to automatically infer the morphological structure of a language. This can be done using Gibbs sampling, a standard Markov chain Monte Carlo (MCMC) method [15]. In MCMC, variables in the model are repeatedly sampled, with each sample conditioned on the current values of all other variables in the model. This process defines a Markov chain whose stationary distribution is the posterior distribution over model variables given the input data. Rather than sampling all the variables in our two-stage model simultaneously, our Gibbs sampler alternates between sampling the variables in the generator and those in the adaptor.
Fixing the assignment of words to tables, we sample c_k, t_k, and f_k for each table from

$$P(c_k = c, t_k = t, f_k = f \mid \mathbf{c}_{-k}, \mathbf{t}_{-k}, \mathbf{f}_{-k}, \boldsymbol{\ell}) \propto I(\ell_k = t_k.f_k)\, P(c_k = c \mid \mathbf{c}_{-k})\, P(t_k = t \mid \mathbf{t}_{-k}, c)\, P(f_k = f \mid \mathbf{f}_{-k}, c) = I(\ell_k = t_k.f_k) \cdot \frac{n_c + \kappa}{K(\mathbf{z}) - 1 + \kappa C} \cdot \frac{n_{c,t} + \tau}{n_c + \tau T} \cdot \frac{n_{c,f} + \phi}{n_c + \phi F} \qquad (8)$$

where n_c is the number of other labels assigned to class c, n_{c,t} and n_{c,f} are the number of other labels in class c with stem t and suffix f, respectively, and C, T, and F are the total number of possible classes, stems, and suffixes, which are fixed. We use the notation c_{-k} here to indicate all members of c except for c_k. Equation 8 is obtained by integrating over the multinomial distributions specified in Equation 7, exploiting the conjugacy between multinomial and Dirichlet distributions. Fixing the morphological analysis (c, t, f), we sample the table z_i for each word token from

$$P(z_i = k \mid \mathbf{z}_{-i}, \mathbf{w}, \mathbf{c}, \mathbf{t}, \mathbf{f}) \propto \begin{cases} I(\ell_k = w_i)\,\big(n_k^{(\mathbf{z}_{-i})} - a\big) & n_k^{(\mathbf{z}_{-i})} > 0 \\ P(\ell_k = w_i)\,\big(K(\mathbf{z}_{-i})\,a + b\big) & n_k^{(\mathbf{z}_{-i})} = 0 \end{cases} \qquad (9)$$

where P(ℓ_k = w_i) is found using Equation 7, with P(c), P(t), and P(f) replaced with the corresponding conditional distributions from Equation 8.

Figure 2: (a) Results for the morphology model, varying a. (b) Confusion matrices for the morphology model with a = 0. The area of a square at location (i, j) is proportional to the number of word types (top) or tokens (bottom) with true suffix i and found suffix j.
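The class/stem/suffix draw of Equation 8 amounts to scoring each possible split of a table's label; a minimal sketch with invented counts (the counts n_c, n_ct, n_cf and the hyperparameter values are placeholders, not values from the paper):

```python
def analysis_weight(label, t, f, n_c, n_ct, n_cf, K, C, T, F,
                    kappa=0.5, tau=0.5, phi=0.5):
    """Unnormalized probability of analysing a table labelled `label` as stem t
    plus suffix f within one class, following Equation 8. The counts
    n_c, n_ct, n_cf range over the other tables; K is the total number of tables."""
    if t + f != label:
        return 0.0
    return ((n_c + kappa) / (K - 1 + kappa * C)
            * (n_ct + tau) / (n_c + tau * T)
            * (n_cf + phi) / (n_c + phi * F))

label = "walked"
weights = {(label[:i], label[i:]):
           analysis_weight(label, label[:i], label[i:],
                           n_c=10, n_ct=2, n_cf=3, K=20, C=6, T=1000, F=10)
           for i in range(1, len(label) + 1)}
```

Normalizing `weights` and sampling from it gives the Gibbs update for one table; the i = len(label) entry corresponds to the empty suffix.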
4.3 Experiments We applied our model to a data set consisting of all the verbs in the training section of the Penn Wall Street Journal treebank (137,997 tokens belonging to 7,761 types). This simple test case using only a single part of speech makes our results easy to analyze. We determined the true suffix of each word using simple heuristics based on the part-of-speech tag and spelling of the word.3 We then ran a Gibbs sampler using 6 classes, and compared the results of our learning algorithm to the true suffixes found in the corpus. As noted above, the Gibbs sampler does not converge to a single analysis of the data, but rather to a distribution over analyses. For evaluation, we used a single sample taken after 1000 iterations. Figure 2 (a) shows the distribution of suffixes found by the model for various values of a, as well as the true distribution. We analyzed the results in two ways: by counting each suffix once for each word type it was associated with, and by counting once for each word token (thus giving more weight to the results for frequent words). The most salient aspect of our results is that, regardless of whether we evaluate on types or tokens, it is clear that low values of a are far more effective for learning morphology than higher values. With higher values of a, the system has too strong a preference for empty suffixes. This observation seems to support the linguists’ view of type-based generalization. It is also worth explaining why our morphological learner finds so many e and es suffixes. This problem is common to other morphological learning systems with similar models (e.g. [8]) and is due to the spelling rule in English that deletes stem-final e before certain suffixes. Since the system has no knowledge of spelling rules, it tends to hypothesize analyses such as {stat.e, stat.ing, stat.ed, stat.es}, where the e and es suffixes take the place of NULL and s. This effect can be seen clearly in the confusion matrices shown in Figure 2 (b). 
The remaining errors seen in the confusion matrices are those where the system hypothesized an empty suffix when in fact a non-empty suffix was present. Analysis of our results showed that these cases were mostly words where no other form with the same stem was present in the corpus. There was therefore no reason for the system to prefer a non-empty suffix.

3 The part-of-speech tags distinguish between past tense, past participle, progressive, 3rd person present singular, and infinitive/unmarked verbs, and therefore roughly correlate with actual suffixes.

5 Conclusion

We have shown that statistical language models that exhibit one of the most striking properties of natural languages – power-law distributions – can be defined by breaking the process of generating words into two stages, with a generator producing a set of words, and an adaptor determining their frequencies. Our morphology model and the Pitman-Yor process are particular choices for a generator and an adaptor. These choices produce empirical and theoretical results that justify the role of word types in formal analyses of natural language. However, the greatest strength of this framework lies in its generality: we anticipate that other choices of generators and adaptors will yield similarly interesting results.

References
[1] G. Zipf. Selective Studies and the Principle of Relative Frequency in Language. Harvard University Press, Cambridge, MA, 1932.
[2] M. Mitzenmacher. A brief history of generative models for power law and lognormal distributions. Internet Mathematics, 1(2):226–251, 2003.
[3] H. A. Simon. On a class of skew distribution functions. Biometrika, 42(3/4):425–440, 1955.
[4] J. Pitman. Exchangeable and partially exchangeable random partitions. Probability Theory and Related Fields, 102:145–158, 1995.
[5] J. Pitman and M. Yor. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. Annals of Probability, 25:855–900, 1997.
[6] H. Ishwaran and L. F. James.
Generalized weighted Chinese restaurant processes for species sampling mixture models. Statistica Sinica, 13:1211–1235, 2003.
[7] J. B. Pierrehumbert. Probabilistic phonology: discrimination and robustness. In R. Bod, J. Hay, and S. Jannedy, editors, Probabilistic Linguistics. MIT Press, Cambridge, MA, 2003.
[8] J. Goldsmith. Unsupervised learning of the morphology of a natural language. Computational Linguistics, 27:153–198, 2001.
[9] H. Ney, U. Essen, and R. Kneser. On structuring probabilistic dependences in stochastic language modeling. Computer, Speech, and Language, 8:1–38, 1994.
[10] R. Kneser and H. Ney. Improved backing-off for n-gram language modeling. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 1995.
[11] S. F. Chen and J. Goodman. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Center for Research in Computing Technology, Harvard University, 1998.
[12] D. Aldous. Exchangeability and related topics. In École d'été de probabilités de Saint-Flour, XIII—1983, pages 1–198. Springer, Berlin, 1985.
[13] R. M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9:249–265, 2000.
[14] L. Larkey, L. Ballesteros, and M. Connell. Improving stemming for Arabic information retrieval: light stemming and co-occurrence analysis. In Proceedings of the 25th International Conference on Research and Development in Information Retrieval (SIGIR), 2002.
[15] W. R. Gilks, S. Richardson, and D. J. Spiegelhalter, editors. Markov Chain Monte Carlo in Practice. Chapman and Hall, Suffolk, 1996.
Response Analysis of Neuronal Population with Synaptic Depression

Wentao Huang, Institute of Intelligent Information Processing, Xidian University, Xi'an 710071, China. wthuang@mail.xidian.edu.cn
Licheng Jiao, Institute of Intelligent Information Processing, Xidian University, Xi'an 710071, China. lchjiao@mail.xidian.edu.cn
Shan Tan, Institute of Intelligent Information Processing, Xidian University, Xi'an 710071, China. shtan@mail.xidian.edu.cn
Maoguo Gong, Institute of Intelligent Information Processing, Xidian University, Xi'an 710071, China. mggong@mail.xidian.edu.cn

Abstract

In this paper, we aim at analyzing the characteristics of neuronal population responses to instantaneous or time-dependent inputs and the role of synapses in neural information processing. We derive an evolution equation for the membrane potential density function with synaptic depression, and obtain formulas for analytically computing the instantaneous firing-rate response. Through a technical analysis, we arrive at several significant conclusions: the background inputs play an important role in information processing and act as a switch between temporal integration and coincidence detection; the role of synapses can be regarded as a spatio-temporal filter; the spatial distribution of synapses and the spatial and temporal relations of the inputs are important for neural information processing; and the instantaneous input frequency can affect the response amplitude and phase delay.

1 Introduction

Noise has an important impact on information processing in the nervous system in vivo. It is important to study the stimulus-response behavior of neuronal populations, especially their responses to transient or time-dependent inputs in a noisy environment; in this stochastic setting, the neuronal output is typically characterized by the instantaneous firing rate. This topic has received a great deal of attention in recent years [1-4].
Moreover, it has recently been revealed that synapses play a more active role in information processing [5-7]. Synapses are highly dynamic and show use-dependent plasticity over a wide range of time scales. Synaptic short-term depression is one of the most common expressions of this plasticity: at synapses with this type of modulation, presynaptic activity produces a decrease in synaptic efficacy. The present work is concerned with investigating the collective dynamics of a neuronal population with synaptic depression and its instantaneous response to time-dependent inputs. First, we deduce a one-dimensional Fokker-Planck (FP) equation by reducing the high-dimensional FP equations. Then, we derive the stationary solution and the instantaneous firing-rate response from it. Finally, the model is analyzed and discussed theoretically and some conclusions are presented.

2 Models and Methods

2.1 Single Neuron Models and Density Evolution Equations

Our approach is based on integrate-and-fire (IF) neurons. The population density based on the integrate-and-fire neuronal model is low-dimensional and thus can be computed efficiently, although the approach could be generalized to other neuron models. The neuron is completely characterized by its membrane potential below threshold; details of the generation of an action potential above the threshold are ignored. Synaptic and external inputs are summed until the potential reaches a threshold, where a spike is emitted. The general form of the dynamics of the membrane potential v in the IF model can be written as

$$\tau_v \frac{dv(t)}{dt} = -v(t) + S_e(t) + \tau_v \sum_{k=1}^{N} J_k(t)\,\delta(t - t_k^{sp}), \qquad (1)$$

where 0 ≤ v ≤ 1, τ_v is the membrane time constant, S_e(t) is an external current directly injected into the neuron, N is the number of synaptic connections, t_k^{sp} is the firing time of presynaptic neuron k, which obeys a Poisson process with mean rate λ_k, and J_k(t) is the efficacy of synapse k.
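The IF dynamics (1), together with the Tsodyks-Markram depression rule introduced below, can be integrated with a simple Euler scheme; a sketch for a single depressing synapse driven by Poisson spikes (all parameter values are illustrative, not taken from the paper):

```python
import random

def simulate_if(T=1.0, dt=1e-4, tau_v=0.02, tau_d=0.5, U=0.5, A=0.05,
                rate=50.0, Se=0.0, seed=4):
    """Euler integration of the IF membrane with one depressing synapse.
    A presynaptic spike jumps v by J = A*D and steps D down by U*D;
    D recovers toward 1 with time constant tau_d."""
    rng = random.Random(seed)
    v, D, n_out = 0.0, 1.0, 0
    for _ in range(int(T / dt)):
        spike = rng.random() < rate * dt       # Poisson input in this time bin
        v += (-v + Se) / tau_v * dt            # leak toward Se
        if spike:
            v += A * D                         # delta input: jump of size A*D
            D -= U * D                         # depression step
        D += (1.0 - D) / tau_d * dt            # recovery toward 1
        if v >= 1.0:                           # threshold: emit spike, reset
            v, n_out = 0.0, n_out + 1
    return n_out, D

n_out, D_end = simulate_if()
```

At high input rates the depression variable D settles well below 1, so successive spikes deliver smaller and smaller jumps, which is the use-dependent weakening the text describes.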
The transmembrane potential, v, has been normalized so that v = 0 marks the rest state, and v = 1 the threshold for ring. When the latter is achieved, v is reset to zero. Jk(t) = ADk(t), where A is a constant representing the absolute synaptic efcacy corresponding to the maximal postsynaptic response obtained if all the synaptic resources are released at once, and Dk(t) act in accordance with complex dynamics rule. We use the phenomenological model by Tsodyks & Markram [7] to simulate short-term synaptic depression: dDk(t) dt = (1 Dk(t)) d UkDk(t)(t tsp k ); (2) where Dk is a `depression' variable, Dk 2 [0; 1], d is the recovery time constant, Uk is a constant determining the step decrease in Dk. Using the diffusion approximation, we can get from (1) and (2) v dv(t) dt = v(t) + Se(t) + v N X k=1 ADk(k + p kk(t)); dDk(t) dt = (1 Dk) d UkDk(k + p kk(t)): (3) The Fokker-Planck equation of equations (3) is @p(t; v; D) @t = @ @v ( v + Kv v p) N X k=1 @ @Dk (KDkp) N X k=1 @ @v@Dk (kAUkD2 kp) + 1 2f @2 @v2 ( N X k=1 kA2D2 kp) + N X k=1 @2 @D2 k (kU 2 kD2 kp)g; Kv = Se + N X k=1 vkADk; KDk = (1 Dk) d kUkDk: (4) where D = (D1; D2; :::DN), and p(t; v; D) = pd(t; Djv)pv(t; v); Z 1 1 pd(t; Djv)dD = 1: (5) We assume that D1; D2; :::DN are uncorrelated, then we have pd(t; Djv) = N Y k=1 ~pk d(t; Dkjv); (6) where ~pk d(t; Dkjv) is the conditional probability density. Moreover, we can assume ~pk d(t; Dkjv) pk d(t; Dk): (7) Substituting (5) into (4), we get pd @pv @t + pv @pd @t = @ @v ( v + Kv v pvpd) N X k=1 pv @ @Dk (KDkpd) N X k=1 @ @v@Dk (AUkD2 kkpvpd)+ 1 2f @2 @v2 ( N X k=1 kA2D2 kpvpd) + N X k=1 @2 @D2 k (kU 2 kD2 kpvpd)g: (8) Integrating Eqation (8) over D, we get v @pv(t; v) @t = @ @v ( v + ~Kv)pv(t; v) + Qv 2 @2pv(t; v) @v2 ; (9) where ~Kv = Z KvpddD =Se + N X k=1 vkAmk; Qv = N X k=1 vkA2
k; mk = Z Dkpk d(t; Dk)dDk;
k = Z D2 kpk d(t; Dk)dDk; (10) and pk d(t; Dk) satises the following equation Fokker-Planck equation @pk d @t = @ @Dk (KDkpk d) + 1 2 @2 @D2 k (U 2 kD2 kkpk d): (11) From (10) and (11), we can get dmk dt = ( 1 d + Uk)mk + 1 d ; d
k dt = ( 2 d + (2U U 2)k)
k + 2mk d : (12) Let Jv(t; v) = ( v + ~Kv v )pv(t; v) Qv 2 v @pv(t; v) @v ; r(t) = Jv(t; 1); (13) where Jv(t; v) is the probability ux of pv, r(t) is the re rate. The boundary conditions of equation (9) are pv(t; 1) = 0; Z 1 0 pv(t; v)dv = 1; r(t) = Jv(t; 0): (14) 2.2 Stationary Solution and Response Analysis When the system is in the stationary states, @pv=@t = 0; dmk=dt = 0; d
k=dt = 0; pv(t; v) = p0 v(v); r(t) = r0; mk(t) = m0 k;
k(t) =
0 k and k(t) = 0 k. are timeindependent. From (9), (12), (13) and (14), we get p0 v(v) = 2 vr0 Q0v exp[ (v ~K0 v)2 Q0v ] Z 1 v exp[(v 0 ~K0 v)2 Q0v ]dv 0; 0 v 1; r0 = 0 B @ v p Z 1 ~ K0 v p Q0v ~ K0v p Q0v exp(u2)[erf( ~K0 v p Q0v ) + erf(u)]du 1 C A 1 ; ~K0 v = Se + N X k=1 vA0 km0 k; Q0 v = N X k=1 vA20 k
0 k; m0 k = 1 1 + Uk d0 k ;
0 k = 2m0 k 2 + d(2Uk U 2 k)0 k : (15) Sometimes, we are more interested in the instantaneous response to time-dependence random uctuation inputs. The inputs take the form: k = 0 k(1 + "k1 k(t)); (16) where "k 1. Then mk and
k have the forms, i.e., mk = m0 k(1 + "km1 k(t) + O("2 k));
k =
0 k(1 + "k
1 k(t) + O("2 k)); (17) and ~Kv and Qv are ~Kv = Se + N X k=1 vA0 km0 k + N X k=1 "k vA0 km0 k(1 k + m1 k)) + O("2 k); Qv = N X k=1 vA20 k
0 k + N X k=1 "k vA20 k
0 k(1 k +
1 k) + O("2 k): (18) Substituting (17) into (12), and ignoring the high order item, it yields: dm1 k dt = ( 1 d + Uk0 k)m1 k Uk0 k1 k(t); d
1 k dt = ( 2 d + (2Uk U 2 k)0 k)
1 k + 2m1 k d (2Uk U 2 k)0 k1 k(t): (19) With the denitions ~Kv = ~K0 v + ~K1 v(t) + O(2); Qv = Q0 v + Q1 v(t) + O(2); pv = p0 v + p1(t) + O(2); r = r0 + r1(t) + O(2); (20) where 1; and boundary conditions of p1 p1(t; 1) = 0; Z 1 0 p1(t; v)dv = 0; (21) using the perturbative expansion in powers of ; we can get 0 = @ @v ( v + ~K0 v)p0 v(v) + Qv 2 @2p0 v(v) @v2 ; v @p1 @t = @ @v ( v + ~K0 v)p1 + Q0 v 2 @2p1 @v2 @f0(t; v) @v ; f0(t; v) = ~K1 v(t)p0 v Q1 v(t) 2 @p0 v @v ; r1 = Q0 v 2 v @p1(t; 1) @v Q1 v(t) 2 v @p0 v(1) @v : (22) For the oscillatory inputs ~K1 v(t) = k(!)ej!t, Q1 v(t) = q(!)ej!t, the output has the same frequency and takes the forms p1(t; v) = p!(!; v)ej!t; @p1=@t = j!p1. For inputs that vary on a slow enough time scale, satisfy v! 1; we dene l = v!; p1 = p0 1 + lp1 1 + O(2 l ); r1 = r0 1 + lr1 1 + O(2 l ): (23) Using the perturbative expansion in powers of l; we get @f0(t; v) @v = @ @v ( v + ~K0 v)p0 1 + Q0 v 2 @2p0 1 @v2 ; jp0 1 = @ @v ( v + ~K0 v)p1 1 + Q0 v 2 @2p1 1 @v2 : (24) The solutions of equtions (24) are pn 1 = 2 Q0v exp[ (v ~K0 v)2 Q0v ] Z 1 v ( vrn 1 Fn) exp[(v 0 ~K0 v)2 Q0v ]dv 0; rn 1 = 2r0 Q0v Z 1 0 exp[ (v ~K0 v)2 Q0v ] Z 1 v Fn exp[(v 0 ~K0 v)2 Q0v ]dv 0dv; F0 = f0(t; v); F1 = j Z v 0 p0 1(v 0)dv 0; n = 0; 1. (25) In general, Q1 v(t) ~K1 v(t), then we have F0 = f0(t; v) ~K1 v(t)p0 v: (26) From (23), (25) and (26), we can get r1 2r0 Q0v ~K1 v(t) Z 1 0 exp[ (v ~K0 v)2 Q0v ] Z 1 v p0 v exp[(v 0 ~K0 v)2 Q0v ]dv 0dv + j! v 2r0 Q0v Z 1 0 exp[ (v ~K0 v)2 Q0v ] Z 1 v [ Z v 0 0 p0 1(v 00)dv 00] exp[(v 0 ~K0 v)2 Q0v ]dv 0dv: (27) In the limit of high frequency inputs, i.e. 1= v! 1; with the denitions h = 1 v! 
; p1 = p0 h + hp1 h + O(2 h); (28) we obtain p0 h = 0; p1 h = j @f0(t; v) @v ; r1 = Q1 v(t) 2 v @p0 v(1) @v jh Q0 v 2 v @2f0(t; 1) @v2 + O(2 h) Q1 v(t) Q0 r0 jh Q0 v 2 v ( ~K1 v(t)@2p0 v(1) @v2 Q1 v(t) 2 @3p0 v @v3 ) = Q1 v(t)r0 Q0v 2jh ~K1 v(t)r0 Q0v (1 ~K0 v) Q1 v(t) ~K1v(t)Q0v 1 ~K0 v Q0 v : (29) When Q1 v(t) ~K1 v(t), we have r1 Q1 v(t)r0 Q0v 2j ~K1 v(t)r0 v!Q0v (1 ~K0 v)(1 Q1 v(t) ~K1v(t)Q0v ); (30) 3 Discussion In equation (15), ~K0 v reects the average intensity of background inputs and Q0 v reects the intensity of background noise. When 1 dUk0 k, we have ~K0 v Se + N X k=1 vA dUk ; Q0 v N X k=1 vA2 dUk(1 + dUk0 k(1 Uk=2)): (31) From (31), we can know the change of background inputs 0 k has little inuence on ~K0 v which is dominated by parameter vA= dUk, but more inuence on Q0 v which decreases with 0 k increasing. In the low input frequency regime, from (27), we can know that the input frequency ! increasing will result in the response amplitude and the phase delay increasing. However, in the high input frequency limit regime, from (30), we can know the input frequency ! increasing will result in the response amplitude and the phase delay decreasing. Moreover, from (27) and (30), we know the stationary background re rate r0 play an important part in response to changes in uctuation outputs. The instantaneous response r1 increases monotonically with background re rate r0:But the background re rate r0 is a function of the background noise Q0 v: In equation (27),
$r_1/\tilde K_v^1$ reflects the response amplitude, and in equation (30), $r_0/Q_v^0$ reflects the response amplitude. Figure 1 (A) and (B) show how $r_1/\tilde K_v^1$ and $r_0/Q_v^0$ change with the variables $Q_v^0$ and $\tilde K_v^0$, respectively. In the subthreshold regime ($\tilde K_v^0 < 1$) they increase monotonically with $Q_v^0$ for fixed $\tilde K_v^0$, whereas in the suprathreshold regime ($\tilde K_v^0 > 1$) they decrease monotonically with $Q_v^0$ for fixed $\tilde K_v^0$. For fixed inputs, an increase of the instantaneous response amplitude suggests that the neurons act more like coincidence detectors than temporal integrators. From this viewpoint, the background inputs play an important role in information processing and act as a switch between temporal integration and coincidence detection.

Figure 1: Response amplitude versus $Q_v^0$ and $\tilde K_v^0$. (A) $r_1/\tilde K_v^1$ (equation (27)) as a function of $Q_v^0$ and $\tilde K_v^0$. (B) $r_0/Q_v^0$ (equation (30)) as a function of $Q_v^0$ and $\tilde K_v^0$.

In equation (16), if the inputs take the oscillatory form $\nu_k^1(t) = e^{j\omega t}$, then according to (19) we get

$$m_k^1 = \frac{-\tau_d U_k \nu_k^0\, e^{j(\omega t - \phi_m)}}{\sqrt{(\tau_d \omega)^2 + (1 + \tau_d U_k \nu_k^0)^2}}, \qquad (32)$$

where $\phi_m = \arctan\!\left(\frac{\tau_d \omega}{1 + \tau_d U_k \nu_k^0}\right)$ is the phase delay and $\tau_d U_k \nu_k^0 / \sqrt{(\tau_d \omega)^2 + (1 + \tau_d U_k \nu_k^0)^2}$ is the amplitude; the minus sign shows that this is a `depression' response. The phase delay increases with the input frequency $\omega$ and decreases with the background input $\nu_k^0$; the depression response amplitude decreases with $\omega$ and increases with $\nu_k^0$. Equations (15)-(18), (12), (19), (27), (30) and (32) suggest the view that the synapses can be regarded as a time-dependent external field that acts on the neuronal population through a time-dependent mean and variance. If we assume the inputs are composed of two parts, viz. $\nu_{k1}^1(t) = \nu_{k2}^1(t) = \frac{1}{2} e^{j\omega t}$, we can compute $m_{k1}^1$ and $m_{k2}^1$; in general, however, $m_k^1 \neq m_{k1}^1 + m_{k2}^1$, which suggests that the spatial distribution of synapses and inputs matters for neural information processing. In conclusion, the synapses can be regarded as a spatio-temporal filter. Figure 2 shows simulation results for a network of 2000 neurons together with the analytic solutions of equation (15) and equation (27) under different conditions.

4 Summary

In this paper we study a model of integrate-and-fire neurons with synaptic current dynamics and synaptic depression. In Section 2, starting from the membrane potential equation (1) combined with the synaptic depression equation (2), we first derive the evolution equation (4) of the joint distribution density function. We then give an approach that reduces the evolution equation of this high-dimensional function to one dimension, obtaining equation (9). Finally, we give the stationary solution and the response of the instantaneous firing rate to time-dependent random fluctuating inputs. In Section 3, we analyze and discuss the model and present several significant conclusions.
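The frequency and background-rate monotonicities of the depression response can be checked numerically from equation (32). The sketch below is illustrative only: the amplitude and phase-delay expressions restate (32), and the parameter values ($\tau_d = 1$ s, $U_k = 0.5$) are borrowed from Figure 2.

```python
import math

def depression_filter(tau_d, U, nu0, omega):
    """Amplitude and phase delay of the depression response, eq. (32):
    amplitude = tau_d*U*nu0 / sqrt((tau_d*omega)**2 + (1 + tau_d*U*nu0)**2),
    phase     = arctan(tau_d*omega / (1 + tau_d*U*nu0))."""
    denom = math.hypot(tau_d * omega, 1.0 + tau_d * U * nu0)
    amplitude = tau_d * U * nu0 / denom
    phase = math.atan2(tau_d * omega, 1.0 + tau_d * U * nu0)
    return amplitude, phase

# tau_d = 1 s, U_k = 0.5; vary the input frequency and the background rate:
a_lo, p_lo = depression_filter(1.0, 0.5, 70.0, 6.28)
a_hi, p_hi = depression_filter(1.0, 0.5, 70.0, 62.8)   # 10x higher frequency
a_bg, p_bg = depression_filter(1.0, 0.5, 100.0, 6.28)  # higher background rate
print(a_lo > a_hi and p_lo < p_hi)  # higher omega: smaller amplitude, larger delay
print(a_bg > a_lo and p_bg < p_lo)  # higher nu0: larger amplitude, smaller delay
```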
This paper investigates only the IF neuronal model without internal connections. The approach can be extended to other models, such as non-linear IF neuronal models of sparsely connected networks of excitatory and inhibitory neurons.

Figure 2: Simulation of a network of 2000 neurons (thin solid line) and the analytic solution (thick solid line) for equation (15) and equation (27), with $\tau_v = 15$ ms, $\tau_d = 1$ s, $A = 0.5$, $U_k = 0.5$, $N = 30$, $\omega = 6.28$ Hz, $\nu_k^1 = \sin(\omega t)$, $\varepsilon_k \nu_k^0 = 10$ Hz, $\nu_k^0 = 70$ Hz (A and C) and $100$ Hz (B and D), $S_e = 0.5$ (A and B) and $0.8$ (C and D). The horizontal axis is time (0-2 s), and the vertical axis is the firing rate.

References
[1] Fourcaud, N. & Brunel, N. (2005) Dynamics of the Instantaneous Firing Rate in Response to Changes in Input Statistics. Journal of Computational Neuroscience 18(3):311-321.
[2] Fourcaud, N. & Brunel, N. (2002) Dynamics of the Firing Probability of Noisy Integrate-and-Fire Neurons. Neural Computation 14(9):2057-2110.
[3] Gerstner, W. (2000) Population Dynamics of Spiking Neurons: Fast Transients, Asynchronous States, and Locking. Neural Computation 12(1):43-89.
[4] Silberberg, G., Bethge, M., Markram, H., Pawelzik, K. & Tsodyks, M. (2004) Dynamics of Population Rate Codes in Ensembles of Neocortical Neurons. J Neurophysiol 91(2):704-709.
[5] Abbott, L.F. & Regehr, W.G. (2004) Synaptic Computation. Nature 431(7010):796-803.
[6] Destexhe, A. & Marder, E. (2004) Plasticity in Single Neuron and Circuit Computations. Nature 431(7010):789-795.
[7] Markram, H., Wang, Y. & Tsodyks, M. (1998) Differential Signaling Via the Same Axon of Neocortical Pyramidal Neurons. Proc Natl Acad Sci USA 95(9):5323-5328.
|
2005
|
67
|
2,885
|
Sequence and Tree Kernels with Statistical Feature Mining Jun Suzuki and Hideki Isozaki NTT Communication Science Laboratories, NTT Corp. 2-4 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0237 Japan {jun, isozaki}@cslab.kecl.ntt.co.jp Abstract This paper proposes a new approach to feature selection based on a statistical feature mining technique for sequence and tree kernels. Since natural language data take discrete structures, convolution kernels, such as sequence and tree kernels, are advantageous for both the concept and the accuracy of many natural language processing tasks. However, experiments have shown that the best results can only be achieved when only small sub-structures are dealt with by these kernels. This paper discusses this issue of convolution kernels and then proposes a statistical feature selection that enables us to use larger sub-structures effectively. For efficient execution, the proposed method can be embedded into the original kernel calculation process by using sub-structure mining algorithms. Experiments on real NLP tasks confirm the problem with the conventional method and compare its performance to that of the proposed method. 1 Introduction Since natural language data take the form of sequences of words and are generally analyzed into discrete structures, such as trees (parsed trees), discrete kernels, such as sequence kernels [7, 1] and tree kernels [2, 5], have been shown to offer excellent results in the natural language processing (NLP) field. Conceptually, these kernels are instances of convolution kernels [3, 11], which provide a concept of kernels over discrete structures. Unfortunately, however, experiments have shown that in some cases there is a critical issue with convolution kernels in NLP tasks [2, 1, 10]: since natural language data contain many types of symbols, NLP tasks usually deal with extremely high-dimensional and sparse feature spaces.
As a result, the convolution kernel approach can never be trained effectively, and it behaves like a nearest-neighbor rule. To avoid this issue, large sub-structures are generally eliminated from the set of features used. However, the main reason for using convolution kernels is to exploit structural features easily and efficiently; if their use is limited to only very small structures, this negates the advantage of using convolution kernels. This paper discusses this issue of convolution kernels, in particular sequence and tree kernels, and proposes a new method based on a statistical significance test. The proposed method deals only with those features that are statistically significant for solving the target task, so large significant sub-structures can be used without over-fitting. Moreover, by using sub-structure mining algorithms, the proposed method can be executed efficiently by embedding it in the original kernel calculation process, which is defined by a dynamic-programming (DP) based calculation.

2 Convolution Kernels for Sequences and Trees

Convolution kernels have been proposed as a concept of kernels for discrete structures, such as sequences, trees and graphs. This framework defines the kernel function between input objects as the convolution of "sub-kernels", i.e. the kernels for the decompositions (parts or sub-structures) of the objects. Let $X$ and $Y$ be discrete objects. Conceptually, convolution kernels $K(X, Y)$ enumerate all sub-structures occurring in $X$ and $Y$ and then calculate their inner product, which is simply written as $K(X, Y) = \langle \phi(X), \phi(Y) \rangle = \sum_i \phi_i(X) \cdot \phi_i(Y)$. Here $\phi$ represents the feature mapping from the discrete object to the feature space, that is, $\phi(X) = (\phi_1(X), \ldots, \phi_i(X), \ldots)$. Therefore, with sequence kernels, the input objects $X$ and $Y$ are sequences and $\phi_i(X)$ is a sub-sequence; with tree kernels, $X$ and $Y$ are trees and $\phi_i(X)$ is a sub-tree.
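The inner-product definition can be made concrete with a brute-force toy sketch (illustrative code, not from the paper): enumerate every non-empty, possibly non-contiguous sub-sequence of two short strings as an explicit feature map and sum the products of the feature counts. The second print also illustrates why, in such sparse feature spaces, self-similarity $K(X, X)$ dwarfs cross-similarity $K(X, Y)$.

```python
from collections import Counter
from itertools import combinations

def subseq_features(s):
    """Explicit feature map phi(X): counts of every non-empty
    (possibly non-contiguous) sub-sequence of s."""
    feats = Counter()
    for r in range(1, len(s) + 1):
        for idx in combinations(range(len(s)), r):
            feats["".join(s[i] for i in idx)] += 1
    return feats

def conv_kernel(x, y):
    """K(X, Y) = sum_i phi_i(X) * phi_i(Y), by explicit enumeration."""
    fx, fy = subseq_features(x), subseq_features(y)
    return sum(c * fy[u] for u, c in fx.items())

print(conv_kernel("abc", "abd"))        # shared features 'a', 'b', 'ab' -> 3
print(conv_kernel("abcdef", "abcdef"),  # self-similarity (63) dwarfs ...
      conv_kernel("abcdef", "abcxyz"))  # ... cross-similarity (7)
```

Because the number of sub-sequences grows exponentially with the object's size, practical sequence kernels avoid this explicit enumeration in favor of recursive dynamic-programming calculation.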
Up to now, many kinds of sequence and tree kernels have been proposed for a variety of different tasks. To clarify the discussion, this paper basically follows the framework of [1], which proposed a gapped word-sequence kernel, and [5], which introduced a labeled ordered tree kernel. A sequence can be treated as a special form of tree if we regard it as rooted at its last symbol, with each node having a single child, namely the preceding symbol. Thus, in this paper, the word `tree' always includes sequences. Let $L$ be a finite set of symbols, let $L^n$ be the set of sub-structures of size $n$ over $L$, and let $\mathcal{P}(L^n)$ be the set of trees constructed from $L^n$. "Size" in this paper means the number of nodes in a tree. We write $u \in \mathcal{P}(L_1^n)$ for a tree whose size is $n$ or less, where $L_1^n = \cup_{m=1}^{n} L^m$. Let $T$ be a tree and let $sub(T)$ be a function that returns the set of all possible sub-trees of $T$. We define a function $C_u(t)$ that returns a constant $\lambda$ ($0 < \lambda \le 1$) if the sub-tree $t$ covers $u$ with the same root symbol. For example, a sub-tree `a-b-c-d', where `a', `b', `c' and `d' represent symbols and `-' represents an edge between symbols, covers the sub-trees `d', `a-c-d' and `b-d'. That is, $C_u(t) = \lambda$ if $u$ matches $t$ allowing node skips, and $0$ otherwise. We also define a function $\gamma_u(t)$ that returns the difference in size between the sub-trees $t$ and $u$; for example, if $t =$ a-b-c-d and $u =$ a-b, then $\gamma_u(t) = |4 - 2| = 2$. Formally, sequence and tree kernels can be defined in the same form as

$$K_{SK,TK}(T^1, T^2) = \sum_{u \in \mathcal{P}(L_1^n)} \; \sum_{t_1 \in sub(T^1)} C_u(t_1)^{\gamma_u(t_1)} \sum_{t_2 \in sub(T^2)} C_u(t_2)^{\gamma_u(t_2)}. \qquad (1)$$

Note that this formula also includes the node-skip framework that is generally introduced only in sequence kernels [7, 1]; $\lambda$ is the decay factor that handles the gaps present in the sub-trees $u$ and $t$. Sequence and tree kernels are defined by recursive formulas so that they can be calculated efficiently instead of by the explicit calculation of Equation (1).
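The recursive DP evaluation can be sketched for the sequence case with the standard gap-weighted sub-sequence kernel of [7]. This is an illustrative implementation of that published recursion, not the authors' exact tree-capable variant; it counts common sub-sequences of length exactly `n`, each weighted by `lam` to the power of the total span of its two occurrences, in $O(n|s||t|)$ time.

```python
def ssk(s, t, n, lam):
    """Gap-weighted sub-sequence kernel K_n(s, t), DP formulation of [7]."""
    ls, lt = len(s), len(t)
    # Kp[i][a][b] = K'_i(s[:a], t[:b]); K'_0 = 1 everywhere.
    Kp = [[[0.0] * (lt + 1) for _ in range(ls + 1)] for _ in range(n)]
    for a in range(ls + 1):
        for b in range(lt + 1):
            Kp[0][a][b] = 1.0
    for i in range(1, n):
        Kpp = [[0.0] * (lt + 1) for _ in range(ls + 1)]   # K''_i
        for a in range(1, ls + 1):
            for b in range(1, lt + 1):
                Kpp[a][b] = lam * Kpp[a][b - 1]
                if s[a - 1] == t[b - 1]:
                    Kpp[a][b] += lam * lam * Kp[i - 1][a - 1][b - 1]
                Kp[i][a][b] = lam * Kp[i][a - 1][b] + Kpp[a][b]
    k = 0.0
    for a in range(1, ls + 1):
        for b in range(1, lt + 1):
            if s[a - 1] == t[b - 1]:
                k += lam * lam * Kp[n - 1][a - 1][b - 1]
    return k

print(ssk("ab", "ab", 2, 0.5))    # 'ab' with no gaps: 0.5**4 = 0.0625
print(ssk("ab", "axb", 2, 0.5))   # one-symbol gap in t: 0.5**5 = 0.03125
```

The gap penalty is visible in the two calls: the same sub-sequence `ab` is down-weighted by an extra factor of `lam` when its occurrence in `t` spans the gap symbol `x`.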
Moreover, when implemented with the DP technique, these kernels can be calculated in $O(n|T^1||T^2|)$ time, where $|T|$ represents the number of nodes in $T$. Note that if the kernel does not use the size restriction, the calculation cost becomes $O(|T^1||T^2|)$.

3 Problems of Applying Convolution Kernels to Real Tasks

According to the original definition of convolution kernels, all of the sub-structures are enumerated and used in the kernel calculation. The number of sub-structures in an input object usually grows exponentially with the object's size. The number of symbols, $|L|$, is generally very large (i.e. more than 10,000), since words are treated as symbols. Moreover, the appearances of sub-structures are highly correlated with those of their own sub-structures. As a result, the dimension of the feature space becomes extremely high, and all kernel values $K(X, Y)$ are very small compared to the kernel value of an object with itself, $K(X, X)$. In this situation, the convolution kernel approach can never be trained effectively, and it behaves like a nearest-neighbor rule; we obtain results that are very precise but have very low recall. The details of this issue are described in [2]. To avoid it, most conventional methods smooth the kernel values or eliminate features based on sub-structure size. For sequence kernels, [1] uses a feature elimination method based on the size $n$ of the sub-sequence: the kernel calculation deals only with those sub-sequences whose length is $n$ or less. Similarly, [2] proposed a method that restricts the features based on sub-tree depth for tree kernels. These methods seem to work well on the surface; however, good results are achieved only when $n$ is very small, i.e. $n = 2$ or $3$.
For example, $n = 3$ showed the best performance for parsing in the experimental results of [2], and $n = 2$ was best for the text classification task in [1]. The main reason for using these kernels is that they allow us to employ structural features simply and efficiently; when only small sub-structures are used (i.e. $n = 2$ or $3$), the full benefits of the kernels are missed. Moreover, these results do not mean that no larger sub-structures are useful. In some cases we already know that certain larger sub-structures can be significant features for solving the target problem. That is, significant larger sub-structures, which the conventional methods cannot deal with efficiently, have the potential to further improve performance. The aim of the work described in this paper is to be able to use any significant sub-structure efficiently, regardless of its size, to better solve NLP tasks.

4 Statistical Feature Mining Method for Sequence and Tree Kernels

This section proposes a new approach to feature selection, based on a statistical significance test, in contrast to the conventional methods, which use sub-structure size. To simplify the discussion, we restrict ourselves hereafter to the two-class (positive and negative) supervised classification problem. In our approach, we test the statistical deviation of every sub-structure in the training samples between its appearances in positive and negative samples, and then select as features only those sub-structures whose deviation exceeds a certain threshold $\tau$. This allows us to select only the statistically significant sub-structures. In this paper, we explain our method using the chi-squared ($\chi^2$) value as the statistical metric; we note, however, that many other types of statistical metrics can be used in our proposed method.

Table 1: Contingency table and notation for the chi-squared value

                c         ¯c          Σ row
    u           O_uc      O_u¯c       O_u
    ¯u          O_¯uc     O_¯u¯c      O_¯u
    Σ column    O_c       O_¯c        N
First, we briefly explain how to calculate the $\chi^2$ value by referring to Table 1. $c$ and $\bar c$ represent the class names, $c$ for the positive class and $\bar c$ for the negative class. $O_{ij}$, where $i \in \{u, \bar u\}$ and $j \in \{c, \bar c\}$, represents the number of samples in each case; $O_{u\bar c}$, for instance, represents the number of samples containing $u$ that appear in class $\bar c$. Let $N$ be the total number of training samples. Since $N$ and $O_c$ are constant over the training samples, $\chi^2$ can be obtained as a function of $O_u$ and $O_{uc}$. The $\chi^2$ value expresses the normalized deviation of the observation from the expectation:

$$chi(O_u, O_{uc}) = \sum_{i \in \{u, \bar u\},\, j \in \{c, \bar c\}} \frac{(O_{ij} - E_{ij})^2}{E_{ij}}, \qquad E_{ij} = N \cdot \frac{O_i}{N} \cdot \frac{O_j}{N},$$

where $E_{ij}$ represents the expectation. We simply write $chi(O_u, O_{uc})$ as $\chi^2(u)$. In the kernel calculation with statistical feature selection, if $\chi^2(u) < \tau$ holds, that is, $u$ is not statistically significant, then $u$ is eliminated from the features and its value is taken to be $0$ in the kernel. Therefore, the sequence and tree kernels with feature selection (SK,TK+FS) can be defined as follows:

$$K_{SK,TK+FS}(T^1, T^2) = \sum_{u \in \{u \mid \tau \le \chi^2(u),\, u \in \mathcal{P}(L_1^n)\}} \; \sum_{t_1 \in sub(T^1)} C_u(t_1)^{\gamma_u(t_1)} \sum_{t_2 \in sub(T^2)} C_u(t_2)^{\gamma_u(t_2)}. \qquad (2)$$

The only difference from the original kernels is the condition on the first summation, $\tau \le \chi^2(u)$. The basic idea of using a statistical metric to select features is quite natural and not in itself novel; what is not obvious is how to calculate the kernels efficiently with such a statistical feature selection, since it is computationally infeasible to calculate $\chi^2(u)$ for all possible $u$ with a naive exhaustive method. In our approach, we take advantage of sub-structure mining algorithms in order to calculate $\chi^2(u)$ efficiently and to embed the statistical feature selection in the kernel calculation. Formally, sub-structure mining finds the complete, duplication-free set of all significant (generally frequent) sub-structures in a dataset.
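Concretely, $\chi^2(u)$ is a function of the four cells of Table 1, and the prefix-based upper bound used later for search-space pruning is built from the same statistic. The sketch below is illustrative (the counts are made up): `chi_squared` fills the 2x2 table from $O_u$, $O_{uc}$, $O_c$ and $N$, and `chi_upper_bound` implements the bound $\chi^2(uv) \le \max(chi(O_{uc}, O_{uc}), chi(O_u - O_{uc}, 0))$.

```python
def chi_squared(O_u, O_uc, O_c, N):
    """chi(O_u, O_uc): chi-squared statistic of sub-structure u from the
    2x2 contingency table of Table 1 (O_c positive samples, N in total)."""
    cells = [[O_uc, O_u - O_uc],                       # row u
             [O_c - O_uc, (N - O_c) - (O_u - O_uc)]]   # row not-u
    rows = [sum(r) for r in cells]
    cols = [cells[0][j] + cells[1][j] for j in (0, 1)]
    chi = 0.0
    for i in (0, 1):
        for j in (0, 1):
            E = rows[i] * cols[j] / N      # expectation E_ij
            if E > 0:
                chi += (cells[i][j] - E) ** 2 / E
    return chi

def chi_upper_bound(O_u, O_uc, O_c, N):
    """Upper bound on chi2(uv) for every super-sequence uv of u,
    computable from the contingency table of the prefix u alone."""
    return max(chi_squared(O_uc, O_uc, O_c, N),
               chi_squared(O_u - O_uc, 0, O_c, N))

# u occurs in 30 of 50 positive and 10 of 50 negative samples:
print(chi_squared(40, 30, 50, 100))     # strongly class-correlated
print(chi_upper_bound(2, 1, 50, 100))   # rare u: no uv can become significant
```

With a significance threshold such as $\tau = 3.84$, the second example would be pruned outright: no extension of that rare sub-structure can ever reach the threshold.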
Specifically, we apply a combination of a sequential pattern mining technique, PrefixSpan [9], and a statistical metric pruning (SMP) method, Apriori SMP [8]. PrefixSpan can substantially reduce the search space when enumerating all significant sub-sequences. Briefly, it finds any sub-sequence $uw$ of size $n$ by searching for a single symbol $w$ in the projected database of the sub-sequence (prefix) $u$ of size $n - 1$. The projected database is a partial database that contains only the postfixes (pointers, in the implementation) of the occurrences of the prefix $u$ in the database. Starting from $n = 1$, PrefixSpan enumerates all the significant sub-sequences by recursive pattern-growth, searching the projected database of prefix $u$ and appending a symbol $w$ to $u$, and prefix-projection, building the projected database of $uw$. Before explaining the algorithm of the proposed kernels, we introduce the upper bound of the $\chi^2$ value. The upper bound of the $\chi^2$ value of a sequence $uv$, the concatenation of sequences $u$ and $v$, can be calculated from the contingency table of the prefix $u$ [8]:

$$\chi^2(uv) \le \hat\chi^2(u) = \max\left( chi(O_{uc}, O_{uc}),\; chi(O_u - O_{uc}, 0) \right).$$

This upper bound indicates that if $\hat\chi^2(u) < \tau$ holds, then no (super-)sequence $uv$ with prefix $u$ can reach the threshold, $\tau \le \chi^2(uv)$. In our context, we can therefore eliminate all (super-)sequences $uv$ from the feature candidates without evaluating them explicitly: using this property in the PrefixSpan algorithm, we avoid evaluating all the super-sequences $uv$ by evaluating the upper bound of the sequence $u$. After finding the number of occurrences of each individual symbol $w$ in the projected database of $u$, we evaluate $uw$ under the following three conditions: (1) $\tau \le \chi^2(uw)$; (2) $\tau > \chi^2(uw)$ and $\tau > \hat\chi^2(uw)$; (3) $\tau > \chi^2(uw)$ and $\tau \le \hat\chi^2(uw)$. With condition (1), the sub-sequence $uw$ is selected as a feature. With condition (2), $uw$ is pruned; that is, all $uwv$ are also pruned from the search space.
With condition (3), $uw$ is not significant itself, but some $uwv$ may be; thus $uw$ is not selected as a feature, but mining continues to $uwv$. Figure 1 shows an example of searching and pruning the sub-sequences to select significant features with the PrefixSpan-with-SMP algorithm.

Figure 1: Example of searching and pruning the sub-sequences by PrefixSpan with the SMP algorithm ($\tau = 1.00$; sub-sequences with $\tau \le \chi^2(u)$ are selected as features, those with $\chi^2(u) < \tau$ and $\hat\chi^2(u) < \tau$ are pruned, and those with $\chi^2(u) < \tau \le \hat\chi^2(u)$ are continued).

The well-known tree mining algorithm [12] cannot simply be applied as a feature selection method for the proposed tree kernels, because it performs a preorder search over trees, while the tree kernels are calculated in postorder. We therefore take advantage of a string (sequence) encoding for trees and handle them with sequence kernels. Figure 2 shows an example of the string encoding for trees under the postorder traversal. The brackets indicate the hierarchical relation between the nodes on their left- and right-hand sides, and we treat these brackets as special symbols during the sequential pattern mining phase.

Figure 2: Example of the string encoding for trees under the postorder traversal.
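For the sequence case, the encoding rule can be written down directly. The sketch below is illustrative (`seq_to_tree` and `encode` are hypothetical helper names): a sequence is viewed as a chain rooted at its last symbol, and each node is emitted in postorder as its children followed by its label, wrapped in brackets.

```python
def seq_to_tree(symbols):
    """A sequence as a tree: rooted at its last symbol, each node having
    exactly one child, namely the preceding symbol."""
    node = (symbols[0], [])          # a node is (label, children)
    for s in symbols[1:]:
        node = (s, [node])
    return node

def encode(tree):
    """Postorder string encoding: children are emitted before the label,
    and brackets mark the hierarchical relation."""
    label, children = tree
    return "(" + "".join(encode(c) + " " for c in children) + label + ")"

print(encode(seq_to_tree(list("abcd"))))   # -> ((((a) b) c) d)
```

The output matches the paper's own example: the sequence `a b c d` becomes `((((a) b) c) d)`.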
Sub-trees are evaluated as identical if and only if their string-encoded sub-sequences are exactly the same, including brackets; for example, `d ) b ) a' and `d b ) a' are different. We noted above that a sequence can be treated as a tree, and sequences are encoded in the same way; for example, the sequence `a b c d' is encoded as `((((a) b) c) d)'. That is, we can define sequence and tree kernels with our feature selection method in the same form.

Sequence and Tree Kernels with Statistical Feature Mining: The sequence and tree kernels with our proposed feature selection method are defined by the following equations:

$$K_{SK,TK+FS}(T^1, T^2; D) = \sum_{1 \le i \le |T^1|} \sum_{1 \le j \le |T^2|} H_n(T^1_i, T^2_j; D). \qquad (3)$$

$D$ represents the training data, and $i$ and $j$ represent indices of nodes in postorder in $T^1$ and $T^2$, respectively. Let $H_n(T^1_i, T^2_j; D)$ be a function that returns the sum of the values of all statistically significant common sub-sequences $u$ if $t^1_i = t^2_j$ and $|u| \le n$:

$$H_n(T^1_i, T^2_j; D) = \sum_{u \in \Gamma_n(T^1_i, T^2_j; D)} J_u(T^1_i, T^2_j; D), \qquad (4)$$

where $\Gamma_n(T^1_i, T^2_j; D)$ represents the set of sub-sequences with $|u| \le n$ that satisfy the above condition (1). Then let $J_u(T^1_i, T^2_j; D)$, $J'_u(T^1_i, T^2_j; D)$ and $J''_u(T^1_i, T^2_j; D)$ be functions that calculate the value of the common sub-sequences between $T^1_i$ and $T^2_j$ recursively:

$$J_{uw}(T^1_i, T^2_j) = \begin{cases} J'_u(T^1_i, T^2_j; D) \cdot I_w(t^1_i, t^2_j) & \text{if } uw \in \hat\Gamma_n(T^1_i, T^2_j; D), \\ 0 & \text{otherwise}, \end{cases} \qquad (5)$$

where $I_w(t^1_i, t^2_j)$ is a function that returns $1$ iff $t^1_i = w$ and $t^2_j = w$, and $0$ otherwise, and $\hat\Gamma_n(T^1_i, T^2_j; D)$ is the set of sub-sequences with $|u| \le n$ that satisfy condition (3). We introduce a special symbol $\Lambda$ to represent an "empty sequence", and define $\Lambda w = w$ and $|\Lambda w| = 1$.

$$J'_u(T^1_i, T^2_j; D) = \begin{cases} 1 & \text{if } u = \Lambda, \\ 0 & \text{if } j = 0 \text{ and } u \ne \Lambda, \\ \lambda J'_u(T^1_i, T^2_{j-1}; D) + J''_u(T^1_i, T^2_{j-1}; D) & \text{otherwise}, \end{cases} \qquad (6)$$

$$J''_u(T^1_i, T^2_j; D) = \begin{cases} 0 & \text{if } i = 0, \\ \lambda J''_u(T^1_{i-1}, T^2_j; D) + J_u(T^1_{i-1}, T^2_j; D) & \text{otherwise}. \end{cases} \qquad (7)$$

The following equations are introduced to select the set of significant sub-sequences:

$$\Gamma_n(T^1_i, T^2_j; D) = \{ u \mid u \in \hat\Gamma_n(T^1_i, T^2_j; D),\ \tau \le \chi^2(u),\ u_{|u|} \in \cap_{i=1}^{|u|-1} ans(u_i) \}. \qquad (8)$$

The condition $u_{|u|} \in \cap_{i=1}^{|u|-1} ans(u_i)$ evaluates whether a sub-sequence $u$ is a complete sub-tree, where $ans(u_i)$ returns the ancestors of the node $u_i$. For example, `d ) b a' is not a complete sub-tree, because the last node `a' is not an ancestor of both `d' and `b'.

$$\hat\Gamma_n(T^1_i, T^2_j; D) = \begin{cases} \Psi_n(\hat\Gamma'_n(T^1_i, T^2_j; D), t^1_i) \cup \{t^1_i\} & \text{if } t^1_i = t^2_j, \\ \emptyset & \text{otherwise}, \end{cases} \qquad (9)$$

where $\Psi_n(F, w) = \{ uw \mid u \in F,\ \tau \le \hat\chi^2(uw),\ |uw| \le n \}$ and $F$ represents a set of sub-sequences. Note that $\Gamma_n(T^1_i, T^2_j; D)$ and $\hat\Gamma_n(T^1_i, T^2_j; D)$ contain only sub-sequences $u$ that satisfy $\tau \le \chi^2(uw)$ and $\tau \le \hat\chi^2(uw)$, respectively, iff $t^1_i = t^2_j$ and $|uw| \le n$; otherwise they become empty sets. The following two equations provide the recursive set operations used to calculate $\Gamma_n(T^1_i, T^2_j; D)$ and $\hat\Gamma_n(T^1_i, T^2_j; D)$:

$$\hat\Gamma'_n(T^1_i, T^2_j; D) = \begin{cases} \emptyset & \text{if } j = 0, \\ \hat\Gamma'_n(T^1_i, T^2_{j-1}; D) \cup \hat\Gamma''_n(T^1_i, T^2_{j-1}; D) & \text{otherwise}, \end{cases} \qquad (10)$$

$$\hat\Gamma''_n(T^1_i, T^2_j; D) = \begin{cases} \emptyset & \text{if } i = 0, \\ \hat\Gamma''_n(T^1_{i-1}, T^2_j; D) \cup \hat\Gamma_n(T^1_{i-1}, T^2_j; D) & \text{otherwise}. \end{cases} \qquad (11)$$

In the implementation, $\chi^2(uw)$ and $\hat\chi^2(uw)$, where $uw$ represents the concatenation of a sequence $u$ and a symbol $w$, can be calculated from the set of pointers of $u$ into the data and the number of occurrences of $w$ after those pointers; the set of pointers of $uw$ is obtained directly from the previous search for $u$. With condition (1), $uw$ is stored in both $\Gamma_n$ and $\hat\Gamma_n$; with condition (3), $uw$ is stored only in $\hat\Gamma_n$. Several implementation techniques make the kernel calculation faster. For example, since $\chi^2(u)$ and $\hat\chi^2(u)$ are constant for the same data, we only have to calculate them once: we store the intermediate search results of PrefixSpan with SMP in a TRIE structure.
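The TRIE caching idea can be sketched as follows. This is a hypothetical simplification: a real implementation would store the PrefixSpan projected databases and both statistics at each node, but the memoization pattern is the same.

```python
class ChiTrie:
    """Sketch of a TRIE cache: statistics computed during the
    PrefixSpan-with-SMP search are stored per sub-sequence and reused."""
    def __init__(self):
        self.children = {}
        self.stats = None            # cached statistic(s) for this prefix

    def lookup(self, seq, compute):
        """Return the statistics for seq, computing them at most once."""
        node = self
        for sym in seq:
            node = node.children.setdefault(sym, ChiTrie())
        if node.stats is None:
            node.stats = compute(seq)
        return node.stats

calls = []
def stat(seq):                       # stand-in for the chi-squared computation
    calls.append(seq)
    return float(len(seq))

trie = ChiTrie()
trie.lookup("ab", stat)
trie.lookup("ab", stat)              # served from the cache
trie.lookup("abc", stat)             # walks through the cached prefix 'ab'
print(len(calls))                    # -> 2: each sub-sequence computed once
```

Because every prefix of a stored sub-sequence is itself a node on the path, repeated kernel lookups of the same sub-sequence, and extensions of it, avoid recomputing the statistics.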
Thereafter, when the kernel encounters the same sub-sequence, we look up these results in the TRIE instead of explicitly calculating $\chi^2(u)$ again. Moreover, when the projected databases are exactly the same, the corresponding sub-sequences can be merged, since the values of $\chi^2(uv)$ and $\hat\chi^2(uv)$ for any postfix $v$ are then exactly the same. We also introduce a `transposed index' for the fast evaluation of $\chi^2(u)$ and $\hat\chi^2(u)$: we only have to look up the index of $w$ to evaluate whether any $uw$ is a significant feature. Equations (4) to (7) can be computed in the same way as the original DP-based kernel calculation, and the recursive set operations of Equations (9) to (11) can be executed alongside Equations (5) to (7). Moreover, calculating $\chi^2(u)$ and $\hat\chi^2(u)$ with the sub-structure mining algorithms keeps the cost in the same order as the DP-based kernel calculation. As a result, statistical feature selection can be embedded in the original DP-based kernel calculation. In the worst case, the time complexity of the proposed method is exponential, since individual sub-structures are enumerated in the sub-structure mining phase. In most cases in our experiments, however, the actual calculation time is even faster than that of the original kernel, since the search-space pruning efficiently removes useless calculations and the implementation techniques sketched above provide practical calculation speed.

Table 2: Experimental results (label accuracy; columns give the sub-structure size restriction n)

Question Classification
            n=2     n=3     n=4     n=∞
    SK+FS   .823    .827    .824    .822
    SK      .808    .818    .808    .797
    TK+FS   .812    .815    .812    .812
    TK      .802    .802    .797    .783
    BOW-K   .754    .792    .790    .778

Subjectivity Detection
            n=2     n=3     n=4     n=∞
    SK+FS   .822    .839    .841    .842
    SK      .823    .824    .809    .772
    TK+FS   .834    .857    .854    .856
    TK      .842    .850    .830    .755
    BOW-K   .717    .729    .715    .649

Polarity Identification
            n=2     n=3     n=4     n=∞
    SK+FS   .824    .838    .839    .839
    SK      .835    .835    .833    .789
    TK+FS   .830    .832    .835    .833
    TK      .828    .827    .820    .745
    BOW-K   .740    .810    .822    .795
We note that if we set $\tau = 0$, so that all features are dealt with in the kernel calculation, we obtain exactly the same kernel value as the original tree kernel.

5 Experiments and Results

We evaluated the performance of the proposed method on real NLP tasks, namely English question classification (EQC), subjectivity detection (SD) and polarity identification (PI). These tasks are defined as text categorization tasks: each maps a given sentence into one of a set of pre-defined classes. We used the data provided by [6] for EQC, which contains about 5500 questions with 50 question types. The SD data was created from Mainichi news articles; its size is 2095 sentences, of which 822 are subjective. The PI data has 5564 sentences, of which 2671 express positive opinions. Using these data, we compared the proposed method (SK+FS and TK+FS) with the conventional methods (SK and TK) discussed in Section 3 and with the bag-of-words (BOW) kernel (BOW-K) [4] as a baseline. We used word sequences as the input objects of the sequence kernels and word dependency trees for the tree kernels. A Support Vector Machine (SVM) was selected as the kernel-based classifier for training and classification, with soft-margin parameter $C = 1000$; for EQC we used the one-vs-rest scheme of SVM for multi-class classification. We evaluated the performance by label accuracy using ten-fold cross validation: eight folds for training, one for development and the remaining one for the test set. The parameters $\lambda$ and $\tau$ were automatically selected on the development set from $\lambda \in \{0.1, 0.3, 0.5, 0.7, 0.9\}$ and $\tau \in \{3.84, 6.63\}$; note that these two values represent the 5% and 1% levels of significance of the $\chi^2$ distribution with one degree of freedom, as used in the $\chi^2$ significance test. Table 2 shows our experimental results, where $n$ indicates the restriction on the sub-structure size and $n = \infty$ means that all possible sub-structures are used.
As shown in the table, SK and TK achieve their maximum performance when $n = 2$ or $3$, and the performance deteriorates considerably once $n$ is $4$ or more. This implies that larger sub-structures degrade classification performance, the same tendency as in the previous studies discussed in Section 3, and is evidence of over-fitting in learning. On the other hand, SK+FS and TK+FS provide consistently better performance than the conventional methods. Moreover, the experiments confirmed one important fact: in some cases, maximum performance was achieved with $n = \infty$. This indicates that certain sub-sequences created from very large structures can be extremely effective. If performance improves with a larger $n$, significant large features do exist; thus, we can improve the performance of some classification problems by dealing with larger sub-structures. Even when optimum performance was not achieved with $n = \infty$, the difference from the performance at a smaller $n$ is quite small compared to that of SK and TK. This indicates that our method is very robust against the sub-structure size.

6 Conclusions

This paper proposed a statistical feature selection method for sequence kernels and tree kernels. Our approach selects significant features automatically, based on a statistical significance test. The proposed method can be embedded in the original DP-based kernel calculation process by using sub-structure mining algorithms. Our experiments demonstrated that our method is superior to conventional methods. Moreover, the results indicate that complex features exist and can be effective; our method can employ them without over-fitting, which yields benefits in terms of both concept and performance.

References
[1] N. Cancedda, E. Gaussier, C. Goutte, and J.-M. Renders. Word-Sequence Kernels. Journal of Machine Learning Research, 3:1059–1082, 2003.
[2] M. Collins and N. Duffy. Convolution kernels for natural language. In Proc.
of Neural Information Processing Systems (NIPS 2001), 2001.
[3] D. Haussler. Convolution kernels on discrete structures. Technical Report UCS-CRL-99-10, UC Santa Cruz, 1999.
[4] T. Joachims. Text Categorization with Support Vector Machines: Learning with Many Relevant Features. In Proc. of the European Conference on Machine Learning (ECML '98), pages 137–142, 1998.
[5] H. Kashima and T. Koyanagi. Kernels for Semi-Structured Data. In Proc. of the 19th International Conference on Machine Learning (ICML 2002), pages 291–298, 2002.
[6] X. Li and D. Roth. Learning Question Classifiers. In Proc. of the 19th International Conference on Computational Linguistics (COLING 2002), pages 556–562, 2002.
[7] H. Lodhi, C. Saunders, J. Shawe-Taylor, N. Cristianini, and C. Watkins. Text Classification Using String Kernels. Journal of Machine Learning Research, 2:419–444, 2002.
[8] S. Morishita and J. Sese. Traversing Itemset Lattices with Statistical Metric Pruning. In Proc. of the ACM SIGACT-SIGMOD-SIGART Symp. on Database Systems (PODS'00), pages 226–236, 2000.
[9] J. Pei, J. Han, B. Mortazavi-Asl, and H. Pinto. PrefixSpan: Mining Sequential Patterns Efficiently by Prefix-Projected Pattern Growth. In Proc. of the 17th International Conference on Data Engineering (ICDE 2001), pages 215–224, 2001.
[10] J. Suzuki, Y. Sasaki, and E. Maeda. Kernels for Structured Natural Language Data. In Proc. of the 17th Annual Conference on Neural Information Processing Systems (NIPS 2003), 2003.
[11] C. Watkins. Dynamic alignment kernels. Technical Report CSD-TR-98-11, Royal Holloway, University of London, Computer Science Department, 1999.
[12] M. J. Zaki. Efficiently Mining Frequent Trees in a Forest. In Proc. of the 8th International Conference on Knowledge Discovery and Data Mining (KDD'02), pages 71–80, 2002.
Faster Rates in Regression via Active Learning Rui Castro Rice University Houston, TX 77005 rcastro@rice.edu Rebecca Willett University of Wisconsin Madison, WI 53706 willett@cae.wisc.edu Robert Nowak University of Wisconsin Madison, WI 53706 nowak@engr.wisc.edu Abstract This paper presents a rigorous statistical analysis characterizing regimes in which active learning significantly outperforms classical passive learning. Active learning algorithms are able to make queries or select sample locations in an online fashion, depending on the results of the previous queries. In some regimes, this extra flexibility leads to significantly faster rates of error decay than those possible in classical passive learning settings. The nature of these regimes is explored by studying fundamental performance limits of active and passive learning in two illustrative nonparametric function classes. In addition to examining the theoretical potential of active learning, this paper describes a practical algorithm capable of exploiting the extra flexibility of the active setting and provably improving upon the classical passive techniques. Our active learning theory and methods show promise in a number of applications, including field estimation using wireless sensor networks and fault line detection. 1 Introduction In this paper we address the theoretical capabilities of active learning for estimating functions in noise. Several empirical and theoretical studies have shown that selecting samples or making strategic queries in order to learn a target function/classifier can outperform commonly used passive methods based on random or deterministic sampling [1–5]. 
There are essentially two different scenarios in active learning: (i) selective sampling, where we are presented a pool of examples (possibly very large), and for each of these we can decide whether to collect a label associated with it, the goal being learning with the least amount of carefully selected labels [3]; (ii) adaptive sampling, where one chooses an experiment/sample location based on previous observations [4,6]. We consider adaptive sampling in this paper. Most previous analytical work in active learning regimes deals with very stringent conditions, like the ability to make perfect or nearly perfect decisions at every stage in the sampling procedure. Our working scenario is significantly less restrictive, and based on assumptions that are more reasonable for a broad range of practical applications. We investigate the problem of nonparametric function regression, where the goal is to estimate a function from noisy point-wise samples. In the classical (passive) setting the sampling locations are chosen a priori, meaning that the selection of the sample locations precedes the gathering of the function observations. In the active sampling setting, however, the sample locations are chosen in an online fashion: the decision of where to sample next depends on all the observations made previously, in the spirit of the “Twenty Questions” game (in passive sampling all the questions need to be asked before any answers are given). The extra degree of flexibility garnered through active learning can lead to significantly better function estimates than those possible using classical (passive) methods. However, there are very few analytical methodologies for these Twenty Questions problems when the answers are not entirely reliable (see for example [6–8]); this precludes performance guarantees and limits the applicability of many such methods. 
To address this critical issue, in this paper we answer several pertinent questions regarding the fundamental performance limits of active learning in the context of regression under noisy conditions. Significantly faster rates of convergence are generally achievable in cases involving functions whose complexity (in the Kolmogorov sense) is highly concentrated in small regions of space (e.g., functions that are smoothly varying apart from highly localized abrupt changes such as jumps or edges). We illustrate this by characterizing the fundamental limits of active learning for two broad nonparametric function classes which map [0,1]^d onto the real line: (i) Hölder smooth functions (spatially homogeneous complexity) and (ii) piecewise constant functions that are constant except on a (d−1)-dimensional boundary set or discontinuity embedded in the d-dimensional function domain (spatially concentrated complexity). The main result of this paper is two-fold. First, when the complexity of the function is spatially homogeneous, passive learning algorithms are near-minimax optimal over all estimation methods and all (active or passive) learning schemes, indicating that active learning methods cannot provide faster rates of convergence in this regime. Second, for piecewise constant functions, active learning methods can capitalize on the highly localized nature of the boundary by focusing the sampling process in the estimated vicinity of the boundary. We present an algorithm that provably improves on the best possible passive learning algorithm and achieves faster rates of error convergence. Furthermore, we show that this performance cannot be significantly improved on by any other active learning method (in a minimax sense). Earlier work focused on one-dimensional problems [6, 7], and on very specialized multidimensional problems that can be reduced to a series of one-dimensional problems [8].
Unfortunately these techniques cannot be extended to more general piecewise constant/smooth models, and to the best of our knowledge our work is the first addressing active learning in this class of models. Our active learning theory and methods show promise for a number of problems. In particular, in imaging techniques such as laser scanning it is possible to adaptively vary the scanning process. Using active learning in this context can significantly reduce image acquisition times. Wireless sensor networks constitute another key application area. Because of necessarily small batteries, it is desirable to limit the number of measurements collected as much as possible. Incorporating active learning strategies into such systems can dramatically lengthen the lifetime of the system. In fact, active learning problems like the one we pose in Section 4 have already found application in fault line detection [7] and boundary estimation in wireless sensor networking [9]. 2 Problem Statement Our goal is to estimate f : [0,1]^d → R from a finite number of noise-corrupted samples. We consider two different scenarios: (a) passive learning, where the location of the sample points is chosen statistically independently of the measurement outcomes; and (b) active learning, where the location of the i-th sample point can be chosen as a function of the sample points and observations collected up to that instant. The statistical model we consider builds on the following assumptions: (A1) The observations {Y_i}_{i=1}^n are given by Y_i = f(X_i) + W_i, i ∈ {1, ..., n}. (A2) The random variables W_i are Gaussian with zero mean and variance σ². These are independent and identically distributed (i.i.d.) and independent of {X_i}_{i=1}^n. (A3.1) Passive Learning: The sample locations X_i ∈ [0,1]^d are either deterministic or random, but independent of {Y_j}_{j≠i}. They do not depend in any way on f. (A3.2) Active Learning: The sample locations X_i are random, and depend only on {X_j, Y_j}_{j=1}^{i−1}.
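The observation model (A1)–(A2) with passive uniform sampling (A3.1) is easy to simulate. The sketch below is illustrative only: the function `passive_sample` and its argument names are our own, not the paper's, and it simply draws the noisy samples the estimators of Sections 3 and 4 would consume.

```python
import numpy as np

def passive_sample(f, n, sigma, d=1, rng=None):
    """Draw n passive samples Y_i = f(X_i) + W_i, with X_i i.i.d. uniform on
    [0,1]^d and W_i i.i.d. N(0, sigma^2), per assumptions (A1)-(A3.1)."""
    rng = np.random.default_rng(rng)
    X = rng.uniform(0.0, 1.0, size=(n, d))
    W = rng.normal(0.0, sigma, size=n)
    Y = np.array([f(x) for x in X]) + W
    return X, Y

# Example: noisy samples of a 1-D piecewise constant "step" function
f = lambda x: 1.0 if x[0] > 0.5 else -1.0
X, Y = passive_sample(f, n=100, sigma=0.1, rng=0)
```

Under the active model (A3.2), the location of the i-th sample would instead be chosen as a function of (X_1, Y_1), ..., (X_{i−1}, Y_{i−1}).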
In other words the sample locations X_i have only a causal dependency on the system variables {X_i, Y_i}. Finally, given {X_j, Y_j}_{j=1}^{i−1} the random variable X_i does not depend in any way on f. Let f̂_n : [0,1]^d → R denote an estimator based on the training samples {X_i, Y_i}_{i=1}^n. When constructing an estimator under the active learning paradigm there is another degree of freedom: we are allowed to choose our sampling strategy, that is, we can specify X_i | X_1, ..., X_{i−1}, Y_1, ..., Y_{i−1}. We will denote the sampling strategy by S_n. The pair (f̂_n, S_n) is called the estimation strategy. Our goal is to construct estimation strategies which minimize the expected squared error, E_{f,S_n}[‖f̂_n − f‖²], where E_{f,S_n} is the expectation with respect to the probability measure of {X_i, Y_i}_{i=1}^n induced by model f and sampling strategy S_n, and ‖·‖ is the usual L_2 norm. 3 Learning in Classical Smoothness Spaces In this section we consider classes of functions whose complexity is homogeneous over the entire domain, so that there are no localized features, as in Figure 1(a). In this case we do not expect the extra flexibility of the active learning strategies to provide any substantial benefit over passive sampling strategies, since a simple uniform sampling scheme is naturally matched to the homogeneous "distribution" of the target function's complexity. To exemplify this consider the Hölder smooth function class: a function f : [0,1]^d → R is Hölder smooth if it has continuous partial derivatives up to order k = ⌊α⌋¹ and ∀z, x ∈ [0,1]^d : |f(z) − P_x(z)| ≤ L‖z − x‖^α, where L, α > 0, and P_x(·) denotes the order-k Taylor polynomial approximation of f expanded around x. Denote this class of functions by Σ(L, α). Functions in Σ(L, α) are essentially C^α functions when α ∈ N. The first of our two main results is a minimax lower bound on the performance of all active estimation strategies for this class of functions. Theorem 1.
Under the requirements of the active learning model we have the minimax bound

inf_{(f̂_n,S_n) ∈ Θ_active} sup_{f ∈ Σ(L,α)} E_{f,S_n}[‖f̂_n − f‖²] ≥ c n^{−2α/(2α+d)},   (1)

where c ≡ c(L, α, σ²) > 0 and Θ_active is the set of all active estimation strategies (which also includes passive strategies). Note that the rate in Theorem 1 is the same as the classical passive learning rate [10, 11], but the class of estimation strategies allowed is now much bigger. The proof of Theorem 1 is presented in our technical report [12] and uses standard tools of minimax analysis, such as Assouad's Lemma. The key idea of the proof is to reduce the problem of estimating a function in Σ(L, α) to the problem of deciding among a finite number of hypotheses. The key aspects of the proof for the passive setting [13] apply to the active scenario due to the fact that we can choose an adequate set of hypotheses without knowledge of the sampling strategy, although some modifications are required due to the extra flexibility of the sampling strategy. There are various practical estimators achieving the performance predicted by Theorem 1, including some based on kernels, splines or wavelets [13]. ¹k = ⌊α⌋ is the maximal integer such that k < α. 4 The Active Advantage In this section we address two key questions: (i) when does active learning provably yield better results, and (ii) what are the fundamental limitations of active learning? These are difficult questions to answer in general. We expect that, for functions whose complexity is spatially non-uniform and highly concentrated in small subsets of the domain, the extra spatial adaptivity of the active learning paradigm can lead to significant performance gains. We study a class of functions which highlights this notion of "spatially concentrated complexity". Although this is a canonical example and a relatively simple function class, it is general enough to provide insights into methodologies for broader classes.
A function f : [0,1]^d → R is called piecewise constant if it is locally constant² at every point x ∈ [0,1]^d \ B(f), where B(f) ⊂ [0,1]^d, the boundary set, has upper box-counting dimension at most d − 1. Furthermore let f be uniformly bounded on [0,1]^d (that is, |f(x)| ≤ M, ∀x ∈ [0,1]^d) and let B(f) satisfy N(r) ≤ β r^{−(d−1)} for all r > 0, where β > 0 is a constant and N(r) is the minimal number of closed balls of diameter r that covers B(f). The set of all piecewise constant functions f satisfying the above conditions is denoted by PC(β, M). The conditions above mean that (a) the functions are constant except along (d−1)-dimensional "boundaries" where they are discontinuous and (b) the boundaries between the various constant regions are (d−1)-dimensional non-fractal sets. If the boundaries B(f) are smooth then β is an approximate bound on their total (d−1)-dimensional volume (e.g., the length if d = 2). An example of such a function is depicted in Figure 1(b). The class PC(β, M) has the main ingredients that make active learning appealing: a function f is "well-behaved" everywhere on the unit square, except on a small subset B(f). We will see that the critical task for any good estimator is to accurately find the location of the boundary B(f). 4.1 Passive Learning Framework To obtain minimax lower bounds for PC(β, M) we consider a smaller class of functions, namely the boundary fragment class studied in [11]. Let g : [0,1]^{d−1} → [0,1] be a Lipschitz function with graph in [0,1]^d, that is, |g(x) − g(z)| ≤ ‖x − z‖ and 0 ≤ g(x) ≤ 1 for all x, z ∈ [0,1]^{d−1}. Define G = {(x, y) : 0 ≤ y ≤ g(x), x ∈ [0,1]^{d−1}}. Finally define f : [0,1]^d → R by f(x) = 2M·1_G(x) − M. The class of all the functions of this form is called the boundary fragment class (usually M = 1), denoted by BF(M). Note that there are only two regions, and the boundary separating those is a function of the first d − 1 variables.
It is straightforward to show that BF(M) ⊆ PC(β, M) for a suitable constant β; therefore a minimax lower bound for the boundary fragment class is trivially a lower bound for the piecewise constant class. From the results in [11] we have

inf_{(f̂_n,S_n) ∈ Θ_passive} sup_{f ∈ PC(β,M)} E_{f,S_n}[‖f̂_n − f‖²] ≥ c n^{−1/d},   (2)

where c ≡ c(β, M, σ²) > 0. There exist practical passive learning strategies that are near-minimax optimal. For example, tree-structured estimators based on Recursive Dyadic Partitions (RDPs) are capable of nearly attaining the minimax rate above [14].

²A function f : [0,1]^d → R is locally constant at a point x ∈ [0,1]^d if ∃ε > 0 : ∀y ∈ [0,1]^d : ‖x − y‖ < ε ⇒ f(y) = f(x).

Figure 1: Examples of functions in the classes considered: (a) Hölder smooth function. (b) Piecewise constant function.

These estimators are constructed as follows: (i) Divide [0,1]^d into 2^d equal-sized hypercubes. (ii) Repeat this process again on each hypercube. Repeating this process log₂ m times gives rise to a partition of the unit hypercube into m^d hypercubes of identical size. This process can be represented as a 2^d-ary tree structure (where a leaf of the tree corresponds to a partition cell). Pruning this tree gives rise to an RDP with non-uniform resolution. Let Π denote the class of all possible pruned RDPs. The estimators we consider are constructed by decorating the elements of a partition with constants. Let π be an RDP; the estimators built over this RDP have the form f̃^{(π)}(x) ≡ Σ_{A∈π} c_A 1{x ∈ A}. Since the location of the boundary is a priori unknown it is natural to distribute the sample points uniformly over the unit cube. There are various ways of doing this; for example, the points can be placed deterministically over a lattice, or randomly sampled from a uniform distribution. We will use the latter strategy. Assume that {X_i}_{i=1}^n are i.i.d. uniform over [0,1]^d.
Define the complexity regularized estimator as

f̂_n ≡ arg min_{f̃^{(π)} : π ∈ Π} { (1/n) Σ_{i=1}^n (f̃^{(π)}(X_i) − Y_i)² + λ (log n / n) |π| },   (3)

where |π| denotes the number of elements of π and λ > 0. The above optimization can be solved efficiently in O(n) operations using a bottom-up tree pruning algorithm [14]. The performance of the estimator in (3) can be assessed using bounding techniques in the spirit of [14, 15]. From that analysis we conclude that

sup_{f ∈ PC(β,M)} E_f[‖f̂_n − f‖²] ≤ C (n/log n)^{−1/d},   (4)

where C ≡ C(β, M, σ²) > 0. This shows that, up to a logarithmic factor, the rate in (2) is the optimal rate of convergence for passive strategies. A complete derivation of the above result is available in [12]. 4.2 Active Learning Framework We now turn our attention to the active learning scenario. In [8] this was studied for the boundary fragment class. From that work, and noting again that BF(M) ⊆ PC(β, M), we have, for d ≥ 2,

inf_{(f̂_n,S_n) ∈ Θ_active} sup_{f ∈ PC(β,M)} E_{f,S_n}[‖f̂_n − f‖²] ≥ c n^{−1/(d−1)},   (5)

where c ≡ c(M, σ²) > 0. In contrast with (2), we observe that with active learning we have a potential performance gain over passive strategies, effectively equivalent to a dimensionality reduction. Essentially the exponent in (5) now depends on the dimension of the boundary set, d − 1, instead of the dimension of the entire domain, d. In [11] an algorithm capable of achieving the above rate for the boundary fragment class is presented, but this algorithm takes advantage of the very special functional form of the boundary fragment functions. The algorithm begins by dividing the unit hypercube into "strips" and performing a one-dimensional change-point estimation in each of the strips. This change-point detection can be performed extremely accurately using active learning, as shown in the pioneering work of Burnashev and Zigangirov [6]. Unfortunately, the boundary fragment class is very restrictive and impractical for most applications.
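A one-dimensional caricature of the complexity-regularized estimator and its bottom-up pruning may help. This is our own minimal sketch (the function name, λ value, and d = 1 restriction are our assumptions, not the authors' implementation): each recursive call compares the penalized cost of fitting a constant on a dyadic cell against the summed cost of its two children, which is the pruning trade-off in (3).

```python
import numpy as np

def rdp_estimate(X, Y, max_depth, lam):
    """1-D sketch of the complexity-regularized RDP estimator of Eq. (3):
    bottom-up pruning chooses, per dyadic cell, between a constant leaf fit
    and keeping its two children, trading empirical squared error against a
    penalty of lam*log(n)/n per leaf.  Returns a list of (lo, hi, constant)."""
    n = len(Y)
    pen = lam * np.log(n) / n

    def fit(lo, hi, depth):
        mask = (X >= lo) & (X < hi)
        if not mask.any():
            return pen, [(lo, hi, 0.0)]          # empty cell: arbitrary constant
        c = Y[mask].mean()
        leaf_cost = ((Y[mask] - c) ** 2).sum() / n + pen
        if depth == max_depth:
            return leaf_cost, [(lo, hi, c)]
        mid = (lo + hi) / 2
        cl, pl = fit(lo, mid, depth + 1)
        cr, pr = fit(mid, hi, depth + 1)
        if cl + cr < leaf_cost:                   # keep the split (pruned RDP)
            return cl + cr, pl + pr
        return leaf_cost, [(lo, hi, c)]

    _, leaves = fit(0.0, 1.0, 0)
    return leaves
```

On data from a step function, the pruning typically keeps a split at the discontinuity and collapses the constant regions into single leaves.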
Recall that boundary fragments consist of only two regions, separated by a boundary that is a function of the first d − 1 coordinates. The class PC(β, M) is much larger and more general, and the algorithmic ideas that work for boundary fragments can no longer be used. A completely different approach is required, using radically different tools. We now propose an active learning scheme for the piecewise constant class. The proposed scheme is a two-step approach based in part on the tree-structured estimators described above for passive learning. In the first step, called the preview step, a rough estimator of f is constructed using n/2 samples (assume for simplicity that n is even), distributed uniformly over [0,1]^d. In the second step, called the refinement step, we select n/2 samples near the perceived locations of the boundaries (estimated in the preview step) separating constant regions. At the end of this process we will have half the samples concentrated in the vicinity of the boundary set B(f). Since accurately estimating f near B(f) is key to obtaining faster rates, the strategy described seems quite sensible. However, it is critical that the preview step is able to detect the boundary with very high probability. If part of the boundary is missed, then the error incurred is going to propagate into the final estimate, ultimately degrading the performance. Therefore extreme care must be taken to detect the boundary in the preview step, as described below. Preview: The goal of this stage is to provide a coarse estimate of the location of B(f). Specifically, collect n′ ≡ n/2 samples at points distributed uniformly over [0,1]^d. Next proceed by using the passive learning algorithm described before, but restrict the estimator to RDPs with leaves at a maximum depth of J = ((d−1)/((d−1)² + d)) log(n′/log n′).
This ensures that, on average, every element of the RDP contains many sample points; therefore we obtain a low-variance estimate, although the estimator bias is going to be large. In other words, we obtain a very "stable" coarse estimate of f, where stable means that the estimator does not change much for different realizations of the data. The above strategy ensures that, most of the time, leaves that intersect the boundary are at the maximum allowed depth (because otherwise the estimator would incur too much empirical error) and leaves away from the boundary are at shallower depths. Therefore we can "detect" the rough location of the boundary just by looking at the deepest leaves. Unfortunately, if the set B(f) is somewhat aligned with the dyadic splits of the RDP, leaves intersecting the boundary can be pruned without incurring a large error. This is illustrated in Figure 2(b); the cell with the arrow was pruned and contains a piece of the boundary, but the error incurred by pruning is small since that region is mostly a constant region. However, worst-case analysis reveals that the squared bias induced by these small volumes can add up, precluding the desired rates. A way of mitigating this issue is to consider multiple RDP-based estimators, each one using RDPs appropriately shifted. We use d + 1 estimators in the preview step: one on the initial uniform partition, and d over partitions whose dyadic splits have been translated by 2^{−J} in each one of the d coordinates. Any leaf that is at the maximum depth of any of the d + 1 RDPs pruned in the preview step indicates the highly probable presence of a boundary, and will be refined in the next stage. Refinement: With high probability, the boundary is contained in the leaves at the maximum depth.
In the refinement step we collect an additional n/2 samples in the corresponding partition cells, using these to obtain a refined estimate of the function f by again applying an RDP-based estimator.

Figure 2: The two-step procedure for d = 2: (a) Initial unpruned RDP and n/2 samples. (b) Preview step RDP. Note that the cell with the arrow was pruned, but it contains a part of the boundary. (c) Additional sampling for the refinement step. (d) Refinement step.

This produces a higher resolution estimate in the vicinity of the boundary set B(f), yielding better performance than the passive learning technique. To formally show that this algorithm attains the faster rates we desire we have to consider a further technical assumption, namely that the boundary set is "cusp-free"³. This condition is rather technical, but it is not very restrictive, and encompasses many interesting situations, including of course boundary fragments. For a more detailed explanation see [12]. Under this condition we have the following: Theorem 2. Under the active learning scenario we have, for d ≥ 2 and functions f whose boundary is cusp-free,

E[‖f̂_n − f‖²] ≤ C (n/log n)^{−1/(d−1+1/d)},   (6)

where C > 0. This bound improves on (4), demonstrating that this technique performs better than the best possible passive learning estimator. The proof of Theorem 2 is quite involved and is presented in detail in [12]. The main idea behind the proof is to decompose the error of the estimator into three different cases: (i) the error incurred during the preview stage in regions "away" from the boundary; (ii) the error incurred by not detecting a piece of the boundary (and therefore not performing the refinement step in that area); (iii) the error remaining in the refinement region at the end of the process. By restricting the maximum depth of the trees in the preview stage we can control the type-(i) error, ensuring that it does not exceed the error rate in (6).
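To make the preview/refinement idea concrete, here is a deliberately simplified 1-D sketch. It replaces the pruned, shifted RDP detectors with a crude bin-mean differencing rule (our assumption, chosen for brevity), but it preserves the essentials: half the budget goes to a uniform preview, and the other half is concentrated in the cells where a boundary was detected. All names are hypothetical.

```python
import numpy as np

def two_step_sample(f, n, sigma, n_bins=8, thresh=0.5, rng=None):
    """1-D caricature of the preview/refinement strategy.
    Preview: n/2 uniform samples give coarse bin means; bins whose mean
    jumps relative to a neighbour are flagged as boundary candidates.
    Refinement: the remaining n/2 samples are drawn only in flagged bins.
    (Bin-mean differencing stands in for the paper's shifted-RDP detector.)"""
    rng = np.random.default_rng(rng)
    Xp = rng.uniform(0, 1, n // 2)
    Yp = np.vectorize(f)(Xp) + rng.normal(0, sigma, n // 2)
    edges = np.linspace(0, 1, n_bins + 1)
    means = np.array([Yp[(Xp >= a) & (Xp < b)].mean()
                      for a, b in zip(edges, edges[1:])])
    flagged = set()
    for i, jump in enumerate(np.abs(np.diff(means)) > thresh):
        if jump:
            flagged |= {i, i + 1}               # both bins around a jump
    if not flagged:
        return Xp, flagged, np.array([])
    per_bin = (n // 2) // len(flagged)          # split refinement budget evenly
    Xr = np.concatenate([rng.uniform(edges[i], edges[i + 1], per_bin)
                         for i in sorted(flagged)])
    return Xp, flagged, Xr
```

For a step function with its jump at 0.5, the flagged bins bracket the discontinuity, so the refinement samples land in a neighbourhood of the boundary, mimicking the concentration of samples near B(f).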
Type-(ii) error corresponds to the situations when a part of the boundary was not detected in the preview step. This can happen because of the inherent randomness of the noise and sampling distribution, or because the boundary is somewhat aligned with the dyadic splits. The latter can be a problem, and this is why one needs to perform d + 1 preview estimates over shifted partitions. If the boundary is cusp-free then it is guaranteed that one of those preview estimators is going to "feel" the boundary, since it is not aligned with the corresponding partition. Finally, the type-(iii) error is very easy to analyze, using the same techniques we used for the passive estimator. A couple of remarks are important at this point. Instead of a two-step procedure one can reiterate this idea, performing multiple steps (e.g., for a three-step approach replace the refinement step with the two-step approach described above). Doing so can further improve the performance. One can show that the expected error will decay like n^{−1/(d−1+ε)}, with ε > 0, given a sufficiently large number of steps. Therefore we can get rates arbitrarily close to the lower bound rates in (5). ³A cusp-free boundary cannot have the behavior observed in the graph of |x|^{1/2} at the origin. Less "aggressive" kinks are allowed, such as in the graph of |x|. 5 Final Remarks The results presented in this paper show that in certain scenarios active learning attains provable gains over the classical passive approaches. Active learning is an intuitively appealing idea and may find application in many practical problems. Despite this appeal, the analysis of such active methods is quite challenging due to the loss of statistical independence in the observations (recall that now the sample locations are coupled with all the observations made in the past).
The two function classes presented are non-trivial canonical examples illustrating under what conditions one might expect active learning to improve rates of convergence. The algorithm presented here for actively learning members of the piecewise constant class demonstrates the possibilities of active learning. In fact, this algorithm has already been applied in the context of field estimation using wireless sensor networks [9]. Future work includes the further development of the ideas presented here to the context of binary classification and active learning of the Bayes decision boundary. References [1] D. Cohn, Z. Ghahramani, and M. Jordan, “Active learning with statistical models,” Journal of Artificial Intelligence Research, pp. 129–145, 1996. [2] D. J. C. Mackay, “Information-based objective functions for active data selection,” Neural Computation, vol. 4, pp. 698–714, 1991. [3] Y. Freund, H. S. Seung, E. Shamir, and N. Tishby, “Information, prediction, and query by committee,” Proc. Advances in Neural Information Processing Systems, 1993. [4] K. Sung and P. Niyogi, “Active learning for function approximation,” Proc. Advances in Neural Information Processing Systems, vol. 7, 1995. [5] G. Blanchard and D. Geman, “Hierarchical testing designs for pattern recognition,” to appear in Annals of Statistics, 2005. [6] M. V. Burnashev and K. Sh. Zigangirov, “An interval estimation problem for controlled observations,” Problems in Information Transmission, vol. 10, pp. 223–231, 1974. [7] P. Hall and I. Molchanov, “Sequential methods for design-adaptive estimation of discontinuities in regression curves and surfaces,” The Annals of Statistics, vol. 31, no. 3, pp. 921–941, 2003. [8] Alexander Korostelev, “On minimax rates of convergence in image models under sequential design,” Statistics & Probability Letters, vol. 43, pp. 369–375, 1999. [9] R. Willett, A. Martin, and R. Nowak, “Backcasting: Adaptive sampling for sensor networks,” in Proc. 
Information Processing in Sensor Networks, 26-27 April, Berkeley, CA, USA, 2004. [10] Charles J. Stone, “Optimal rates of convergence for nonparametric estimators,” The Annals of Statistics, vol. 8, no. 6, pp. 1348–1360, 1980. [11] A.P. Korostelev and A.B. Tsybakov, Minimax Theory of Image Reconstruction, Springer Lecture Notes in Statistics, 1993. [12] R. Castro, R. Willett, and R. Nowak, “Fast rates in regression via active learning,” Tech. Rep., University of Wisconsin, Madison, June 2005, ECE-05-3 Technical Report (available at http://homepages.cae.wisc.edu/ rcastro/ECE-05-3.pdf). [13] Alexandre B. Tsybakov, Introduction `a l’estimation non-param´etrique, Math´ematiques et Applications, 41. Springer, 2004. [14] R. Nowak, U. Mitra, and R. Willett, “Estimating inhomogeneous fields using wireless sensor networks,” IEEE Journal on Selected Areas in Communication, vol. 22, no. 6, pp. 999–1006, 2004. [15] Andrew R. Barron, “Complexity regularization with application to artificial neural networks,” in Nonparametric Functional Estimation and Related Topics. 1991, pp. 561–576, Kluwer Academic Publishers.
|
2005
|
69
|
2,887
|
Learning vehicular dynamics, with application to modeling helicopters Pieter Abbeel Computer Science Dept. Stanford University Stanford, CA 94305 Varun Ganapathi Computer Science Dept. Stanford University Stanford, CA 94305 Andrew Y. Ng Computer Science Dept. Stanford University Stanford, CA 94305 Abstract We consider the problem of modeling a helicopter's dynamics based on state-action trajectories collected from it. The contribution of this paper is two-fold. First, we consider linear models such as those learned by CIFER (the industry standard in helicopter identification), and show that the linear parameterization makes certain properties of dynamical systems, such as inertia, fundamentally difficult to capture. We propose an alternative, acceleration-based parameterization that does not suffer from this deficiency, and that can be learned as efficiently from data. Second, a Markov decision process model of a helicopter's dynamics would explicitly model only the one-step transitions, but we are often interested in a model's predictive performance over longer timescales. In this paper, we present an efficient algorithm for (approximately) minimizing the prediction error over long time scales. We present empirical results on two different helicopters. Although this work was motivated by the problem of modeling helicopters, the ideas presented here are general, and can be applied to modeling large classes of vehicular dynamics. 1 Introduction In the last few years, considerable progress has been made in finding good controllers for helicopters [7, 9, 2, 4, 3, 8]. In designing helicopter controllers, one typically begins by constructing a model for the helicopter's dynamics, and then uses that model to design a controller. In our experience, after constructing a simulator (model) of our helicopters, policy search [7] almost always learns to fly (hover) very well in simulation, but may perform less well on the real-life helicopter.
These differences between simulation and real-life performance can therefore be directly attributed to errors in the simulator (model) of the helicopter, and building accurate helicopter models remains a key technical challenge in autonomous flight. Modeling dynamical systems (also referred to as system identification) is one of the most basic and important problems in control. With an emphasis on helicopter aerodynamics, in this paper we consider the problem of learning good dynamical models of vehicles. Helicopter aerodynamics are, to date, somewhat poorly understood, and (unlike most fixed-wing aircraft) no textbook models will accurately predict the dynamics of a helicopter from only its dimensions and specifications. [5, 10] Thus, at least part of the dynamics must be learned from data. CIFER® (Comprehensive Identification from Frequency Responses) is the industry standard for learning helicopter (and other rotorcraft) models from data. [11, 6] CIFER uses frequency response methods to identify a linear model. The models obtained from CIFER fail to capture some important aspects of the helicopter dynamics, such as the effects of inertia. Consider a setting in which the helicopter is flying forward, and suddenly turns sideways. Due to inertia, the helicopter will continue to travel in the same direction as before, so that it has "sideslip," meaning that its orientation is not aligned with its direction of motion. This is a non-linear effect that depends both on velocity and angular rates. The linear CIFER model is unable to capture this. In fact, the models used in [2, 8, 6] all suffer from this problem. The core of the problem is that the naive body-coordinate representation used in all these settings makes it fundamentally difficult for the learning algorithm to capture certain properties of dynamical systems such as inertia and gravity. As such, one places a significantly heavier burden than is necessary on the learning algorithm.
In Section 4, we propose an alternative parameterization for modeling dynamical systems that does not suffer from this deficiency. Our approach can be viewed as a hybrid of physical knowledge and learning. Although helicopter dynamics are not fully understood, there are also many properties—such as the direction and magnitude of acceleration due to gravity; the effects of inertia; symmetry properties of the dynamical system; and so on—which apply to all dynamical systems, and which are well-understood. All of this can therefore be encoded as prior knowledge, and there is little need to demand that our learning algorithms learn them. It is not immediately obvious how such prior knowledge can be encoded into a complex learning algorithm, but we will describe an acceleration-based parameterization in which this can be done. Given any model class, we can choose the parameter learning criterion used to learn a model within the class. CIFER finds the parameters that minimize a frequency domain error criterion. Alternatively, we can minimize the squared one-step prediction error in the time domain. Forward simulation on a held-out test set is a standard way to assess model quality, and we use it to compare the linear models learned using CIFER to the same linear models learned by optimizing the one-step prediction error. As suggested in [1], one can also learn parameters so as to optimize a "lagged criterion" that directly measures simulation accuracy—i.e., predictive accuracy of the model over long time scales. However, the EM algorithm given in [1] is expensive when applied in a continuous state-space setting. In this paper, we present an efficient algorithm that approximately optimizes the lagged criterion. Our experiments show that the resulting model consistently outperforms the linear models trained using CIFER or using the one-step error criterion. Combining this with the acceleration-based parameterization results in our best helicopter model.
2 Helicopter state, input and dynamics The helicopter state s comprises its position (x, y, z), orientation (roll φ, pitch θ, yaw ω), velocity (ẋ, ẏ, ż) and angular velocity (φ̇, θ̇, ω̇). The helicopter is controlled via a 4-dimensional action space: 1. u1 and u2: The longitudinal (front-back) and latitudinal (left-right) cyclic pitch controls cause the helicopter to pitch forward/backward or sideways, and can thereby also affect acceleration in the longitudinal and latitudinal directions. 2. u3: The tail rotor collective pitch control affects tail rotor thrust, and can be used to yaw (turn) the helicopter. 3. u4: The main rotor collective pitch control affects the pitch angle of the main rotor's blades, by rotating the blades around an axis that runs along the length of the blade. As the main rotor blades sweep through the air, the resulting amount of upward thrust (generally) increases with this pitch angle; thus this control affects the main rotor's thrust. Following standard practice in system identification ([8, 6]), the original 12-dimensional helicopter state is reduced to an 8-dimensional state represented in body (or robot-centric) coordinates s^b = (φ, θ, ẋ, ẏ, ż, φ̇, θ̇, ω̇). Where there is risk of confusion, we will use superscripts s and b to distinguish between spatial (world) coordinates and body coordinates. The body coordinate representation specifies the helicopter state using a coordinate frame in which the x, y, and z axes point forwards, sideways, and down relative to the current orientation of the helicopter, instead of north, east and down. Thus, ẋ^b is the forward velocity, whereas ẋ^s is the velocity in the northern direction. (φ and θ are always expressed in world coordinates, because roll and pitch relative to the body coordinate frame are always zero.)
By using a body coordinate representation, we encode into our model certain "symmetries" of helicopter flight, such as the fact that the helicopter's dynamics are the same regardless of its absolute position (x, y, z) and heading ω (assuming the absence of obstacles). Even in the reduced coordinate representation, only a subset of the state variables needs to be modeled explicitly using learning. Given a model that predicts only the angular velocities $(\dot\phi, \dot\theta, \dot\omega)$, we can numerically integrate to obtain the orientation (φ, θ, ω). Similarly, we can integrate the reduced body-coordinate states to obtain the complete world-coordinate states. Integrating body-coordinate angular velocities to obtain world-coordinate angles is nonlinear, so the model resulting from this process is necessarily nonlinear.

3 Linear model

The linear model we learn with CIFER has the following form:

$$\dot\phi^b_{t+1} - \dot\phi^b_t = \left( C_\phi \dot\phi^b_t + C_1 (u_1)_t + D_1 \right) \Delta t,$$
$$\dot\theta^b_{t+1} - \dot\theta^b_t = \left( C_\theta \dot\theta^b_t + C_2 (u_2)_t + D_2 \right) \Delta t,$$
$$\dot\omega^b_{t+1} - \dot\omega^b_t = \left( C_\omega \dot\omega^b_t + C_3 (u_3)_t + D_3 \right) \Delta t,$$
$$\dot{x}^b_{t+1} - \dot{x}^b_t = \left( C_x \dot{x}^b_t - g \theta_t \right) \Delta t,$$
$$\dot{y}^b_{t+1} - \dot{y}^b_t = \left( C_y \dot{y}^b_t + g \phi_t + D_0 \right) \Delta t,$$
$$\dot{z}^b_{t+1} - \dot{z}^b_t = \left( C_z \dot{z}^b_t + g + C_4 (u_4)_t + D_4 \right) \Delta t,$$
$$\phi_{t+1} - \phi_t = \dot\phi^b_t \, \Delta t, \qquad \theta_{t+1} - \theta_t = \dot\theta^b_t \, \Delta t.$$

Here $g = 9.81\,\mathrm{m/s^2}$ is the acceleration due to gravity and Δt is the time discretization, which is 0.1 seconds in our experiments. The free parameters in the model are $C_x, C_y, C_z, C_\phi, C_\theta, C_\omega$, which model damping, and $D_0, C_1, D_1, C_2, D_2, C_3, D_3, C_4, D_4$, which model the influence of the inputs on the states.¹ This parameterization was chosen using the "coherence" feature-selection algorithm of CIFER. CIFER takes as input the state-action sequence $\{(\dot{x}^b_t, \dot{y}^b_t, \dot{z}^b_t, \dot\phi^b_t, \dot\theta^b_t, \dot\omega^b_t, \phi_t, \theta_t, u_t)\}_t$ and learns the free parameters using a frequency-domain cost function. See [11] for details. Frequency-response methods (as used in CIFER) are not the only way to estimate the free parameters. Instead, we can minimize the average squared prediction error of the next state given the current state and action.
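One step of this linear model is straightforward to simulate. The sketch below is illustrative only: the parameter values passed in are placeholders, not the values identified by CIFER, and the state/parameter names are our own.

```python
G = 9.81   # acceleration due to gravity, m/s^2
DT = 0.1   # time discretization, s

def linear_model_step(state, u, p):
    """One 0.1 s step of the linear model of Section 3.
    state: dict with phi, theta and the body-frame rates/velocities;
    u: controls (u1..u4); p: dict of free parameters C*, D*."""
    s, n = state, dict(state)
    n["dphi"]   = s["dphi"]   + (p["Cphi"]   * s["dphi"]   + p["C1"] * u[0] + p["D1"]) * DT
    n["dtheta"] = s["dtheta"] + (p["Ctheta"] * s["dtheta"] + p["C2"] * u[1] + p["D2"]) * DT
    n["domega"] = s["domega"] + (p["Comega"] * s["domega"] + p["C3"] * u[2] + p["D3"]) * DT
    n["dx"] = s["dx"] + (p["Cx"] * s["dx"] - G * s["theta"]) * DT
    n["dy"] = s["dy"] + (p["Cy"] * s["dy"] + G * s["phi"] + p["D0"]) * DT
    n["dz"] = s["dz"] + (p["Cz"] * s["dz"] + G + p["C4"] * u[3] + p["D4"]) * DT
    # Angles integrate the angular rates.
    n["phi"]   = s["phi"]   + s["dphi"]   * DT
    n["theta"] = s["theta"] + s["dtheta"] * DT
    return n
```

With all parameters and controls set to zero, the model predicts a vertical velocity change of g·Δt per step, i.e., free fall: the collective term C₄(u₄)ₜ is what counteracts gravity.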
Doing so requires only linear regression. In our experiments (see Section 6) we compare the simulation accuracy over several time steps of the differently learned linear models. We also compare to learning by directly optimizing the simulation accuracy over several time steps; the latter approach is presented in Section 5.

4 Acceleration prediction model

Due to inertia, if a forward-flying helicopter turns, it will have sideslip (i.e., the helicopter will not be aligned with its direction of motion). The linear model is unable to capture the sideslip effect, since this effect depends non-linearly on velocity and angular rates. In fact, the models used in [2, 8, 6] all suffer from this problem. More generally, these models do not capture conservation of momentum well. Although careful engineering of (many) additional non-linear features might fix individual effects such as sideslip, it is unclear how to capture inertia compactly in the naive body-coordinate representation.

¹ $D_0$ captures the sideways acceleration caused by the tail rotor's thrust.

From physics, we have the following update equation for velocity in body coordinates:

$$(\dot{x}, \dot{y}, \dot{z})^b_{t+1} = R\big((\dot\phi, \dot\theta, \dot\omega)^b_t\big) \, (\dot{x}, \dot{y}, \dot{z})^b_t + (\ddot{x}, \ddot{y}, \ddot{z})^b_t \, \Delta t. \qquad (1)$$

Here, $R\big((\dot\phi, \dot\theta, \dot\omega)^b_t\big)$ is the rotation matrix that transforms from the body-coordinate frame at time t to the body-coordinate frame at time t+1 (and is determined by the angular velocity $(\dot\phi, \dot\theta, \dot\omega)^b_t$ at time t); and $(\ddot{x}, \ddot{y}, \ddot{z})^b_t$ denotes the acceleration vector in body coordinates at time t. Forces and torques (and thus accelerations) are often a fairly simple function of inputs and state. This suggests that a model which learns to predict the accelerations, and then uses Eqn. (1) to obtain velocity over time, may perform well. Such a model naturally captures inertia through the velocity update of Eqn. (1). In contrast, the models of Section 3 try to predict changes in body-coordinate velocity.
But the change in body-coordinate velocity does not correspond directly to physical accelerations, because the body-coordinate velocities at times t and t+1 are expressed in different coordinate frames. Thus $\dot{x}^b_{t+1} - \dot{x}^b_t$ is not the forward acceleration, because $\dot{x}^b_{t+1}$ and $\dot{x}^b_t$ are expressed in different coordinate frames. To capture inertia, these models therefore need to predict not only the physical accelerations, but also the non-linear influence of the angular rates through the rotation matrix. This makes for a difficult learning problem, and puts an unnecessary burden on the learning algorithm. Our discussion has focused on linear velocity, but a similar argument also holds for angular velocity.

This suggests that we learn to predict physical accelerations and then integrate the accelerations to obtain the state trajectories. To do this, we propose:

$$\ddot\phi^b_t = C_\phi \dot\phi_t + C_1 (u_1)_t + D_1,$$
$$\ddot\theta^b_t = C_\theta \dot\theta_t + C_2 (u_2)_t + D_2,$$
$$\ddot\omega^b_t = C_\omega \dot\omega_t + C_3 (u_3)_t + D_3,$$
$$\ddot{x}^b_t = C_x \dot{x}^b_t + (g_x)^b_t,$$
$$\ddot{y}^b_t = C_y \dot{y}^b_t + (g_y)^b_t + D_0,$$
$$\ddot{z}^b_t = C_z \dot{z}^b_t + (g_z)^b_t + C_4 (u_4)_t + D_4.$$

Here $(g_x)^b_t, (g_y)^b_t, (g_z)^b_t$ are the components of the gravity acceleration vector along each of the body-coordinate axes at time t, and $C_\cdot, D_\cdot$ are the free parameters to be learned from data. The model predicts accelerations in the body-coordinate frame, and is therefore able to take advantage of the same invariants as discussed earlier, such as invariance of the dynamics to the helicopter's (x, y, z) position and heading ω. Further, it additionally captures the fact that the dynamics are invariant to roll (φ) and pitch (θ) once the (known) effects of gravity are subtracted out. Frequency-domain techniques cannot be used to learn the acceleration model above, because it is non-linear.
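The velocity update of Eqn. (1) can be sketched in code. Here the frame-to-frame rotation is built from the angular-velocity vector via Rodrigues' formula; the sign convention for the frame change is an assumption of this sketch, not something the text pins down.

```python
import numpy as np

def rotation_from_angular_velocity(w, dt):
    """Rotation of the body frame over an interval dt, from the angular
    velocity vector w (rad/s), via Rodrigues' rotation formula."""
    angle = np.linalg.norm(w) * dt
    if angle < 1e-12:
        return np.eye(3)
    k = w / np.linalg.norm(w)                 # rotation axis
    K = np.array([[0.0, -k[2], k[1]],         # skew-symmetric cross-product matrix
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def velocity_update(v_body, w_body, a_body, dt):
    """Eqn (1): re-express the old body velocity in the new body frame,
    then add the predicted body-frame acceleration times dt."""
    # A fixed vector, seen from a frame that rotated by R_rot, appears
    # rotated by R_rot^T (assumed convention).
    R = rotation_from_angular_velocity(w_body, dt).T
    return R @ v_body + a_body * dt
```

Note the update is norm-preserving when the acceleration is zero, which is exactly the inertia property the body-coordinate difference models fail to capture.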
Nevertheless, the parameters can be learned as easily as for the linear model in the time domain: linear regression can be used to find the parameters that minimize the squared error of the one-step prediction in acceleration.²

5 The lagged error criterion

To evaluate the performance of a dynamical model, it is standard practice to run a simulation using the model for a certain duration, and then compare the simulated trajectory with the real state trajectory. To do well on this evaluation criterion, it is therefore important for the dynamical model to give not only accurate one-step predictions, but also predictions that are accurate at longer time scales. Motivated by this, [1] suggested learning the model parameters by optimizing the following "lagged criterion":

$$\sum_{t=1}^{T-H} \sum_{h=1}^{H} \| \hat{s}_{t+h|t} - s_{t+h} \|_2^2. \qquad (2)$$

Here, H is the time horizon of the simulation, and $\hat{s}_{t+h|t}$ is the estimate (from simulation) of the state at time t+h given the state at time t.

² Note that, as discussed previously, the one-step difference of body-coordinate velocities is not the acceleration. To obtain actual accelerations, the velocity at time t+1 must be rotated into the body frame at time t before taking the difference.

Unfortunately, the EM algorithm given in [1] is prohibitively expensive in our continuous state-action space setting. We therefore present a simple and fast algorithm for (approximately) minimizing the lagged criterion. We begin by considering a linear model with update equation

$$s_{t+1} - s_t = A s_t + B u_t, \qquad (3)$$

where A, B are the parameters of the model. Minimizing the one-step prediction error corresponds to finding the parameters that minimize the expected squared difference between the left and right sides of Eqn. (3). By summing the update equations for two consecutive time steps, we get that, for simulation to be exact over two time steps, the following needs to hold:

$$s_{t+2} - s_t = A s_t + B u_t + A \hat{s}_{t+1|t} + B u_{t+1}. \qquad (4)$$
Minimizing the expected squared difference between the left and right sides of Eqn. (4) corresponds to minimizing the two-step prediction error. More generally, by summing up the update equations for h consecutive time steps and then minimizing the expected squared difference between the left and right sides, we can minimize the h-step prediction error. Thus, it may seem that we can directly solve for the parameters that minimize the lagged criterion of Eqn. (2) by running least squares on the appropriate set of linear combinations of state update equations. The difficulty with this procedure is that the intermediate states in the simulation (for example, $\hat{s}_{t+1|t}$ in Eqn. 4) are also an implicit function of the parameters A and B, because $\hat{s}_{t+1|t}$ represents the result of a one-step simulation from $s_t$ using our model. Taking this dependence of the intermediate states on the parameters into account makes the right side of Eqn. (4) non-linear in the parameters, and thus the optimization is non-convex. If, however, we make an approximation and neglect this dependence, then the objective can be optimized simply by solving a linear least squares problem. This gives us the following algorithm, which alternates between a simulation step that computes the necessary predicted intermediate states and a least squares step that solves for new parameters.

LEARN-LAGGED-LINEAR:
1. Use least squares to minimize the one-step squared prediction error criterion to obtain an initial model $A^{(0)}, B^{(0)}$. Set i = 1.
2. For all t = 1, ..., T and h = 1, ..., H, simulate in the current model to compute $\hat{s}_{t+h|t}$.
3. Solve the least squares problem
$$(\bar{A}, \bar{B}) = \arg\min_{A,B} \sum_{t=1}^{T-H} \sum_{h=1}^{H} \Big\| (s_{t+h} - s_t) - \sum_{\tau=0}^{h-1} \big( A \hat{s}_{t+\tau|t} + B u_{t+\tau} \big) \Big\|_2^2.$$
4. Set $A^{(i+1)} = (1-\alpha) A^{(i)} + \alpha \bar{A}$ and $B^{(i+1)} = (1-\alpha) B^{(i)} + \alpha \bar{B}$.³
5. If $\|A^{(i+1)} - A^{(i)}\| + \|B^{(i+1)} - B^{(i)}\| \le \epsilon$, exit. Otherwise, go back to step 2.
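The steps above can be sketched for the linear-model case. This is an illustrative reimplementation under stated assumptions (a fixed step size α in place of the paper's line search), not the authors' code:

```python
import numpy as np

def learn_lagged_linear(S, U, H, alpha=0.5, tol=1e-6, max_iter=100):
    """Approximately minimize the lagged criterion of Eqn (2) for the model
    s_{t+1} - s_t = A s_t + B u_t, following LEARN-LAGGED-LINEAR.
    S: (T, n) states; U: (T, m) actions; H: horizon in steps."""
    T, n = S.shape
    m = U.shape[1]
    # Step 1: initialize by one-step least squares.
    X = np.hstack([S[:-1], U[:-1]])
    Y = S[1:] - S[:-1]
    W = np.linalg.lstsq(X, Y, rcond=None)[0].T   # (n, n+m)
    A, B = W[:, :n], W[:, n:]
    for _ in range(max_iter):
        # Step 2: simulate s_hat_{t+tau|t} under the current (frozen) model.
        rows, targets = [], []
        for t in range(T - H):
            s_hat = S[t].copy()
            acc = np.zeros(n + m)   # running sum of [s_hat; u] over tau
            for h in range(1, H + 1):
                acc += np.concatenate([s_hat, U[t + h - 1]])
                rows.append(acc.copy())            # regressor for (t, h)
                targets.append(S[t + h] - S[t])    # left side of summed Eqn
                s_hat = s_hat + A @ s_hat + B @ U[t + h - 1]
        # Step 3: least squares on the summed update equations.
        Wbar = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)[0].T
        Abar, Bbar = Wbar[:, :n], Wbar[:, n:]
        # Step 4: damped update (fixed alpha here, in place of a line search).
        A_new = (1 - alpha) * A + alpha * Abar
        B_new = (1 - alpha) * B + alpha * Bbar
        # Step 5: convergence check.
        done = np.linalg.norm(A_new - A) + np.linalg.norm(B_new - B) <= tol
        A, B = A_new, B_new
        if done:
            break
    return A, B
```

Because A and B enter the summed equations linearly, the inner sum satisfies Σ(Aŝ + Bu) = A(Σŝ) + B(Σu), so step 3 reduces to ordinary least squares on the accumulated regressors, which is what makes the approximation cheap.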
Our helicopter acceleration prediction model is not of the simple form $s_{t+1} - s_t = A s_t + B u_t$ described above. However, a similar derivation still applies: the change in velocity over several time steps corresponds to the sum of the changes in velocity over the individual time steps. Thus, by adding the one-step acceleration prediction equations of Section 4, we might expect to obtain equations corresponding to the acceleration over several time steps. However, the acceleration equations at different time steps are in different coordinate frames, so we first need to rotate the equations and only then add them. In the algorithm described below, we rotate all accelerations into the world coordinate frame. The acceleration equations from Section 4 give us $(\ddot{x}, \ddot{y}, \ddot{z})^b_t = A_{pos} s_t + B_{pos} u_t$ and $(\ddot\phi, \ddot\theta, \ddot\omega)^b_t = A_{rot} s_t + B_{rot} u_t$, where $A_{pos}, B_{pos}, A_{rot}, B_{rot}$ are (sparse) matrices that contain the parameters to be learned.⁴ This gives us the LEARN-LAGGED-ACCELERATION algorithm, which is identical to LEARN-LAGGED-LINEAR except that step 3 now solves the following least squares problems:

$$(\bar{A}_{pos}, \bar{B}_{pos}) = \arg\min_{A,B} \sum_{t=1}^{T-H} \sum_{h=1}^{H} \Big\| \sum_{\tau=0}^{h-1} \hat{R}_{b_{t+\tau} \to s} \big( (\ddot{x}, \ddot{y}, \ddot{z})^b_{t+\tau} - (A \hat{s}_{t+\tau|t} + B u_{t+\tau}) \big) \Big\|_2^2,$$
$$(\bar{A}_{rot}, \bar{B}_{rot}) = \arg\min_{A,B} \sum_{t=1}^{T-H} \sum_{h=1}^{H} \Big\| \sum_{\tau=0}^{h-1} \hat{R}_{b_{t+\tau} \to s} \big( (\ddot\phi, \ddot\theta, \ddot\omega)^b_{t+\tau} - (A \hat{s}_{t+\tau|t} + B u_{t+\tau}) \big) \Big\|_2^2.$$

Here $\hat{R}_{b_t \to s}$ denotes the rotation matrix (estimated from simulation using the current model) from the body frame at time t to the world frame.

³ This step of the algorithm uses a simple line search to choose the step size α.

6 Experiments

Figure 1: The XCell Tempest (a) and the Bergen Industrial Twin (b) used in our experiments.

We performed experiments on two RC helicopters: an XCell Tempest and a Bergen Industrial Twin (see Figure 1). The XCell Tempest is a competition-class aerobatic helicopter (length 54", height 19"), is powered by a 0.91-size two-stroke engine, and has an unloaded weight of 13 pounds.
It carries two sensor units: a Novatel RT2 GPS receiver and a Microstrain 3DM-GX1 orientation sensor. The Microstrain package contains triaxial accelerometers, rate gyros, and magnetometers, which are used for inertial sensing. The larger Bergen Industrial Twin is powered by a twin-cylinder 46cc two-stroke engine, and has an unloaded weight of 18 lbs. It carries three sensor units: a Novatel RT2 GPS receiver, MicroStrain 3DM-G magnetometers, and an Inertial Science ISIS-IMU (triaxial accelerometers and rate gyros). For each helicopter, we collected data from two separate flights. The XCell Tempest training and test flights were 800 and 540 seconds long, respectively; the Bergen Industrial Twin training and test flights were each 110 seconds long. A highly optimized Kalman filter integrates the sensor information and reports (at 100 Hz) 12 numbers corresponding to the helicopter's state $(x, y, z, \dot{x}, \dot{y}, \dot{z}, \phi, \theta, \omega, \dot\phi, \dot\theta, \dot\omega)$. The data is then downsampled to 10 Hz before learning. For each helicopter, we learned the following models:

1. Linear-One-Step: the linear model from Section 3, trained using linear regression to minimize the one-step prediction error.
2. Linear-CIFER: the linear model from Section 3, trained using CIFER.
3. Linear-Lagged: the linear model from Section 3, trained by minimizing the lagged criterion.
4. Acceleration-One-Step: the acceleration prediction model from Section 4, trained using linear regression to minimize the one-step prediction error.
5. Acceleration-Lagged: the acceleration prediction model from Section 4, trained by minimizing the lagged criterion.

⁴ For simplicity of notation we omit the intercept parameters here, but they are easily incorporated, e.g., by having one additional input that is always equal to one.

For Linear-Lagged and Acceleration-Lagged we used a horizon H of two seconds (20 simulation steps).
The CPU times for training the different algorithms were: less than one second for linear regression (algorithms 1 and 4 in the list above); one hour 20 minutes (XCell Tempest data) or 10 minutes (Bergen Industrial Twin data) for the lagged criteria (algorithms 3 and 5); and about 5 minutes for CIFER. Our algorithm for optimizing the lagged criterion appears to converge after at most 30 iterations. Since this algorithm is only approximate, we can then use coordinate descent search to further improve the lagged criterion.⁵ This coordinate descent search took an additional four hours for the XCell Tempest data and an additional 30 minutes for the Bergen Industrial Twin data. We report results both with and without this coordinate descent search. Our results show that the algorithm presented in Section 5 works well for fast approximate optimization of the lagged criterion, but that locally greedy search (coordinate descent) may then improve it further.

For evaluation, the test data was split into consecutive non-overlapping two-second windows (corresponding to 20 simulation steps, $s_0, \ldots, s_{20}$). The models are used to predict the state sequence over each two-second window, starting from the true state $s_0$. We report the average squared prediction error (the difference between the simulated and true state) at each time step t = 1, ..., 20 throughout the two-second window. The orientation error is measured by the squared magnitude of the minimal rotation needed to align the simulated orientation with the true orientation. Velocity, position, angular rate and orientation errors are measured in m/s, m, rad/s and rad (squared), respectively. (See Figure 2.)

We see that Linear-Lagged consistently outperforms Linear-CIFER and Linear-One-Step. Similarly, for the acceleration prediction models, Acceleration-Lagged consistently outperforms Acceleration-One-Step. These experiments support the case for training with the lagged criterion.
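The orientation-error metric (the squared magnitude of the minimal rotation aligning the simulated and true orientation) has a compact closed form when orientations are represented as rotation matrices; a sketch, assuming that representation:

```python
import numpy as np

def orientation_error(R_sim, R_true):
    """Squared magnitude (rad^2) of the minimal rotation that aligns the
    simulated orientation with the true one: the angle of R_true^T R_sim,
    recovered from its trace via tr(R) = 1 + 2 cos(angle)."""
    R = R_true.T @ R_sim
    cos_angle = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_angle) ** 2
```

Identical orientations give zero error, and a simulated orientation off by an angle a about any axis gives error a², matching the rad² units reported above.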
The best acceleration prediction model, Acceleration-Lagged, is significantly more accurate than any of the linear models presented in Section 3. This effect is most pronounced on the XCell Tempest data, which contained data collected from many different parts of the state space (e.g., flying in a circle); in contrast, the Bergen Industrial Twin data was collected mostly near hover (and thus the linearization assumptions were somewhat less poor there).

7 Summary

We presented an acceleration-based parameterization for learning vehicular dynamics. The model predicts accelerations, and then integrates them to obtain state trajectories. We also described an efficient algorithm for approximately minimizing the lagged criterion, which measures the predictive accuracy of a model over both short and long time scales. In our experiments, learning with the acceleration parameterization and the lagged criterion gave significantly more accurate models than previous approaches. Using this approach, we have recently also succeeded in learning a model for, and then autonomously flying, a "funnel" aerobatic maneuver, in which the helicopter flies in a circle, keeping the tail pointed at the center of rotation, with the body of the helicopter pitched backwards at a steep angle (so that the body of the helicopter traces out the surface of a funnel). (Details will be presented in a forthcoming paper.)

Acknowledgments. We give warm thanks to Adam Coates and to helicopter pilot Ben Tse for their help on this work.

⁵ We used coordinate descent on the criterion of Eqn. (2), but reweighted the errors on velocity, angular velocity, position and orientation to scale them to roughly the same order of magnitude.
Figure 2: (Best viewed in color.) Average squared prediction errors throughout two-second simulations (velocity, position, angular rate, and orientation versus time t, in seconds), shown separately for the XCell Tempest and the Bergen Industrial Twin. Blue, dotted: Linear-One-Step. Green, dash-dotted: Linear-CIFER. Yellow, triangle: Linear-Lagged learned with the fast, approximate algorithm from Section 5. Red, dashed: Linear-Lagged learned with the fast, approximate algorithm from Section 5 followed by greedy coordinate descent search. Magenta, solid: Acceleration-One-Step. Cyan, circle: Acceleration-Lagged learned with the fast, approximate algorithm from Section 5. Black, *: Acceleration-Lagged learned with the fast, approximate algorithm from Section 5 followed by greedy coordinate descent search. The magenta, cyan and black lines (visually) coincide in the XCell position plots. The blue, yellow, magenta and cyan lines (visually) coincide in the Bergen angular rate and orientation plots. The red and black lines (visually) coincide in the Bergen angular rate plot. See text for details.

References
[1] P. Abbeel and A. Y. Ng. Learning first order Markov models for control. In NIPS 18, 2005.
[2] J. Bagnell and J. Schneider. Autonomous helicopter control using reinforcement learning policy search methods. In International Conference on Robotics and Automation. IEEE, 2001.
[3] V. Gavrilets, I. Martinos, B. Mettler, and E. Feron. Control logic for automated aerobatic flight of miniature helicopter. In AIAA Guidance, Navigation and Control Conference, 2002.
[4] V. Gavrilets, I. Martinos, B. Mettler, and E. Feron.
Flight test and simulation results for an autonomous aerobatic helicopter. In AIAA/IEEE Digital Avionics Systems Conference, 2002.
[5] J. Leishman. Principles of Helicopter Aerodynamics. Cambridge University Press, 2000.
[6] B. Mettler, M. Tischler, and T. Kanade. System identification of small-size unmanned helicopter dynamics. In American Helicopter Society, 55th Forum, 1999.
[7] Andrew Y. Ng, Adam Coates, Mark Diel, Varun Ganapathi, Jamie Schulte, Ben Tse, Eric Berger, and Eric Liang. Autonomous inverted helicopter flight via reinforcement learning. In International Symposium on Experimental Robotics, 2004.
[8] Andrew Y. Ng, H. Jin Kim, Michael Jordan, and Shankar Sastry. Autonomous helicopter flight via reinforcement learning. In NIPS 16, 2004.
[9] Jonathan M. Roberts, Peter I. Corke, and Gregg Buskey. Low-cost flight control system for a small autonomous helicopter. In IEEE Int'l Conf. on Robotics and Automation, 2003.
[10] J. Seddon. Basic Helicopter Aerodynamics. AIAA Education Series. American Institute of Aeronautics and Astronautics, 1990.
[11] M.B. Tischler and M.G. Cauffman. Frequency response method for rotorcraft system identification: Flight application to BO-105 coupled rotor/fuselage dynamics. Journal of the American Helicopter Society, 1992.
Top-Down Control of Visual Attention: A Rational Account

Michael C. Mozer (Dept. of Comp. Science & Institute of Cog. Science, University of Colorado, Boulder, CO 80309 USA), Michael Shettel (Dept. of Comp. Science & Institute of Cog. Science, University of Colorado, Boulder, CO 80309 USA), Shaun Vecera (Dept. of Psychology, University of Iowa, Iowa City, IA 52242 USA)

Abstract

Theories of visual attention commonly posit that early parallel processes extract conspicuous features such as color contrast and motion from the visual field. These features are then combined into a saliency map, and attention is directed to the most salient regions first. Top-down attentional control is achieved by modulating the contribution of different feature types to the saliency map. A key source of data concerning attentional control comes from behavioral studies in which the effect of recent experience is examined as individuals repeatedly perform a perceptual discrimination task (e.g., "what shape is the odd-colored object?"). The robust finding is that repetition of features of recent trials (e.g., target color) facilitates performance. We view this facilitation as an adaptation to the statistical structure of the environment. We propose a probabilistic model of the environment that is updated after each trial. Under the assumption that attentional control operates so as to make performance more efficient for more likely environmental states, we obtain parsimonious explanations for data from four different experiments. Further, our model provides a rational explanation for why the influence of past experience on attentional control is short lived.

1 INTRODUCTION

The brain does not have the computational capacity to fully process the massive quantity of information provided by the eyes. Selective attention operates to filter the spatiotemporal stream to a manageable quantity.
Key to understanding the nature of attention is discovering the algorithm governing selection, i.e., understanding what information will be selected and what will be suppressed. Selection is influenced by attributes of the spatiotemporal stream, often referred to as bottom-up contributions to attention. For example, attention is drawn to abrupt onsets, motion, and regions of high contrast in brightness and color. Most theories of attention posit that some visual information processing is performed preattentively and in parallel across the visual field. This processing extracts primitive visual features such as color and motion, which provide the bottom-up cues for attentional guidance. However, attention is not driven willy nilly by these cues. The deployment of attention can be modulated by task instructions, current goals, and domain knowledge, collectively referred to as top-down contributions to attention. How do bottom-up and top-down contributions to attention interact? Most psychologically and neurobiologically motivated models propose a very similar architecture in which information from bottom-up and top-down sources combines in a saliency (or activation) map (e.g., Itti et al., 1998; Koch & Ullman, 1985; Mozer, 1991; Wolfe, 1994). The saliency map indicates, for each location in the visual field, the relative importance of that location. Attention is drawn to the most salient locations first. Figure 1 sketches the basic architecture that incorporates bottom-up and top-down contributions to the saliency map. The visual image is analyzed to extract maps of primitive features such as color and orientation. Associated with each location in a map is a scalar response or activation indicating the presence of a particular feature. 
Most models assume that responses are stronger at locations with high local feature contrast, consistent with neurophysiological data, e.g., the response of a red feature detector to a red object is stronger if the object is surrounded by green objects. The saliency map is obtained by taking a sum of bottom-up activations from the feature maps. The bottom-up activations are modulated by a top-down gain that specifies the contribution of a particular map to saliency in the current task and environment. Wolfe (1994) describes a heuristic algorithm for determining appropriate gains in a visual search task, where the goal is to detect a target object among distractor objects. Wolfe proposes that maps encoding features that discriminate between target and distractors have higher gains, and to be consistent with the data, he proposes limits on the magnitude of gain modulation and the number of gains that can be modulated. More recently, Wolfe et al. (2003) have been explicit in proposing optimization as a principle for setting gains given the task definition and stimulus environment. One aspect of optimizing attentional control involves configuring the attentional system to perform a given task; for example, in a visual search task for a red vertical target among green vertical and red horizontal distractors, the task definition should result in a higher gain for red and vertical feature maps than for other feature maps. However, there is a more subtle form of gain modulation, which depends on the statistics of display environments. For example, if green vertical distractors predominate, then red is a better discriminative cue than vertical; and if red horizontal distractors predominate, then vertical is a better discriminative cue than red. In this paper, we propose a model that encodes statistics of the environment in order to allow for optimization of attentional control to the structure of the environment. 
Our model is designed to address a key set of behavioral data, which we describe next.

1.1 Attentional priming phenomena

Psychological studies involve a sequence of experimental trials that begin with a stimulus presentation and end with a response from the human participant. Typically, trial order is randomized, and the context preceding a trial is ignored. However, in sequential studies, performance is examined on one trial contingent on the past history of trials. These sequential studies explore how experience influences future performance. Consider the sequential attentional task of Maljkovic and Nakayama (1994). On each trial, the stimulus display (Figure 2) consists of three notched diamonds, one a singleton in color (either green among red or red among green). The task is to report whether the singleton diamond, referred to as the target, is notched on the left or the right. The task is easy because the singleton pops out, i.e., the time to locate the singleton does not depend on the number of diamonds in the display. Nonetheless, the response time significantly depends on the sequence of trials leading up to the current trial: if the target is the same color on the current trial as on the previous trial, response time is roughly 100 ms faster than if the target is a different color on the current trial. Considering that response times are on the order of 700 ms, this effect, which we term attentional priming, is gigantic in the scheme of psychological phenomena.

FIGURE 1. An attentional saliency map constructed from bottom-up and top-down information: primitive feature maps (e.g., vertical, horizontal, green, red) extracted from the visual image feed bottom-up activations, modulated by top-down gains, into the saliency map.

FIGURE 2. Sample display from Experiment 1 of Maljkovic and Nakayama (1994).

2 ATTENTIONAL CONTROL AS ADAPTATION TO THE STATISTICS OF THE ENVIRONMENT

We interpret the phenomenon of attentional priming via a particular perspective on attentional control, which can be summarized in two bullets.
• The perceptual system dynamically constructs a probabilistic model of the environment based on its past experience.
• Control parameters of the attentional system are tuned so as to optimize performance under the current environmental model.

The primary focus of this paper is the environmental model, but we first discuss the nature of performance optimization. The role of attention is to make processing of some stimuli more efficient, and consequently, the processing of other stimuli less efficient. For example, if the gain on the red feature map is turned up, processing will be efficient for red items, but competition from red items will reduce the efficiency for green items. Thus, optimal control should tune the system for the most likely states of the world by minimizing an objective function such as

$$J(\mathbf{g}) = \sum_e P(e) \, RT_{\mathbf{g}}(e), \qquad (1)$$

where g is a vector of top-down gains, e is an index over environmental states, P(.) is the probability of an environmental state, and RTg(.) is the expected response time (assuming a constant error rate) to the environmental state under gains g. Determining the optimal gains is a challenge because every gain setting will result in facilitation of responses to some environmental states but hindrance of responses to other states. The optimal control problem could be solved via direct reinforcement learning, but the rapidity of human learning makes this possibility unlikely: in a variety of experimental tasks, evidence suggests that adaptation to a new task or environment can occur in just one or two trials (e.g., Rogers & Monsell, 1996). Model-based reinforcement learning is an attractive alternative, because given a model, optimization can occur without further experience in the real world. Although the number of real-world trials necessary to achieve a given level of performance is comparable for direct and model-based reinforcement learning in stationary environments (Kearns & Singh, 1999), naturalistic environments can be viewed as highly nonstationary.
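Equation 1 is simply an expectation over environmental states. A toy sketch (with made-up probabilities and response times, not data from the paper) illustrates why the best gain setting depends on the environment distribution:

```python
def expected_rt(p_env, rt):
    """J(g) = sum_e P(e) * RT_g(e): expected response time under an
    environment distribution p_env for one fixed gain setting g,
    whose per-state response times are given in rt."""
    return sum(p_env[e] * rt[e] for e in p_env)

# Hypothetical numbers: two environmental states, "red target" and
# "green target". Turning up the red gain speeds red-target trials
# but slows green-target trials.
p_env = {"red": 0.8, "green": 0.2}
rt_favor_red = {"red": 600.0, "green": 750.0}   # ms
rt_neutral   = {"red": 675.0, "green": 675.0}   # ms
print(expected_rt(p_env, rt_favor_red))  # 630.0
print(expected_rt(p_env, rt_neutral))    # 675.0
```

When red targets predominate, the red-favoring gains win on expectation even though they are worse on green-target trials; if the environment shifted toward green targets, the ordering would reverse.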
In such a situation, the framework we suggest is well motivated: after each experience, the environment model is updated, and the updated environmental model is then used to retune the attentional system. In this paper, we propose a particular model of the environment suitable for visual search tasks. Rather than explicitly modeling the optimization of attentional control by setting gains, we assume that the optimization process will serve to minimize Equation 1. Because any gain adjustment will facilitate performance in some environmental states and hinder performance in others, an optimized control system should obtain faster reaction times for more probable environmental states. This assumption allows us to explain experimental results in a minimal, parsimonious framework.

3 MODELING THE ENVIRONMENT

Focusing on the domain of visual search, we characterize the environment in terms of a probability distribution over configurations of target and distractor features. We distinguish three classes of features: defining, reported, and irrelevant. To explain these terms, consider the task of searching a display of size-varying, colored, notched diamonds (Figure 2), with the task of detecting the singleton in color and judging the notch location. Color is the defining feature, notch location is the reported feature, and size is an irrelevant feature. To simplify the exposition, we treat all features as having discrete values, an assumption which is true of the experimental tasks we model. We begin by considering displays containing a single target and a single distractor, and shortly generalize to multi-distractor displays. We use the framework of Bayesian networks to characterize the environment. Each feature of the target and distractor is a discrete random variable, e.g., Tcolor for target color and Dnotch for the location of the notch on the distractor.
The Bayes net encodes the probability distribution over environmental states; in our working example, this distribution is P(Tcolor, Tsize, Tnotch, Dcolor, Dsize, Dnotch). The structure of the Bayes net specifies the relationships among the features. The simplest model one could consider would treat the features as independent, illustrated in Figure 3a for the singleton-color search task. The opposite extreme would be the full joint distribution, which could be represented by a lookup table indexed by the six features, or by the cascading Bayes net architecture in Figure 3b. The architecture we propose, which we refer to as the dominance model (Figure 3c), has an intermediate dependency structure, and expresses the joint distribution as

P(Tcolor) P(Dcolor|Tcolor) P(Tsize|Tcolor) P(Tnotch|Tcolor) P(Dsize|Dcolor) P(Dnotch|Dcolor).

The structured model is constructed based on three rules:
1. The defining feature of the target is at the root of the tree.
2. The defining feature of the distractor is conditionally dependent on the defining feature of the target. We refer to this rule as dominance of the target over the distractor.
3. The reported and irrelevant features of the target (distractor) are conditionally dependent on the defining feature of the target (distractor). We refer to this rule as dominance of the defining feature over nondefining features.

As we will demonstrate, the dominance model produces a parsimonious account of a wide range of experimental data.

3.1 Updating the environment model

The model's parameters are the conditional distributions embodied in the links. In the example of Figure 3c with binary random variables, the model has 11 parameters. However, these parameters are determined by the environment: to be adaptive in nonstationary environments, the model must be updated following each experienced state. We propose a simple exponentially weighted averaging approach.
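The dominance model's joint probability is just a product of six local factors. A minimal sketch, with a hypothetical table layout (the factor names and 0/1 encoding of the binary feature values are ours, not the paper's):

```python
def dominance_joint(p, cfg):
    """Probability of one display configuration cfg under the dominance
    model (Figure 3c). p maps factor names to probability tables indexed
    by 0/1 feature values; conditional tables are indexed parent-first."""
    return (p["Tc"][cfg["Tc"]]
            * p["Dc|Tc"][cfg["Tc"]][cfg["Dc"]]
            * p["Ts|Tc"][cfg["Tc"]][cfg["Ts"]]
            * p["Tn|Tc"][cfg["Tc"]][cfg["Tn"]]
            * p["Ds|Dc"][cfg["Dc"]][cfg["Ds"]]
            * p["Dn|Dc"][cfg["Dc"]][cfg["Dn"]])
```

With all tables uniform, every one of the 2⁶ configurations gets probability (1/2)⁶, and the probabilities over all configurations sum to one, as they must for any setting of the conditional tables.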
FIGURE 3. Three models of a visual-search environment with colored, notched, size-varying diamonds. (a) feature-independence model; (b) full-joint model; (c) dominance model.
For two variables V and W with observed values v and w on trial t, define the trial's conditional distribution as P_t(V=u | W=w) = δ_uv, where δ is the Kronecker delta. The distribution representing the environment following trial t, denoted P_t^E, is then updated as follows:

P_t^E(V=u | W=w) = α P_{t-1}^E(V=u | W=w) + (1 - α) P_t(V=u | W=w)    (2)

for all u, where α is a memory constant. Note that no update is performed for values of W other than w. An analogous update is performed for unconditional distributions. How the model is initialized, i.e., the specification of P_0^E, is irrelevant, because in all experimental tasks that we model, participants begin the experiment with many dozens of practice trials. Data is not collected during practice trials. Consequently, any transient effects of P_0^E do not impact the results. In our simulations, we begin with a uniform distribution for P_0^E, and include practice trials as in the human studies. Thus far, we have assumed a single target and a single distractor. The experiments that we model involve multiple distractors. The simple extension we require to handle multiple distractors is to define a frequentist probability for each distractor feature V, P_t(V=v | W=w) = C_vw / C_w, where C_vw is the count of co-occurrences of feature values v and w among the distractors, and C_w is the count of w. Our model is extremely simple. Given a description of the visual search task and environment, the model has only a single degree of freedom, the memory constant α. In all simulations, we fix α = 0.75; however, the choice of α does not qualitatively impact any result.
4 SIMULATIONS
In this section, we show that the model can explain a range of data from four different experiments examining attentional priming. All experiments measure response times of participants.
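The Equation 2 update can be sketched as follows (a minimal illustration in our own notation, not the authors' code):

```python
ALPHA = 0.75  # the memory constant used in the simulations

def observe(pE, v_obs, w_obs, values, alpha=ALPHA):
    """Equation 2: blend the environment model P^E(V=u | W=w_obs) toward the
    trial distribution P_t(V=u | W=w_obs) = delta(u, v_obs). Rows of the
    table for other values of W are left untouched."""
    for u in values:
        delta = 1.0 if u == v_obs else 0.0
        pE[(u, w_obs)] = alpha * pE[(u, w_obs)] + (1.0 - alpha) * delta
    return pE

# Start from a uniform model, then repeatedly observe V='red' given W='red'.
pE = {(u, w): 0.5 for u in ("red", "green") for w in ("red", "green")}
for _ in range(6):
    observe(pE, "red", "red", values=("red", "green"))
# P^E(V=red | W=red) rises toward 1.0 within roughly a half dozen
# repetitions, mirroring the short-lived build-up of priming; the
# row for W='green' is unchanged.
```

Because the update blends two normalized distributions, each row of the table remains a proper probability distribution after every trial.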
On each trial, the model can be used to obtain a probability of the display configuration (the environmental state) on that trial, given the history of trials to that point. Our critical assumption—as motivated earlier—is that response times monotonically decrease with increasing probability, indicating that visual information processing is better configured for more likely environmental states. The particular relationship we assume is that response times are linear in log probability. This assumption yields long response time tails, as are observed in all human studies. 4.1 Maljkovic and Nakayama (1994, Experiment 5) In this experiment, participants were asked to search for a singleton in color in a display of three red or green diamonds. Each diamond was notched on either the left or right side, and the task was to report the side of the notch on the color singleton. The well-practiced participants made very few errors. Reaction time (RT) was examined as a function of whether the target on a given trial is the same or different color as the target on trial n steps back or ahead. Figure 4 shows the results, with the human RTs in the left panel and the simulation log probabilities in the right panel. The horizontal axis represents n. Both graphs show the same outcome: repetition of target color facilitates performance. This influence lasts only for a half dozen trials, with an exponentially decreasing influence further into the past. In the model, this decreasing influence is due to the exponential decay of recent history (Equation 2). Figure 4 also shows that—as expected—the future has no influence on the current trial. 4.2 Maljkovic and Nakayama (1994, Experiment 8) In the previous experiment, it is impossible to determine whether facilitation is due to repetition of the target’s color or the distractor’s color, because the display contains only two colors, and therefore repetition of target color implies repetition of distractor color. 
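The linking assumption, that response time is linear in -log P(trial), can be sketched with invented intercept and slope values (the text reports no fitted coefficients):

```python
import math

def predicted_rt(p_trial, intercept=500.0, slope=30.0):
    """Linking assumption: RT is linear in -log P(trial). The intercept and
    slope (msec) are illustrative placeholders, not fitted values."""
    return intercept + slope * -math.log(p_trial)

# A more probable display configuration yields a faster predicted RT,
# so repeating the target color (which raises the trial's probability
# under the environment model) speeds the predicted response.
rt_repeat = predicted_rt(0.9)     # e.g., target color repeated
rt_switch = predicted_rt(0.1)     # e.g., target color changed
```

Because -log P grows without bound as P shrinks, rare configurations produce the long response-time tails noted in the text.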
To unconfound these two potential factors, an experiment like the previous one was conducted using four distinct colors, allowing one to examine the effect of repeating the target color while varying the distractor color, and vice versa. The sequence of trials was composed of subsequences of up to six consecutive trials with either the target or distractor color held constant while the other color was varied trial to trial. Following each subsequence, both target and distractor colors were changed. Figure 5 shows that for both humans and the simulation, performance improves toward an asymptote as the number of target and distractor repetitions increases; in the model, the asymptote is due to the probability of the repeated color in the environment model approaching 1.0. The performance improvement is greater for target than for distractor repetition; in the model, this difference is due to the dominance of the defining feature of the target over the defining feature of the distractor.
4.3 Huang, Holcombe, and Pashler (2004, Experiment 1)
Huang et al. (2004) and Hillstrom (2000) conducted studies to determine whether repetitions of one feature facilitate performance independently of repetitions of another feature. In the Huang et al. study, participants searched for a singleton in size in a display consisting of lines that were short or long, slanted left or right, and colored white or black. The reported feature was target slant. Slant, size, and color were uncorrelated. Huang et al. discovered that repeating an irrelevant feature (color or orientation) facilitated performance, but only when the defining feature (size) was repeated. As shown in Figure 6, the model replicates human performance, due to the dominance of the defining feature over the reported and irrelevant features.
FIGURE 4. Experiment 5 of Maljkovic and Nakayama (1994): performance on a given trial conditional on the color of the target on a previous or subsequent trial. Human data is from subject KN. (left panel: human data; right panel: simulation)
FIGURE 5. Experiment 8 of Maljkovic and Nakayama (1994). (left panel) human data, average of subjects KN and SS; (right panel) simulation.
FIGURE 6. Experiment 1 of Huang, Holcombe, & Pashler (2004). (left panel) human data; (right panel) simulation.
4.4 Wolfe, Butcher, Lee, and Hyde (2003, Experiment 1)
In an empirical tour-de-force, Wolfe et al. (2003) explored singleton search over a range of environments. The task is to detect the presence or absence of a singleton in displays consisting of colored (red or green), oriented (horizontal or vertical) lines. Target-absent trials were used primarily to ensure participants were searching the display. The experiment examined seven experimental conditions, which varied in the amount of uncertainty as to the target identity. The essential conditions, from least to most uncertainty, are: blocked (e.g., target always red vertical among green horizontals), mixed feature (e.g., target always a color singleton), mixed dimension (e.g., target either red or vertical), and fully mixed (target could be red, green, vertical, or horizontal).
With this design, one can ascertain how uncertainty in the environment and in the target definition influences task difficulty. Because the defining feature in this experiment could be either color or orientation, we modeled the environment with two Bayes nets, one color dominant and one orientation dominant, and performed model averaging. A comparison of Figures 7a and 7b shows a correspondence between human RTs and model predictions. Less uncertainty in the environment leads to more efficient performance. One interesting result from the model is its prediction that the mixed-feature condition is easier than the fully mixed condition; that is, search is more efficient when the dimension (i.e., color vs. orientation) of the singleton is known, even though the model has no abstract representation of feature dimensions, only feature values.
4.5 Optimal adaptation constant
In all simulations so far, we fixed the memory constant α. From the human data, it is clear that memory for recent experience is relatively short lived, on the order of a half dozen trials (e.g., left panel of Figure 4). In this section we provide a rational argument for the short duration of memory in attentional control. Figure 7c shows mean negative log probability in each condition of the Wolfe et al. (2003) experiment, as a function of α. To assess these probabilities, for each experimental condition, the model was initialized so that all of the conditional distributions were uniform, and then a block of trials was run. Log probability for all trials in the block was averaged. The negative log probability (the y axis of the figure) is a measure of the model's misprediction of the next trial in the sequence. For complex environments, such as the fully mixed condition, a small memory constant is detrimental: with rapid memory decay, the effective history of trials is a high-variance sample of the distribution of environmental states.
For simple environments, a large memory constant is detrimental: with slow memory decay, the model does not transition quickly from the initial environment model to one that reflects the statistics of a new environment. Thus, the memory constant is constrained to be large enough that the environment model can hold on to sufficient history to represent complex environments, and small enough that the model adapts quickly to novel environments. If the conditions in Wolfe et al. give some indication of the range of naturalistic environments an agent encounters, we have a rational account of why attentional priming is so short lived. Whether priming lasts 2 trials or 20, the surprising empirical result is that it does not last 200 or 2000 trials. Our rational argument provides a rough insight into this finding.
FIGURE 7. (a) Human data for Wolfe et al. (2003), Experiment 1; (b) simulation; (c) misprediction of the model (i.e., lower y value = better) as a function of the memory constant α for the five experimental conditions.
5 DISCUSSION
The psychological literature contains two opposing accounts of attentional priming and its relation to attentional control. Huang et al. (2004) and Hillstrom (2000) propose an episodic account in which a distinct memory trace, representing the complete configuration of features in the display, is laid down for each trial, and priming depends on configural similarity of the current trial to previous trials. Alternatively, Maljkovic and Nakayama (1994) and Wolfe et al.
(2003) propose a feature-strengthening account in which detection of a feature on one trial increases its ability to attract attention on subsequent trials, and priming is proportional to the number of overlapping features from one trial to the next. The episodic account corresponds roughly to the full joint model (Figure 3b), and the feature-strengthening account corresponds roughly to the independence model (Figure 3a). Neither account is adequate to explain the range of data we presented. However, an intermediate account, the dominance model (Figure 3c), is not only sufficient, but it offers a parsimonious, rational explanation. Beyond the model’s basic assumptions, it has only one free parameter, and can explain results from diverse experimental paradigms. The model makes a further theoretical contribution. Wolfe et al. distinguish the environments in their experiment in terms of the amount of top-down control available, implying that different mechanisms might be operating in different environments. However, in our account, top-down control is not some substance distributed in different amounts depending on the nature of the environment. Our account treats all environments uniformly, relying on attentional control to adapt to the environment at hand. We conclude with two limitations of the present work. First, our account presumes a particular network architecture, instead of a more elegant Bayesian approach that specifies priors over architectures, and performs automatic model selection via the sequence of trials. We did explore such a Bayesian approach, but it was unable to explain the data. Second, at least one finding in the literature is problematic for the model. Hillstrom (2000) occasionally finds that RTs slow when an irrelevant target feature is repeated but the defining target feature is not. However, because this effect is observed only in some experiments, it is likely that any model would require elaboration to explain the variability. 
ACKNOWLEDGEMENTS
We thank Jeremy Wolfe for providing the raw data from his experiment for reanalysis. This research was funded by NSF BCS Award 0339103.
REFERENCES
Huang, L., Holcombe, A. O., & Pashler, H. (2004). Repetition priming in visual search: Episodic retrieval, not feature priming. Memory & Cognition, 32, 12–20.
Hillstrom, A. P. (2000). Repetition effects in visual search. Perception & Psychophysics, 62, 800–817.
Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis & Machine Intelligence, 20, 1254–1259.
Kearns, M., & Singh, S. (1999). Finite-sample convergence rates for Q-learning and indirect algorithms. In Advances in Neural Information Processing Systems 11 (pp. 996–1002). Cambridge, MA: MIT Press.
Koch, C., & Ullman, S. (1985). Shifts in selective visual attention: Towards the underlying neural circuitry. Human Neurobiology, 4, 219–227.
Maljkovic, V., & Nakayama, K. (1994). Priming of pop-out: I. Role of features. Memory & Cognition, 22, 657–672.
Mozer, M. C. (1991). The perception of multiple objects: A connectionist approach. Cambridge, MA: MIT Press.
Rogers, R. D., & Monsell, S. (1995). The cost of a predictable switch between simple cognitive tasks. Journal of Experimental Psychology: General, 124, 207–231.
Wolfe, J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1, 202–238.
Wolfe, J. M., Butcher, S. J., Lee, C., & Hyde, M. (2003). Changing your mind: On the contributions of top-down and bottom-up guidance in visual search for feature singletons. Journal of Experimental Psychology: Human Perception & Performance, 29, 483–502.
2005
The Role of Top-down and Bottom-up Processes in Guiding Eye Movements during Visual Search Gregory J. Zelinsky†‡, Wei Zhang‡, Bing Yu‡, Xin Chen†∗, Dimitris Samaras‡ Dept. of Psychology†, Dept. of Computer Science‡ State University of New York at Stony Brook Stony Brook, NY 11794 Gregory.Zelinsky@stonybrook.edu†, xichen@ic.sunysb.edu∗ {wzhang,ybing,samaras}@cs.sunysb.edu‡ Abstract To investigate how top-down (TD) and bottom-up (BU) information is weighted in the guidance of human search behavior, we manipulated the proportions of BU and TD components in a saliency-based model. The model is biologically plausible and implements an artificial retina and a neuronal population code. The BU component is based on featurecontrast. The TD component is defined by a feature-template match to a stored target representation. We compared the model’s behavior at different mixtures of TD and BU components to the eye movement behavior of human observers performing the identical search task. We found that a purely TD model provides a much closer match to human behavior than any mixture model using BU information. Only when biological constraints are removed (e.g., eliminating the retina) did a BU/TD mixture model begin to approximate human behavior. 1. Introduction The human object detection literature, also known as visual search, has long struggled with how best to conceptualize the role of bottom-up (BU) and top-down (TD) processes in guiding search behavior.1 Early theories of search assumed a pure BU feature decomposition of the objects in an image, followed by the later reconstitution of these features into objects if the object’s location was visited by spatially directed visual attention [1]. Importantly, the direction of attention to feature locations was believed to be random in these early models, thereby making them devoid of any BU or TD component contributing to the guidance of attention to objects in scenes. 
The belief in a random direction of attention during search was quashed by Wolfe and colleagues' [2] demonstration of TD information affecting search guidance. According to their guided-search model [3], preattentively available features from objects not yet bound by attention can be compared to a high-level target description to generate signals indicating evidence for the target in a display. The search process can then use these signals to guide attention to display locations indicating the greatest evidence for the target. More recent models of TD target guidance can accept images of real-world scenes as stimuli and generate sequences of eye movements that can be directly compared to human search behavior [4]. Purely BU models of attention guidance have also enjoyed a great deal of recent research interest. Building on the concept of a saliency map introduced in [5], these models attempt to use biologically plausible computational primitives (e.g., center-surround receptive fields, color opponency, winner-take-all spatial competition, etc.) to define points of high salience in an image that might serve as attractors of attention. Much of this work has been discussed in the context of scene perception [6], but recently Itti and Koch [7] extended a purely BU model to the task of visual search. They defined image saliency in terms of intensity, color, and orientation contrast for multiple spatial scales within a pyramid.
1 In this paper we will refer to BU guidance as guidance based on task-independent signals arising from basic neuronal feature analysis. TD guidance will refer to guidance based on information not existing in the input image or proximal search stimulus, such as knowledge of target features or processing constraints imposed by task instruction.
They found that a saliency model based on feature-contrast was able to account for a key finding in the behavioral search literature, namely very efficient search for feature-defined targets and far less efficient search for targets defined by conjunctions of features [1]. Given the body of evidence suggesting both TD and BU contributions to the guidance of attention in a search task, the logical next question to ask is whether these two sources of information should be combined to describe search behavior and, if so, in what proportion? To answer this question, we adopt a three-pronged approach. First, we implement two models of eye movements during visual search, one a TD model derived from the framework proposed by [4] and the other a BU model based on the framework proposed by [7]. Second, we use an eyetracker to collect behavioral data from human observers so as to quantify guidance in terms of the number of fixations needed to acquire a target. Third, we combine the outputs of the two models in various proportions to determine the TD/BU weighting best able to describe the number of search fixations generated by the human observers.
2. Eye movement model
Figure 1: Flow of processing through the model. Abbreviations: TD SM (top-down saliency map); BU SM (bottom-up saliency map); SF (suggested fixation point); TSM (thresholded saliency map); CF2HS (Euclidean distance between current fixation and hotspot); SF2CF (Euclidean distance between suggested fixation and current fixation); EMT (eye movement threshold); FT (foveal threshold).
In this section we introduce a computational model of eye movements during visual search. The basic flow of processing in this model is shown in Figure 1. Generally, we represent search scenes in terms of simple and biologically-plausible visual feature-detector responses (colors, orientations, scales). Visual routines then act on these representations to produce a sequence of simulated eye movements.
Our framework builds on work described in [8, 4], but differs from this earlier model in several important respects. First, our model includes a perceptually-accurate simulated retina, which was not included in [8, 4]. Second, the visual routine responsible for moving gaze in our model is fundamentally different from the earlier version. In [8, 4], the number of eye movements was largely determined by the number of spatial scale filters used in the representation. The method used in the current model to generate eye movements (Section 2.3) removes this upper limit. Third, and most important to the topic of this paper, the current model is capable of integrating both BU and TD information in guiding search behavior. The [8, 4] model was purely TD. 2.1. Overview The model can be conceptually divided into three broad stages: (1) the creation of a saliency map (SM) based on TD and BU analysis of a retinally-transformed image, (2) recognizing the target, and (3) the operations required to generate eye movements. Within each of these stages are several more specific operations, which we will now describe briefly in an order determined by the processing flow. Input image: The model accepts as input a high-resolution (1280 × 960 pixel) image of the search scene, as well as a smaller image of the search target. A point is specified on the target image and filter responses are collected from a region surrounding this point. In the current study this point corresponded to the center of the target image. Retina transform: The search image is immediately transformed to reflect the acuity limitations imposed by the human retina. To implement this neuroanatomical constraint, we adopt a method described in [9], which was shown to provide a good fit to acuity limitations in the human visual system. The approach takes an image and a fixation point as input, and outputs a retina-transformed version of the image based on the fixation point (making it a good front-end to our model). 
The initial retina transformation assumes fixation at the center of the image, consistent with the behavioral experiment. A new retina transformation of the search image is conducted after each change in gaze. Saliency maps: Both the TD and the BU saliency maps are based on feature responses from Gaussian filters of different orientations, scales, colors, and orders. These two maps are then combined to create the final SM used to guide search (see Section 2.2 for details). Negativity map: The negativity map keeps a spatial record of every nontarget location that was fixated and rejected through the application of Gaussian inhibition, a process similar to inhibition of return [10] that we refer to as "zapping". The existence of such a map is supported by behavioral evidence indicating a high-capacity spatial memory for rejected nontargets in a search task [11]. Find hotspot: The hotspot (HS) is defined as the point on the saliency map having the largest saliency value. Although no biologically plausible mechanism for isolating the hotspot is currently used, we assume that a standard winner-take-all (WTA) algorithm can be used to find the SM hotspot. Recognition thresholds: Recognition is accomplished by comparing the hotspot value with two thresholds. The model terminates with a target-present judgment if the hotspot value exceeds a high target-present threshold, set at .995 in the current study. A target-absent response is made if the hotspot value falls below a low target-absent threshold (not used in the current study). If neither of these termination criteria is satisfied, processing passes to the eye movement stage. Foveal threshold: Processing in the eye movement stage depends on whether the model's simulated fovea is fixated on the SM hotspot. This event is determined by computing the Euclidean distance between the current location of the fovea's center and the hotspot (CF2HS), then comparing this distance to a foveal threshold (FT).
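The hotspot, recognition-threshold, and foveal-threshold logic just described can be sketched as below. The .995 target-present threshold and the 0.5-deg foveal threshold are taken from the text; the saliency map, the degrees-per-pixel conversion, and all function names are our own illustrative assumptions:

```python
import numpy as np

TARGET_PRESENT = 0.995          # high target-present threshold (from the text)
FT_DEG = 0.5                    # foveal threshold, in degrees of visual angle
DEG_PER_PIXEL = 20.0 / 1280     # assumed: ~20 deg image width at 1280 pixels

def step(sm, current_fixation):
    """One decision step: find the hotspot by winner-take-all, test the
    recognition threshold, then test whether the hotspot is foveated."""
    hotspot = np.unravel_index(np.argmax(sm), sm.shape)
    if sm[hotspot] > TARGET_PRESENT:
        return "target-present", hotspot
    cf2hs = np.hypot(hotspot[0] - current_fixation[0],
                     hotspot[1] - current_fixation[1]) * DEG_PER_PIXEL
    if cf2hs < FT_DEG:
        return "zap-hotspot", hotspot   # fixating a nontarget: inhibit it
    return "move-eyes", hotspot         # hotspot outside fovea: shift gaze

sm = np.zeros((96, 128))
sm[40, 100] = 0.9                       # salient but sub-threshold pattern
action, hs = step(sm, current_fixation=(48, 64))
# -> "move-eyes": the hotspot lies beyond the 0.5-deg foveal window
```

The three return values correspond to the three branches of the flow in Figure 1: terminate with a target-present response, zap a fixated nontarget, or move the eyes toward the hotspot.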
The FT, set at 0.5 deg of visual angle, is determined by the retina transform and viewing angle and corresponds to the radius of the foveal window. The foveal window is the region of the image not blurred by the retina transform function, much like the high-resolution foveola in the human visual system. Hotspot out of fovea: If the hotspot is not within the FT, meaning that the object giving rise to the hotspot is not currently fixated, then the model will make an eye movement to bring the simulated fovea closer to the hotspot's location. In making this movement, the model will be effectively canceling the effect of the retina transform, thereby enabling a judgment regarding the hotspot pattern. The destination of the eye movement is computed by taking the weighted centroid of activity on the thresholded saliency map (TSM). See Section 2.3 for additional details regarding the centroid calculation of the suggested fixation point (SF), its relationship to the distance threshold for generating an eye movement (EMT), and the dynamically-changing threshold used to remove those SM points offering the least evidence for the target (+SM thresh). Hotspot at fovea: If the simulated fovea reaches the hotspot (CF2HS < FT) and the target is still not detected (HS < target-present threshold), the model is likely to have fixated a nontarget. When this happens (a common occurrence in the course of a search), it is desirable to inhibit the location of this false target so as not to have it re-attract attention or gaze. To accomplish this, we inhibit or "zap" the hotspot by applying a negative Gaussian filter (width set at 63 pixels) centered at the hotspot location. Following this injection of negativity into the SM, a new eye movement is made based on the dynamics outlined in Section 2.3. 2.2.
Saliency map creation
The first step in creating the TD and BU saliency maps is to separate the retina-transformed image into an intensity channel and two opponent-process color channels (R-G and B-Y). For each channel, we then extract visual features by applying a set of steerable 2D Gaussian-derivative filters, G(t, θ, s), where t is the order of the Gaussian kernel, θ is the orientation, and s is the spatial scale. The current model uses first- and second-order Gaussians, 4 orientations (0, 45, 90 and 180 degrees), and 3 scales (7, 15 and 31 pixels), for a total of 24 filters. We therefore obtain 24 feature maps of filter responses per channel, M(t, θ, s), or alternatively, a 72-dimensional feature vector, F, for each pixel in the retina-transformed image. The TD saliency map is created by correlating the retina-transformed search image with the target feature vector Ft.2 To maintain consistency between the two saliency map representations, the same channels and features used in the TD saliency map were also used to create the BU saliency map. Feature-contrast signals on this map were obtained directly from the responses of the Gaussian derivative filters. For each channel, the 24 feature maps were combined into a single map according to:

Σ_{t,θ,s} N(|M(t, θ, s)|),    (1)

where N(·) is the normalization function described in [12]. The final BU saliency map is then created by averaging the three combined feature maps. Note that this method of creating a BU saliency map differs from the approach used in [12, 7] in that our filters consisted of 1st- and 2nd-order derivatives of Gaussians and not center-surround DoG filters. While the two methods of computing feature contrast are not equivalent, in practice they yield very similar patterns of BU salience.
2 Note that because our TD saliency maps are derived from correlations between target and scene images, the visual statistics of these images are in some sense preserved and might be described as a BU component in our model.
Nevertheless, the correlation-based guidance signal requires knowledge of a target (unlike a true BU model), and for this reason we will continue to refer to this as a TD process. Finally, the combined SM was simply a linear combination of the TD and BU saliency maps, where the weighting coefficient was a parameter manipulated in our experiments.
2.3. Eye movement generation
Our model defines gaze position at each moment in time by the weighted spatial average (centroid) of signals on the SM, a form of neuronal population code for the generation of eye movements [13, 14]. Although a centroid computation will tend to bias gaze in the direction of the target (assuming that the target is the maximally salient pattern in the image), gaze will also be pulled away from the target by salient nontarget points. When the number of nontarget points is large, the eye will tend to move toward the geometric center of the scene (a tendency referred to in the behavioral literature as the global effect [15, 16]); when the number of points is small, the eye will move more directly to the target. To capture this activity-dependent eye movement behavior, we introduce a moving threshold, ρ, that excludes points from the SM over time based on their signal strength. Initially ρ is set to zero, allowing every signal on the SM to contribute to the centroid gaze computation. However, with each timestep, ρ is increased by .001, resulting in the exclusion of minimally salient points from the SM (+SM thresh in Figure 1). The centroid of the SM, which we refer to as the suggested fixation point (SF), is therefore dependent on the current value of ρ and can be expressed as:

SF = ( Σ_{p : S_p > ρ} p S_p ) / ( Σ_{p : S_p > ρ} S_p ),    (2)

where p ranges over SM locations and S_p is the saliency at p. Eventually, only the most salient points will remain on the thresholded saliency map (TSM), resulting in the direction of gaze to the hotspot. If this hotspot is not the target, ρ can be decreased (−SM thresh in Figure 1) after zapping in order to reintroduce points to the SM.
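Putting the pieces together, the following is a simplified, single-channel sketch of the pipeline: axis-aligned Gaussian derivatives (via scipy) stand in for the steerable 4-orientation filter bank, min-max rescaling stands in for the normalization N(·) of [12], and all parameter values are illustrative rather than the paper's:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def feature_maps(img, orders=(1, 2), scales=(7, 15, 31)):
    """Gaussian-derivative responses; a reduced stand-in for the 24-filter,
    4-orientation steerable bank (here: 2 axis-aligned orientations)."""
    maps = []
    for t in orders:
        for s in scales:
            for order in ((0, t), (t, 0)):
                maps.append(gaussian_filter(img, sigma=s / 4.0, order=order))
    return np.stack(maps)                  # (n_features, H, W)

def normalize(m):
    """Min-max rescaling, standing in for the normalization N(.) of [12]."""
    m = m - m.min()
    return m / m.max() if m.max() > 0 else m

def td_map(maps, target_vec):
    """TD saliency: correlate each pixel's feature vector with the target's."""
    f = maps.reshape(maps.shape[0], -1)
    f = f / (np.linalg.norm(f, axis=0) + 1e-9)
    t = target_vec / (np.linalg.norm(target_vec) + 1e-9)
    return normalize((t @ f).reshape(maps.shape[1:]))

def bu_map(maps):
    """BU saliency (Equation 1): sum of normalized rectified feature maps."""
    return normalize(sum(normalize(np.abs(m)) for m in maps))

def suggested_fixation(sm, rho):
    """Equation 2: saliency-weighted centroid of points with S_p > rho."""
    ys, xs = np.nonzero(sm > rho)
    w = sm[ys, xs]
    return np.array([np.sum(ys * w), np.sum(xs * w)]) / np.sum(w)

img = np.random.default_rng(0).random((64, 64))
img[20:28, 40:48] += 2.0                   # a bright "target" patch
maps = feature_maps(img)
target_vec = maps[:, 24, 44]               # feature vector at the target center
w_bu = 0.0                                 # BU weighting (0 = pure TD)
sm = (1 - w_bu) * td_map(maps, target_vec) + w_bu * bu_map(maps)
sf_all = suggested_fixation(sm, rho=0.0)   # many points: pulled toward center
sf_top = suggested_fixation(sm, rho=0.99)  # few points: lands near the target
```

As ρ rises toward 1, the suggested fixation point migrates from near the scene center (the global effect) toward the hotspot, which is the dynamic described above.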
Such a moving threshold is a plausible mechanism of neural computation easily instantiated by a simple recurrent network [17]. In order to prevent gaze from moving with each change in ρ, which would result in an unrealistically large number of very small eye movements, we impose an eye movement threshold (EMT) that prevents gaze from shifting until a minimum distance between SF and CF is achieved (SF2CF > EMT in Figure 1). The EMT is based on the signal and noise characteristics of each retina-transformed image, and is defined as:

EMT = max(FT, d(1 + C log(Signal/Noise))),    (3)

where FT is the fovea threshold, C is a constant, and d is the distance between the current fixation and the hotspot. The Signal term is defined as the sum of all foveal saliency values on the TSM; the Noise term is defined as the sum of all other TSM values. The Signal/Noise log ratio is clamped to the range [−1/C, 0]. The lower bound of the SF2CF distance is therefore FT, and the upper bound is d: when all salience lies in the fovea, the clamped log ratio is 0 and EMT = d; when noise dominates, the ratio is clamped at −1/C and EMT falls to FT. The eye movement dynamics can therefore be summarized as follows: incrementing ρ will tend to increase the SF2CF distance, which will result in an eye movement to SF once this distance exceeds the EMT.
3. Experimental methods
For each trial, the two human observers and the model were first shown an image of a target (a tank). In the case of the human observers, the target was presented for one second and presumably encoded into working memory. In the case of the model, the target was represented by a single 72-dimensional feature vector as described in Section 2. A search image was then presented, which remained visible to the human observers until they made a button press response. Eye movements were recorded during this interval using an ELII eyetracker. Section 2 details the processing stages used by the model. There were 44 images and targets, which were all modified versions of images in the TNO dataset [18]. The images subtended approximately 20° on both the human and simulated retinas. 4.
Experimental results

Model and human data are reported from 2 experiments. For each experiment we tested 5 weightings of TD and BU components in the combined SM. Expressed as a proportion of the BU component, these weightings were: BU 0 (TD only), BU .25, BU .5, BU .75, and BU 1.0 (BU only).

4.1. Experiment 1

Table 1: Human and model search behavior at 5 TD/BU mixtures in Experiment 1 (model with retina and population averaging).

              Human subjects        Model
              H1      H2      TD only   BU: 0.25   BU: 0.5   BU: 0.75   BU only
Misses (%)    0.00    0.00    0.00      36.36      72.73     77.27      88.64
Fixations     4.55    4.43    4.55      18.89      20.08     21.00      22.40
Std Dev       0.88    2.15    0.82      10.44      12.50     10.29      12.58

Figure 2: Comparison of human and model scanpaths at different TD/BU weightings.

As can be seen from Table 1, the human observers were remarkably consistent in their behavior. Each required an average of 4.5 fixations to find the target (defined as gaze falling within .5 deg of the target's center), and neither generated an error (defined by a failure to find the target within 40 fixations). Human target detection performance was matched almost exactly by a pure TD model, both in terms of errors (0%) and fixations (4.55). This exceptional match between human and model disappeared with the addition of a BU component. Relative to the human and TD model, a BU 0.25 mixture model resulted in a dramatic increase in the miss rate (36%) and in the average number of fixations needed to acquire the target (18.9) on those trials in which the target was ultimately fixated. These high miss and fixation rates continued to increase with larger weightings of the BU contribution, reaching an unrealistic 89% misses and 22 fixations with a pure BU model. Figure 2 shows representative eye movement scanpaths from our two human observers (a) and the model at three different TD/BU mixtures (b, BU 0; c, BU 0.5; d, BU 1.0) for one image. Note the close agreement between the human scanpaths and the behavior of the TD model.
Note also that, with the addition of a BU component, the model's eye either wanders to high-contrast patterns (bushes, trees) before landing on the target (c), or misses the target entirely (d).

4.2. Experiment 2

Recently, Navalpakkam & Itti [19] reported data from a saliency-based model also integrating BU and TD information to guide search. Among their many results, they compared their model to the purely TD model described in [4] and found that their mixture model offered a more realistic account of human behavior. Specifically, they observed that the [4] model was too accurate, often predicting that the target would be fixated after only a single eye movement. Although our current findings would seem to contradict [19]'s result, this is not the case. Recall from Section 2 that our model differs from [4] in two respects: (1) it retinally transforms the input image with each fixation, and (2) it uses a thresholded population-averaging code to generate eye movements. Both of these additions would be expected to increase the number of fixations made by the current model relative to the TD model described in [4]. Adding a simulated retina should increase the number of fixations by reducing the target-scene TD correlations and increasing the probability of false targets emerging in the blurred periphery. Adding population averaging should increase fixations by causing eye movements to locations other than hotspots. It may therefore be that [19]'s critique of [4] points out two specific weaknesses of that model rather than a general weakness of the TD approach. To test this hypothesis, we disabled the artificial retina and the population averaging code in our current model. The model now moves directly from hotspot to hotspot, zapping each before moving to the next. Without retinal blurring and population averaging, the behavior of this simpler model is now driven entirely by a WTA computation on the combined SM.
Moreover, with a BU weighting of 1.0, this version of our model now more closely approximates other purely BU models in the literature that also lack retinal acuity limitations and population dynamics.

Table 2: Human and model search behavior at 5 TD/BU mixtures in Experiment 2 (model without retina and population averaging).

              Human subjects        Model
              H1      H2      TD only   BU: 0.25   BU: 0.5   BU: 0.75   BU only
Misses (%)    0.00    0.00    0.00      9.09       27.27     56.82      68.18
Fixations     4.55    4.43    1.00      8.73       16.60     13.37      14.71
Std Dev       0.88    2.15    0.00      9.15       12.29     9.20       12.84

Table 2 shows the data from this experiment. The first two columns replot the human data from Table 1. Consistent with [19], we now find that the performance of a purely TD model is too good. The target is consistently fixated after only a single eye movement, unlike the 4.5 fixations averaged by human observers. Also consistent with [19] is the observation that a BU contribution may assist this model in better characterizing human behavior. Although a 0.25 BU weighting resulted in a doubling of the human fixation rate and 9% misses, it is conceivable that a smaller BU weighting could nicely describe human performance. As in Experiment 1, at larger BU weightings the model again generated unrealistically high error and fixation rates. These results suggest that, in the absence of retinal and neuronal population-averaging constraints, BU information may play a small role in guiding search.

5. Conclusions

To what extent is TD and BU information used to guide search behavior? The findings from Experiment 1 offer a clear answer to this question: when biologically plausible constraints are considered, any addition of BU information to a purely TD model will worsen, not improve, the match to human search performance (see [20] for a similar conclusion applied to a walking task). The findings from Experiment 2 are more open to interpretation.
It may be possible to devise a TD model in which adding a BU component might prove useful, but doing this would require building into this model biologically implausible assumptions. A corollary to this conclusion is that, when these same biological constraints are added to existing BU saliency-based models, these models may no longer be able to describe human behavior. A final fortuitous finding from this study is the surprising degree of agreement between our purely TD model and human performance. The fact that this agreement was obtained by direct comparison to human behavior (rather than patterns reported in the behavioral literature), and observed in eye movement variables, lends validity to our method. Future work will explore the generality of our TD model, extending it to other forms of TD guidance (e.g., scene context) and tasks in which a target may be poorly defined (e.g., categorical search). Acknowledgments This work was supported by a grant from the ARO (DAAD19-03-1-0039) to G.J.Z. References [1] A. Treisman and G. Gelade. A feature-integration theory of attention. Cognitive Psychology, 12:97–136, 1980. [2] J. Wolfe, K. Cave, and S. Franzel. Guided search: An alternative to the feature integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance, 15:419–433, 1989. [3] J. Wolfe. Guided search 2.0: A revised model of visual search. Psychonomic Bulletin and Review, 1:202–238, 1994. [4] R. Rao, G. Zelinsky, M. Hayhoe, and D. Ballard. Eye movements in iconic visual search. Vision Research, 42:1447–1463, 2002. [5] C. Koch and S. Ullman. Shifts of selective visual attention: Toward the underlying neural circuitry. Human Neurobiology, 4:219–227, 1985. [6] L. Itti and C. Koch. Computational modeling of visual attention. Nature Reviews Neuroscience, 2(3):194–203, 2001. [7] L. Itti and C. Koch. A saliency-based search mechanism for overt and covert shift of visual attention. Vision Research, 40(10-12):1489–1506, 2000. 
[8] R. Rao, G. Zelinsky, M. Hayhoe, and D. Ballard. Modeling saccadic targeting in visual search. In NIPS, 1995. [9] J.S. Perry and W.S. Geisler. Gaze-contingent real-time simulation of arbitrary visual fields. In SPIE, 2002. [10] R. M. Klein and W.J. MacInnes. Inhibition of return is a foraging facilitator in visual search. Psychological Science, 10(4):346–352, 1999. [11] C. A. Dickinson and G. Zelinsky. Marking rejected distractors: A gaze-contingent technique for measuring memory during search. Psychonomic Bulletin and Review, In press. [12] L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. PAMI, 20(11):1254–1259, 1998. [13] T. Sejnowski. Neural populations revealed. Nature, 332:308, 1988. [14] C. Lee, W. Rohrer, and D. Sparks. Population coding of saccadic eye movements by neurons in the superior colliculus. Nature, 332:357–360, 1988. [15] J. Findlay. Global visual processing for saccadic eye movements. Vision Research, 22:1033–1045, 1982. [16] G. Zelinsky, R. Rao, M. Hayhoe, and D. Ballard. Eye movements reveal the spatio-temporal dynamics of visual search. Psychological Science, 8:448–453, 1997. [17] J. L. Elman. Finding structure in time. Cognitive Science, 14:179–211, 1990. [18] A. Toet, P. Bijl, F. L. Kooi, and J. M. Valeton. A high-resolution image dataset for testing search and detection models. Technical Report TNO-NM-98-A020, TNO Human Factors Research Institute, Soesterberg, The Netherlands, 1998. [19] V. Navalpakkam and L. Itti. Modeling the influence of task on attention. Vision Research, 45:205–231, 2005. [20] K. A. Turano, D. R. Geruschat, and F. H. Baker. Oculomotor strategies for direction of gaze tested with a real-world activity. Vision Research, 43(3):333–346, 2003.
A Bayesian Framework for Tilt Perception and Confidence Odelia Schwartz HHMI and Salk Institute La Jolla, CA 92014 odelia@salk.edu Terrence J. Sejnowski HHMI and Salk Institute La Jolla, CA 92014 terry@salk.edu Peter Dayan Gatsby, UCL 17 Queen Square, London dayan@gatsby.ucl.ac.uk Abstract The misjudgement of tilt in images lies at the heart of entertaining visual illusions and rigorous perceptual psychophysics. A wealth of findings has attracted many mechanistic models, but few clear computational principles. We adopt a Bayesian approach to perceptual tilt estimation, showing how a smoothness prior offers a powerful way of addressing much confusing data. In particular, we faithfully model recent results showing that confidence in estimation can be systematically affected by the same aspects of images that affect bias. Confidence is central to Bayesian modeling approaches, and is applicable in many other perceptual domains. Perceptual anomalies and illusions, such as the misjudgements of motion and tilt evident in so many psychophysical experiments, have intrigued researchers for decades.1–3 A Bayesian view4–8 has been particularly influential in models of motion processing, treating such anomalies as the normative product of prior information (often statistically codifying Gestalt laws) with likelihood information from the actual scenes presented. Here, we expand the range of statistically normative accounts to tilt estimation, for which there are classes of results (on estimation confidence) that are so far not available for motion. The tilt illusion arises when the perceived tilt of a center target is misjudged (ie bias) in the presence of flankers. Another phenomenon, called Crowding, refers to a loss in the confidence (ie sensitivity) of perceived target tilt in the presence of flankers. Attempts have been made to formalize these phenomena quantitatively. 
Crowding has been modeled as compulsory feature pooling (ie averaging of orientations), ignoring spatial positions.9,10 The tilt illusion has been explained by lateral interactions11,12 in populations of orientation-tuned units; and by calibration.13 However, most models of this form cannot explain a number of crucial aspects of the data. First, the geometry of the positional arrangement of the stimuli affects attraction versus repulsion in bias, as emphasized by Kapadia et al14 (figure 1A), and others.15,16 Second, Solomon et al. recently measured bias and sensitivity simultaneously.11 The rich and surprising range of sensitivities, far from flat as a function of flanker angles (figure 1B), are outside the reach of standard models. Moreover, current explanations do not offer a computational account of tilt perception as the outcome of a normative inference process. Here, we demonstrate that a Bayesian framework for orientation estimation, with a prior favoring smoothness, can naturally explain a range of seemingly puzzling tilt data. We explicitly consider both the geometry of the stimuli, and the issue of confidence in the estimation.

Figure 1: Tilt biases and sensitivities in visual perception. (A) Kapadia et al demonstrated the importance of geometry on tilt bias, with bar stimuli in the fovea (and similar results in the periphery). When 5 degree clockwise flankers are arranged colinearly, the center target appears attracted in the direction of the flankers; when flankers are lateral, the target appears repulsed. Data are an average of 5 subjects.14 (B) Solomon et al measured both biases and sensitivities for gratings in the visual periphery.11 On the top are example stimuli, with flankers tilted 22.5 degrees clockwise. This constitutes the classic tilt illusion, with a repulsive bias percept. In addition, sensitivities vary as a function of flanker angles, in a systematic way (even in cases when there are no biases at all). Sensitivities are given in units of the inverse of standard deviation of the tilt estimate. More detailed data for both experiments are shown in the results section.

Bayesian analyses have most frequently been applied to bias. Much less attention has been paid to the equally important phenomenon of sensitivity. This aspect of our model should be applicable to other perceptual domains. In section 1 we formulate the Bayesian model. The prior is determined by the principle of creating a smooth contour between the target and flankers. We describe how to extract the bias and sensitivity. In section 2 we show experimental data of Kapadia et al and Solomon et al, alongside the model simulations, and demonstrate that the model can account for both geometry, and bias and sensitivity measurements in the data. Our results suggest a more unified, rational approach to understanding tilt perception.

1 Bayesian model

Under our Bayesian model, inference is controlled by the posterior distribution over the tilt of the target element. This comes from the combination of a prior favoring smooth configurations of the flankers and target, and the likelihood associated with the actual scene. A complete distribution would consider all possible angles and relative spatial positions of the bars, and marginalize the posterior over all but the tilt of the central element. For simplicity, we make two benign approximations: conditionalizing over (ie clamping) the angles of the flankers, and exploring only a small neighborhood of their positions. We now describe the steps of inference.

Smoothness prior: Under these approximations, we consider a given actual configuration (see fig 2A) of flankers f1 = (φ1, x1), f2 = (φ2, x2) and center target c = (φc, xc), arranged from top to bottom.
We have to generate a prior over φc and δ1 = x1 − xc and δ2 = x2 − xc based on the principle of smoothness. As a less benign approximation, we do this in two stages: articulating a principle that determines a single optimal configuration; and generating a prior as a mixture of a Gaussian about this optimum and a uniform distribution, with the mixing proportion of the latter being determined by the smoothness of the optimum. Smoothness has been extensively studied in the computer vision literature.17–20

Figure 2: Geometry and smoothness for flankers, f1 and f2, and center target, c. (A) Example actual configuration of flankers and target, aligned along the y axis from top to bottom. (B) The elastica procedure can rotate the target angle (to Φc) and shift the relative flanker and target positions on the x axis (to δ1 and δ2) in its search for the maximally smooth solution. Small spatial shifts (up to 1/15 the size of R) of positions are allowed, but positional shift is overemphasized in the figure for visibility. (C) Top: center tilt that results in maximal smoothness, as a function of flanker tilt. Boxed cartoons show examples, for given flanker tilts, of the optimally smooth configuration. Note attraction of target towards flankers for small flanker angles; here flankers and target are positioned in a nearly colinear arrangement. Note also repulsion of target away from flankers for intermediate flanker angles. Bottom: P[c, f1, f2] for center tilt that yields maximal smoothness. The y axis is normalized between 0 and 1.

One widely used principle, elastica, known even to Euler, has been applied to contour completion21 and other computer vision applications.17 The basic idea is to find the curve with minimum energy (ie, square of curvature).
Sharon et al19 showed that the elastica function can be well approximated by a number of simpler forms. We adopt a version that Leung and Malik18 adopted from Sharon et al.19 We assume that the probability for completing a smooth curve can be factorized into two terms:

P[c, f1, f2] = G(c, f1) G(c, f2)  (1)

with the term G(c, f1) (and similarly, G(c, f2)) written as:

G(c, f1) = exp(−R/σ_R − D_β/σ_β), where D_β = β_1² + β_c² − β_1 β_c  (2)

and β_1 (and similarly, β_c) is the angle between the orientation at f1, and the line joining f1 and c. The distance between the centers of f1 and c is given by R. The two constants, σ_β and σ_R, control the relative contribution to smoothness of the angle versus the spatial distance. Here, we set σ_β = 1, and σ_R = 1.5.

Figure 2B illustrates an example geometry, in which φc, δ1, and δ2 have been shifted from the actual scene (of figure 2A). We now estimate the smoothest solution for given configurations. Figure 2C shows, for given flanker tilts, the center tilt that yields maximal smoothness, and the corresponding probability of smoothness. For near vertical flankers, the spatial lability leads to very weak attraction and high probability of smoothness. As the flanker angle deviates farther from vertical, there is a large repulsion, but also lower probability of smoothness. These observations are key to our model: the maximally smooth center tilt will influence attractive and repulsive interactions of tilt estimation; the probability of smoothness will influence the relative weighting of the prior versus the likelihood. From the smoothness principle, we construct a two dimensional prior (figure 3A). One dimension represents tilt; the other dimension, the overall positional shift between target and flankers (called 'position').

Figure 3: Bayes model for example flankers and target. (A) Prior 2D distribution for flankers set at 22.5 degrees (note repulsive preference for −5.5 degrees). (B) Likelihood 2D distribution for a target tilt of 3 degrees; (C) Posterior 2D distribution. All 2D distributions are drawn on the same grayscale range, and the presence of a larger baseline in the prior causes it to appear more dimmed. (D) Marginalized posterior, resulting in 1D distribution over tilt. Dashed line represents the mean, with slight preference for negative angle. (E) For this target tilt, we calculate probability clockwise, and obtain one point on the psychometric curve.

The prior is a 2D Gaussian distribution, sat upon a constant baseline.22 The Gaussian is centered at the estimated smoothest target angle and relative position, and the baseline is determined by the probability of smoothness. The baseline, and its dependence on the flanker orientation, is a key difference from Weiss et al's Gaussian prior for smooth, slow motion. It can be seen as a mechanism to allow segmentation (see Posterior description below). The standard deviation of the Gaussian is a free parameter.

Likelihood: The likelihood over tilt and position (figure 3B) is determined by a 2D Gaussian distribution with an added baseline.22 The Gaussian is centered at the actual target tilt; and at a position taken as zero, since this is the actual position, to which the prior is compared. The standard deviation and baseline constant are free parameters.

Posterior and marginalization: The posterior comes from multiplying likelihood and prior (figure 3C) and then marginalizing over position to obtain a 1D distribution over tilt.
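As an illustrative sketch (function names are ours, not the authors'; σ_β = 1 and σ_R = 1.5 as in the text), the pairwise smoothness score of equations 1 and 2 can be computed as:

```python
import math

SIGMA_BETA, SIGMA_R = 1.0, 1.5   # constants from the text

def G(R, beta_f, beta_c):
    """Smoothness of one flanker-center pair (eq. 2): R is the distance
    between centers; beta_f, beta_c (radians) are the angles between each
    element's orientation and the line joining the pair."""
    D_beta = beta_f ** 2 + beta_c ** 2 - beta_f * beta_c
    return math.exp(-R / SIGMA_R - D_beta / SIGMA_BETA)

def smoothness(R1, b1, bc1, R2, b2, bc2):
    """P[c, f1, f2] = G(c, f1) * G(c, f2), as in eq. 1."""
    return G(R1, b1, bc1) * G(R2, b2, bc2)

# A colinear arrangement (all betas zero) is maximally smooth for a given
# spacing; tilting any element away from the joining line lowers P.
assert smoothness(1.0, 0.0, 0.0, 1.0, 0.0, 0.0) > smoothness(1.0, 0.4, 0.0, 1.0, 0.0, 0.0)
```

In the model this score determines both the center of the Gaussian prior (via the maximally smooth configuration) and its baseline weighting.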
Figure 3D shows an example in which this distribution is bimodal. Other likelihoods, with closer agreement between target and smooth prior, give unimodal distributions. Note that the bimodality is a direct consequence of having an added baseline to the prior and likelihood (if these were Gaussian without a baseline, the posterior would always be Gaussian). The viewer is effectively assessing whether the target is associated with the same object as the flankers, and this is reflected in the baseline, and consequently, in the bimodality, and confidence estimate. We define α as the mean angle of the 1D posterior distribution (eg, value of dashed line on the x axis), and β as the height of the probability distribution at that mean angle (eg, height of dashed line). The term β is an indication of confidence in the angle estimate, where for larger values we are more certain of the estimate.

Decision of probability clockwise: The probability of a clockwise tilt is estimated from the marginalized posterior:

P = 1 / (1 + exp(−α k − log(β + η)))  (3)

where α and β are defined as above, k is a free parameter and η a small constant. Free parameters are set to a single constant value for all flanker and center configurations. Weiss et al use a similar compressive nonlinearity, but without the term β. We also tried a decision function that integrates the posterior, but the resulting curves were far from the sigmoidal nature of the data.

Bias and sensitivity: For one target tilt, we generate a single probability and therefore a single point on the psychometric function relating tilt to the probability of choosing clockwise.
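A minimal sketch of the decision stage of equation 3, with α and β read off a marginalized posterior as described above; the values of k and η below are illustrative, not the authors' fitted settings:

```python
import math

def prob_clockwise(alpha, beta, k, eta=1e-3):
    """Probability of responding 'clockwise' (eq. 3): alpha is the mean of
    the 1D marginalized posterior over tilt, beta its height at that mean,
    k a free parameter and eta a small constant (value here illustrative)."""
    return 1.0 / (1.0 + math.exp(-alpha * k - math.log(beta + eta)))

# A posterior centered on zero tilt gives a near-chance response; shifting
# the posterior mean clockwise pushes P toward 1, tracing out one point of
# the psychometric function per target tilt.
p0 = prob_clockwise(alpha=0.0, beta=1.0, k=2.5)
p1 = prob_clockwise(alpha=2.0, beta=1.0, k=2.5)
assert p0 < p1
```

Note how β enters through the log term: a low, uncertain posterior flattens the resulting psychometric curve, which is what links confidence to sensitivity.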
We generate the full psychometric curve from all target tilts and fit to it a cumulative Gaussian distribution N(µ, σ) (figure 3E). The mean µ of the fit corresponds to the bias, and 1/σ to the sensitivity, or confidence in the bias. The fit to a cumulative Gaussian and extraction of these parameters exactly mimic psychophysical procedures.11

Figure 4: Kapadia et al data,14 versus Bayesian model. Solid lines are fits to a cumulative Gaussian distribution. (A) Flankers are tilted 5 degrees clockwise (black curve) or anti-clockwise (gray) of vertical, and positioned spatially in a colinear arrangement. The center bar appears tilted in the direction of the flankers (attraction), as can be seen by the attractive shift of the psychometric curve. The boxed stimuli cartoon illustrates a vertical target amidst the flankers. (B) Model for colinear bars also produces attraction. (C) Data and (D) model for lateral flankers results in repulsion. All data are collected in the fovea for bars.

2 Results: data versus model

We first consider the geometry of the center and flanker configurations, modeling the full psychometric curve for colinear and parallel flanks (recall that figure 1A showed summary biases). Figure 4A;B demonstrates attraction in the data and model; that is, the psychometric curve is shifted towards the flanker, because of the nature of smooth completions for colinear flankers. Figure 4C;D shows repulsion in the data and model. In this case, the flankers are arranged laterally instead of colinearly. The smoothest solution in the model arises by shifting the target estimate away from the flankers.
This shift is rather minor, because the configuration has a low probability of smoothness (similar to figure 2C), and thus the prior exerts only a weak effect. The above results show examples of changes in the psychometric curve, but do not address both bias and, particularly, sensitivity, across a whole range of flanker configurations. Figure 5 depicts biases and sensitivity from Solomon et al, versus the Bayes model. The data are shown for a representative subject, but the qualitative behavior is consistent across all subjects tested. In figure 5A, bias is shown, for the condition that both flankers are tilted at the same angle. The data exhibit small attraction at near vertical flanker angles (this arrangement is close to colinear); large repulsion at intermediate flanker angles of 22.5 and 45 degrees from vertical; and minimal repulsion at large angles from vertical. This behavior is also exhibited in the Bayes model (Figure 5B). For intermediate flanker angles, the smoothest solution in the model is repulsive, and the effect of the prior is strong enough to induce a significant repulsion. For large angles, the prior exerts almost no effect. Interestingly, sensitivity is far from flat in both data and model. In the data (Figure 5C), there is most loss in sensitivity at intermediate flanker angles of 22.5 and 45 degrees (ie, the subject is less certain); and sensitivity is higher for near vertical or near horizontal flankers. The model shows the same qualitative behavior (Figure 5D). In the model, there are two factors driving sensitivity: one is the probability of completing a smooth curvature for a given flanker configuration, as in Figure 2B; this determines the strength of the prior. 
The other factor is certainty in a particular center estimation; this is determined by β, derived from the posterior distribution, and incorporated into the decision stage of the model (equation 3).

Figure 5: Solomon et al data11 (subject FF), versus Bayesian model. (A) Data and (B) model biases with same-tilted flankers; (C) Data and (D) model sensitivities with same-tilted flankers; (E;G) data and (F;H) model as above, but for opposite-tilted flankers (note that opposite-tilted data was collected for fewer flanker angles). Each point in the figure is derived by fitting a cumulative Gaussian distribution N(µ, σ) to the corresponding psychometric curve, and setting bias equal to µ and sensitivity to 1/σ. In all experiments, flanker and target gratings are presented in the visual periphery. Both data and model stimuli are averages of two configurations, on the left hand side (9 o'clock position) and right hand side (3 o'clock position). The configurations are similar to Figure 1B, but slightly shifted along an iso-eccentric circle, so that all stimuli are similarly visible in the periphery.
For flankers that are far from vertical, the prior has minimal effect because one cannot find a smooth solution (eg, the likelihood dominates), and thus sensitivity is higher. The low sensitivity at intermediate angles arises because the prior has considerable effect; and there is conflict between the prior (tilt, position), and likelihood (tilt, position). This leads to uncertainty in the target angle estimation. For flankers near vertical, the prior exerts a strong effect; but there is less conflict between the likelihood and prior estimates (tilt, position) for a vertical target. This leads to more confidence in the posterior estimate, and therefore, higher sensitivity. The only aspect that our model does not reproduce is the (more subtle) sensitivity difference between 0 and +/− 5 degree flankers. Figure 5E-H depict data and model for opposite tilted flankers. The bias is now close to zero in the data (Figure 5E) and model (Figure 5F), as would be expected (since the maximally smooth angle is now always roughly vertical). Perhaps more surprisingly, the sensitivities continue to be non-flat in the data (Figure 5G) and model (Figure 5H). This behavior arises in the model due to the strength of prior, and positional uncertainty. As before, there is most loss in sensitivity at intermediate angles. Note that to fit Kapadia et al, simulations used a constant parameter of k = 9 in equation 3, whereas for the Solomon et al. simulations, k = 2.5. This indicates that, in our model, there was higher confidence in the foveal experiments than in the peripheral ones.

3 Discussion

We applied a Bayesian framework to the widely studied tilt illusion, and demonstrated the model on examples from two different data sets involving foveal and peripheral estimation.
Our results support the appealing hypothesis that perceptual misjudgements are not a consequence of poor system design, but rather can be described as optimal inference.4–8 Our model accounts correctly for both attraction and repulsion, determined by the smoothness prior and the geometry of the scene. We emphasized the issue of estimation confidence. The dataset showing how confidence is affected by the same issues that affect bias,11 was exactly appropriate for a Bayesian formulation; other models in the literature typically do not incorporate confidence in a thoroughly probabilistic manner. In fact, our model fits the confidence (and bias) data more proficiently than an account based on lateral interactions among a population of orientation-tuned cells.11 Other Bayesian work, by Stocker et al,6 utilized the full slope of the psychometric curve in fitting a prior and likelihood to motion data, but did not examine the issue of confidence. Estimation confidence plays a central role in Bayesian formulations as a whole. Understanding how priors affect confidence should have direct bearing on many other Bayesian calculations such as multimodal integration.23 Our model is obviously over-simplified in a number of ways. First, we described it in terms of tilts and spatial positions; a more complete version should work in the pixel/filtering domain.18,19 We have also only considered two flanking elements; the model is extendible to a full-field surround, whereby smoothness operates along a range of geometric directions, and some directions are more (smoothly) dominant than others. Second, the prior is constructed by summarizing the maximal smoothness information; a more probabilistically correct version should capture the full probability of smoothness in its prior. Third, our model does not incorporate a formal noise representation; however, sensitivities could be influenced both by stimulus-driven noise and confidence.
Fourth, our model does not address attraction in the so-called indirect tilt illusion, thought to be mediated by a different mechanism. Finally, we have yet to account for neurophysiological data within this framework, and to incorporate constraints at the neural implementation level. However, versions of our computations are often suggested for intra-areal and feedback cortical circuits; and smoothness principles form a key part of the association field connection scheme in Li’s24 dynamical model of contour integration in V1. Our model is connected to a wealth of literature in computer vision and perception. Notably, occlusion and contour completion might be seen as the extreme example in which there is no likelihood information at all for the center target; a host of papers have shown that under these circumstances, smoothness principles such as elastica and variants explain many aspects of perception. The model is also associated with many studies on contour integration motivated by Gestalt principles;25,26 and exploration of natural scene statistics and Gestalt,27,28 including the relation to contour grouping within a Bayesian framework.29,30 Indeed, our model could be modified to include a prior from natural scenes. There are various directions for the experimental test and refinement of our model. Most pressing is to determine bias and sensitivity for different center and flanker contrasts. As in the case of motion, our model predicts that when there is more uncertainty in the center element, prior information is more dominant. Another interesting test would be to design a task such that the center element is actually part of a different figure and unrelated to the flankers; our framework predicts that there would be minimal bias, because of segmentation. Our model should also be applied to other tilt-based illusions such as the Fraser spiral and Zöllner.
Finally, our model can be applied to other perceptual domains;31 and given the apparent similarities between the tilt illusion and the tilt after-effect, we plan to extend the model to adaptation, by considering smoothness in time as well as space.

Acknowledgements

This work was funded by the HHMI (OS, TJS) and the Gatsby Charitable Foundation (PD). We are very grateful to Serge Belongie, Leanne Chukoskie, Philip Meier and Joshua Solomon for helpful discussions.

References

[1] J J Gibson. Adaptation, after-effect, and contrast in the perception of tilted lines. Journal of Experimental Psychology, 20:553–569, 1937.
[2] C Blakemore, R H S Carpenter, and M A Georgeson. Lateral inhibition between orientation detectors in the human visual system. Nature, 228:37–39, 1970.
[3] J A Stuart and H M Burian. A study of separation difficulty: Its relationship to visual acuity in normal and amblyopic eyes. American Journal of Ophthalmology, 53:471–477, 1962.
[4] A Yuille and H H Bülthoff. Perception as Bayesian inference. In Knill and Richards, editors, Bayesian decision theory and psychophysics, pages 123–161. Cambridge University Press, 1996.
[5] Y Weiss, E P Simoncelli, and E H Adelson. Motion illusions as optimal percepts. Nature Neuroscience, 5:598–604, 2002.
[6] A Stocker and E P Simoncelli. Constraining a Bayesian model of human visual speed perception. Adv in Neural Info Processing Systems, 17, 2004.
[7] D Kersten, P Mamassian, and A Yuille. Object perception as Bayesian inference. Annual Review of Psychology, 55:271–304, 2004.
[8] K Kording and D Wolpert. Bayesian integration in sensorimotor learning. Nature, 427:244–247, 2004.
[9] L Parkes, J Lund, A Angelucci, J Solomon, and M Morgan. Compulsory averaging of crowded orientation signals in human vision. Nature Neuroscience, 4:739–744, 2001.
[10] D G Pelli, M Palomares, and N J Majaj. Crowding is unlike ordinary masking: Distinguishing feature integration from detection. Journal of Vision, 4:1136–1169, 2002.
[11] J Solomon, F M Felisberti, and M Morgan. Crowding and the tilt illusion: Toward a unified account. Journal of Vision, 4:500–508, 2004.
[12] J A Bednar and R Miikkulainen. Tilt aftereffects in a self-organizing model of the primary visual cortex. Neural Computation, 12:1721–1740, 2000.
[13] C W Clifford, P Wenderoth, and B Spehar. A functional angle on some after-effects in cortical vision. Proc Biol Sci, 1454:1705–1710, 2000.
[14] M K Kapadia, G Westheimer, and C D Gilbert. Spatial distribution of contextual interactions in primary visual cortex and in visual perception. J Neurophysiology, 4:2048–2062, 2000.
[15] C C Chen and C W Tyler. Lateral modulation of contrast discrimination: Flanker orientation effects. Journal of Vision, 2:520–530, 2002.
[16] I Mareschal, M P Sceniak, and R M Shapley. Contextual influences on orientation discrimination: binding local and global cues. Vision Research, 41:1915–1930, 2001.
[17] D Mumford. Elastica and computer vision. In Chandrajit Bajaj, editor, Algebraic geometry and its applications. Springer Verlag, 1994.
[18] T K Leung and J Malik. Contour continuity in region based image segmentation. In Proc. ECCV, pages 544–559, 1998.
[19] E Sharon, A Brandt, and R Basri. Completion energies and scale. IEEE Pat. Anal. Mach. Intell., 22(10), 1997.
[20] S W Zucker, C David, A Dobbins, and L Iverson. The organization of curve detection: coarse tangent fields. Computer Graphics and Image Processing, 9(3):213–234, 1988.
[21] S Ullman. Filling in the gaps: the shape of subjective contours and a model for their generation. Biological Cybernetics, 25:1–6, 1976.
[22] G E Hinton and A D Brown. Spiking Boltzmann machines. Adv in Neural Info Processing Systems, 12, 1998.
[23] R A Jacobs. What determines visual cue reliability? Trends in Cognitive Sciences, 6:345–350, 2002.
[24] Z Li. A saliency map in primary visual cortex. Trends in Cognitive Science, 6:9–16, 2002.
[25] D J Field, A Hayes, and R F Hess. Contour integration by the human visual system: evidence for a local “association field”. Vision Research, 33:173–193, 1993.
[26] J Beck, A Rosenfeld, and R Ivry. Line segregation. Spatial Vision, 4:75–101, 1989.
[27] M Sigman, G A Cecchi, C D Gilbert, and M O Magnasco. On a common circle: Natural scenes and gestalt rules. PNAS, 98(4):1935–1940, 2001.
[28] S Mahamud, L R Williams, K K Thornber, and K Xu. Segmentation of multiple salient closed contours from real images. IEEE Pat. Anal. Mach. Intell., 25(4):433–444, 1997.
[29] W S Geisler, J S Perry, B J Super, and D P Gallogly. Edge co-occurrence in natural images predicts contour grouping performance. Vision Research, 6:711–724, 2001.
[30] J H Elder and R M Goldberg. Ecological statistics of gestalt laws for the perceptual organization of contours. Journal of Vision, 4:324–353, 2002.
[31] S R Lehky and T J Sejnowski. Neural model of stereoacuity and depth interpolation based on a distributed representation of stereo disparity. Journal of Neuroscience, 10:2281–2299, 1990.
2005
Multiple Instance Boosting for Object Detection

Paul Viola, John C. Platt, and Cha Zhang
Microsoft Research
1 Microsoft Way, Redmond, WA 98052
{viola,jplatt}@microsoft.com

Abstract

A good image object detection algorithm is accurate, fast, and does not require exact locations of objects in a training set. We can create such an object detector by taking the architecture of the Viola-Jones detector cascade and training it with a new variant of boosting that we call MILBoost. MILBoost uses cost functions from the Multiple Instance Learning literature combined with the AnyBoost framework. We adapt the feature selection criterion of MILBoost to optimize the performance of the Viola-Jones cascade. Experiments show that the detection rate is up to 1.6 times better using MILBoost. This increased detection rate shows the advantage of simultaneously learning the locations and scales of the objects in the training set along with the parameters of the classifier.

1 Introduction

When researchers use machine learning for object detection, they need to know the location and size of the objects, in order to generate positive examples for the classification algorithm. It is often extremely tedious to generate large training sets of objects, because it is not easy to specify exactly where the objects are. For example, given a ZIP code of handwritten digits, which pixel is the location of a “5”? This sort of ambiguity leads to training sets which themselves have high error rates; this limits the accuracy of any trained classifier. In this paper, we explicitly acknowledge that object recognition is innately a Multiple Instance Learning (MIL) problem: we know that objects are located in regions of the image, but we don’t know exactly where. In MIL, training examples are not singletons. Instead, they come in “bags”, where all of the examples in a bag share a label [4].
A positive bag means that at least one example in the bag is positive, while a negative bag means that all examples in the bag are negative. In MIL, learning must simultaneously determine which examples in the positive bags are positive, along with the parameters of the classifier. We have combined MIL with the Viola-Jones method of object detection, which uses AdaBoost [11] to create a cascade of detectors. To do this, we created MILBoost, a new method for folding MIL into the AnyBoost [9] framework. In addition, we show how early stages in the detection cascade can be re-trained using information extracted from the final MIL classifier. We test this new form of MILBoost for detecting people in a teleconferencing application. This is a much harder problem than face detection, since the participants do not look at the camera (and sometimes look away). The MIL framework is shown to produce classifiers with much higher detection rates and fast computation times.

1.1 Structure of paper

We first review the previous work in two fields: related work in object detection (Section 2.1) and in multiple instance learning (Section 2.2). We derive a new MIL variant of boosting in Section 3, called MILBoost. MILBoost is used to train a detector in the Viola-Jones framework in Section 4. We then adapt MILBoost to train an effective cascade using a new criterion for selecting features in the early rounds of training (Section 5). The paper concludes in Section 6 with experimental results on the problem of person detection in a teleconferencing application.

2 Relationship to previous work

This paper lies at the intersection between the subfields of object detection and multiple instance learning. Therefore, we discuss the relationship with previous work in each subfield separately.
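The bag-label semantics described above can be written down in a couple of lines (a generic sketch of the MIL convention, not code from the paper):

```python
def bag_is_positive(example_labels):
    """MIL semantics: a bag is positive iff at least one of its
    examples is positive; a negative bag contains only negatives."""
    return any(example_labels)

# One positive example suffices to make the bag positive.
assert bag_is_positive([False, True, False])
assert not bag_is_positive([False, False, False])
```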
2.1 Previous work in image object detection

The task of object detection in images is quite daunting. Amongst the challenges are 1) creating a system with high accuracy and low false detection rate, 2) restricting the system to consume a reasonable amount of CPU time, and 3) creating a large training set that has low labeling error. Perona et al. [3, 5] and Schmid [12] have proposed constellation models: spatial models of local image features. These models can be trained using unsegmented images in which the object can appear at any location. Learning uses EM-like algorithms to iteratively localize and refine discriminative image features. However, hitherto, the detection accuracy has not been as good as the best methods. Viola and Jones [13] created a system that exhaustively scans pose space for generic objects. This system is accurate, because it is trained using AdaBoost [11]. It is also very efficient, because it uses a cascade of detectors and very simple image features. However, the AdaBoost algorithm requires exact positions of objects to learn. The closest work to this paper is Nowlan and Platt [10], which built on the work of Keeler et al. [7] (see below). In the Nowlan paper, a convolutional neural network was trained to detect hands. The exact location and size of the hands is only approximately truthed: the neural network used MIL training to co-learn the object location and the parameters of the classifier. The system is effective, but is not as fast as Viola and Jones, because the detector is more complex and it does not use a cascade. This paper builds on the accuracy and speed of Viola and Jones, by using the same architecture. We attempt to gain the flexibility of the constellation models. Instead of an EM-like algorithm, we use MIL to create our system, which does not require iteration. Unlike Nowlan and Platt, we maintain a cascade of detectors for maximum speed.
2.2 Previous work in Multiple Instance Learning

The idea for multiple instance learning was originally proposed in 1990 for handwritten digit recognition by Keeler et al. [7]. Keeler’s approach was called Integrated Segmentation and Recognition (ISR). In that paper, the position of a digit in a ZIP code was considered completely unknown. ISR simultaneously learned the positions of the digits and the parameters of a convolutional neural network recognizer. More details on ISR are given below (Section 3.2). Another relevant example of MIL is the Diverse Density approach of Maron [8]. Diverse Density uses the Noisy OR generative model [6] to explain the bag labels. A gradient-descent algorithm is used to find the best point in input space that explains the positive bags. We also utilize the Noisy OR generative model in a version of our algorithm, below (Section 3.1). Finally, a number of researchers have modified the boosting algorithm to perform MIL. For example, Andrews and Hofmann [1] have proposed modifying the inner loop of boosting to use linear programming. This is not practically applicable to the object detection task, which can have millions of examples (pixels) and thousands of bags. Another approach is due to Auer and Ortner [2], which enforces a constraint that weak classifiers must be either hyper-balls or hyper-rectangles in R^N. This would exclude the fast features used by Viola and Jones. A third approach is that of Xu and Frank [14], which uses a generative model in which the probability of a bag being positive is the mean of the probabilities that the examples are positive. We believe that this rule is unsuited for object detection, because only a small subset of the examples in a bag are ever positive.

3 MIL and Boosting

We will present two new variants of AdaBoost which attempt to solve the MIL problem. The derivation uses the AnyBoost framework of Mason et al., which views boosting as a gradient descent process [9].
The derivation builds on previously proposed MIL cost functions, namely ISR and Noisy OR. The Noisy OR derivation is simpler and a bit more intuitive.

3.1 Noisy-OR Boost

Recall that in boosting, each example is classified by a linear combination of weak classifiers. In MILBoost, examples are not individually labeled. Instead, they reside in bags. Thus, an example is indexed with two indices: i, which indexes the bag, and j, which indexes the example within the bag. The score of an example is y_ij = C(x_ij), where C(x_ij) = Σ_t λ_t c_t(x_ij) is a weighted sum of weak classifiers. The probability that an example is positive is given by the standard logistic function,

  p_ij = 1 / (1 + exp(−y_ij)).

The probability that the bag is positive is a “noisy OR”, p_i = 1 − ∏_{j∈i} (1 − p_ij) [6] [8]. Under this model the likelihood assigned to a set of training bags is:

  L(C) = ∏_i p_i^{t_i} (1 − p_i)^{(1 − t_i)},

where t_i ∈ {0, 1} is the label of bag i. Following the AnyBoost approach, the weight on each example is given as the derivative of the cost function with respect to a change in the score of the example. The derivative of the log likelihood is:

  ∂ log L(C) / ∂y_ij = w_ij = ((t_i − p_i) / p_i) p_ij.   (1)

Note that the weights here are signed. The interpretation is straightforward; the sign determines the example label. Each round of boosting is a search for a classifier which maximizes Σ_ij c(x_ij) w_ij, where c(x_ij) is the score assigned to the example by the weak classifier (for a binary classifier, c(x_ij) ∈ {−1, +1}). The parameter λ_t is determined using a line search to maximize log L(C + λ_t c_t). Examining criterion (1), the weight on each example is the product of two quantities: the bag weight W_bag = (t_i − p_i) / p_i and the instance weight W_instance = p_ij. Observe that W_bag for a negative bag is always −1. Thus, the weight for a negative instance, p_ij, is the same as would result in a non-MIL AdaBoost framework (i.e. the negative examples are all equally negative). The weight on the positive instances is more complex.
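Concretely, the noisy-OR bag probability and the resulting signed example weights of equation (1) can be computed as follows (a minimal sketch with made-up scores, not the paper's implementation):

```python
import math

def noisy_or_weights(scores, t):
    """Given example scores y_ij for one bag and bag label t in {0, 1},
    return the noisy-OR bag probability p_i and the signed AnyBoost
    weights w_ij = ((t - p_i) / p_i) * p_ij of equation (1)."""
    p = [1.0 / (1.0 + math.exp(-y)) for y in scores]   # p_ij, logistic
    p_bag = 1.0 - math.prod(1.0 - pij for pij in p)    # noisy OR over the bag
    w = [(t - p_bag) / p_bag * pij for pij in p]
    return p_bag, w

# For a negative bag (t = 0), the bag weight (t - p_i)/p_i is -1, so
# w_ij = -p_ij: exactly the weight a non-MIL AdaBoost would assign.
p_bag, w = noisy_or_weights([0.5, -1.0, 2.0], t=0)
```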
As learning proceeds and the probability of the bag approaches the target, the weight on the entire bag is reduced. Within the bag, the examples are assigned a weight which is higher for examples with higher scores. Intuitively, the algorithm selects a subset of examples to assign a higher positive weight, and these examples dominate subsequent learning.

3.2 ISR Boost

The authors of the ISR paper may well have been aware of the Noisy OR criterion described above. They chose instead to derive a different, perhaps less probabilistic, criterion. They did this in part because the derivatives (and hence example weights) lead to a form of instance competition. Define χ_ij = exp(y_ij), S_i = Σ_{j∈i} χ_ij, and p_i = S_i / (1 + S_i). Keeler et al. argue that χ_ij can be interpreted as the likelihood that the object occurs at ij. The quantity S_i can be interpreted as a likelihood ratio that some (at least one) instance is positive, and finally p_i is the probability that some instance is positive. The example weights for the ISR framework are:

  ∂ log L(C) / ∂y_ij = w_ij = (t_i − p_i) χ_ij / Σ_{j∈i} χ_ij.   (2)

Examining the ISR criterion reveals two key properties. The first is the form of the example weight, which is explicitly competitive. The examples in the bag compete for weight, since the weight is normalized by the sum of the χ_ij’s. Though the experimental evidence is weak, this rule perhaps leads to a very localized representation, where a single example is labeled positive and the other examples are labeled negative. The second property is that the negative examples also compete for weight. This turns out to be troublesome in the detection framework, since there are many, many more negative examples than positive. How many negative bags should there be? In contrast, the Noisy OR criterion treats all negative examples as independent negative examples.
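The ISR weights of equation (2) can be sketched the same way; the normalization by Σ χ_ij is what makes examples in a bag compete, and the weights in a bag always sum to t_i − p_i (hypothetical scores again, not the paper's code):

```python
import math

def isr_weights(scores, t):
    """ISR example weights for one bag, equation (2):
    chi_ij = exp(y_ij), S_i = sum_j chi_ij, p_i = S_i / (1 + S_i),
    w_ij = (t - p_i) * chi_ij / S_i."""
    chi = [math.exp(y) for y in scores]
    s = sum(chi)
    p_bag = s / (1.0 + s)
    return p_bag, [(t - p_bag) * c / s for c in chi]

# Higher-scoring examples grab a larger share of the bag's weight.
p_bag, w = isr_weights([0.5, -1.0, 2.0], t=1)
```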
4 Application of MIL Boost to Object Detection in Images

Each image is divided into a set of overlapping square windows that uniformly sample the space of position and scale (typically there are between 10,000 and 100,000 windows in a training image). Each window is used as an example for the purposes of training and detection. Each training image is labeled to determine the position and scale of the object of interest. For certain types of objects, such as frontal faces, it may be possible to accurately determine the position and scale of the face. One possibility is to localize the eyes and then to determine the single positive image window in which the eyes appear at a given relative location and scale. Even for this type of object the effort in carefully labeling the images is significant. For many other types of objects (objects which may be visible from multiple poses, or are highly varied, or are flexible) the “correct” geometric normalization is unclear. It is not clear how to normalize images of people in a conference room, who may be standing, sitting upright, reclining, looking toward, or looking away from the camera. Similar questions arise for most other image classes such as cars, trees, or fruit.

[Figure 1: Two example images with people in a wide variety of poses. The algorithm will attempt to detect all people in the images, including those that are looking away from the camera.]
[Figure 2: Some of the subwindows in one positive bag.]

Experiments in this paper are performed on a set of images from a teleconferencing application. The images are acquired from a set of cameras near the center of the conference room (see Figure 1). The practical challenge is to steer a synthetic virtual camera toward the location of the speaker. The focus here is on person detection; determination of the person who is speaking is beyond the scope of this paper. In every training image each person is labeled by hand.
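The overlapping square windows that uniformly sample position and scale might be enumerated as follows (step sizes and the geometric scale factor are illustrative choices; the paper does not specify them):

```python
def square_windows(img_w, img_h, min_size=24, scale_step=1.25, shift_frac=0.25):
    """Enumerate overlapping square windows (x, y, size): positions are
    sampled on a grid at each scale, and scales grow geometrically."""
    windows = []
    size = min_size
    while size <= min(img_w, img_h):
        step = max(1, int(size * shift_frac))  # shift proportional to scale
        for y in range(0, img_h - size + 1, step):
            for x in range(0, img_w - size + 1, step):
                windows.append((x, y, size))
        size = int(size * scale_step)
    return windows
```

Even for a modest image this yields thousands of windows, consistent with the 10,000–100,000 windows per training image quoted above.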
The labeler is instructed to draw a box around the head of the person. While this may seem like a reasonable geometric normalization, it ignores one critical issue: context. At the available resolution (approximately 1000x150 pixels) the head is often less than 10 pixels wide. At this resolution, even for clear frontal faces, the best face detection algorithms frequently fail. There are simply too few pixels on the face. The only way to detect the head is to include the surrounding image context. It is difficult to determine the correct quantity of image context (Figure 2 shows many possible normalizations). If the body context is used to assist in detection, it is difficult to foresee the effect of body pose. Some of the participants are facing right, others left, and still others are leaning far forward/backward (while taking notes or reclining). The same context image is not appropriate for all situations. Both of these issues can be addressed with the use of MIL. Each positive head is represented, during training, by a large number of related image windows (see Figure 2). The MIL boosting algorithm is then used to simultaneously learn a detector and determine the location and scale of the appropriate image context.

5 MIL Boosting a Detection Cascade

In their work on face detection, Viola and Jones train a cascade of classifiers, each designed to achieve high detection rates and modest false positive rates. During detection almost all of the computation is performed by the early stages in the cascade, perhaps 90% in the first 10 features. Training the initial stages of the cascade is the key to a fast and effective classifier. Training and evaluating a detector in a MIL framework has a direct impact on cascade construction, both on the features selected and the appropriate thresholds. The result of the MIL boost learning process is not only an example classifier, but also a set of weights on the examples.
Those examples in positive bags which are assigned high weight also have high score. The final classifier labels these examples positive. The remaining examples in the positive bags are assigned a low weight and have a low score. The final classifier often classifies these examples as negative (as they should be). Since boosting is a greedy process, the initial weak classifiers do not have any knowledge of the subsequent classifiers. As a result, the first classifiers selected have no knowledge of the final weights assigned to the examples. The key to efficient processing is that the initial classifiers have a low false negative rate on the examples determined to be positive by the final MIL classifier. This suggests a simple scheme for retraining the initial classifiers. Train a complete MIL boosted classifier and set the detection threshold to achieve the desired false positive and false negative rates. Retrain the initial weak classifier so that it has a zero false negative rate on the examples labeled positive by the full classifier. This results in a significant increase in the number of examples which can be pruned by this classifier. The process can be repeated, so that the second classifier is trained to yield a zero false negative rate on the remaining examples.

6 Experimental Results

Experiments were performed using a set of 8 videos recorded in different conference rooms. A collection of 1856 images were sampled from these videos. In all cases the detector was trained on 7 video conferences and tested on the remaining video conference. There were a total of 12364 visible people in these images. Each person was labeled by drawing a rectangle around the head. Learning is performed on a total of about 30 million subwindows in the 1856 images. In addition to the monochrome images, two additional feature images are used.
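In its simplest form, the retraining scheme described in Section 5 reduces to re-choosing a first-stage threshold from the scores of the examples that the full MIL classifier labels positive (a simplified sketch under that assumption; the paper retrains the weak classifier itself, not just its threshold):

```python
def retrain_first_stage(stage_scores_pos, stage_scores_neg):
    """Pick the highest first-stage threshold with zero false negatives
    on the examples the full MIL classifier labeled positive, and report
    the fraction of negatives rejected (pruned) at that threshold."""
    threshold = min(stage_scores_pos)   # keep every MIL-positive example
    rejected = sum(s < threshold for s in stage_scores_neg)
    return threshold, rejected / len(stage_scores_neg)
```

Because the MIL classifier has already discarded the ambiguous windows in positive bags, the surviving positives have higher first-stage scores, so the threshold can be raised and more negatives pruned early.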
One measures the difference from the running mean image (this is something like background subtraction) and the other measures temporal variance over longer time scales. A set of 2654 rectangle filters are used for training. In each round the optimal filter and threshold is selected. In each experiment a total of 60 filters are learned.

[Figure 3: ROC comparison between various boosting rules.]
[Figure 4: One example from the testing dataset and overlaid results.]

We compared classical AdaBoost with two variants of MILBoost: ISR and Noisy-OR. For the MIL algorithms there is one bag for each labeled head, containing those positive windows which overlap that head. Additionally there is one negative bag for each image. After training, performance is evaluated on held-out conference video (see Figure 3). During training a set of positive windows is generated for each labeled example. All windows whose width is between 0.67 times and 1.5 times the head width and whose center is within 0.5 times the head width of the center of the head are labeled positive. An exception is made for AdaBoost, which has a tighter definition of positive examples (width between 0.83 and 1.2 times the head width and center within 0.2 times the head width) and produces better performance than the looser criterion. All windows which do not overlap with any head are considered negative. For each algorithm one experiment uses the ground truth obtained by hand (which has small yet unavoidable errors). A second experiment corrupts this ground truth further, moving each head by a uniform random shift such that there is non-zero overlap with the true position. Note that conventional AdaBoost is much worse when trained using corrupted ground truth. Interestingly, AdaBoost is worse than NorBoost (Noisy-OR boosting) even using the “correct” ground truth, even with a tight definition of positive examples. We conjecture that this is due to unavoidable ambiguity in the training and testing data.
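The window-labeling rule just described can be written directly from the text (head given as center and width; the defaults encode the loose MIL criterion, and the AdaBoost variant tightens the constants to 0.83–1.2 and 0.2; the Euclidean center distance is my assumption, as the text does not specify the metric):

```python
def window_label(win_cx, win_cy, win_w, head_cx, head_cy, head_w,
                 w_lo=0.67, w_hi=1.5, c_frac=0.5):
    """True if the window counts as positive for a labeled head: window
    width within [w_lo, w_hi] times the head width, and window center
    within c_frac times the head width of the head center."""
    ok_width = w_lo * head_w <= win_w <= w_hi * head_w
    dist = ((win_cx - head_cx) ** 2 + (win_cy - head_cy) ** 2) ** 0.5
    return ok_width and dist <= c_frac * head_w

# The tight AdaBoost criterion is the same rule with different constants:
# window_label(..., w_lo=0.83, w_hi=1.2, c_frac=0.2)
```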
Overall the MIL detection results are practically useful. A typical example of detection results is shown in Figure 4. Results shown are for the noisy OR algorithm. In order to simplify the display, significantly overlapping detection windows are averaged into a single window. The scheme for retraining the initial classifier was evaluated on the noisy OR strong classifier trained above. Training a conventional cascade requires finding a small set of weak classifiers that can achieve zero false negative rate (or almost zero) and a low false positive rate. Using the first weak classifier yields a false positive rate of 39.7%. Including the first four weak classifiers yields a false positive rate of 21.4%. After retraining, the first weak classifier alone yields a false positive rate of 11.7%. This improved rejection rate has the effect of reducing the computation time of the cascade by roughly a factor of three.

7 Conclusions

This paper combines the truthing flexibility of multiple instance learning with the high accuracy of the boosted object detector of Viola and Jones. This was done by introducing a new variant of boosting, called MILBoost. MILBoost combines examples into bags, using combination functions such as ISR or Noisy OR. Maximum likelihood on the output of these bag combination functions fits within the AnyBoost framework, which generates boosting weights for each example. We apply MILBoost to the Viola-Jones detection framework, where standard AdaBoost works very well. NorBoost improves the detection rate over standard AdaBoost (tight positive) by nearly 15% (at a 10% false positive rate). Using MILBoost for object detection allows the detector to flexibly assign labels to the training set, which reduces label noise and improves performance.

References

[1] S. Andrews and T. Hofmann. Multiple-instance learning via disjunctive programming boosting. In S. Thrun, L. K. Saul, and B. Schölkopf, editors, Proc. NIPS, volume 16. MIT Press, 2004.
[2] P. Auer and R. Ortner.
A boosting approach to multiple instance learning. In Lecture Notes in Computer Science, volume 3201, pages 63–74, October 2004.
[3] M. C. Burl, T. K. Leung, and P. Perona. Face localization via shape statistics. In Proc. Int’l Workshop on Automatic Face and Gesture Recognition, pages 154–159, 1995.
[4] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Pérez. Solving the multiple instance problem with axis-parallel rectangles. Artif. Intell., 89(1-2):31–71, 1997.
[5] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. In Proc. CVPR, volume 2, pages 264–271, 2003.
[6] D. Heckerman. A tractable inference algorithm for diagnosing multiple diseases. In Proc. UAI, pages 163–171, 1989.
[7] J. D. Keeler, D. E. Rumelhart, and W.-K. Leow. Integrated segmentation and recognition of hand-printed numerals. In NIPS-3: Proceedings of the 1990 conference on Advances in Neural Information Processing Systems 3, pages 557–563, San Francisco, CA, USA, 1990. Morgan Kaufmann Publishers Inc.
[8] O. Maron and T. Lozano-Perez. A framework for multiple-instance learning. In Proc. NIPS, volume 10, pages 570–576, 1998.
[9] L. Mason, J. Baxter, P. Bartlett, and M. Frean. Boosting algorithms as gradient descent in function space, 1999.
[10] S. J. Nowlan and J. C. Platt. A convolutional neural network hand tracker. In G. Tesauro, D. Touretzky, and T. Leen, editors, Advances in Neural Information Processing Systems, volume 7, pages 901–908. The MIT Press, 1995.
[11] R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. In Proc. COLT, volume 11, pages 80–91, 1998.
[12] C. Schmid and R. Mohr. Local grayvalue invariants for image retrieval. IEEE Trans. PAMI, 19(5):530–535, 1997.
[13] P. Viola and M. Jones. Robust real-time object detection. Int’l. J. Computer Vision, 57(2):137–154, 2002.
[14] X. Xu and E. Frank. Logistic regression and boosting for labeled bags of instances. In Lecture Notes in Computer Science, volume 3056, pages 272–281, April 2004.
Generalization to Unseen Cases

Teemu Roos
Helsinki Institute for Information Technology
P.O. Box 68, 00014 Univ. of Helsinki, Finland
teemu.roos@cs.helsinki.fi

Peter Grünwald
CWI, P.O. Box 94079, 1090 GB, Amsterdam, The Netherlands
pdg@cwi.nl

Petri Myllymäki
Helsinki Institute for Information Technology
P.O. Box 68, 00014 Univ. of Helsinki, Finland
petri.myllymaki@cs.helsinki.fi

Henry Tirri
Nokia Research Center
P.O. Box 407, Nokia Group, Finland
henry.tirri@nokia.com

Abstract

We analyze classification error on unseen cases, i.e. cases that are different from those in the training set. Unlike standard generalization error, this off-training-set error may differ significantly from the empirical error with high probability even with large sample sizes. We derive a data-dependent bound on the difference between off-training-set and standard generalization error. Our result is based on a new bound on the missing mass, which for small samples is stronger than existing bounds based on Good-Turing estimators. As we demonstrate on UCI data-sets, our bound gives nontrivial generalization guarantees in many practical cases. In light of these results, we show that certain claims made in the No Free Lunch literature are overly pessimistic.

1 Introduction

A large part of learning theory deals with methods that bound the generalization error of hypotheses in terms of their empirical errors. The standard definition of generalization error allows overlap between the training sample and test cases. When such overlap is not allowed, i.e., when considering off-training-set error [1]–[5] defined in terms of only previously unseen cases, usual generalization bounds do not apply. The off-training-set error and the empirical error sometimes differ significantly with high probability even for large sample sizes. In this paper, we show that in many practical cases, one can nevertheless bound this difference.
In particular, we show that with high probability, in the realistic situation where the number of repeated cases, or duplicates, relative to the total sample size is small, the difference between the off-training-set error and the standard generalization error is also small. In this case any standard generalization error bound, no matter how it is arrived at, transforms into a similar bound on the off-training-set error.

Our Contribution We show that with probability at least $1-\delta$, if there are $r$ repetitions in the training sample, then the difference between the off-training-set error and the standard generalization error is at most of order $O\!\left(\sqrt{\tfrac{1}{n}\left(\log\tfrac{4}{\delta} + r \log n\right)}\right)$ (Thm. 2). Our main result (Corollary 1 of Thm. 1) gives a stronger non-asymptotic bound that can be evaluated numerically. The proof of Thms. 1 and 2 is based on Lemma 2, which is of independent interest, giving a new lower bound on the so-called missing mass, the total probability of as yet unseen cases. For small samples and few repetitions, this bound is significantly stronger than existing bounds based on Good-Turing estimators [6]–[8].

Properties of Our Bounds Our bounds hold (1) uniformly, are (2) distribution-free and (3) data-dependent, yet (4) relevant for data-sets encountered in practice. Let us consider these properties in turn. Our bounds hold uniformly in that they hold for all hypotheses (functions from features to labels) at the same time. Thus, unlike many bounds on standard generalization error, our bounds do not depend in any way on the richness of the hypothesis class under consideration measured in terms of, for instance, its VC dimension, or the margin of the selected hypothesis on the training sample, or any other property of the mechanism with which the hypothesis is chosen. Our bounds are distribution-free in that they hold no matter what the (unknown) data-generating distribution is.
Our bounds depend on the data: they are useful only if the number of repetitions in the training set is very small compared to the training set size. However, in machine learning practice this is often the case, as demonstrated in Sec. 3 with several UCI data-sets.

Relevance Why are our results interesting? There are at least three reasons, the first two of which we discuss extensively in Sec. 4: (1) The use of off-training-set error is an essential ingredient of the No Free Lunch (NFL) theorems [1]–[5]. Our results counter-balance some of the overly pessimistic conclusions of this work. This is all the more relevant since the NFL theorems have been quite influential in shaping the thinking of both theoretical and practical machine learning researchers (see, e.g., Sec. 9.2 of the well-known textbook [5]). (2) The off-training-set error is an intuitive measure of generalization performance. Yet in practice it differs from standard generalization error (even with continuous feature spaces). Thus, we feel, it is worth studying. (3) Technically, we establish a surprising connection between off-training-set error (a concept from classification) and missing mass (a concept mostly applied in language modeling), and give a new lower bound on the missing mass.

The paper is organized as follows: In Sec. 2 we fix notation, including the various error functionals considered, and state some preliminary results. In Sec. 3 we state our bounds, and we demonstrate their use on data-sets from the UCI machine learning repository. We discuss the implications of our results in Sec. 4. Postponed proofs are in Appendix A.

2 Preliminaries and Notation

Let $\mathcal{X}$ be an arbitrary space of inputs, and let $\mathcal{Y}$ be a discrete space of labels. A learner observes a random training sample, $D$, of size $n$, consisting of the values of a sequence of input–label pairs $((X_1, Y_1), \ldots, (X_n, Y_n))$, where $(X_i, Y_i) \in \mathcal{X} \times \mathcal{Y}$.
Based on the sample, the learner outputs a hypothesis $h : \mathcal{X} \to \mathcal{Y}$ that gives, for each possible input value, a prediction of the corresponding label. The learner is successful if the produced hypothesis has high probability of making a correct prediction when applied to a test case $(X_{n+1}, Y_{n+1})$. Both the training sample and the test case are independently drawn from a common generating distribution $P^*$. We use the following error functionals:

Definition 1 (errors). Given a training sample $D$ of size $n$, the i.i.d., off-training-set, and empirical error of a hypothesis $h$ are given by

$E_{\mathrm{iid}}(h) := \Pr[Y \neq h(X)]$  (i.i.d. error),
$E_{\mathrm{ots}}(h, D) := \Pr[Y \neq h(X) \mid X \notin \mathcal{X}_D]$  (off-training-set error),
$E_{\mathrm{emp}}(h, D) := \frac{1}{n} \sum_{i=1}^{n} I_{\{h(X_i) \neq Y_i\}}$  (empirical error),

where $\mathcal{X}_D$ is the set of $X$-values occurring in sample $D$, and the indicator function $I_{\{\cdot\}}$ takes value one if its argument is true and zero otherwise.

The first one of these is just the standard generalization error of learning theory. Following [2], we call it i.i.d. error. For general input spaces and generating distributions $E_{\mathrm{ots}}(h, D)$ may be undefined for some $D$. In either case, this is not a problem. First, if $\mathcal{X}_D$ has measure one, the off-training-set error is undefined and we need not concern ourselves with it; the relevant error measure is $E_{\mathrm{iid}}(h)$ and standard results apply¹. If, on the other hand, $\mathcal{X}_D$ has measure zero, the off-training-set error and the i.i.d. error are equivalent and our results (in Sec. 3 below) hold trivially. Thus, if off-training-set error is relevant, our results hold.

Definition 2. Given a training sample $D$, the sample coverage $p(\mathcal{X}_D)$ is the probability that a new $X$-value appears in $D$: $p(\mathcal{X}_D) := \Pr[X \in \mathcal{X}_D]$, where $\mathcal{X}_D$ is as in Def. 1. The remaining probability, $1 - p(\mathcal{X}_D)$, is called the missing mass.

Lemma 1. For any training set $D$ such that $E_{\mathrm{ots}}(h, D)$ is defined, we have

a) $|E_{\mathrm{ots}}(h, D) - E_{\mathrm{iid}}(h)| \le p(\mathcal{X}_D)$,
b) $E_{\mathrm{ots}}(h, D) - E_{\mathrm{iid}}(h) \le \frac{p(\mathcal{X}_D)}{1 - p(\mathcal{X}_D)}\, E_{\mathrm{iid}}(h)$.

Proof.
Both bounds follow essentially from the following inequalities²:

$E_{\mathrm{ots}}(h, D) = \frac{\Pr[Y \neq h(X),\, X \notin \mathcal{X}_D]}{\Pr[X \notin \mathcal{X}_D]} \le \frac{\Pr[Y \neq h(X)]}{\Pr[X \notin \mathcal{X}_D]} \wedge 1 = \frac{E_{\mathrm{iid}}(h)}{1 - p(\mathcal{X}_D)} \wedge 1 = \left(\frac{E_{\mathrm{iid}}(h)}{1 - p(\mathcal{X}_D)} \wedge 1\right)(1 - p(\mathcal{X}_D)) + \left(\frac{E_{\mathrm{iid}}(h)}{1 - p(\mathcal{X}_D)} \wedge 1\right) p(\mathcal{X}_D) \le E_{\mathrm{iid}}(h) + p(\mathcal{X}_D)$,

where $\wedge$ denotes the minimum. This gives one direction of Lemma 1.a (an upper bound on $E_{\mathrm{ots}}(h, D)$); the other direction is obtained by using analogous inequalities for the quantity $1 - E_{\mathrm{ots}}(h, D)$, with $Y \neq h(X)$ replaced by $Y = h(X)$, which gives the upper bound $1 - E_{\mathrm{ots}}(h, D) \le 1 - E_{\mathrm{iid}}(h) + p(\mathcal{X}_D)$. Lemma 1.b follows from the first line by ignoring the upper bound 1, and subtracting $E_{\mathrm{iid}}(h)$ from both sides.

Given the value of (or an upper bound on) $E_{\mathrm{iid}}(h)$, the upper bound of Lemma 1.b may be significantly stronger than that of Lemma 1.a. However, in this work we only use Lemma 1.a for simplicity, since it depends on $p(\mathcal{X}_D)$ alone. The lemma would be of little use without a good enough upper bound on the sample coverage $p(\mathcal{X}_D)$, or equivalently, a lower bound on the missing mass. In the next section we obtain such a bound.

3 An Off-training-set Error Bound

Good-Turing estimators [6], named after Irving J. Good and Alan Turing, are widely used in language modeling to estimate the missing mass. The known small bias of such estimators, together with a rate of convergence, can be used to obtain lower and upper bounds for the missing mass [7, 8]. Unfortunately, for the sample sizes we are interested in, the lower bounds are not quite tight enough (see Fig. 1 below). In this section we state a new lower bound, not based on Good-Turing estimators, that is practically useful in our context. We compare this bound to the existing ones after Thm. 2.

Let $\bar{\mathcal{X}}_n \subset \mathcal{X}$ be the set consisting of the $n$ most probable individual values of $X$. In case there are several such subsets, any one of them will do. In case $\mathcal{X}$ has less than $n$ elements, $\bar{\mathcal{X}}_n := \mathcal{X}$. Denote for short $\bar{p}_n := \Pr[X \in \bar{\mathcal{X}}_n]$.
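Definition 1, Definition 2 and Lemma 1.a are easy to make concrete on a finite input space. The sketch below is hypothetical illustration code, not from the paper: it computes the three quantities exactly for a discrete joint distribution (a dict from (x, y) pairs to probabilities), then Monte Carlo samples training sets from a toy source and checks that |E_ots − E_iid| ≤ p(X_D) holds on every draw. All names and the toy distribution are ours.

```python
import random

def iid_error(h, dist):
    """E_iid(h) = Pr[Y != h(X)]; dist maps (x, y) pairs to probabilities."""
    return sum(p for (x, y), p in dist.items() if h(x) != y)

def ots_error(h, dist, sample_xs):
    """E_ots(h, D): error conditioned on X not appearing in the sample."""
    seen = set(sample_xs)
    unseen_mass = sum(p for (x, _), p in dist.items() if x not in seen)
    if unseen_mass <= 0.0:
        raise ValueError("undefined: the sample covers the input space")
    bad = sum(p for (x, y), p in dist.items() if x not in seen and h(x) != y)
    return bad / unseen_mass

def coverage(dist, sample_xs):
    """Sample coverage p(X_D) of Definition 2."""
    seen = set(sample_xs)
    return sum(p for (x, _), p in dist.items() if x in seen)

def check_lemma_1a(trials=200, n=30, seed=1):
    """Monte Carlo sanity check of Lemma 1.a on a toy discrete source."""
    rng = random.Random(seed)
    xs = list(range(10))
    probs = [0.30, 0.20, 0.15, 0.10, 0.08, 0.07, 0.05, 0.03, 0.015, 0.005]
    dist = {(x, x % 2): p for x, p in zip(xs, probs)}  # deterministic labels
    h = lambda x: 0  # an arbitrary fixed hypothesis
    e_iid = iid_error(h, dist)
    for _ in range(trials):
        d = rng.choices(xs, weights=probs, k=n)
        cover = coverage(dist, d)
        if cover >= 1.0 - 1e-12:
            continue  # E_ots essentially undefined: D covers (almost) all mass
        if abs(ots_error(h, dist, d) - e_iid) > cover + 1e-12:
            return False
    return True
```

Because Lemma 1.a holds for every hypothesis and every sample, the check returns True regardless of the hypothesis chosen.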
No assumptions are made regarding the value of $\bar{p}_n$; it may or may not be zero. The reason for us being interested in $\bar{p}_n$ is that it gives us an upper bound $p(\mathcal{X}_D) \le \bar{p}_n$ on the sample coverage that holds for all $D$.

¹Note however, that a continuous feature space does not necessarily imply this, see Sec. 4.
²This neat proof is due to Gilles Blanchard (personal communication).

We prove that when $\bar{p}_n$ is large it is likely that a sample of size $n$ will have several repeated $X$-values, so that the number of distinct $X$-values is less than $n$. This implies that if a sample with a small number of repeated $X$-values is observed, it is safe to assume that $\bar{p}_n$ is small and therefore, the sample coverage $p(\mathcal{X}_D)$ must also be small.

Lemma 2. The probability of obtaining a sample of size $n \ge 1$ with at most $0 \le r < n$ repeated $X$-values is upper-bounded by $\Pr[\text{"at most } r \text{ repetitions"}] \le \Delta(n, r, \bar{p}_n)$, where

$\Delta(n, r, \bar{p}_n) := \sum_{k=0}^{n} \binom{n}{k} \bar{p}_n^k (1 - \bar{p}_n)^{n-k} f(n, r, k)$  (1)

and $f(n, r, k)$ is given by

$f(n, r, k) := \begin{cases} 1 & \text{if } k < r, \\ \min\left\{\binom{k}{r} \frac{n!}{(n-k+r)!}\, n^{-(k-r)},\; 1\right\} & \text{if } k \ge r. \end{cases}$

$\Delta(n, r, \bar{p}_n)$ is a non-increasing function of $\bar{p}_n$. For a proof, see Appendix A.

Given a fixed confidence level $1 - \delta$ we can now define a data-dependent upper bound on the sample coverage

$B(\delta, D) := \arg\min_p \{p : \Delta(n, r, p) \le \delta\}$,  (2)

where $r$ is the number of repeated $X$-values in $D$, and $\Delta(n, r, p)$ is given by Eq. (1).

Theorem 1. For any $0 \le \delta \le 1$, the upper bound $B(\delta, D)$ on the sample coverage given by Eq. (2) holds with probability at least $1 - \delta$: $\Pr[p(\mathcal{X}_D) \le B(\delta, D)] \ge 1 - \delta$.

Proof. Consider fixed values of the confidence level $1 - \delta$, sample size $n$, and probability $\bar{p}_n$. Let $R$ be the largest integer for which $\Delta(n, R, \bar{p}_n) \le \delta$. By Lemma 2 the probability of obtaining at most $R$ repetitions is upper-bounded by $\delta$. Thus, it is sufficient that the bound holds whenever the number of repetitions is greater than $R$. For any such $r > R$, we have $\Delta(n, r, \bar{p}_n) > \delta$.
By Lemma 2 the function $\Delta(n, r, \bar{p}_n)$ is non-increasing in $\bar{p}_n$, and hence it must be that $\bar{p}_n < \arg\min_p \{p : \Delta(n, r, p) \le \delta\} = B(\delta, D)$. Since $p(\mathcal{X}_D) \le \bar{p}_n$, the bound then holds for all $r > R$.

Rather than the sample coverage $p(\mathcal{X}_D)$, the real interest is often in off-training-set error. Using the relation between the two quantities, one gets the following corollary, which follows directly from Lemma 1.a and Thm. 1.

Corollary 1 (main result: off-training-set error bound). For any $0 \le \delta \le 1$, the difference between the i.i.d. error and the off-training-set error is bounded by

$\Pr\left[\forall h \;\; |E_{\mathrm{ots}}(h, D) - E_{\mathrm{iid}}(h)| \le B(\delta, D)\right] \ge 1 - \delta$.

Corollary 1 implies that the off-training-set error and the i.i.d. error are entangled, thus transforming all distribution-free bounds on the i.i.d. error to similar bounds on the off-training-set error. Since the probabilistic part of the result (Lemma 1) does not involve a specific hypothesis, Corollary 1 holds for all hypotheses at the same time, and does not depend on the richness of the hypothesis class in terms of, for instance, its VC dimension.

Figure 1 illustrates the behavior of the bound (2) as the sample size grows. It can be seen that for a small number of repetitions the bound is nontrivial already at moderate sample sizes. Moreover, the effect of repetitions is tolerable, and it diminishes as the number of repetitions grows. Table 1 lists values of the bound for a number of data-sets from the UCI machine learning repository [9]. In many cases the bound is about 0.10–0.20 or less. Theorem 2 gives an upper bound on the rate with which the bound decreases as $n$ grows.

[Figure 1 appears here; axis residue removed. x-axis: sample size (1 to 10000, log scale); y-axis: $B(\delta, D)$; curves for $r = 0$, $r = 1$, $r = 10$ and G-T.] Figure 1: Upper bound $B(\delta, D)$ given by Eq. (2) for samples with zero ($r = 0$) to ten ($r = 10$) repeated $X$-values on the 95% confidence level ($\delta = 0.05$). The dotted curve is an asymptotic version for $r = 0$ given by Thm. 2.
The curve labeled ‘G-T’ (for $r = 0$) is based on Good-Turing estimators (Thm. 3 in [7]). Asymptotically, it exceeds our $r = 0$ bound by a factor $O(\log n)$. Bounds for the UCI data-sets in Table 1 are marked with small triangles (▽). Note the log-scale for sample size.

Theorem 2 (a weaker bound in closed form). For all $n$, all $\bar{p}_n$, and all $r < n$, the function $B(\delta, D)$ has the upper bound

$B(\delta, D) \le 3\sqrt{\frac{1}{2n}\left(\log\frac{4}{\delta} + 2r \log n\right)}$.

For a proof, see Appendix A. Let us compare Thm. 2 to the existing bounds on $B(\delta, D)$ based on Good-Turing estimators [7, 8]. For fixed $\delta$, Thm. 3 in [7] gives an upper bound of $O(r/n + \log n/\sqrt{n})$. The exact bound is drawn as the G-T curve in Fig. 1. In contrast, our bound gives $O(\sqrt{C + r \log n}\,/\sqrt{n})$, for a known constant $C > 0$. For fixed $r$ and increasing $n$, this gives an improvement over the G-T bound of order $O(\log n)$ if $r = 0$, and $O(\sqrt{\log n})$ if $r > 0$. For $r$ growing faster than $O(\sqrt{\log n})$, asymptotically our bound becomes uncompetitive³. The real advantage of our bound is that, in contrast to G-T, it gives nontrivial bounds for sample sizes and numbers of repetitions that typically occur in classification problems. For practical applications in language modeling (large samples, many repetitions), the existing G-T bound of [7] is probably preferable.

The developments in [8] are also relevant, albeit in a more indirect manner. In Thm. 10 of that paper, it is shown that the probability that the missing mass is larger than its expected value by an amount $\epsilon$ is bounded by $e^{-(e/2)n\epsilon^2}$. In [7], Sec. 4, some techniques are developed to bound the expected missing mass in terms of the number of repetitions in the sample. One might conjecture that, combined with Thm. 10 of [8], these techniques can be extended to yield an upper bound on $B(\delta, D)$ of order $O(r/n + 1/\sqrt{n})$ that would be asymptotically stronger than the current bound. We plan to investigate this and other potential ways to improve the bounds in future work.
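The numeric bound behind Fig. 1 and Table 1 is Eq. (2), evaluated directly from Eq. (1). A minimal sketch (hypothetical code; Python's exact integer arithmetic handles the factorial ratio, and the grid search exploits the fact that Δ is non-increasing in p, Lemma 2):

```python
import math

def f(n, r, k):
    """Combinatorial factor of Eq. (1) / Proposition 2."""
    if k < r:
        return 1.0
    # comb(k, r) * n! / (n-k+r)! * n^{-(k-r)}, capped at 1
    val = math.comb(k, r) * math.perm(n, k - r) / n ** (k - r)
    return min(val, 1.0)

def delta(n, r, p):
    """Delta(n, r, p) of Eq. (1): bounds Pr["at most r repetitions"] when
    the n most probable X-values carry total mass p (Lemma 2)."""
    return sum(math.comb(n, k) * p ** k * (1.0 - p) ** (n - k) * f(n, r, k)
               for k in range(n + 1))

def coverage_bound(n, r, conf_delta, grid=1000):
    """B(delta, D) of Eq. (2), up to grid resolution: the smallest p with
    Delta(n, r, p) <= delta.  Since Delta is non-increasing in p, a single
    left-to-right scan over the grid suffices."""
    for i in range(grid + 1):
        p = i / grid
        if delta(n, r, p) <= conf_delta:
            return p
    return 1.0
```

A bisection search would be faster than the linear scan; the scan is kept here only for clarity.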
Any advance in this direction makes the implications of our bounds even more compelling.

³If data are i.i.d. according to a fixed $P^*$, then, as follows from the strong law of large numbers, $r$, considered as a function of $n$, will either remain zero forever or will be larger than $cn$ for some $c > 0$, for all $n$ larger than some $n_0$. In practice, our bound is still relevant because typical data-sets often have $r$ very small compared to $n$ (see Table 1). This is possible because apparently $n \ll n_0$.

Table 1: Bounds on the difference between the i.i.d. error and the off-training-set error given by Eq. (2) on confidence level 95% ($\delta = 0.05$). A dash (-) indicates no repetitions. Bounds greater than 0.5 are in parentheses.

DATA                             SAMPLE SIZE  REPETITIONS  BOUND
Abalone                          4177         -            0.0383
Adult                            32562        25           0.0959
Annealing                        798          8            0.3149
Artificial Characters            1000         34           (0.5112)
Breast Cancer (Diagnostic)       569          -            0.1057
Breast Cancer (Original)         699          236          (1.0)
Credit Approval                  690          -            0.0958
Cylinder Bands                   542          -            0.1084
Housing                          506          -            0.1123
Internet Advertisement           2385         441          (0.9865)
Isolated Letter Speech Recogn.   1332         -            0.0685
Letter Recognition               20000        1332         (0.6503)
Multiple Features                2000         4            0.1563
Musk                             6598         17           0.1671
Page Blocks                      5473         80           0.3509
Water Treatment Plant            527          -            0.1099
Waveform                         5000         -            0.0350

4 Discussion – Implications of Our Results

The use of off-training-set error is an essential ingredient of the influential No Free Lunch theorems [1]–[5]. Our results imply that, while the NFL theorems themselves are valid, some of the conclusions drawn from them are overly pessimistic, and should be reconsidered. For instance, it has been suggested that the tools of conventional learning theory (dealing with standard generalization error) are “ill-suited for investigating off-training-set error” [3]. With the help of the little add-on we provide in this paper (Corollary 1), any bound on standard generalization error can be converted to a bound on off-training-set error.
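Applying the bounds to a new data-set needs only $n$, $r$ and $\delta$. The sketch below (hypothetical code) gives Thm. 2's closed form and a repetition counter; the counting convention, sample size minus the number of distinct feature vectors, is our reading of "the number of distinct X-values is less than n" in Sec. 3, stated here as an assumption.

```python
import math

def thm2_bound(n, r, conf_delta):
    """Closed-form bound of Thm. 2 on B(delta, D)."""
    return 3.0 * math.sqrt(
        (math.log(4.0 / conf_delta) + 2.0 * r * math.log(n)) / (2.0 * n))

def count_repetitions(xs):
    """r = sample size minus number of distinct X-values (our assumed
    convention); xs must be hashable, e.g. feature vectors as tuples."""
    return len(xs) - len(set(xs))
```

As in Table 1, the closed form is only informative once $n$ is large relative to $r$; for small samples it exceeds 1 and is trivial.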
Our empirical results on UCI data-sets show that the resulting bound is often not essentially weaker than the original one. Thus, the conventional tools turn out not to be so ‘ill-suited’ after all. Secondly, contrary to what is sometimes suggested⁴, we show that one can relate performance on the training sample to performance on as yet unseen cases.

On the other side of the debate, it has sometimes been claimed that the off-training-set error is irrelevant to much of modern learning theory, where often the feature space is continuous. This may seem to imply that off-training-set error coincides with standard generalization error (see remark after Def. 1). However, this is true only if the associated distribution is continuous: then the probability of observing the same $X$-value twice is zero. In practice, even when the feature space has continuous components, data-sets sometimes contain repetitions (e.g., Adult, see Table 1), if only for the reason that continuous features may be discretized or truncated. Repetitions occur in many data-sets, implying that off-training-set error can be different from the standard i.i.d. error. Thus, off-training-set error is relevant. Also, it measures a quantity that is in some ways close to the meaning of ‘inductive generalization’ – in dictionaries the words ‘induction’ and ‘generalization’ frequently refer to ‘unseen instances’. Thus, off-training-set error is not just relevant but also intuitive. This makes it all the more interesting that standard generalization bounds transfer to off-training-set error – and that is the central implication of this paper.

⁴For instance, “if we are interested in the error for [unseen cases], the NFL theorems tell us that (in the absence of prior assumptions) [empirical error] is meaningless” [2].

Acknowledgments We thank Gilles Blanchard for useful discussions. Part of this work was carried out while the first author was visiting CWI.
This work was supported in part by the Academy of Finland (Minos, Prima), Nuffic, and the IST Programme of the European Community under the PASCAL Network, IST-2002-506778. This publication only reflects the authors' views.

References

[1] Wolpert, D.H.: On the connection between in-sample testing and generalization error. Complex Systems 6 (1992) 47–94
[2] Wolpert, D.H.: The lack of a priori distinctions between learning algorithms. Neural Computation 8 (1996) 1341–1390
[3] Wolpert, D.H.: The supervised learning no-free-lunch theorems. In: Proc. 6th Online World Conf. on Soft Computing in Industrial Applications (2001)
[4] Schaffer, C.: A conservation law for generalization performance. In: Proc. 11th Int. Conf. on Machine Learning (1994) 259–265
[5] Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification, 2nd Edition. Wiley (2001)
[6] Good, I.J.: The population frequencies of species and the estimation of population parameters. Biometrika 40 (1953) 237–264
[7] McAllester, D.A., Schapire, R.E.: On the convergence rate of Good-Turing estimators. In: Proc. 13th Ann. Conf. on Computational Learning Theory (2000) 1–6
[8] McAllester, D.A., Ortiz, L.: Concentration inequalities for the missing mass and for histogram rule error. Journal of Machine Learning Research 4 (2003) 895–911
[9] Blake, C., Merz, C.: UCI repository of machine learning databases. Univ. of California, Dept. of Information and Computer Science (1998)

A Postponed Proofs

We first state two propositions that are useful in the proof of Lemma 2.

Proposition 1. Let $\mathcal{X}_m$ be a domain of size $m$, and let $P^*_{\mathcal{X}_m}$ be an associated probability distribution. The probability of getting no repetitions when sampling $1 \le k \le m$ items with replacement from distribution $P^*_{\mathcal{X}_m}$ is upper-bounded by

$\Pr[\text{"no repetitions"} \mid k] \le \frac{m!}{(m-k)!\, m^k}$.

Proof Sketch of Proposition 1. By way of contradiction it is possible to show that the probability of obtaining no repetitions is maximized when $P^*_{\mathcal{X}_m}$ is uniform.
After this, it is easily seen that the maximal probability equals the right-hand side of the inequality.

Proposition 2. Let $\mathcal{X}_m$ be a domain of size $m$, and let $P^*_{\mathcal{X}_m}$ be an associated probability distribution. The probability of getting at most $r \ge 0$ repeated values when sampling $1 \le k \le m$ items with replacement from distribution $P^*_{\mathcal{X}_m}$ is upper-bounded by

$\Pr[\text{"at most } r \text{ repetitions"} \mid k] \le \begin{cases} 1 & \text{if } k < r, \\ \min\left\{\binom{k}{r} \frac{m!}{(m-k+r)!}\, m^{-(k-r)},\; 1\right\} & \text{if } k \ge r. \end{cases}$

Proof of Proposition 2. The case $k < r$ is trivial. For $k \ge r$, the event "at most $r$ repetitions in $k$ draws" is equivalent to the event that there is at least one subset of size $k - r$ of the $X$-variables $\{X_1, \ldots, X_k\}$ such that all variables in the subset take distinct values. For a subset of size $k - r$, Proposition 1 implies that the probability that all values are distinct is at most $\frac{m!}{(m-k+r)!}\, m^{-(k-r)}$. Since there are $\binom{k}{r}$ subsets of the $X$-variables of size $k - r$, the union bound implies that multiplying this by $\binom{k}{r}$ gives the required result.

Proof of Lemma 2. The probability of getting at most $r$ repeated $X$-values can be upper-bounded by considering repetitions in the maximally probable set $\bar{\mathcal{X}}_n$ only. The probability of no repetitions in $\bar{\mathcal{X}}_n$ can be broken into $n + 1$ mutually exclusive cases depending on how many $X$-values fall into the set $\bar{\mathcal{X}}_n$. Thus we get

$\Pr[\text{"at most } r \text{ repetitions in } \bar{\mathcal{X}}_n\text{"}] = \sum_{k=0}^{n} \Pr[\text{"at most } r \text{ repetitions in } \bar{\mathcal{X}}_n\text{"} \mid k]\, \Pr[k]$,

where $\Pr[\cdot \mid k]$ denotes probability under the condition that $k$ of the $n$ cases fall into $\bar{\mathcal{X}}_n$, and $\Pr[k]$ denotes the probability of the latter occurring. Proposition 2 gives an upper bound on the conditional probability. The probability $\Pr[k]$ is given by the binomial distribution with parameter $\bar{p}_n$: $\Pr[k] = \mathrm{Bin}(k;\, n, \bar{p}_n) = \binom{n}{k} \bar{p}_n^k (1 - \bar{p}_n)^{n-k}$. Combining these gives the formula for $\Delta(n, r, \bar{p}_n)$.
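Proposition 1 can be checked exactly on tiny domains by enumerating all sequences of draws. The sketch below (hypothetical code, not from the paper) computes the probability of no repetitions by brute force and compares it to the bound, which the uniform distribution attains with equality:

```python
import math
from itertools import product

def no_repeat_prob(probs, k):
    """Exact probability that k i.i.d. draws from a finite distribution
    are all distinct, by brute-force enumeration (tiny cases only)."""
    total = 0.0
    for seq in product(range(len(probs)), repeat=k):
        if len(set(seq)) == k:  # all draws distinct
            term = 1.0
            for s in seq:
                term *= probs[s]
            total += term
    return total

def proposition1_bound(m, k):
    """Upper bound of Proposition 1: m! / ((m-k)! m^k)."""
    return math.perm(m, k) / m ** k
```

Skewing mass toward a few values only makes repetitions more likely, which is the intuition behind the uniform distribution being the maximizing case.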
Showing that $\Delta(n, r, \bar{p}_n)$ is non-increasing in $\bar{p}_n$ is tedious but uninteresting and we only sketch the proof: It can be checked that the conditional probability given by Proposition 2 is non-increasing in $k$ (the min operator is essential for this). From this the claim follows, since for increasing $\bar{p}_n$ the binomial distribution puts more weight on terms with large $k$, thus not increasing the sum.

Proof of Thm. 2. The first three factors in the definition (1) of $\Delta(n, r, \bar{p}_n)$ are equal to a binomial probability $\mathrm{Bin}(k;\, n, \bar{p}_n)$, and the expectation of $k$ is thus $n\bar{p}_n$. By the Hoeffding bound, for all $\epsilon > 0$, the probability of $k < n(\bar{p}_n - \epsilon)$ is bounded by $\exp(-2n\epsilon^2)$. Applying this bound with $\epsilon = \bar{p}_n/3$ we get that the probability of $k < \frac{2}{3}n\bar{p}_n$ is bounded by $\exp(-\frac{2}{9}n\bar{p}_n^2)$. Combined with (1) this gives the following upper bound on $\Delta(n, r, \bar{p}_n)$:

$\exp\left(-\tfrac{2}{9}n\bar{p}_n^2\right) \max_{k < \frac{2}{3}n\bar{p}_n} f(n, r, k) + \max_{k \ge \frac{2}{3}n\bar{p}_n} f(n, r, k) \;\le\; \exp\left(-\tfrac{2}{9}n\bar{p}_n^2\right) + \max_{k \ge \frac{2}{3}n\bar{p}_n} f(n, r, k)$,  (3)

where the maxima are taken over integer-valued $k$. In the last inequality we used the fact that for all $n, r, k$, it holds that $f(n, r, k) \le 1$. Now note that for $k \ge r$, we can bound

$f(n, r, k) \le \binom{k}{r} \prod_{j=0}^{k-r-1} \frac{n-j}{n} \le n^r \prod_{j=0}^{k} \frac{n-j}{n} \prod_{j=k-r}^{k} \frac{n}{n-j} \le n^r \prod_{j=1}^{k} \frac{n-j}{n} \left(\frac{n}{n-k}\right)^{r+1} \le n^{2r}\, \frac{n}{n-k} \prod_{j=1}^{k} \frac{n-j}{n}$.  (4)

If $k < r$, $f(n, r, k) = 1$, so that (4) holds in fact for all $k$ with $1 \le k \le n$. We bound the last factor $\prod_{j=1}^{k} \frac{n-j}{n}$ further as follows. The average of the $k$ factors of this product is less than or equal to $\frac{n - k/2}{n} = 1 - \frac{k}{2n}$. Since a product of $k$ factors is always less than or equal to the average of the factors to the power of $k$, we get the upper bound $\left(1 - \frac{k}{2n}\right)^k \le \exp\left(-\frac{k^2}{2n}\right)$, where the inequality follows from $1 - x \le \exp(-x)$ for $x < 1$. Plugging this into (4) gives $f(n, r, k) \le n^{2r}\, \frac{n}{n-k} \exp\left(-\frac{k^2}{2n}\right)$. Plugging this back into (3) gives

$\Delta(n, r, \bar{p}_n) \le \exp\left(-\tfrac{2}{9}n\bar{p}_n^2\right) + \max_{k \ge \frac{2}{3}n\bar{p}_n} 3n^{2r} \exp\left(-\tfrac{k^2}{2n}\right) \le \exp\left(-\tfrac{2}{9}n\bar{p}_n^2\right) + 3n^{2r} \exp\left(-\tfrac{2}{9}n\bar{p}_n^2\right) \le 4n^{2r} \exp\left(-\tfrac{2}{9}n\bar{p}_n^2\right)$.
Recall that $B(\delta, D) := \arg\min_p \{p : \Delta(n, r, p) \le \delta\}$. Replacing $\Delta(n, r, p)$ by the above upper bound makes the set of $p$ satisfying the inequality smaller. Thus, the minimal member of the reduced set is greater than or equal to the minimal member of the set with $\Delta(n, r, p) \le \delta$, giving the following bound on $B(\delta, D)$:

$B(\delta, D) \le \arg\min_p \left\{p : 4n^{2r} \exp\left(-\tfrac{2}{9}np^2\right) \le \delta\right\} = 3\sqrt{\frac{1}{2n}\left(\log\frac{4}{\delta} + 2r \log n\right)}$.
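The final equality is just a matter of solving the exponential inequality for $p$; writing out the algebra (a restatement of the step above, no new material):

```latex
4 n^{2r} \exp\!\left(-\tfrac{2}{9} n p^{2}\right) \le \delta
\;\Longleftrightarrow\;
\tfrac{2}{9} n p^{2} \ge \log \frac{4 n^{2r}}{\delta}
\;\Longleftrightarrow\;
p \ge \sqrt{\tfrac{9}{2n}\left(\log \tfrac{4}{\delta} + 2r \log n\right)}
   = 3 \sqrt{\tfrac{1}{2n}\left(\log \tfrac{4}{\delta} + 2r \log n\right)}.
```

The smallest such $p$ is exactly the right-hand side, which is the claimed bound.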
Nearest Neighbor Based Feature Selection for Regression and its Application to Neural Activity

Amir Navot¹,², Lavi Shpigelman¹,², Naftali Tishby¹,², Eilon Vaadia²,³
¹School of Computer Science and Engineering, ²Interdisciplinary Center for Neural Computation, ³Dept. of Physiology, Hadassah Medical School, The Hebrew University, Jerusalem, 91904, Israel
Email for correspondence: {anavot,shpigi}@cs.huji.ac.il

Abstract

We present a non-linear, simple, yet effective, feature subset selection method for regression and use it in analyzing cortical neural activity. Our algorithm involves a feature-weighted version of the k-nearest-neighbor algorithm. It is able to capture complex dependency of the target function on its input and makes use of the leave-one-out error as a natural regularization. We explain the characteristics of our algorithm on synthetic problems and use it in the context of predicting hand velocity from spikes recorded in motor cortex of a behaving monkey. By applying feature selection we are able to improve prediction quality and suggest a novel way of exploring neural data.

1 Introduction

In many supervised learning tasks the input is represented by a very large number of features, many of which are not needed for predicting the labels. Feature selection is the task of choosing a small subset of features that is sufficient to predict the target labels well. Feature selection reduces the computational complexity of learning and prediction algorithms and saves on the cost of measuring non-selected features. In many situations, feature selection can also enhance the prediction accuracy by improving the signal to noise ratio. Another benefit of feature selection is that the identity of the selected features can provide insights into the nature of the problem at hand. Therefore feature selection is an important step in efficient learning of large multi-featured data sets.
Feature selection (variously known as subset selection, attribute selection or variable selection) has been studied extensively both in statistics and by the machine learning community over the last few decades. In the most common selection paradigm an evaluation function is used to assign scores to subsets of features and a search algorithm is used to search for a subset with a high score. The evaluation function can be based on the performance of a specific predictor (wrapper model, [1]) or on some general (typically cheaper to compute) relevance measure of the features to the prediction (filter model). In any case, an exhaustive search over all feature sets is generally intractable due to the exponentially large number of possible sets. Therefore, search methods are employed which apply a variety of heuristics, such as hill climbing and genetic algorithms. Other methods simply rank individual features, assigning a score to each feature independently. These methods are usually very fast, but inevitably fail in situations where only a combined set of features is predictive of the target function. See [2] for a comprehensive overview of feature selection and [3] which discusses selection methods for linear regression. A possible choice of evaluation function is the leave-one-out (LOO) mean square error (MSE) of the k-Nearest-Neighbor (kNN) estimator ([4, 5]). This evaluation function has the advantage that it both gives a good approximation of the expected generalization error and can be computed quickly. [6] used this criterion on small synthetic problems (up to 12 features). They searched for good subsets using forward selection, backward elimination and an algorithm (called schemata) that races feature sets against each other (eliminating poor sets, keeping the fittest) in order to find a subset with a good score. All these algorithms perform a local search by flipping one or more features at a time. 
Since the space is discrete the direction of improvement is found by trial and error, which slows the search and makes it impractical for large scale real world problems involving many features. In this paper we develop a novel selection algorithm. We extend the LOO-kNN-MSE evaluation function to assign scores to weight vectors over the features, instead of just to feature subsets. This results in a smooth (“almost everywhere”) function over a continuous domain, which allows us to compute the gradient analytically and to employ a stochastic gradient ascent to find a locally optimal weight vector. The resulting weights provide a ranking of the features, which we can then threshold in order to produce a subset. In this way we can apply an easy-to-compute, gradient directed search, without relearning of a regression model at each step but while employing a strong non-linear function estimate (kNN) that can capture complex dependency of the function on its features1. Our motivation for developing this method is to address a major computational neuroscience question: which features of the neural code are relevant to the observed behavior. This is an important element of enabling interpretability of neural activity. Feature selection is a promising tool for this task. Here, we apply our feature selection method to the task of reconstructing hand movements from neural activity, which is one of the main challenges in implementing brain computer interfaces [8]. We look at neural population spike counts, recorded in motor cortex of a monkey while it performed hand movements and locate the most informative subset of neural features. We show that it is possible to improve prediction results by wisely selecting a subset of cortical units and their time lags, relative to the movement. Our algorithm, which considers feature subsets, outperforms methods that consider features on an individual basis, suggesting that complex dependency on a set of features exists in the code. 
The remainder of the paper is organized as follows: we describe the problem setting in section 2. Our method is presented in section 3. Next, we demonstrate its ability to cope with a complicated dependency of the target function on groups of features using synthetic data (section 4). The results of applying our method to the hand movement reconstruction problem are presented in section 5.

2 Problem Setting

First, let us introduce some notation. Vectors in $\mathbb{R}^n$ are denoted by boldface small letters (e.g. $\mathbf{x}$, $\mathbf{w}$). Scalars are denoted by small letters (e.g. $x$, $y$). The $i$'th element of a vector $\mathbf{x}$ is denoted by $x_i$. Let $f(\mathbf{x})$, $f : \mathbb{R}^n \to \mathbb{R}$, be a function that we wish to estimate. Given a set $S \subset \mathbb{R}^n$, the empirical mean square error (MSE) of an estimator $\hat{f}$ for $f$ is defined as $\mathrm{MSE}_S(\hat{f}) = \frac{1}{|S|} \sum_{\mathbf{x} \in S} (f(\mathbf{x}) - \hat{f}(\mathbf{x}))^2$.

¹The design of this algorithm was inspired by work done by Gilad-Bachrach et al. ([7]), which used a large margin based evaluation function to derive feature selection algorithms for classification.

kNN Regression k-Nearest-Neighbor (kNN) is a simple, intuitive and efficient way to estimate the value of an unknown function in a given point using its values in other (training) points. Let $S = \{\mathbf{x}_1, \ldots, \mathbf{x}_m\}$ be a set of training points. The kNN estimator is defined as the mean function value of the nearest neighbors: $\hat{f}(\mathbf{x}) = \frac{1}{k} \sum_{\mathbf{x}' \in N(\mathbf{x})} f(\mathbf{x}')$, where $N(\mathbf{x}) \subset S$ is the set of $k$ nearest points to $\mathbf{x}$ in $S$ and $k$ is a parameter ([4, 5]). A softer version takes a weighted average, where the weight of each neighbor is proportional to its proximity. One specific way of doing this is

$\hat{f}(\mathbf{x}) = \frac{1}{Z} \sum_{\mathbf{x}' \in N(\mathbf{x})} f(\mathbf{x}')\, e^{-d(\mathbf{x}, \mathbf{x}')/\beta}$  (1)

where $d(\mathbf{x}, \mathbf{x}') = \|\mathbf{x} - \mathbf{x}'\|_2^2$ is the squared $\ell_2$ norm, $Z = \sum_{\mathbf{x}' \in N(\mathbf{x})} e^{-d(\mathbf{x}, \mathbf{x}')/\beta}$ is a normalization factor and $\beta$ is a parameter. The soft kNN version will be used in the remainder of this paper. This regression method is a special form of locally weighted regression (see [5] for an overview of the literature on this subject).
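The soft kNN estimator of Eq. (1) can be sketched in a few lines; this is hypothetical illustration code (function and parameter names are ours), not the authors' implementation:

```python
import math

def soft_knn_predict(x, train_X, train_y, k, beta):
    """Soft kNN estimate of Eq. (1): a Gaussian-weighted average of the k
    nearest training values, with squared-l2 distance d(x, x')."""
    dists = [sum((a - b) ** 2 for a, b in zip(x, xp)) for xp in train_X]
    order = sorted(range(len(train_X)), key=lambda i: dists[i])[:k]
    weights = [math.exp(-dists[i] / beta) for i in order]  # unnormalized
    return sum(w * train_y[i] for w, i in zip(weights, order)) / sum(weights)
```

As noted in the text, no training is needed beyond storing `train_X` and `train_y`; the estimate is computed entirely at query time.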
It has the desirable property that no learning (other than storage of the training set) is required for the regression. Also note that the Gaussian radial basis function has the form of a kernel ([9]) and can be replaced with any operator on two data points that decays as a function of the difference between them (e.g. kernel induced distances). As will be seen in the next section, we use the MSE of a modified kNN regressor to guide the search for a set of features $F \subset \{1, \ldots, n\}$ that achieves a low MSE. However, the MSE and the Gaussian kernel can be replaced by other loss measures and kernels (respectively) as long as they are differentiable almost everywhere.

3 The Feature Selection Algorithm

In this section we present our selection algorithm, called RGS (Regression, Gradient guided, feature Selection). It can be seen as a filter method for general regression algorithms or as a wrapper for estimation by the kNN algorithm. Our goal is to find subsets of features that induce a small estimation error. As in most supervised learning problems, we wish to find subsets that induce a small generalization error, but since it is not known, we use an evaluation function on the training set. This evaluation function is defined not only for subsets but for any weight vector over the features. This is more general, because a feature subset can be represented by a binary weight vector that assigns a value of one to features in the set and zero to the rest of the features. For a given weight vector over the features $\mathbf{w} \in \mathbb{R}^n$, we consider the weighted squared $\ell_2$ norm induced by $\mathbf{w}$, defined as $\|\mathbf{z}\|_{\mathbf{w}}^2 = \sum_i z_i^2 w_i^2$. Given a training set $S$, we denote by $\hat{f}_{\mathbf{w}}(\mathbf{x})$ the value assigned to $\mathbf{x}$ by a weighted kNN estimator, defined in equation 1, using the weighted squared $\ell_2$-norm as the distance $d(\mathbf{x}, \mathbf{x}')$, where the nearest neighbors are found among the points of $S$ excluding $\mathbf{x}$.
The evaluation function is defined as the negative (halved) squared error of the weighted kNN estimator:
$$e(w) = -\frac{1}{2} \sum_{x \in S} \left( f(x) - \hat f_w(x) \right)^2. \qquad (2)$$
This evaluation function scores weight vectors ($w$). A change of weights will cause a change in the distances and, possibly, in the identity of each point's nearest neighbors, which will change the function estimates. A weight vector that induces a distance measure in which neighbors have similar labels would receive a high score. The mean $1/|S|$ is replaced with a $1/2$ to ease later differentiation. Note that there is no explicit regularization term in $e(w)$. This is justified by the fact that, for each point, the estimate of its function value does not include that point as part of the training set. Thus, equation 2 is a leave-one-out cross-validation error. Clearly, it is impossible to go over all the weight vectors (or even over all the feature subsets), and therefore some search technique is required.

Algorithm 1 RGS(S, k, β, T)
1. Initialize $w = (1, 1, \ldots, 1)$.
2. For $t = 1 \ldots T$:
   (a) Pick a random instance $x$ from $S$.
   (b) Calculate the gradient of $e(w)$:
$$\nabla e(w) = -\sum_{x \in S} \left( f(x) - \hat f_w(x) \right) \nabla_w \hat f_w(x)$$
$$\nabla_w \hat f_w(x) = -\frac{4}{\beta} \, \frac{\sum_{x'', x' \in N(x)} f(x'') \, a(x', x'') \, u(x', x'')}{\sum_{x'', x' \in N(x)} a(x', x'')}$$
   where $a(x', x'') = e^{-(\|x - x'\|_w^2 + \|x - x''\|_w^2)/\beta}$ and $u(x', x'') \in \mathbb{R}^n$ is a vector with $u_i = w_i \left[ (x_i - x_i')^2 + (x_i - x_i'')^2 \right]$.
   (c) $w = w + \eta_t \nabla e(w)$, where $\eta_t$ is a (decaying) step size.

Our method finds a weight vector $w$ that locally maximizes $e(w)$ as defined in (2), and then uses a threshold in order to obtain a feature subset. The threshold can be set either by cross validation or by finding a natural cutoff in the weight values. However, we later show that using the distance measure induced by $w$ in the regression stage compensates for taking too many features. Since $e(w)$ is defined over a continuous domain and is smooth almost everywhere, we can use gradient ascent to maximize it.
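A runnable sketch of the evaluation function of Eq. 2 and one ascent step on it. For brevity, a finite-difference gradient stands in for the closed-form gradient of Algorithm 1, and all names and constants are ours, not the paper's:

```python
import numpy as np

def loo_soft_knn(i, X, y, w, k, beta):
    """Leave-one-out soft kNN estimate for training point i under feature weights w."""
    d = np.sum(((X - X[i]) * w) ** 2, axis=1)   # weighted squared l2 distances
    d[i] = np.inf                                # exclude the point itself (leave-one-out)
    nn = np.argsort(d)[:k]
    a = np.exp(-d[nn] / beta)
    return np.dot(a, y[nn]) / np.sum(a)

def e(w, X, y, k=3, beta=0.5):
    """Evaluation function of Eq. 2: negative halved leave-one-out squared error."""
    return -0.5 * sum((y[i] - loo_soft_knn(i, X, y, w, k, beta)) ** 2
                      for i in range(len(X)))

def rgs_step(w, X, y, eta=0.1, eps=1e-4, **kw):
    """One ascent step on e(w); finite differences stand in for the analytic gradient."""
    g = np.array([(e(w + eps * np.eye(len(w))[j], X, y, **kw) - e(w, X, y, **kw)) / eps
                  for j in range(len(w))])
    return w + eta * g
```

On data where only the first feature matters, weighting only that feature yields a much higher score than weighting only the irrelevant one, which is the signal the ascent follows.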
RGS (algorithm 1) is a stochastic gradient ascent over $e(w)$. In each step, the gradient is evaluated using one sample point and is added to the current weight vector. RGS considers the weights of all the features at the same time, and thus it can handle dependency on a group of features. This is demonstrated in section 4. In this respect, it is superior to selection algorithms that score each feature independently. It is also faster than methods that try to find a good subset directly by trial and error. Note, however, that convergence to a global optimum is not guaranteed, and standard techniques for avoiding local optima can be used. The parameters of the algorithm are $k$ (number of neighbors), $\beta$ (Gaussian decay factor), $T$ (number of iterations) and $\{\eta_t\}_{t=1}^T$ (step size decay scheme). The value of $k$ can be tuned by cross validation; however, a proper choice of $\beta$ can compensate for a $k$ that is too large. It makes sense to tune $\beta$ to a value that places most neighbors in an active zone of the Gaussian. In our experiments, we set $\beta$ to half of the mean distance between points and their $k$ neighbors. It usually makes sense to use an $\eta_t$ that decays over time to ensure convergence; however, on our data, convergence was also achieved with $\eta_t = 1$. The computational complexity of RGS is $\Theta(TNm)$, where $T$ is the number of iterations, $N$ is the number of features and $m$ is the size of the training set $S$. This holds for a naive implementation which finds the nearest neighbors and their distances from scratch at each step, by measuring the distances between the current point and all the other points. RGS is basically an online method which can be used in batch mode by running it in epochs over the training set. When it is run for only one epoch, $T = m$ and the complexity is $\Theta(m^2 N)$.
Matlab code for this algorithm (and those that we compare with) is available at http://www.cs.huji.ac.il/labs/learning/code/fsr/

4 Testing on synthetic data

The use of synthetic data, where we can control the importance of each feature, allows us to illustrate the properties of our algorithm. We compare our algorithm with other common selection methods: infoGain [10], correlation coefficients (corrcoef) and forward selection (see [2]). infoGain and corrcoef simply rank features according to the mutual information or the correlation coefficient (respectively) between each feature and the labels (i.e. the target function value). Forward selection (fwdSel) is a greedy method in which features are iteratively added to a growing subset. In each step, the feature showing the greatest improvement (given the previously selected subset) is added. This is a search method that can be applied to any evaluation function, and we use our criterion (equation 2, restricted to feature subsets). This well-known method has the advantage of considering feature subsets and can be used with nonlinear predictors. Another algorithm we compare with scores each feature independently using our evaluation function (2). This helps us in analyzing RGS, as it may single out the respective contributions to performance of the evaluation function and the search method. We refer to this algorithm as SKS (Single feature, kNN regression, feature Selection).

Figure 1: (a)-(d): Illustration of the four synthetic target functions; each plot shows the function value as a function of the first two features. (e),(f): demonstration of the effect of feature selection on estimating the second function using kNN regression (k = 5, β = 0.05): (e) using both features (MSE = 0.03), (f) using the relevant feature only (MSE = 0.004).
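The effect reported in the Figure 1(e,f) caption is easy to reproduce: adding a single irrelevant coordinate to a kNN regressor noticeably worsens the error on target (b). A sketch under assumed uniform sampling on $[0,1]$; the sample sizes and `k` are our choices, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_mse(Xtr, ytr, Xte, yte, k=5):
    """Mean squared error of plain kNN regression (uniform neighbor average)."""
    err = []
    for x, y in zip(Xte, yte):
        nn = np.argsort(np.sum((Xtr - x) ** 2, axis=1))[:k]
        err.append((y - ytr[nn].mean()) ** 2)
    return float(np.mean(err))

f = lambda x1: np.sin(2 * np.pi * x1 + np.pi / 2)   # target (b), depends on x1 only
Xtr = rng.uniform(0, 1, (100, 2)); ytr = f(Xtr[:, 0])
Xte = rng.uniform(0, 1, (200, 2)); yte = f(Xte[:, 0])

mse_relevant = knn_mse(Xtr[:, :1], ytr, Xte[:, :1], yte)   # x1 only
mse_both     = knn_mse(Xtr,        ytr, Xte,        yte)   # x1 plus an irrelevant x2
```

The irrelevant coordinate scatters the neighbor sets along a direction that carries no information about $f$, so `mse_both` comes out clearly larger than `mse_relevant`.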
We look at four different target functions over $\mathbb{R}^{50}$. The training sets include 20 to 100 points, chosen randomly from the $[-1, 1]^{50}$ cube. The target functions are given in the top row of figure 2 and are illustrated in figure 1(a-d). Random Gaussian noise with zero mean and a variance of 1/7 was added to the function value of the training points. Clearly, only the first feature is relevant for the first two target functions, and only the first two features are relevant for the last two target functions. Note also that the last function is a smoothed version of the parity function and is considered hard for many feature selection algorithms [2]. First, to illustrate the importance of feature selection for regression quality, we use kNN to estimate the second target function. Figure 1(e-f) shows the regression results for target (b), using either only the relevant feature or both the relevant feature and an irrelevant one. The addition of one irrelevant feature degrades the MSE tenfold. Next, to demonstrate the capabilities of the various algorithms, we run them on each of the above problems with varying training set size. We measure their success by counting the number of times that the relevant features were assigned the highest rank (repeating the experiment 250 times by re-sampling the training set). Figure 2 presents the success rate as a function of training set size. We can see that all the algorithms succeed on the first function, which is monotonic and depends on one feature alone. infoGain and corrcoef fail on the second, non-monotonic function. The three kNN-based algorithms succeed because they depend only on local properties of the target function. We see, however, that RGS needs a larger training set to achieve a high success rate. The third target function depends on two features, but the dependency is simple, as each of them alone is highly correlated with the function value.
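The difficulty the parity-like target poses for correlation-based rankers can be seen directly: each feature has essentially zero linear correlation with $\sin(2\pi x_1)\sin(2\pi x_2)$, even though the target is a deterministic function of $x_1$ and $x_2$. A sketch with a sample size of our choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (2000, 2))
f_xor = np.sin(2 * np.pi * X[:, 0]) * np.sin(2 * np.pi * X[:, 1])   # target (d)

def abs_corr(a, b):
    """Absolute Pearson correlation coefficient."""
    return abs(np.corrcoef(a, b)[0, 1])

c1 = abs_corr(X[:, 0], f_xor)   # correlation of feature 1 with the target
c2 = abs_corr(X[:, 1], f_xor)   # correlation of feature 2 with the target
```

Both values hover near the sampling noise floor (on the order of $1/\sqrt{n}$), so corrcoef has no basis for ranking the two truly relevant features above the 48 irrelevant ones.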
The fourth, XOR-like function exhibits a complicated dependency that requires consideration of the two relevant features simultaneously. SKS, which considers features separately, sees the effect of all other features as noise and, therefore, has only marginal success on the third function and fails on the fourth altogether. RGS and fwdSel apply different search methods. fwdSel considers subsets, but can evaluate only one additional feature in each step, giving it some advantage over RGS on the third function but causing it to fail on the fourth. RGS takes a step in all features simultaneously; only such an approach can succeed on the fourth function.

Footnote 2: Feature and function values were "binarized" by comparing them to the median value.

Figure 2: Success rate of the different algorithms on the four synthetic regression tasks, (a) $x_1^2$, (b) $\sin(2\pi x_1 + \pi/2)$, (c) $\sin(2\pi x_1 + \pi/2) + x_2$, (d) $\sin(2\pi x_1)\sin(2\pi x_2)$, as a function of the number of training examples (averaged over 250 repetitions). Success is measured by the percentage of repetitions in which the relevant feature(s) received first place(s).

5 Hand Movements Reconstruction from Neural Activity

To suggest an interpretation of neural coding, we apply RGS, and compare it with the alternatives presented in the previous section, on the hand movement reconstruction task. The data sets were collected while a monkey performed a planar center-out reaching task with one or both hands [11]. 16 electrodes, inserted daily into novel positions in primary motor cortex, were used to detect and sort spikes in up to 64 channels (4 per electrode). Most of the channels detected isolated neuronal spikes by template matching. Some, however, had templates that were not tuned, producing spikes during only a fraction of the session.
Others (about 25%) contained unused templates (resulting in a constantly silent channel or, possibly, a few random spikes). The rest of the channels (one per electrode) produced spikes by threshold crossing. We construct a labeled regression data set as follows. Each example corresponds to one time point in a trial. It consists of the spike counts from all 64 channels in the 10 preceding consecutive 100 ms time bins (64 × 10 = 640 features), and the label is the X or Y component of the instantaneous hand velocity. We analyze data collected over 8 days. Each data set has an average of 5050 examples collected during the movement periods of the successful trials. In order to evaluate the different feature selection methods, we separate the data into training and test sets. Each selection method is used to produce a ranking of the features. We then apply kNN (based on the training set) to the test set, using groups of top-ranking features of different sizes, and use the resulting MSE (or the correlation coefficient between true and estimated movement) as our measure of quality. To test the significance of the results, we apply 5-fold cross validation and repeat the process 5 times on different permutations of the trial ordering. Figure 3 shows the average (over permutations, folds and velocity components) MSE as a function of the number of selected features on four of the data sets (results on the rest are similar and omitted due to lack of space). It is clear that RGS achieves better results than the other methods throughout the range of feature numbers. To test whether the performance of RGS was consistently better than that of the other methods, we counted winning percentages (the percent of the times in which RGS achieved a lower MSE than another algorithm) in all folds of all data sets, as a function of the number of features used.

Footnote 3: fwdSel was not applied due to its intractably high run time complexity. Note that its run time is at least r times that of RGS, where r is the size of the optimal feature set, and is longer in practice.

Footnote 4: We use k = 50 (approximately 1% of the data points). β is set automatically as described in section 3. These parameters were manually tuned for good kNN results and were not optimized for any of the feature selection algorithms. The number of epochs for RGS was set to 1 (i.e. T = m).

Figure 3: MSE results for the different feature selection methods on the neural activity data sets. Each subfigure is a different recording day. MSEs are presented as a function of the number of features used. Each point is a mean over all 5 cross-validation folds, 5 permutations of the data and the two velocity-component targets. Note that some of the data sets are harder than others.

Figure 4 shows the winning percentages of RGS versus the other methods. For a very low number of features, while the error is still high, RGS's winning scores are only slightly better than chance; but once there are enough features for good predictions, the winning percentages are higher than 90%. In figure 3 we see that the MSE achieved when using only approximately 100 features selected by RGS is better than when using all the features. This difference is indeed statistically significant (win score of 92%). If the MSE is replaced by the correlation coefficient as the measure of quality, the average results (not shown due to lack of space) are qualitatively unchanged. RGS not only ranks the features but also gives them weights that achieve locally optimal results when using kNN regression. It therefore makes sense not only to select the features but also to weigh them accordingly. Figure 5 shows the winning percentages of RGS using the weighted features versus RGS using uniformly weighted features.
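The 640-dimensional feature construction used throughout these experiments (spike counts from the 10 trailing 100 ms bins of each of 64 channels) can be sketched as follows; the toy counts and function names are ours:

```python
import numpy as np

def lagged_features(counts, n_lags=10):
    """Stack spike counts from the n_lags most recent bins of every channel.

    counts: array (n_channels, n_bins).
    Returns: array (n_bins - n_lags + 1, n_channels * n_lags), one row per time point.
    """
    n_ch, n_bins = counts.shape
    rows = [counts[:, t - n_lags + 1:t + 1].ravel()   # channel-major stack of the 10 lags
            for t in range(n_lags - 1, n_bins)]
    return np.array(rows)

counts = np.arange(64 * 50).reshape(64, 50)   # toy spike counts: 64 channels, 50 bins
X = lagged_features(counts)                    # 41 examples x 640 features
```

Each row is then paired with the X or Y hand-velocity component at that time point to form one labeled regression example.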
The corresponding MSEs (with and without weights) on the first data set are also displayed. It is clear that using the weights improves the results, in a manner that becomes increasingly significant as the number of features grows, especially beyond the optimal number of features. Thus, using weighted features can compensate for choosing too many features by diminishing the effect of the surplus ones. To take a closer look at which features are selected, figure 6 shows the 100 highest-ranking features for all algorithms on one data set. Similar selection results were obtained in the rest of the folds. One would expect to find that well-isolated cells (template matching) are more informative than threshold-based spikes. Indeed, all the algorithms select isolated cells more frequently within the top 100 features (RGS does so 95% of the time, and the rest 70%-80%). A human selection of channels, based only on looking at raster plots and selecting channels with stable firing rates, was also available to us. This selection was independent of the template/threshold categorisation. Once again, the algorithms selected the humanly preferred channels more frequently than the other channels. Another, more interesting, observation that can also be seen in the figure is that while corrcoef, SKS and infoGain tend to select all time lags of a channel, RGS's selections are more scattered (more channels and only a few time bins per channel). Since RGS achieves the best results, we conclude that this selection pattern is useful. Apparently RGS found these patterns thanks to its ability to evaluate complex dependency on feature subsets. This suggests that such dependency of the behavior on the neural activity does exist.

Figure 4: Winning percentages of RGS over the other algorithms. RGS achieves better MSEs consistently.

Figure 5: Winning percentages of RGS with and without weighting of features (black). Gray lines are the corresponding MSEs of these two methods on the first data set.

Figure 6: The 100 highest-ranking features (grayed out) selected by the algorithms. Results are for one fold of one data set. In each subfigure, the bottom row is the (100 ms) time bin with the least delay, and higher rows correspond to longer delays. Each column is a channel (silent channels omitted).

6 Summary

In this paper we present a new method of selecting features for function estimation, and use it to analyze neural activity during a motor control task. We use the leave-one-out mean squared error of the kNN estimator and minimize it using gradient ascent on an "almost" smooth function. This yields a selection method which can handle a complicated dependency of the target function on groups of features, yet can be applied to large-scale problems. This is valuable, since many common selection methods lack one of these properties. By comparing the results of our method with those of other selection methods on the motor control task, we show that consideration of complex dependency helps to achieve better performance. These results suggest that this is an important property of the code. Our future work is aimed at a better understanding of neural activity through the use of feature selection. One possibility is to perform feature selection on other kinds of neural data, such as local field potentials or retinal activity. Another promising option is to explore the temporally changing properties of neural activity. Motor control is a dynamic process in which the input-output relation has a temporally varying structure. RGS can be used in online (rather than batch) mode to identify these structures in the code.

References

[1] R. Kohavi and G.H.
John. Wrappers for feature subset selection. Artificial Intelligence, 97(1-2):273-324, 1997.

[2] I. Guyon and A. Elisseeff. An introduction to variable and feature selection. JMLR, 2003.

[3] A.J. Miller. Subset Selection in Regression. Chapman and Hall, 1990.

[4] L. Devroye. The uniform convergence of nearest neighbor regression function estimators and their application in optimization. IEEE Transactions on Information Theory, 24(2), 1978.

[5] C. Atkeson, A. Moore, and S. Schaal. Locally weighted learning. AI Review, 11.

[6] O. Maron and A. Moore. The racing algorithm: Model selection for lazy learners. Artificial Intelligence Review, 11:193-225, April 1997.

[7] R. Gilad-Bachrach, A. Navot, and N. Tishby. Margin based feature selection - theory and algorithms. In Proc. 21st International Conference on Machine Learning (ICML), pages 337-344, 2004.

[8] D. M. Taylor, S. I. Tillery, and A. B. Schwartz. Direct cortical control of 3D neuroprosthetic devices. Science, 296(7):1829-1832, 2002.

[9] V. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 1995.

[10] J. R. Quinlan. Induction of decision trees. In J. W. Shavlik and T. G. Dietterich, editors, Readings in Machine Learning. Morgan Kaufmann, 1990. Originally published in Machine Learning, 1:81-106, 1986.

[11] R. Paz, T. Boraud, C. Natan, H. Bergman, and E. Vaadia. Preparatory activity in motor cortex reflects learning of local visuomotor skills. Nature Neuroscience, 6(8):882-890, August 2003.
Visual Encoding with Jittering Eyes

Michele Rucci∗
Department of Cognitive and Neural Systems, Boston University, Boston, MA 02215
rucci@cns.bu.edu

Abstract

Under natural viewing conditions, small movements of the eye and body prevent the maintenance of a steady direction of gaze. It is known that stimuli tend to fade when they are stabilized on the retina for several seconds. However, it is unclear whether the physiological self-motion of the retinal image serves a visual purpose during the brief periods of natural visual fixation. This study examines the impact of fixational instability on the statistics of visual input to the retina and on the structure of neural activity in the early visual system. Fixational instability introduces fluctuations in the retinal input signals that, in the presence of natural images, lack spatial correlations. These input fluctuations strongly influence neural activity in a model of the LGN. They decorrelate cell responses, even if the contrast sensitivity functions of simulated cells are not perfectly tuned to counterbalance the power-law spectrum of natural images. A decorrelation of neural activity has been proposed to be beneficial for discarding statistical redundancies in the input signals. Fixational instability might, therefore, contribute to establishing efficient representations of natural stimuli.

1 Introduction

Models of the visual system often examine steady-state levels of neural activity during presentations of visual stimuli. It is difficult, however, to envision how such steady states could occur under natural viewing conditions, given that the projection of the visual scene on the retina is never stationary. Indeed, the physiological instability of visual fixation keeps the retinal image in permanent motion even during the brief periods in between saccades. Several sources cause this constant jittering of the eye.
Fixational eye movements, of which we are not aware, alternate small saccades with periods of drift, even when subjects are instructed to maintain steady fixation [8]. Following macroscopic redirections of gaze, other small eye movements, such as corrective saccades and post-saccadic drifts, are likely to occur. Furthermore, outside of the controlled conditions of a laboratory, when the head is not constrained by a bite bar, movements of the body, as well as imperfections in the vestibulo-ocular reflex, significantly amplify the motion of the retinal image. In the light of this constant jitter, it is remarkable that the brain is capable of constructing a stable percept, as fixational instability moves the stimulus by an amount that should be clearly visible (see, for example, [7]).

∗Webpage: www.cns.bu.edu/~rucci

Little is known about the purposes of fixational instability. It is often claimed that small saccades are necessary to refresh neuronal responses and prevent the disappearance of a stationary scene, a claim that has remained controversial given the brief durations of natural visual fixation (reviewed in [16]). Yet, recent theoretical proposals [1, 11] have claimed that fixational instability plays a more central role in the acquisition and neural encoding of visual information than that of simply refreshing neural activity. Consistent with the ideas of these proposals, neurophysiological investigations have shown that fixational eye movements strongly influence the activity of neurons in several areas of the monkey's brain [5, 14, 6]. Furthermore, modeling studies that simulated neural responses during free viewing suggest that fixational instability profoundly affects the statistics of thalamic [13] and thalamocortical [10] activity. This paper summarizes an alternative theory for the existence of fixational instability.
Instead of regarding the jitter of visual fixation as necessary for refreshing neuronal responses, it is argued that the self-motion of the retinal image is essential for properly structuring neural activity in the early visual system into a format that is suitable for processing at later stages. It is proposed that fixational instability is part of a strategy of acquisition of visual information that enables compact visual representations in the presence of natural visual input.

2 Neural decorrelation and fixational instability

It is a long-standing proposal that an important function of early visual processing is the removal of part of the redundancy that characterizes natural visual input [3]. Less redundant signals enable more compact representations, in which the same amount of information can be represented by smaller neuronal ensembles. While several methods exist for eliminating input redundancies, a possible approach is the removal of pairwise correlations between the intensity values of nearby pixels [2]. Elimination of these spatial correlations allows efficient representations in which neuronal responses tend to be less statistically dependent. According to the theory described in this paper, fixational instability contributes to decorrelating the responses of cells in the retina and the LGN during viewing of natural scenes. This theory is based on two factors, which are described separately in the following sections. The first component, analyzed in Section 2.1, is the spatially uncorrelated input signal that occurs when natural scenes are scanned by jittering eyes. The second factor is an amplification of this spatially uncorrelated input, which is mediated by cell response characteristics. Section 2.2 examines the interaction between the dynamics of fixational instability and the temporal characteristics of neurons in the Lateral Geniculate Nucleus (LGN), the main relay of visual information to the cortex.
2.1 Influence of fixational instability on visual input

To analyze the effect of fixational instability on the statistics of geniculate activity, it is useful to approximate the input image in a neighborhood of a fixation point $x_0$ by means of its Taylor series:
$$I(x) \approx I(x_0) + \nabla I(x_0) \cdot (x - x_0)^T + o(|x - x_0|^2) \qquad (1)$$
If the jittering produced by fixational instability is sufficiently small, higher-order derivatives can be neglected, and the input to a location $x$ on the retina during visual fixation can be approximated by its first-order expansion:
$$S(x, t) \approx I(x) + \xi^T(t) \cdot \nabla I(x) = I(x) + \tilde I(x, t) \qquad (2)$$
where $\xi(t) = [\xi_x(t), \xi_y(t)]$ is the trajectory of the center of gaze during the period of fixation, $t$ is the time elapsed from fixation onset, $I(x)$ is the visual input at $t = 0$, and
$$\tilde I(x, t) = \frac{\partial I(x)}{\partial x} \xi_x(t) + \frac{\partial I(x)}{\partial y} \xi_y(t)$$
is the dynamic fluctuation in the visual input produced by fixational instability. Eq. 2 allows an analytical estimation of the power spectrum of the signal entering the eye during the self-motion of the retinal image. Since, according to Eq. 2, the retinal input $S(x, t)$ can be approximated by the sum of two contributions, $I$ and $\tilde I$, its power spectrum $R_{SS}$ consists of three terms:
$$R_{SS}(u, w) \approx R_{II} + R_{\tilde I \tilde I} + 2 R_{I \tilde I}$$
where $u$ and $w$ represent, respectively, spatial and temporal frequency. Fixational instability can be modeled as an ergodic process with zero mean and uncorrelated components along the two axes, i.e., $\langle \xi \rangle_T = 0$ and $R_{\xi_x \xi_y}(t) = 0$. Although not necessary for the proposed theory, these assumptions simplify our statistical analysis, as $R_{I \tilde I}$ is zero, and the power spectrum of the visual input is given by:
$$R_{SS} \approx R_{II} + R_{\tilde I \tilde I} \qquad (3)$$
where $R_{II}$ is the power spectrum of the stimulus, and $R_{\tilde I \tilde I}$ depends on both the stimulus and fixational instability. To determine $R_{\tilde I \tilde I}(u, w)$, from Eq.
2 it follows that
$$\tilde I(u, w) = i u_x I(u) \xi_x(w) + i u_y I(u) \xi_y(w)$$
and, under the assumption of uncorrelated motion components, approximating the power spectrum via the finite Fourier transform yields:
$$R_{\tilde I \tilde I}(u, w) = \lim_{T \to \infty} \left\langle \frac{1}{T} \, |\tilde I_T(u, w)|^2 \right\rangle_{\xi, I} = R_{\xi\xi}(w) \, R_{II}(u) \, |u|^2 \qquad (4)$$
where $\tilde I_T$ is the Fourier transform of a signal of duration $T$, and we have assumed identical second-order statistics of retinal image motion along the two Cartesian axes. As shown in Fig. 1, the presence of the term $|u|^2$ in Eq. 4 compensates for the scaling invariance of natural images. That is, since for natural images $R_{II}(u) \propto u^{-2}$, the product $R_{II}(u)|u|^2$ whitens $R_{II}$ by producing a power spectrum $R_{\tilde I \tilde I}$ that remains virtually constant at all spatial frequencies.

2.2 Influence of fixational instability on neural activity

This section analyzes the structure of correlated activity during fixational instability in a model of the LGN. To delineate the important elements of the theory, we consider linear approximations of geniculate responses provided by space-time separable kernels. This assumption greatly simplifies the analysis of levels of correlation. Results are, however, general, and the outcomes of simulations with space-time inseparable kernels and different levels of rectification (the most prominent nonlinear behavior of parvocellular geniculate neurons) can be found in [13, 10]. Mean instantaneous firing rates were estimated on the basis of the convolution between the input $I$ and the cell spatiotemporal kernel $h_\alpha$:
$$\alpha(t) = h_\alpha(x, t) \star I(x, t) = \int_0^t \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h_\alpha(x', y', t') \, I(x - x', y - y', t - t') \, dx' \, dy' \, dt'$$
where $h_\alpha(x, t) = g_\alpha(t) f_\alpha(x)$. Kernels were designed on the basis of data from neurophysiological recordings to replicate the responses of parvocellular ON-center cells in the LGN of the macaque. The spatial component $f_\alpha(x)$ was modeled by a standard difference of Gaussians [15]. The temporal kernel $g_\alpha(t)$ possessed a biphasic profile with a positive peak at 50 ms, a negative peak at 75 ms, and an overall duration of less than 200 ms [4].

Figure 1: Fixational instability introduces a spatially uncorrelated component in the visual input to the retina during viewing of natural scenes. The graph compares the power spectrum of natural images ($R_{II}$) to the dynamic power spectrum introduced by fixational instability ($R_{\tilde I \tilde I}$). The two curves represent radial averages evaluated over 15 pictures of natural scenes.

In this section, levels of correlation in the activity of pairs of geniculate neurons are summarized by the correlation pattern $\hat c_{\alpha\alpha}(x)$:
$$\hat c_{\alpha\alpha}(x) = \langle \alpha_y(t) \, \alpha_z(t) \rangle_{T, I} \qquad (5)$$
where $\alpha_y(t)$ and $\alpha_z(t)$ are the responses of cells with receptive fields centered at $y$ and $z$, and $x = y - z$ is the separation between receptive field centers. The average is evaluated over time $T$ and over a set of stimuli $I$. With linear models, $\hat c_{\alpha\alpha}(x)$ can be estimated on the basis of the input power spectrum $R_{SS}(u, w)$:
$$\hat c_{\alpha\alpha}(x) = c_{\alpha\alpha}(x, t)\big|_{t=0}, \qquad c_{\alpha\alpha}(x, t) = \mathcal{F}^{-1}\{R_{\alpha\alpha}\} \qquad (6)$$
where $R_{\alpha\alpha} = |H_\alpha|^2 R_{SS}(u, w)$ is the power spectrum of LGN activity ($H_\alpha(u, w)$ is the spatiotemporal Fourier transform of the kernel $h_\alpha(x, t)$), and $\mathcal{F}^{-1}$ represents the inverse Fourier transform operator. To evaluate $R_{\alpha\alpha}$, substitution of $R_{SS}$ from Eq. 3 and separation of spatial and temporal elements yield:
$$R_{\alpha\alpha} \approx |G_\alpha|^2 |F_\alpha|^2 R_{II} + |G_\alpha|^2 |F_\alpha|^2 R_{\tilde I \tilde I} = R^S_{\alpha\alpha} + R^D_{\alpha\alpha} \qquad (7)$$
where $F_\alpha(u)$ and $G_\alpha(w)$ represent the Fourier transforms of the spatial and temporal kernels. Eq. 7 shows that, similar to the retinal input, the power spectrum of geniculate activity can also be approximated by the sum of two separate elements. Only $R^D_{\alpha\alpha}$ depends on fixational instability. The first term, $R^S_{\alpha\alpha}$, is determined by the power spectrum of the stimulus and the characteristics of geniculate cells, but does not depend on the motion of the eye during the acquisition of visual information. By substituting in Eq. 6 the expression of $R_{\alpha\alpha}$ from Eq.
7, we obtain
$$c_{\alpha\alpha}(x, t) \approx c^S_{\alpha\alpha}(x, t) + c^D_{\alpha\alpha}(x, t) \qquad (8)$$
where $c^S_{\alpha\alpha}(x, t) = \mathcal{F}^{-1}\{R^S_{\alpha\alpha}(u, w)\}$ and $c^D_{\alpha\alpha}(x, t) = \mathcal{F}^{-1}\{R^D_{\alpha\alpha}(u, w)\}$. Eq. 8 shows that fixational instability adds the term $c^D_{\alpha\alpha}$ to the pattern of correlated activity $c^S_{\alpha\alpha}$ that would be obtained with presentation of the same set of stimuli without the self-motion of the eye. With presentation of pictures of natural scenes, $R_{II}(w) = 2\pi\delta(w)$, and the two input signals $R^S_{\alpha\alpha}$ and $R^D_{\alpha\alpha}$ provide, respectively, a static and a dynamic contribution to the spatiotemporal correlation of geniculate activity. The first term in Eq. 8 gives a correlation pattern:
$$\hat c^S_{\alpha\alpha}(x) = k_S \, \mathcal{F}_S^{-1}\{|F_\alpha|^2 R^S_{II}(u)\} \qquad (9)$$
where $k_S = |G_\alpha(0)|^2$. By substituting $R_{\tilde I \tilde I}$ from Eq. 4, the second term in Eq. 8 gives a correlation pattern:
$$\hat c^D_{\alpha\alpha}(x) = k_D \, \mathcal{F}_S^{-1}\{|F_\alpha|^2 R^S_{II}(u) \, |u|^2\} \qquad (10)$$
where $k_D = \mathcal{F}_T^{-1}\{|G_\alpha(w)|^2 R_{\xi\xi}(w)\}\big|_{t=0}$ is a constant given by the temporal dynamics of cell response and fixational instability. $\mathcal{F}_T^{-1}$ and $\mathcal{F}_S^{-1}$ indicate the operations of inverse Fourier transform in time and space. To summarize, during the physiological instability of visual fixation, the structure of correlated activity in a linear model of the LGN is given by the superposition of two spatial terms, each of them weighted by a coefficient ($k_S$ or $k_D$) that depends on the dynamics:
$$\hat c_{\alpha\alpha}(x) = k_S \, \mathcal{F}_S^{-1}\{|F_\alpha|^2 R^S_{II}(u)\} + k_D \, \mathcal{F}_S^{-1}\{|F_\alpha|^2 R^S_{II}(u) \, |u|^2\} \qquad (11)$$
Whereas the stimulus contributes to the structure of correlated activity by means of the power spectrum $R^S_{II}$, the contribution introduced by fixational instability depends on $R^S_{\tilde I \tilde I}$, a signal that discards the broad correlation of natural images. Since in natural images most power is concentrated at low spatial frequencies, the uncorrelated fluctuations in the input signals generated by fixational instability have small amplitudes. That is, $R^D_{II}$ provides less power than $R^S_{II}$. However, geniculate cells tend to respond more strongly to changing stimuli than to stationary ones, and $k_D$ is larger than $k_S$.
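The whitening at the heart of Eqs. 4 and 10 is pure algebra, and trivial to check numerically: multiplying a $1/u^2$ power-law spectrum by $|u|^2$ yields a flat spectrum. The frequency band below is illustrative, not the paper's measurement range:

```python
import numpy as np

u = np.linspace(0.5, 30.0, 200)   # spatial frequencies (cycles/deg), away from u = 0
R_II = u ** -2.0                  # power-law spectrum of natural images, R_II ∝ 1/u^2
R_dyn = R_II * u ** 2             # R_II(u) * |u|^2, the spatial part of Eq. 4

span_static = R_II.max() / R_II.min()   # power span of the static spectrum across the band
span_dyn = R_dyn.max() / R_dyn.min()    # ~1: the jitter-induced spectrum is flat
```

The static spectrum spans a factor of $(30/0.5)^2 = 3600$ across this band, while the dynamic one is constant; this is why the dynamic correlation term ends up shaped by the receptive fields alone.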
Therefore, the small input modulations introduced by fixational instability are amplified by the dynamics of geniculate cells. Fig. 2 shows the structure of correlated activity in the model when images of natural scenes are examined in the presence of fixational instability. In this example, fixational instability was assumed to possess a Gaussian temporal correlation, $R_{\xi\xi}(w)$, with standard deviation $\sigma_T = 22$ ms and amplitude $\sigma_S = 12$ arcmin. In addition to the total pattern of correlation given by Eq. 11, Fig. 2 also shows the patterns of correlation produced by the two components $\hat{c}^S_{\alpha\alpha}$ and $\hat{c}^D_{\alpha\alpha}$. Whereas $\hat{c}^S_{\alpha\alpha}$ was strongly influenced by the broad spatial correlations of natural images, $\hat{c}^D_{\alpha\alpha}$, due to its dependence on the whitened power spectrum $R_{\tilde I \tilde I}$, was determined exclusively by cell receptive fields. Due to the amplification factor $k_D$, $\hat{c}^D_{\alpha\alpha}$ provided a stronger contribution than $\hat{c}^S_{\alpha\alpha}$ and heavily influenced the global structure of correlated activity. To examine the relative influence of the two terms $\hat{c}^S_{\alpha\alpha}$ and $\hat{c}^D_{\alpha\alpha}$ on the structure of correlated activity, Fig. 3 shows their ratio at separation zero, $\rho_{DS} = \hat{c}^D_{\alpha\alpha}(0)/\hat{c}^S_{\alpha\alpha}(0)$, with presentation of natural images and for various parameters of fixational instability.

Figure 2: Patterns of correlation obtained from Eq. 11 when natural images are examined in the presence of fixational instability. The three curves represent the total level of correlation (Total), the correlation $\hat{c}^S_{\alpha\alpha}(x)$ that would be present if the same images were examined in the absence of fixational instability (Static), and the contribution $\hat{c}^D_{\alpha\alpha}(x)$ of fixational instability (Dynamic). Data are radial averages evaluated over pairs of cells with the same separation $\|x\|$ between their receptive fields. [Axes: Normalized Correlation vs. Cell RF Separation (deg.)]

Fig. 3 (a) shows the effect of varying the spatial amplitude of the retinal jitter.
In order to remain within the range of validity of the Taylor approximation in Eq. 2, only small amplitude values are considered. As shown by Fig. 3 (a), the larger the instability of visual fixation, the larger the contribution of the dynamic term $\hat{c}^D_{\alpha\alpha}$ with respect to $\hat{c}^S_{\alpha\alpha}$. Except for very small values of $\sigma_S$, $\rho_{DS}$ is larger than one, indicating that $\hat{c}^D_{\alpha\alpha}$ influences the structure of correlated activity more strongly than $\hat{c}^S_{\alpha\alpha}$. Fig. 3 (b) shows the impact of varying $\sigma_T$, which defines the temporal window over which fixational jitter is correlated. Note that $\rho_{DS}$ is a non-monotonic function of $\sigma_T$. For a range of $\sigma_T$ corresponding to intervals shorter than the typical duration of visual fixation, $\hat{c}^D_{\alpha\alpha}$ is significantly larger than $\hat{c}^S_{\alpha\alpha}$. Thus, fixational instability strongly influences correlated activity in the model when it moves the direction of gaze within a range of a few arcmin and is correlated over a fraction of the duration of visual fixation. This range of parameters is consistent with the instability of fixation observed in primates.

3 Conclusions

It has been proposed that neurons in the early visual system decorrelate their responses to natural stimuli, an operation that is believed to be beneficial for the encoding of visual information [2]. The original claim, which was based on psychophysical measurements of human contrast sensitivity, relies on an inverse proportionality between the spatial response characteristics of retinal and geniculate neurons and the structure of natural images. However, data from neurophysiological recordings have clearly shown that neurons in the retina and the LGN respond significantly to low spatial frequencies, in a way that is not compatible with the requirements of Atick and Redlich's proposal. During natural viewing, input signals to the retina depend not only on the stimulus, but also on the physiological instability of visual fixation.
The results of this study show that when natural scenes are examined with jittering eyes, as occurs under natural viewing conditions, fixational instability tends to decorrelate cell responses even if the contrast sensitivity functions of individual neurons do not counterbalance the power spectrum of the visual input.

Figure 3: Influence of the characteristics of fixational instability on the patterns of correlated activity during presentation of natural images. The two graphs show the ratio $\rho_{DS}$ between the peaks of the two terms $\hat{c}^D_{\alpha\alpha}$ and $\hat{c}^S_{\alpha\alpha}$ in Eq. 8. Fixational instability was assumed to possess a Gaussian correlation with standard deviation $\sigma_T$ and amplitude $\sigma_S$. (a) Effect of varying $\sigma_S$ ($\sigma_T = 22$ ms). (b) Effect of varying $\sigma_T$ ($\sigma_S = 12$ arcmin). [Axes: Ratio Dynamic/Static vs. $\sigma_S$ (arcmin) and $\sigma_T$ (ms)]

The theory described in this paper relies on two main elements. The first component is the presence of a spatially uncorrelated input signal during presentation of natural visual stimuli ($R_{\tilde I \tilde I}$ in Eq. 3). This input signal is a direct consequence of the scale invariance of natural images. It is a property of natural images that, although the intensity values of nearby pixels tend to be correlated, changes in intensity around pairs of pixels are uncorrelated. This property is not satisfied by an arbitrary image. In a spatial grating, for example, intensity changes at any two locations are highly correlated. During the instability of visual fixation, neurons receive input from the small regions of the visual field covered by the jittering of their receptive fields. In the presence of natural images, although the inputs to cells with nearby receptive fields are on average correlated, the fluctuations in these input signals produced by fixational instability are not correlated.
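A rough numerical sketch of the ratio $\rho_{DS}$ discussed above can be obtained by integrating the two spectra in Eq. 11 at zero separation. Everything here is an assumption for illustration, not taken from the paper: a balanced difference-of-Gaussians spatial filter, a $1/u^2$ image spectrum, the frequency cutoffs, and a placeholder temporal amplification $k_D/k_S = 4$:

```python
import numpy as np

# Assumed balanced difference-of-Gaussians filter (F(0) = 0) and 1/u^2 image
# spectrum; center/surround widths and the k_D/k_S ratio are placeholders.
u = np.linspace(0.05, 30.0, 3000)                 # radial spatial frequency
F = np.exp(-(0.1 * u)**2 / 2) - np.exp(-(0.5 * u)**2 / 2)
R = 1.0 / u**2                                    # natural-image spectrum

# Zero-separation correlations are integrals of the 2-D spectra over the
# radial measure u du: static term |F|^2 R, dynamic term |F|^2 R |u|^2.
du = u[1] - u[0]
c_S = np.sum(F**2 * R * u) * du
c_D = np.sum(F**2 * R * u**2 * u) * du

k_ratio = 4.0                                     # assumed k_D / k_S
rho_DS = k_ratio * c_D / c_S
print(rho_DS)
```

Under these (made-up) parameters the dynamic term dominates, mirroring the $\rho_{DS} > 1$ regime of Fig. 3; the point of the sketch is only that the $|u|^2$ factor shifts the weight of the integral toward the frequencies the filter actually passes.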
Fixational instability appears to be tuned to the statistics of natural images, as it introduces a spatially uncorrelated signal only in the presence of visual input with a power spectrum that declines as $u^{-2}$ with spatial frequency. The second element of the theory is the neuronal amplification of the spatially uncorrelated input signal introduced by the self-motion of the retinal image. This amplification originates from the interaction between the dynamics of fixational instability and the temporal sensitivity of geniculate units. Since $R_{\tilde I \tilde I}$ attenuates the low spatial frequencies of the stimulus, it tends to possess less power than $R_{II}$. However, in Eq. 11, the contributions of the two input signals are modulated by the multiplicative terms $k_S$ and $k_D$, which depend on the temporal characteristics of cell responses (both $k_S$ and $k_D$) and of fixational instability ($k_D$ only). Since geniculate neurons respond more strongly to changing stimuli than to stationary ones, $k_D$ tends to be higher than $k_S$. Correspondingly, in a linear model of the LGN, units are highly sensitive to the uncorrelated fluctuations in the input signals produced by fixational instability. The theory summarized in this study is consistent with the strong modulations of neural responses observed during fixational eye movements [5, 14, 6], as well as with the results of recent psychophysical experiments aimed at investigating perceptual influences of fixational instability [12, 9]. It should be observed that, since patterns of correlation were evaluated via Fourier analysis, this study implicitly assumed a steady-state condition of visual fixation. Further work is needed to extend the proposed theory to take into account time-varying natural stimuli and the nonstationary regime produced by the occurrence of saccades.

Acknowledgments

The author thanks Antonino Casile and Gaelle Desbordes for many helpful discussions.
This material is based upon work supported by the National Institute of Health under Grant EY15732-01 and the National Science Foundation under Grant CCF-0432104.

References

[1] E. Ahissar and A. Arieli. Figuring space by time. Neuron, 32(2):185–201, 2001.
[2] J. J. Atick and A. Redlich. What does the retina know about natural scenes? Neural Comp., 4:449–572, 1992.
[3] H. B. Barlow. The coding of sensory messages. In W. H. Thorpe and O. L. Zangwill, editors, Current Problems in Animal Behaviour, pages 331–360. Cambridge University Press, Cambridge, 1961.
[4] E. A. Benardete and E. Kaplan. Dynamics of primate P retinal ganglion cells: Responses to chromatic and achromatic stimuli. J. Physiol., 519(3):775–790, 1999.
[5] D. A. Leopold and N. K. Logothetis. Microsaccades differentially modulate neural activity in the striate and extrastriate visual cortex. Exp. Brain Res., 123:341–345, 1998.
[6] S. Martinez-Conde, S. L. Macknik, and D. H. Hubel. The function of bursts of spikes during visual fixation in the awake primate lateral geniculate nucleus and primary visual cortex. Proc. Natl. Acad. Sci. USA, 99(21):13920–13925, 2002.
[7] I. Murakami and P. Cavanagh. A jitter after-effect reveals motion-based stabilization of vision. Nature, 395(6704):798–801, 1998.
[8] F. Ratliff and L. A. Riggs. Involuntary motions of the eye during monocular fixation. J. Exp. Psychol., 40:687–701, 1950.
[9] M. Rucci and J. Beck. Effects of ISI and flash duration on the identification of briefly flashed stimuli. Spatial Vision, 18(2):259–274, 2005.
[10] M. Rucci and A. Casile. Decorrelation of neural activity during fixational instability: Possible implications for the refinement of V1 receptive fields. Visual Neurosci., 21:725–738, 2004.
[11] M. Rucci and A. Casile. Fixational instability and natural image statistics: Implications for early visual representations. Network: Computation in Neural Systems, 16(2-3):121–138, 2005.
[12] M. Rucci and G. Desbordes.
Contributions of fixational eye movements to the discrimination of briefly presented stimuli. J. Vision, 3(11):852–864, 2003.
[13] M. Rucci, G. M. Edelman, and J. Wray. Modeling LGN responses during free-viewing: A possible role of microscopic eye movements in the refinement of cortical orientation selectivity. J. Neurosci., 20(12):4708–4720, 2000.
[14] D. M. Snodderly, I. Kagan, and M. Gur. Selective activation of visual cortex neurons by fixational eye movements: Implications for neural coding. Vis. Neurosci., 18:259–277, 2001.
[15] P. D. Spear, R. J. Moore, C. B. Y. Kim, J. T. Xue, and N. Tumosa. Effects of aging on the primate visual system: Spatial and temporal processing by lateral geniculate neurons in young adult and old rhesus monkeys. J. Neurophysiol., 72:402–420, 1994.
[16] R. M. Steinman and J. Z. Levinson. The role of eye movements in the detection of contrast and spatial detail. In E. Kowler, editor, Eye Movements and their Role in Visual and Cognitive Processes, pages 115–212. Elsevier Science, 1990.
Learning to Control an Octopus Arm with Gaussian Process Temporal Difference Methods

Yaakov Engel* (AICML, Dept. of Computing Science, University of Alberta, Edmonton, Canada; yaki@cs.ualberta.ca)
Peter Szabo and Dmitry Volkinshtein (Dept. of Electrical Engineering, Technion Institute of Technology, Haifa, Israel; peter.z.szabo@gmail.com, dmitryvolk@gmail.com)

Abstract

The Octopus arm is a highly versatile and complex limb. How the Octopus controls such a hyper-redundant arm (not to mention eight of them!) is as yet unknown. Robotic arms based on the same mechanical principles may render present day robotic arms obsolete. In this paper, we tackle this control problem using an online reinforcement learning algorithm, based on a Bayesian approach to policy evaluation known as Gaussian process temporal difference (GPTD) learning. Our substitute for the real arm is a computer simulation of a 2-dimensional model of an Octopus arm. Even with the simplifications inherent to this model, the state space we face is a high-dimensional one. We apply a GPTD-based algorithm to this domain, and demonstrate its operation on several learning tasks of varying degrees of difficulty.

1 Introduction

The Octopus arm is one of the most sophisticated and fascinating appendages found in nature. It is an exceptionally flexible organ, with a remarkable repertoire of motion. In contrast to skeleton-based vertebrate and present-day robotic limbs, the Octopus arm lacks a rigid skeleton and has virtually infinitely many degrees of freedom. As a result, this arm is highly hyper-redundant – it is capable of stretching, contracting, folding over itself several times, rotating along its axis at any point, and following the contours of almost any object. These properties allow the Octopus to exhibit feats requiring agility, precision and force. For instance, it is well documented that Octopuses are able to pry open a clam or remove the plug off a glass jar, to gain access to its contents [1].
The basic mechanism underlying the flexibility of the Octopus arm (as well as of other organs, such as the elephant trunk and vertebrate tongues) is the muscular hydrostat [2]. Muscular hydrostats are organs capable of exerting force and producing motion with the sole use of muscles. The muscles serve in the dual roles of generating the forces and maintaining the structural rigidity of the appendage. This is possible due to a constant volume constraint, which arises from the fact that muscle tissue is incompressible. Proper use of this constraint allows muscle contractions in one direction to generate forces acting in perpendicular directions. Due to their unique properties, understanding the principles governing the movement and control of the Octopus arm and other muscular hydrostats is of great interest to both physiologists and robotics engineers. Recent physiological and behavioral studies have produced some interesting insights into the way the Octopus plans and controls its movements. Gutfreund et al. [3] investigated the reaching movement of an Octopus arm and showed that the motion is performed by a stereotypical forward propagation of a bend point along the arm. Yekutieli et al. [4] propose that the complex behavioral movements of the Octopus are composed from a limited number of "motion primitives", which are spatio-temporally combined to produce the arm's motion. Although physical implementations of robotic arms based on the same principles are not yet available, recent progress in the technology of "artificial muscles" using electroactive polymers [5] may allow the construction of such arms in the near future. Needless to say, even a single such arm poses a formidable control challenge, which does not appear to be amenable to conventional control theoretic or robotics methodology. In this paper we propose a learning approach for tackling this problem.

*To whom correspondence should be addressed. Web site: www.cs.ualberta.ca/∼yaki
Specifically, we formulate the task of bringing some part of the arm into a goal region as a reinforcement learning (RL) problem. We then proceed to solve this problem using Gaussian process temporal difference (GPTD) learning algorithms [6, 7, 8].

2 The Domain

Our experimental test-bed is a finite-elements computer simulation of a planar variant of the Octopus arm, described in [9, 4]. This model is based on a decomposition of the arm into quadrilateral compartments, and the constant muscular volume constraint mentioned above is translated into a constant area constraint on each compartment. Muscles are modeled as dampened springs and the mass of each compartment is concentrated in point masses located at its corners¹. Although this is a rather crude approximation of the real arm, even for a modest 10-segment model there are already 88 continuous state variables², making this a rather high-dimensional learning problem. Figure 1 illustrates this model. Since our model is 2-dimensional, all force vectors lie on the x–y plane, and the arm's motion is planar. This limitation is due mainly to the high computational cost of the full 3-dimensional calculations for any arm of reasonable size. There are four types of forces acting on the arm: 1) the internal forces generated by the arm's muscles, 2) the vertical forces caused by the influence of gravity and the arm's buoyancy in the medium in which it is immersed (typically sea water), 3) drag forces produced by the arm's motion through this medium, and 4) internal pressure-induced forces responsible for maintaining the constant volume of each compartment. The use of simulation allows us to easily investigate different operating scenarios, such as zero or low gravity, different media, such as water, air or vacuum, and different muscle models. In this study, we used a simple linear model for the muscles. The force applied by a muscle at any given time t is

$$F(t) = \big[k_0 + (k_{\max} - k_0) A(t)\big]\,\big(\ell(t) - \ell_{rest}\big) + c\,\frac{d\ell(t)}{dt}.$$
¹For the purpose of computing volumes, masses, friction and muscle strength, the arm is effectively defined in three dimensions. However, no forces or motion are allowed in the third dimension. We also ignore the suckers located along the ventral side of the arm, and treat the arm as if it were symmetric with respect to reflection along its long axis. Finally, we comment that this model is restricted to modeling the mechanics of the arm and does not attempt to model its nervous system.

²10 segments result in 22 point masses, each being described by 4 state variables – the x and y coordinates and their respective first time-derivatives.

Figure 1: An N-compartment simulated Octopus arm. Each constant-area compartment Ci is defined by its surrounding 2 longitudinal muscles (ventral and dorsal) and 2 transverse muscles. Circles mark the 2N + 2 point masses in which the arm's mass is distributed. In the bottom right, one compartment is magnified with additional detail. [Labels: arm base, arm tip, dorsal side, ventral side, pair #1, pair #N+1, longitudinal and transverse muscles, compartments C1 ... CN]

This equation describes a dampened spring with a controllable spring constant. The spring's length at time t is ℓ(t); its resting length, at which it does not apply any force, is ℓrest.³ The spring's stiffness is controlled by the activation variable A(t) ∈ [0, 1]. Thus, when the activation is zero and the contraction is isometric (with zero velocity), the relaxed muscle exhibits a baseline passive stiffness k₀. In a fully activated isometric contraction the spring constant becomes kmax. The second term is a dampening, energy-dissipating term, which is proportional to the rate of change in the spring's length and (with c > 0) is directed to resist that change.
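The muscle model above can be sketched in a few lines. This is a minimal illustration of the dampened-spring equation, with parameter values that are placeholders rather than the simulator's actual constants:

```python
# Minimal sketch of the linear muscle model F(t) = [k0 + (k_max - k0)A(t)]
# * (l(t) - l_rest) + c * dl/dt. All numeric defaults are illustrative
# placeholders, not the values used in the Octopus arm simulator.
def muscle_force(length, d_length_dt, activation,
                 k0=1.0, k_max=5.0, l_rest=0.1, c=0.5):
    # Activation interpolates the stiffness between k0 (relaxed) and k_max.
    stiffness = k0 + (k_max - k0) * activation
    # Spring term plus velocity-proportional dampening term.
    return stiffness * (length - l_rest) + c * d_length_dt

# Relaxed isometric contraction (A = 0, zero velocity): baseline stiffness k0.
print(muscle_force(0.2, 0.0, 0.0))
# Fully activated isometric contraction: stiffness k_max.
print(muscle_force(0.2, 0.0, 1.0))
```

Note how the isometric cases recover the two limiting stiffnesses described in the text: with zero velocity the dampening term vanishes and only the activation-dependent spring term remains.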
This is a very simple muscle model, which has been chosen mainly due to its low computational cost, and the relative ease of computing the energy expended by the muscle (why this is useful will become apparent in the sequel). More complex muscle models can be easily incorporated into the simulator, but may result in higher computational overhead. For additional details on the modeling of the other forces and on the derivation of the equations of motion, refer to [4].

3 The Learning Algorithms

As mentioned above, we formulate the problem of controlling our Octopus arm as an RL problem. We are therefore required to define a Markov decision process (MDP), consisting of state and action spaces, a reward function and state transition dynamics. The states in our model are the Cartesian coordinates of the point masses and their first time-derivatives. A finite (and relatively small) number of actions are defined by specifying, for each action, a set of activations for the arm's muscles. The actions used in this study are depicted in Figure 2. Given the arm's current state and the chosen action, we use the simulator to compute the arm's state after a small fixed time interval. Throughout this interval the activations remain fixed, until a new action is chosen for the next interval. The reward is defined as −1 for non-goal states, and 10 for goal states. This encourages the controller to find policies that bring the arm to the goal as quickly as possible. In addition, in order to encourage smoothness and economy in the arm's movements, we subtract an energy penalty term from these rewards. This term is proportional to the total energy expended by all muscles during each action interval. Training is performed in an episodic manner: upon reaching a goal, the current episode terminates and the arm is placed in a new initial position to begin a new episode. If a goal is not reached by some fixed amount of time, the episode terminates regardless.

³It is assumed that at all times ℓ(t) ≥ ℓrest. This is meant to ensure that our muscles can only apply force by contracting, as real muscles do. This can be assured by endowing the compartments with sufficiently high volumes, or, equivalently, by setting ℓrest sufficiently low.

Figure 2: The actions used in the fixed-base experiments (panels: Action #1 – Action #6). Line thickness is proportional to activation intensity. For the rotating base experiment, these actions were augmented with versions of actions 1, 2, 4 and 5 that include clockwise and anti-clockwise torques applied to the arm's base.

The RL algorithms implemented in this study belong to the Policy Iteration family of algorithms [10]. Such algorithms require an algorithmic component for estimating the mean sum of (possibly discounted) future rewards collected along trajectories, as a function of the trajectory's initial state, also known as the value function. The best-known RL algorithms for performing this task are temporal difference algorithms. Since the state space of our problem is very large, some form of function approximation must be used to represent the value estimator. Temporal difference methods, such as TD(λ) and LSTD(λ), are provably convergent when used with linearly parametrized function approximation architectures [10]. Used this way, they require the user to define a fixed set of basis functions, which are then linearly combined to approximate the value function. These basis functions must be defined over the entire state space, or at least over the subset of states that might be reached during learning. When local basis functions are used (e.g., RBFs or tile codes [11]), this inevitably means an exponential explosion of the number of basis functions with the dimensionality of the state space. Nonparametric GPTD learning algorithms⁴ [8] offer an alternative to the conventional parametric approach.
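The reward structure described above (a −1 step penalty, a +10 goal bonus, and an energy penalty proportional to the muscles' expenditure) can be sketched as follows; the penalty coefficient is an assumed placeholder, since the paper does not state its value:

```python
# Hedged sketch of the per-step reward: -1 for non-goal states, +10 for goal
# states, minus a term proportional to the energy expended by all muscles
# during the action interval. The coefficient 0.01 is an assumed placeholder.
def step_reward(reached_goal, energy_expended, energy_coeff=0.01):
    base = 10.0 if reached_goal else -1.0
    return base - energy_coeff * energy_expended

print(step_reward(False, 5.0))   # step penalty minus energy penalty
print(step_reward(True, 5.0))    # goal bonus minus energy penalty
```

The shaping logic matches the text: the constant −1 makes shorter episodes more valuable, while the energy term favors smooth, economical movements.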
The idea is to define a nonparametric statistical generative model connecting the hidden values and the observed rewards, and a prior distribution over value functions. The GPTD modeling assumptions are that both the prior and the observation-noise distributions are Gaussian, and that the model equations relating values and rewards have a special linear form. During or following a learning session, in which a sequence of states and rewards are observed, Bayes' rule may be used to compute the posterior distribution over value functions, conditioned on the observed reward sequence. Due to the GPTD model assumptions, this distribution is also Gaussian, and is derivable in closed form. The benefits of using (nonparametric) GPTD methods are that 1) the resulting value estimates are generally not constrained to lie in the span of any predetermined set of basis functions, 2) no resources are wasted on unvisited state and action space regions, and 3) rather than the point estimates provided by other methods, GPTD methods provide complete probability distributions over value functions. In [6, 7, 8] it was shown how the computation of the posterior value GP moments can be performed sequentially and online. This is done by employing a forward selection mechanism, which is aimed at attaining a sparse approximation of the posterior moments, under a constraint on the resulting error. The input samples (states, or state-action pairs) used in this approximation are stored in a dictionary, the final size of which is often a good indicator of the problem's complexity. Since nonparametric GPTD algorithms belong to the family of kernel machines, they require the user to define a kernel function, which encodes her prior knowledge and beliefs concerning similarities and correlations in the domain at hand. More specifically, the kernel function k(·, ·) defines the prior covariance of the value process.
Namely, for two arbitrary states x and x′, Cov[V(x), V(x′)] = k(x, x′) (see [8] for details). In this study we experimented with several kernel functions; however, in this paper we will describe results obtained using a third-degree polynomial kernel, defined by $k(x, x') = (x^\top x' + 1)^3$. It is well known that this kernel induces a feature space of monomials of degree 3 or less [12]. For our 88-dimensional input space, this feature space is spanned by a basis consisting of $\binom{91}{3} = 121{,}485$ linearly independent monomials. We experimented with two types of policy-iteration based algorithms. The first was optimistic policy iteration (OPI), in which, at any given time-step, the current GPTD value estimator is used to evaluate the successor states resulting from each one of the actions available at the current state. Since, given an action, the dynamics are deterministic, we used the simulation to determine the identity of successor states. An action is then chosen according to a semi-greedy selection rule (more on this below). A more disciplined approach is provided by a paired actor-critic algorithm. Here, two independent GPTD estimators are maintained. The first is used to determine the policy, again by some semi-greedy action selection rule, while its parameters remain fixed. In the meantime, the second GPTD estimator is used to evaluate the stationary policy determined by the first. After the second GPTD estimator is deemed sufficiently accurate, as indicated by the GPTD value variance estimate, the roles are reversed. This is repeated as many times as required, until no significant improvement in policies is observed. Although the latter algorithm, being an instance of approximate policy iteration, has a better theoretical grounding [10], in practice it was observed that the GPTD-based OPI worked significantly faster in this domain. In the experiments reported in the next section we therefore used OPI.

⁴GPTD models can also be defined parametrically; see [8].
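The polynomial kernel and the feature-space count quoted above are easy to sketch and check. The kernel below follows the paper's definition; the dimension check uses the standard count of monomials of degree at most 3 in 88 variables, $\binom{88+3}{3} = \binom{91}{3}$:

```python
import numpy as np
from math import comb

# The third-degree polynomial kernel used in the paper: k(x, x') = (x'x + 1)^3.
def poly3_kernel(x, x_prime):
    return (np.dot(x, x_prime) + 1.0) ** 3

# Monomials of degree <= 3 in 88 variables: C(88 + 3, 3) = C(91, 3).
print(comb(91, 3))                 # 121485, matching the text

# Sanity check on an 88-dimensional input, as in the 10-segment arm model.
x = np.zeros(88)
print(poly3_kernel(x, x))          # (0 + 1)^3 = 1.0
```

Any implementation of GPTD with this kernel never materializes those 121,485 features; the kernel evaluates their inner product directly, which is the point of working nonparametrically.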
For additional details and experiments refer to [13]. One final wrinkle concerns the selection of the initial state in a new episode. Since plausible arm configurations cannot be attained by randomly drawing 88 state variables from some simple distribution, a more involved mechanism for setting the initial state in each episode has to be defined. The method we chose is tightly connected to the GPTD mode of operation: at the end of each episode, 10 random states were drawn from the GPTD dictionary. From these, the state with the highest posterior value variance estimate was selected as the initial state of the next episode. This is a form of active learning, which is made possible by employing GPTD, and which is applicable to general episodic RL problems.

4 Experiments

The experiments described in this section are aimed at demonstrating the applicability of GPTD-based algorithms to large-scale RL problems, such as our Octopus arm. In these experiments we used the simulated 10-compartment arm described in Section 2. The set of goal states consisted of a circular region located somewhere within the potential reach of the arm (recall that the arm has no fixed length). The action set depends on the task, as described in Figure 2. Training episode duration was set to 4 seconds, and the time interval between action decisions was 0.4 seconds. This allowed a maximum of 10 learning steps per trial. The discount factor was set to 1. The exploration policy used was the ubiquitous ε-greedy policy: the greedy action (i.e., the one for which the sum of the reward and the successor state's estimated value is the highest) is chosen with probability 1 − ε, and with probability ε a random action is drawn from a uniform distribution over all other actions. The value of ε is reduced during learning, until the policy converges to the greedy one. In our implementation, in each episode, ε was dependent on the number of successful episodes experienced up to that point.
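The active initial-state selection described above can be sketched as follows. The posterior-variance function here is a stand-in for the GPTD variance estimate, and the scalar "states" are toy placeholders:

```python
import numpy as np

# Sketch of the active initial-state selection: draw 10 random states from
# the GPTD dictionary and start the next episode from the one with the
# highest posterior value variance. `posterior_variance` is a stand-in for
# the GPTD variance estimate; the states below are toy scalars.
rng = np.random.default_rng(0)

def pick_initial_state(dictionary_states, posterior_variance, n_candidates=10):
    idx = rng.choice(len(dictionary_states), size=n_candidates, replace=False)
    candidates = [dictionary_states[i] for i in idx]
    return max(candidates, key=posterior_variance)

# Toy usage: a made-up variance function that is largest far from state 50.
states = list(range(100))
chosen = pick_initial_state(states, posterior_variance=lambda s: (s - 50) ** 2)
print(chosen)
```

The design choice is the one the text motivates: episodes start where the value estimate is least certain, so exploration effort is spent where it is most informative.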
The general form of this relation is $\varepsilon = \varepsilon_0 N_{1/2} / (N_{1/2} + N_{goals})$, where $N_{goals}$ is the number of successful episodes, $\varepsilon_0$ is the initial value of $\varepsilon$, and $N_{1/2}$ is the number of successful episodes required to reduce $\varepsilon$ to $\varepsilon_0/2$. In order to evaluate the quality of learned solutions, 100 initial arm configurations were created. This was done by starting a simulation from some fixed arm configuration, performing a long sequence of random actions, and sampling states randomly from the resulting trajectory. Some examples of such initial states are depicted in Figure 3.

Figure 3: Examples of initial states for the rotating-base experiments (left) and the fixed-base experiments (right). Starting states also include velocities, which are not shown.

During learning, following each training episode, the GPTD-learned parameters were recorded on file. Each set of GPTD parameters defines a value estimator, and therefore also a greedy policy with respect to the posterior value mean. Each such policy was evaluated by using it, starting from each of the 100 initial test states. For each starting state, we recorded whether or not a goal state was reached within the episode's time limit (4 seconds), and the duration of the episode (successful episodes terminate when a goal state is reached). These two measures of performance were averaged over the 100 starting states and plotted against the episode index, resulting in two corresponding learning curves for each experiment⁵. We started with a simple task in which reaching the goal is quite easy. Any point of the arm entering the goal circle was considered a success. The arm's base was fixed and the gravity constant was set to zero, corresponding to a scenario in which the arm moves on a horizontal frictionless plane. In the second experiment the task was made a little more difficult. The goal was moved further away from the base of the arm.
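The exploration schedule quoted above is simple to implement directly; the default values of $\varepsilon_0$ and $N_{1/2}$ below are illustrative, since the paper does not report the ones it used:

```python
# Sketch of the exploration schedule eps = eps0 * N_half / (N_half + n_goals),
# where n_goals counts successful episodes so far. The defaults for eps0 and
# N_half are assumed values for illustration only.
def epsilon(n_goals, eps0=0.5, n_half=20):
    return eps0 * n_half / (n_half + n_goals)

print(epsilon(0))     # eps0 before any success
print(epsilon(20))    # eps0 / 2 after N_half successful episodes
```

The schedule has the hyperbolic decay the text describes: exploration stays high while successes are rare and falls off as the policy starts reaching the goal reliably, so $\varepsilon$ shrinks exactly when the greedy policy becomes trustworthy.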
Moreover, gravity was set to its natural level, of 9.8 m s2 , with the motion of the arm now restricted to a vertical plane. The learning curves corresponding to these two experiments are shown in Figure 4. A success rate of 100% was reached after 10 and 20 episodes, respectively. In both cases, even after a success rate of 100% is attained, the mean time-to-goal keeps improving. The final dictionaries contained about 200 and 350 states, respectively. In our next two experiments, the arm had to reach a goal located so that it cannot be reached unless the base of the arm is allowed to rotate. We added base-rotating actions to the basic actions used in the previous experiments (see Figure 2 for an explanation). Allowing a rotating base significantly increases the size of the action set, as well the size of the reachable state space, making the learning task considerably more difficult. To make things even more difficult, we rewarded the arm only if it reached the goal with its tip, i.e. the two point-masses at the end of the arm. In the first experiment in this series, gravity was switched on. A 99% success rate was attained after 270 trials, with a final dictionary size of 5It is worth noting that this evaluation procedure requires by far more time than the actual learning, since each point in the graphs shown below requires us to perform 100 simulation runs. Whereas learning can be performed almost in real-time (depending on dictionary size), computing the statistics for a single learning run may take a day, or more. Figure 4: Success rate (solid) and mean time to goal (dashed) for a fixed-base arm in zero gravity (left), and with gravity (right). 100% success was reached after 10 and 20 trials, respectively. The insets illustrate one starting position and the location of the goal regions, in each case. about 600 states. In the second experiment gravity was switched off, but a circular region of obstacle states was placed between the arm’s base and the goal circle. 
If any part of the arm touched the obstacle, the episode immediately terminated with a negative reward of -2. Here, the success rate peaked at 40% after around 1000 episodes, and remained roughly constant thereafter. It should be taken into consideration that at least some of the 100 test starting states are so close to the obstacle that, regardless of the action taken, the arm cannot avoid hitting it. The learning curves are presented in Figure 5.

Figure 5: Success rate (solid) and mean time to goal (dashed) for a rotating-base arm with gravity switched on (left), and with gravity switched off but with an obstacle blocking the direct path to the goal (right). The arm has to rotate its base in order to reach the goal in either case (see insets). Positive reward was given only for arm-tip contact; any contact with the obstacle terminated the episode with a penalty. A 99% success rate was attained after 270 episodes for the first task, whereas for the second task the success rate reached 40%.

Video movies showing the arm in various scenarios are available at www.cs.ualberta.ca/∼yaki/movies/.

5 Discussion

Up to now, GPTD-based RL algorithms have only been tested on low-dimensional problem domains. Although kernel methods have handled high-dimensional data, such as handwritten digits, remarkably well in supervised learning domains, the applicability of the kernel-based GPTD approach to high-dimensional RL problems has remained an open question. The results presented in this paper are, in our view, a clear indication that GPTD methods are indeed scalable, and should be considered seriously as a possible solution method by practitioners facing large-scale RL problems. Further work on the theory and practice of GPTD methods is called for. Standard techniques for model selection and tuning of hyper-parameters can be incorporated straightforwardly into GPTD algorithms. Value iteration-based variants, i.e. "GPQ-learning", would provide yet another useful set of tools.
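The dictionary sizes reported above are a consequence of the online sparsification used by the GPTD algorithms: a new state is admitted to the dictionary only if its feature-space image cannot be approximated well by the images of states already stored. Below is a minimal sketch of such an approximate-linear-dependence test; the Gaussian kernel, the threshold `nu`, and the toy state stream are illustrative assumptions, not the settings used in the experiments (a real implementation would update the inverse kernel matrix incrementally instead of re-solving a linear system at every step):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def ald_test(dictionary, x, nu=1e-3, sigma=1.0):
    """Approximate-linear-dependence test: return (is_novel, coefficients).
    x is novel iff its squared residual in feature space exceeds nu."""
    if not dictionary:
        return True, np.array([])
    K = np.array([[gaussian_kernel(a, b, sigma) for b in dictionary]
                  for a in dictionary])
    k = np.array([gaussian_kernel(a, x, sigma) for a in dictionary])
    c = np.linalg.solve(K + 1e-10 * np.eye(len(K)), k)   # best projection coeffs
    residual = gaussian_kernel(x, x, sigma) - k @ c      # squared feature-space error
    return residual > nu, c

# Grow a dictionary over a stream of random 2-D states.
rng = np.random.default_rng(0)
dictionary = []
for _ in range(200):
    x = rng.uniform(-1, 1, size=2)
    novel, _ = ald_test(dictionary, x, nu=0.1)
    if novel:
        dictionary.append(x)
print(len(dictionary))  # far fewer than 200 states are retained
```

The threshold `nu` plays the role of an accuracy/compactness dial: larger values give the smaller dictionaries (and faster learning steps) mentioned in the text, at the cost of a coarser value-function representation.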
The Octopus arm domain is of independent interest, both to physiologists and robotics engineers. The fact that reasonable controllers for such a complex arm can be learned from trial and error, in a relatively short time, should not be underestimated. Further work in this direction should be aimed at extending the Octopus arm simulation to a full 3-dimensional model, as well as applying our RL algorithms to real robotic arms based on the muscular hydrostat principle, when these become available. Acknowledgments Y. E. was partially supported by the AICML and the Alberta Ingenuity fund. We would also like to thank the Ollendorff Minerva Center for supporting this project. References [1] G. Fiorito, C. V. Planta, and P. Scotto. Problem solving ability of Octopus Vulgaris Lamarck (Mollusca, Cephalopoda). Behavioral and Neural Biology, 53(2):217–230, 1990. [2] W.M. Kier and K.K. Smith. Tongues, tentacles and trunks: The biomechanics of movement in muscular-hydrostats. Zoological Journal of the Linnean Society, 83:307–324, 1985. [3] Y. Gutfreund, T. Flash, Y. Yarom, G. Fiorito, I. Segev, and B. Hochner. Organization of Octopus arm movements: A model system for studying the control of flexible arms. The Journal of Neuroscience, 16:7297–7307, 1996. [4] Y. Yekutieli, R. Sagiv-Zohar, R. Aharonov, Y. Engel, B. Hochner, and T. Flash. A dynamic model of the Octopus arm. I. Biomechanics of the Octopus reaching movement. Journal of Neurophysiology (in press), 2005. [5] Y. Bar-Cohen, editor. Electroactive Polymer (EAP) Actuators as Artificial Muscles - Reality, Potential and Challenges. SPIE Press, 2nd edition, 2004. [6] Y. Engel, S. Mannor, and R. Meir. Bayes meets Bellman: The Gaussian process approach to temporal difference learning. In Proc. of the 20th International Conference on Machine Learning, 2003. [7] Y. Engel, S. Mannor, and R. Meir. Reinforcement learning with Gaussian processes. In Proc. of the 22nd International Conference on Machine Learning, 2005. [8] Y. Engel.
Algorithms and Representations for Reinforcement Learning. PhD thesis, The Hebrew University of Jerusalem, 2005. www.cs.ualberta.ca/∼yaki/papers/thesis.ps. [9] R. Aharonov, Y. Engel, B. Hochner, and T. Flash. A dynamical model of the octopus arm. Neuroscience Letters, Suppl. 48, Proceedings of the 6th Annual Meeting of the Israeli Neuroscience Society, 1997. [10] D.P. Bertsekas and J.N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996. [11] R.S. Sutton and A.G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998. [12] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, Cambridge, England, 2004. [13] Y. Engel, P. Szabo, and D. Volkinshtein. Learning to control an Octopus arm with Gaussian process temporal difference methods. Technical report, Technion Institute of Technology, 2005. www.cs.ualberta.ca/∼yaki/reports/octopus.pdf.
| 2005 | 77 | 2,896 |
Improved Risk Tail Bounds for On-Line Algorithms*

Nicolò Cesa-Bianchi, DSI, Università di Milano, via Comelico 39, 20135 Milano, Italy, cesa-bianchi@dsi.unimi.it

Claudio Gentile, DICOM, Università dell'Insubria, via Mazzini 5, 21100 Varese, Italy, gentile@dsi.unimi.it

Abstract We prove the strongest known bound for the risk of hypotheses selected from the ensemble generated by running a learning algorithm incrementally on the training data. Our result is based on proof techniques that are remarkably different from the standard risk analysis based on uniform convergence arguments.

1 Introduction

In this paper, we analyze the risk of hypotheses selected from the ensemble obtained by running an arbitrary on-line learning algorithm on an i.i.d. sequence of training data. We describe a procedure that selects from the ensemble a hypothesis whose risk is, with high probability, at most $M_n + O\!\left(\frac{(\ln n)^2}{n} + \sqrt{\frac{M_n}{n}\ln n}\right)$, where $M_n$ is the average cumulative loss incurred by the on-line algorithm on a training sequence of length $n$. Note that this bound exhibits the "fast" rate $(\ln n)^2/n$ whenever the cumulative loss $nM_n$ is $O(1)$. This result is proven through a refinement of techniques that we used in [2] to prove the substantially weaker bound $M_n + O\big(\sqrt{(\ln n)/n}\big)$. As in the proof of the older result, we analyze the empirical process associated with a run of the on-line learner using exponential inequalities for martingales. However, this time we control the large deviations of the on-line process using Bernstein's maximal inequality rather than the Azuma-Hoeffding inequality. This provides a much tighter bound on the average risk of the ensemble. Finally, we relate the risk of a specific hypothesis within the ensemble to the average risk. As in [2], we select this hypothesis using a deterministic sequential testing procedure, but the use of Bernstein's inequality makes the analysis of this procedure far more complicated.
The study of the statistical risk of hypotheses generated by on-line algorithms, initiated by Littlestone [5], uses tools that are sharply different from those used for uniform convergence analysis, a popular approach based on the manipulation of suprema of empirical processes (see, e.g., [3]). Unlike uniform convergence, which is tailored to empirical risk minimization, our bounds hold for any learning algorithm. Indeed, disregarding efficiency issues, any learner can be run incrementally on a data sequence to generate an ensemble of hypotheses. The consequences of this line of research for kernel and margin-based algorithms have been presented in our previous work [2].

* Part of the results contained in this paper have been presented in a talk given at the NIPS 2004 workshop on "(Ab)Use of Bounds".

Notation. An example is a pair $(x, y)$, where $x \in \mathcal{X}$ (which we call an instance) is a data element and $y \in \mathcal{Y}$ is the label associated with it. Instances $x$ are tuples of numerical and/or symbolic attributes. Labels $y$ belong to a finite set of symbols (the class elements) or to an interval of the real line, depending on whether the task is classification or regression. We allow a learning algorithm to output hypotheses of the form $h : \mathcal{X} \to \mathcal{D}$, where $\mathcal{D}$ is a decision space not necessarily equal to $\mathcal{Y}$. The goodness of hypothesis $h$ on example $(x, y)$ is measured by the quantity $\ell(h(x), y)$, where $\ell : \mathcal{D} \times \mathcal{Y} \to \mathbb{R}$ is a nonnegative and bounded loss function.

2 A bound on the average risk

An on-line algorithm $A$ works in a sequence of trials. In each trial $t = 1, 2, \ldots$ the algorithm takes in input a hypothesis $H_{t-1}$ and an example $Z_t = (X_t, Y_t)$, and returns a new hypothesis $H_t$ to be used in the next trial. We follow the standard assumptions in statistical learning: the sequence of examples $Z^n = ((X_1, Y_1), \ldots, (X_n, Y_n))$ is drawn i.i.d. according to an unknown distribution over $\mathcal{X} \times \mathcal{Y}$. We also assume that the loss function $\ell$ satisfies $0 \le \ell \le 1$.
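To make the trial protocol concrete, the following sketch runs a simple incremental learner over an i.i.d. sample, storing the ensemble $H_0, \ldots, H_{n-1}$ and the average on-line loss. The perceptron is chosen here purely for illustration; it is not the algorithm $A$ of the paper, and the data generator is an assumption of this sketch:

```python
import numpy as np

def run_online(examples):
    """One pass of a perceptron; returns the hypothesis ensemble
    (as weight vectors) and the average on-line zero-one loss M_n."""
    d = len(examples[0][0])
    w = np.zeros(d)
    ensemble, losses = [], []
    for x, y in examples:            # trial t: H_{t-1} predicts on (X_t, Y_t)
        ensemble.append(w.copy())    # store H_{t-1}
        pred = 1 if w @ x >= 0 else -1
        losses.append(0 if pred == y else 1)   # loss values lie in [0, 1]
        if pred != y:
            w = w + y * x            # update produces H_t
    return ensemble, float(np.mean(losses))

# Linearly separable toy data (illustrative).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
u = np.array([1.0, -2.0, 0.5])
data = [(x, 1 if u @ x >= 0 else -1) for x in X]
ensemble, M_n = run_online(data)
print(len(ensemble), M_n)
```

Note that the ensemble and the statistic $M_n$ come out of a single pass, which is exactly what the bounds below consume.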
The success of a hypothesis $h$ is measured by its risk, denoted by $\mathrm{risk}(h)$. This is the expected loss of $h$ on an example $(X, Y)$ drawn from the underlying distribution, $\mathrm{risk}(h) = \mathbb{E}\,\ell(h(X), Y)$. Define also $\mathrm{risk}_{\mathrm{emp}}(h)$ to be the empirical risk of $h$ on a sample $Z^n$,
$$\mathrm{risk}_{\mathrm{emp}}(h) = \frac{1}{n}\sum_{t=1}^{n} \ell(h(X_t), Y_t).$$
Given a sample $Z^n$ and an on-line algorithm $A$, we use $H_0, H_1, \ldots, H_{n-1}$ to denote the ensemble of hypotheses generated by $A$. Note that the ensemble is a function of the random training sample $Z^n$. Our bounds hinge on the sample statistic
$$M_n = \frac{1}{n}\sum_{t=1}^{n} \ell(H_{t-1}(X_t), Y_t),$$
which can be easily computed as the on-line algorithm is run on $Z^n$. The following bound, a consequence of Bernstein's maximal inequality for martingales due to Freedman [4], is of primary importance for proving our results.

Lemma 1 Let $L_1, L_2, \ldots$ be a sequence of random variables, $0 \le L_t \le 1$. Define the bounded martingale difference sequence $V_t = \mathbb{E}[L_t \mid L_1, \ldots, L_{t-1}] - L_t$ and the associated martingale $S_n = V_1 + \cdots + V_n$ with conditional variance $K_n = \sum_{t=1}^{n} \mathrm{Var}[L_t \mid L_1, \ldots, L_{t-1}]$. Then, for all $s, k \ge 0$,
$$\mathbb{P}(S_n \ge s,\ K_n \le k) \le \exp\!\left(-\frac{s^2}{2k + 2s/3}\right).$$

The next proposition, derived from Lemma 1, establishes a bound on the average risk of the ensemble of hypotheses.

Proposition 2 Let $H_0, \ldots, H_{n-1}$ be the ensemble of hypotheses generated by an arbitrary on-line algorithm $A$. Then, for any $0 < \delta \le 1$,
$$\mathbb{P}\!\left(\frac{1}{n}\sum_{t=1}^{n} \mathrm{risk}(H_{t-1}) \ge M_n + \frac{36}{n}\ln\frac{nM_n+3}{\delta} + 2\sqrt{\frac{M_n}{n}\ln\frac{nM_n+3}{\delta}}\right) \le \delta.$$

The bound shown in Proposition 2 has the same rate as a bound recently proven by Zhang [6, Theorem 5]. However, rather than deriving the bound from Bernstein's inequality as we do, Zhang uses an ad hoc argument.

Proof. Let
$$\mu_n = \frac{1}{n}\sum_{t=1}^{n} \mathrm{risk}(H_{t-1}) \quad\text{and}\quad V_{t-1} = \mathrm{risk}(H_{t-1}) - \ell(H_{t-1}(X_t), Y_t) \quad\text{for } t \ge 1.$$
Let $\kappa_t$ be the conditional variance $\mathrm{Var}(\ell(H_{t-1}(X_t), Y_t) \mid Z_1, \ldots, Z_{t-1})$. Also, set for brevity $K_n = \sum_{t=1}^{n} \kappa_t$ and $K'_n = \big\lceil \sum_{t=1}^{n} \kappa_t \big\rceil$, and introduce the function $A(x) = 2\ln\frac{(x+1)(x+3)}{\delta}$ for $x \ge 0$.
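The "fast rate" behavior is easy to check numerically. The sketch below evaluates the right-hand side of Proposition 2 for a few sample sizes while the cumulative loss is held at $nM_n = 1$; `risk_bound` is a hypothetical helper, not code from the paper:

```python
import math

def risk_bound(M_n, n, delta=0.05):
    """RHS of Proposition 2 (illustrative helper)."""
    A = math.log((n * M_n + 3) / delta)
    return M_n + 36.0 * A / n + 2.0 * math.sqrt(M_n * A / n)

# When n*M_n stays O(1), the log factor is constant and the excess
# over M_n decays like 1/n.
for n in (10**2, 10**4, 10**6):
    M_n = 1.0 / n            # keeps n * M_n = 1
    print(n, risk_bound(M_n, n))
```

The printed excess shrinks roughly by a factor of 100 per row, matching the $O(1/n)$ behavior claimed for the average risk when the on-line algorithm makes only a bounded total loss.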
We find upper and lower bounds on the probability
$$\mathbb{P}\!\left(\sum_{t=1}^{n} V_{t-1} \ge A(K_n) + \sqrt{A(K_n)\,K_n}\right). \qquad (1)$$
The upper bound is determined through a simple stratification argument over Lemma 1. We can write
$$\mathbb{P}\!\left(\sum_{t=1}^{n} V_{t-1} \ge A(K_n) + \sqrt{A(K_n)\,K_n}\right) \le \mathbb{P}\!\left(\sum_{t=1}^{n} V_{t-1} \ge A(K'_n) + \sqrt{A(K'_n)\,K'_n}\right)$$
$$\le \sum_{s=0}^{n} \mathbb{P}\!\left(\sum_{t=1}^{n} V_{t-1} \ge A(s) + \sqrt{A(s)\,s},\ K'_n = s\right) \le \sum_{s=0}^{n} \mathbb{P}\!\left(\sum_{t=1}^{n} V_{t-1} \ge A(s) + \sqrt{A(s)\,s},\ K_n \le s+1\right)$$
$$\le \sum_{s=0}^{n} \exp\!\left(-\frac{\big(A(s) + \sqrt{A(s)\,s}\big)^2}{\tfrac{2}{3}\big(A(s) + \sqrt{A(s)\,s}\big) + 2(s+1)}\right) \qquad \text{(using Lemma 1).}$$
Since $\frac{(A(s)+\sqrt{A(s)\,s})^2}{\frac{2}{3}(A(s)+\sqrt{A(s)\,s}) + 2(s+1)} \ge A(s)/2$ for all $s \ge 0$, we obtain
$$(1) \le \sum_{s=0}^{n} e^{-A(s)/2} = \sum_{s=0}^{n} \frac{\delta}{(s+1)(s+3)} < \delta. \qquad (2)$$
As far as the lower bound on (1) is concerned, we note that our assumption $0 \le \ell \le 1$ implies $\kappa_t \le \mathrm{risk}(H_{t-1})$ for all $t$ which, in turn, gives $K_n \le n\mu_n$. Thus
$$(1) = \mathbb{P}\big(n\mu_n - nM_n \ge A(K_n) + \sqrt{A(K_n)\,K_n}\big) \ge \mathbb{P}\big(n\mu_n - nM_n \ge A(n\mu_n) + \sqrt{A(n\mu_n)\,n\mu_n}\big)$$
$$= \mathbb{P}\Big(2n\mu_n \ge 2nM_n + 3A(n\mu_n) + \sqrt{4nM_n\,A(n\mu_n) + 5A(n\mu_n)^2}\Big) = \mathbb{P}\Big(x \ge B + \tfrac{3}{2}A(x) + \sqrt{B\,A(x) + \tfrac{5}{4}A(x)^2}\Big),$$
where we set for brevity $x = n\mu_n$ and $B = nM_n$. We would like to solve the inequality
$$x \ge B + \tfrac{3}{2}A(x) + \sqrt{B\,A(x) + \tfrac{5}{4}A(x)^2} \qquad (3)$$
w.r.t. $x$. More precisely, we would like to find a suitable upper bound on the (unique) $x^*$ such that the above is satisfied as an equality. A (tedious) derivative argument, along with the upper bound $A(x) \le 4\ln\frac{x+3}{\delta}$, shows that $x' = B + 2\sqrt{B\ln\frac{B+3}{\delta}} + 36\ln\frac{B+3}{\delta}$ makes the left-hand side of (3) larger than its right-hand side. Thus $x'$ is an upper bound on $x^*$, and we conclude that $(1) \ge \mathbb{P}(x \ge x')$, which, recalling the definitions of $x$ and $B$, and combining with (2), proves the bound. □

3 Selecting a good hypothesis from the ensemble

If the decision space $\mathcal{D}$ of $A$ is a convex set and the loss function $\ell$ is convex in its first argument, then via Jensen's inequality we can directly apply the bound of Proposition 2 to the risk of the average hypothesis $\overline{H} = \frac{1}{n}\sum_{t=1}^{n} H_{t-1}$.
This yields
$$\mathbb{P}\!\left(\mathrm{risk}(\overline{H}) \ge M_n + \frac{36}{n}\ln\frac{nM_n+3}{\delta} + 2\sqrt{\frac{M_n}{n}\ln\frac{nM_n+3}{\delta}}\right) \le \delta. \qquad (4)$$
Observe that this is a $O(1/n)$ bound whenever the cumulative loss $nM_n$ is $O(1)$. If the convexity hypotheses do not hold (as in the case of classification problems), then the bound in (4) applies to a hypothesis randomly drawn from the ensemble (this was investigated in [1], though with different goals). In this section we show how to deterministically pick from the ensemble a hypothesis whose risk is close to the average ensemble risk. To see how this could be done, let us first introduce the functions
$$\varepsilon_\delta(r, t) = \frac{B_\delta}{3(n-t)} + \sqrt{\frac{2rB_\delta}{n-t}} \quad\text{and}\quad c_\delta(r, t) = \varepsilon_\delta\!\left(r + \sqrt{\frac{2rB_\delta}{n-t}},\ t\right), \quad\text{with } B_\delta = \ln\frac{n(n+2)}{\delta}.$$
Let $\mathrm{risk}_{\mathrm{emp}}(H_t, t+1) + \varepsilon_\delta(\mathrm{risk}_{\mathrm{emp}}(H_t, t+1), t)$ be the penalized empirical risk of hypothesis $H_t$, where
$$\mathrm{risk}_{\mathrm{emp}}(H_t, t+1) = \frac{1}{n-t}\sum_{i=t+1}^{n} \ell(H_t(X_i), Y_i)$$
is the empirical risk of $H_t$ on the remaining sample $Z_{t+1}, \ldots, Z_n$. We now analyze the performance of the learning algorithm that returns the hypothesis $\hat{H}$ minimizing the penalized risk estimate over all hypotheses in the ensemble, i.e.,
$$\hat{H} = \operatorname*{argmin}_{0 \le t < n}\big(\mathrm{risk}_{\mathrm{emp}}(H_t, t+1) + \varepsilon_\delta(\mathrm{risk}_{\mathrm{emp}}(H_t, t+1), t)\big). \qquad (5)$$
Note that, from an algorithmic point of view, this hypothesis is fairly easy to compute. In particular, if the underlying on-line algorithm is a standard kernel-based algorithm, $\hat{H}$ can be calculated via a single sweep through the example sequence.

Lemma 3 Let $H_0, \ldots, H_{n-1}$ be the ensemble of hypotheses generated by an arbitrary on-line algorithm $A$ working with a loss $\ell$ satisfying $0 \le \ell \le 1$. Then, for any $0 < \delta \le 1$, the hypothesis $\hat{H}$ satisfies
$$\mathbb{P}\!\left(\mathrm{risk}(\hat{H}) > \min_{0 \le t < n}\big(\mathrm{risk}(H_t) + 2\,c_\delta(\mathrm{risk}(H_t), t)\big)\right) \le \delta.$$

Proof. We introduce the short-hand notation $R_t = \mathrm{risk}_{\mathrm{emp}}(H_t, t+1)$,
$$\hat{T} = \operatorname*{argmin}_{0 \le t < n}\big(R_t + \varepsilon_\delta(R_t, t)\big), \qquad T^* = \operatorname*{argmin}_{0 \le t < n}\big(\mathrm{risk}(H_t) + 2\,c_\delta(\mathrm{risk}(H_t), t)\big).$$
Also, let $H^* = H_{T^*}$ and $R^* = \mathrm{risk}_{\mathrm{emp}}(H_{T^*}, T^*+1) = R_{T^*}$. Note that $\hat{H}$ defined in (5) coincides with $H_{\hat{T}}$.
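The minimization in (5) is straightforward to implement: for each $t$ one needs the empirical risk of $H_t$ on the remaining examples plus the penalty $\varepsilon_\delta$. A minimal sketch, assuming the per-example losses are precomputed in a matrix (row $t$ holds the losses of $H_t$) and using a penalty of Bernstein form for $\varepsilon_\delta$, with constants that are a best-effort reading of the text rather than a verified transcription:

```python
import math

def select_hypothesis(loss, delta=0.05):
    """loss[t][j] = loss of H_t on example j (only j >= t is used, i.e. the
    examples H_t has not seen); returns the index of the minimizer in (5)."""
    n = len(loss)
    B = math.log(n * (n + 2) / delta)          # B_delta = ln(n(n+2)/delta)
    best_t, best_val = 0, float("inf")
    for t in range(n):
        tail = loss[t][t:]                     # suffix risks of H_t
        m = len(tail)                          # m = n - t remaining examples
        r = sum(tail) / m
        val = r + B / (3 * m) + math.sqrt(2 * r * B / m)   # penalized risk
        if val < best_val:
            best_t, best_val = t, val
    return best_t

# Toy profile: later hypotheses have smaller loss, but very late ones are
# penalized because their suffix estimate rests on few examples.
n = 50
loss = [[max(0.0, 0.5 - 0.01 * t)] * n for t in range(n)]
print(select_hypothesis(loss))
```

The trade-off visible here is the point of the penalty: it stops the procedure from trusting a late hypothesis whose held-out estimate is based on only a handful of examples.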
Finally, let
$$Q(r, t) = \frac{\sqrt{2B_\delta\big(2B_\delta + 9r(n-t)\big)} - 2B_\delta}{3(n-t)}.$$
With this notation we can write
$$\mathbb{P}\big(\mathrm{risk}(\hat{H}) > \mathrm{risk}(H^*) + 2\,c_\delta(\mathrm{risk}(H^*), T^*)\big)$$
$$\le \mathbb{P}\big(\mathrm{risk}(\hat{H}) > \mathrm{risk}(H^*) + 2\,c_\delta(R^* - Q(R^*, T^*), T^*)\big) + \mathbb{P}\big(\mathrm{risk}(H^*) < R^* - Q(R^*, T^*)\big)$$
$$\le \mathbb{P}\big(\mathrm{risk}(\hat{H}) > \mathrm{risk}(H^*) + 2\,c_\delta(R^* - Q(R^*, T^*), T^*)\big) + \sum_{t=0}^{n-1} \mathbb{P}\big(\mathrm{risk}(H_t) < R_t - Q(R_t, t)\big).$$
Applying the standard Bernstein inequality (see, e.g., [3, Ch. 8]) to the random variables $R_t$ with $|R_t| \le 1$ and expected value $\mathrm{risk}(H_t)$, and upper bounding the variance of $R_t$ with $\mathrm{risk}(H_t)$, yields
$$\mathbb{P}\!\left(\mathrm{risk}(H_t) < R_t - \frac{B_\delta + \sqrt{B_\delta\big(B_\delta + 18(n-t)\,\mathrm{risk}(H_t)\big)}}{3(n-t)}\right) \le e^{-B_\delta}.$$
With a little algebra, it is easy to show that
$$\mathrm{risk}(H_t) < R_t - \frac{B_\delta + \sqrt{B_\delta\big(B_\delta + 18(n-t)\,\mathrm{risk}(H_t)\big)}}{3(n-t)}$$
is equivalent to $\mathrm{risk}(H_t) < R_t - Q(R_t, t)$. Hence, we get
$$\mathbb{P}\big(\mathrm{risk}(\hat{H}) > \mathrm{risk}(H^*) + 2\,c_\delta(\mathrm{risk}(H^*), T^*)\big) \le \mathbb{P}\big(\mathrm{risk}(\hat{H}) > \mathrm{risk}(H^*) + 2\,c_\delta(R^* - Q(R^*, T^*), T^*)\big) + n\,e^{-B_\delta}$$
$$\le \mathbb{P}\big(\mathrm{risk}(\hat{H}) > \mathrm{risk}(H^*) + 2\,\varepsilon_\delta(R^*, T^*)\big) + n\,e^{-B_\delta},$$
where in the last step we used
$$Q(r, t) \le \sqrt{\frac{2B_\delta r}{n-t}} \quad\text{and}\quad c_\delta\!\left(r - \sqrt{\frac{2B_\delta r}{n-t}},\ t\right) = \varepsilon_\delta(r, t).$$
Set for brevity $E = \varepsilon_\delta(R^*, T^*)$. We have
$$\mathbb{P}\big(\mathrm{risk}(\hat{H}) > \mathrm{risk}(H^*) + 2E\big) = \mathbb{P}\big(\mathrm{risk}(\hat{H}) > \mathrm{risk}(H^*) + 2E,\ R_{\hat{T}} + \varepsilon_\delta(R_{\hat{T}}, \hat{T}) \le R^* + E\big)$$
(since $R_{\hat{T}} + \varepsilon_\delta(R_{\hat{T}}, \hat{T}) \le R^* + E$ holds with certainty)
$$\le \sum_{t=0}^{n-1} \mathbb{P}\big(R_t + \varepsilon_\delta(R_t, t) \le R^* + E,\ \mathrm{risk}(H_t) > \mathrm{risk}(H^*) + 2E\big). \qquad (6)$$
Now, if $R_t + \varepsilon_\delta(R_t, t) \le R^* + E$ holds, then at least one of the following three conditions
$$R_t \le \mathrm{risk}(H_t) - \varepsilon_\delta(R_t, t), \qquad R^* > \mathrm{risk}(H^*) + E, \qquad \mathrm{risk}(H_t) - \mathrm{risk}(H^*) < 2E$$
must hold. Hence, for any fixed $t$ we can write
$$\mathbb{P}\big(R_t + \varepsilon_\delta(R_t, t) \le R^* + E,\ \mathrm{risk}(H_t) > \mathrm{risk}(H^*) + 2E\big)$$
$$\le \mathbb{P}\big(R_t \le \mathrm{risk}(H_t) - \varepsilon_\delta(R_t, t),\ \mathrm{risk}(H_t) > \mathrm{risk}(H^*) + 2E\big) + \mathbb{P}\big(R^* > \mathrm{risk}(H^*) + E,\ \mathrm{risk}(H_t) > \mathrm{risk}(H^*) + 2E\big)$$
$$\quad + \mathbb{P}\big(\mathrm{risk}(H_t) - \mathrm{risk}(H^*) < 2E,\ \mathrm{risk}(H_t) > \mathrm{risk}(H^*) + 2E\big)$$
$$\le \mathbb{P}\big(R_t \le \mathrm{risk}(H_t) - \varepsilon_\delta(R_t, t)\big) + \mathbb{P}\big(R^* > \mathrm{risk}(H^*) + E\big).$$
(7)

Plugging (7) into (6) we have
$$\mathbb{P}\big(\mathrm{risk}(\hat{H}) > \mathrm{risk}(H^*) + 2E\big) \le \sum_{t=0}^{n-1} \mathbb{P}\big(R_t \le \mathrm{risk}(H_t) - \varepsilon_\delta(R_t, t)\big) + n\,\mathbb{P}\big(R^* > \mathrm{risk}(H^*) + E\big)$$
$$\le n\,e^{-B_\delta} + n \sum_{t=0}^{n-1} \mathbb{P}\big(R_t \ge \mathrm{risk}(H_t) + \varepsilon_\delta(R_t, t)\big) \le n\,e^{-B_\delta} + n^2\,e^{-B_\delta},$$
where in the last two inequalities we again applied Bernstein's inequality to the random variables $R_t$ with mean $\mathrm{risk}(H_t)$. Putting everything together we obtain
$$\mathbb{P}\big(\mathrm{risk}(\hat{H}) > \mathrm{risk}(H^*) + 2\,c_\delta(\mathrm{risk}(H^*), T^*)\big) \le (2n + n^2)\,e^{-B_\delta},$$
which, recalling that $B_\delta = \ln\frac{n(n+2)}{\delta}$, implies the thesis. □

Fix $n \ge 1$ and $\delta \in (0, 1)$. For each $t = 0, \ldots, n-1$, introduce the function
$$f_t(x) = x + \frac{11C}{3}\cdot\frac{\ln(n-t)+1}{n-t} + 2\sqrt{\frac{2Cx}{n-t}}, \qquad x \ge 0,$$
where $C = \ln\frac{2n(n+2)}{\delta}$. Note that each $f_t$ is monotonically increasing. We are now ready to state and prove the main result of this paper.

Theorem 4 Fix any loss function $\ell$ satisfying $0 \le \ell \le 1$. Let $H_0, \ldots, H_{n-1}$ be the ensemble of hypotheses generated by an arbitrary on-line algorithm $A$, and let $\hat{H}$ be the hypothesis minimizing the penalized empirical risk expression obtained by replacing $\varepsilon_\delta$ with $\varepsilon_{\delta/2}$ in (5). Then, for any $0 < \delta \le 1$, $\hat{H}$ satisfies
$$\mathbb{P}\!\left(\mathrm{risk}(\hat{H}) \ge \min_{0 \le t < n} f_t\!\left(M_{t,n} + \frac{36}{n-t}\ln\frac{2n(n+3)}{\delta} + 2\sqrt{\frac{M_{t,n}}{n-t}\ln\frac{2n(n+3)}{\delta}}\right)\right) \le \delta,$$
where $M_{t,n} = \frac{1}{n-t}\sum_{i=t+1}^{n} \ell(H_{i-1}(X_i), Y_i)$. In particular, upper bounding the minimum over $t$ with $t = 0$ yields
$$\mathbb{P}\!\left(\mathrm{risk}(\hat{H}) \ge f_0\!\left(M_n + \frac{36}{n}\ln\frac{2n(n+3)}{\delta} + 2\sqrt{\frac{M_n}{n}\ln\frac{2n(n+3)}{\delta}}\right)\right) \le \delta. \qquad (8)$$
For $n \to \infty$, bound (8) shows that $\mathrm{risk}(\hat{H})$ is bounded with high probability by
$$M_n + O\!\left(\frac{\ln^2 n}{n} + \sqrt{\frac{M_n \ln n}{n}}\right).$$
If the empirical cumulative loss $nM_n$ is small (say, $M_n \le c/n$, where $c$ is constant with $n$), then our penalized empirical risk minimizer $\hat{H}$ achieves a $O\big((\ln^2 n)/n\big)$ risk bound. Also, recall that, in this case, under convexity assumptions the average hypothesis $\overline{H}$ achieves the sharper bound $O(1/n)$.

Proof. Let $\mu_{t,n} = \frac{1}{n-t}\sum_{i=t}^{n-1} \mathrm{risk}(H_i)$. Applying Lemma 3 with $\varepsilon_{\delta/2}$ we obtain
$$\mathbb{P}\!\left(\mathrm{risk}(\hat{H}) > \min_{0 \le t < n}\big(\mathrm{risk}(H_t) + 2\,c_{\delta/2}(\mathrm{risk}(H_t), t)\big)\right) \le \frac{\delta}{2}. \qquad (9)$$
We then observe that
$$\min_{0 \le t < n}\big(\mathrm{risk}(H_t) + 2\,c_{\delta/2}(\mathrm{risk}(H_t), t)\big) = \min_{0 \le t < n}\ \min_{t \le i < n}\big(\mathrm{risk}(H_i) + 2\,c_{\delta/2}(\mathrm{risk}(H_i), i)\big)$$
$$\le \min_{0 \le t < n} \frac{1}{n-t}\sum_{i=t}^{n-1}\big(\mathrm{risk}(H_i) + 2\,c_{\delta/2}(\mathrm{risk}(H_i), i)\big) \le \cdots \le \min_{0 \le t < n} f_t(\mu_{t,n}),$$
where the omitted steps expand $c_{\delta/2}$ and use the inequality $\sqrt{x+y} \le \sqrt{x} + \frac{y}{2\sqrt{x}}$, the bound $\sum_{i=1}^{k} \frac{1}{i} \le 1 + \ln k$, and the concavity of the square root. Now, it is clear that Proposition 2 can be immediately generalized to imply the following set of inequalities, one for each $t = 0, \ldots, n-1$:
$$\mathbb{P}\!\left(\mu_{t,n} \ge M_{t,n} + \frac{36A}{n-t} + 2\sqrt{\frac{M_{t,n}\,A}{n-t}}\right) \le \frac{\delta}{2n}, \qquad (10)$$
where $A = \ln\frac{2n(n+3)}{\delta}$. Introduce the random variables $K_0, \ldots, K_{n-1}$, to be defined later. We can write
$$\mathbb{P}\!\left(\min_{0 \le t < n}\big(\mathrm{risk}(H_t) + 2\,c_{\delta/2}(\mathrm{risk}(H_t), t)\big) \ge \min_{0 \le t < n} K_t\right) \le \mathbb{P}\!\left(\min_{0 \le t < n} f_t(\mu_{t,n}) \ge \min_{0 \le t < n} K_t\right) \le \sum_{t=0}^{n-1} \mathbb{P}\big(f_t(\mu_{t,n}) \ge K_t\big).$$
Now, for each $t = 0, \ldots, n-1$, define
$$K_t = f_t\!\left(M_{t,n} + \frac{36A}{n-t} + 2\sqrt{\frac{M_{t,n}\,A}{n-t}}\right).$$
Then (10) and the monotonicity of $f_0, \ldots, f_{n-1}$ allow us to obtain
$$\mathbb{P}\!\left(\min_{0 \le t < n}\big(\mathrm{risk}(H_t) + 2\,c_{\delta/2}(\mathrm{risk}(H_t), t)\big) \ge \min_{0 \le t < n} K_t\right) \le \sum_{t=0}^{n-1} \mathbb{P}\!\left(\mu_{t,n} \ge M_{t,n} + \frac{36A}{n-t} + 2\sqrt{\frac{M_{t,n}\,A}{n-t}}\right) \le \frac{\delta}{2}.$$
Combining with (9) concludes the proof. □

4 Conclusions and current research issues

We have shown tail risk bounds for specific hypotheses selected from the ensemble generated by the run of an arbitrary on-line algorithm. Proposition 2, our simplest bound, is proven via an easy application of Bernstein's maximal inequality for martingales, a quite basic result in probability theory. The analysis of Theorem 4 is also centered on the same martingale inequality. An open problem is to simplify this analysis, possibly obtaining a more readable bound.
Also, the bound shown in Theorem 4 contains $\ln n$ terms. We do not know whether these logarithmic terms can be improved to $\ln(nM_n)$, similarly to Proposition 2. A further open problem is to prove lower bounds, even in the special case when $nM_n$ is bounded by a constant.

References

[1] A. Blum, A. Kalai, and J. Langford. Beating the hold-out. In Proc. 12th COLT, 1999.
[2] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. IEEE Trans. on Information Theory, 50(9):2050–2057, 2004.
[3] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer Verlag, 1996.
[4] D. A. Freedman. On tail probabilities for martingales. The Annals of Probability, 3:100–118, 1975.
[5] N. Littlestone. From on-line to batch learning. In Proc. 2nd COLT, 1989.
[6] T. Zhang. Data dependent concentration bounds for sequential prediction algorithms. In Proc. 18th COLT, 2005.
| 2005 | 78 | 2,897 |
Kernelized Infomax Clustering

Felix V. Agakov, Edinburgh University, Edinburgh EH1 2QL, U.K., felixa@inf.ed.ac.uk

David Barber, IDIAP Research Institute, CH-1920 Martigny, Switzerland, david.barber@idiap.ch

Abstract We propose a simple information-theoretic approach to soft clustering based on maximizing the mutual information I(x, y) between the unknown cluster labels y and the training patterns x with respect to parameters of specifically constrained encoding distributions. The constraints are chosen such that patterns are likely to be clustered similarly if they lie close to specific unknown vectors in the feature space. The method may be conveniently applied to learning the optimal affinity matrix, which corresponds to learning parameters of the kernelized encoder. The procedure does not require computations of eigenvalues of the Gram matrices, which makes it potentially attractive for clustering large data sets.

1 Introduction

Let x ∈ ℝ^{|x|} be a visible pattern, and y ∈ {y_1, . . . , y_{|y|}} its discrete unknown cluster label. Rather than learning a density model of the observations, our goal here will be to learn a mapping x → y from the observations to the latent codes (cluster labels) by optimizing a formal measure of coding efficiency. Good codes y should be in some way informative about the underlying high-dimensional source vectors x, so that the useful information contained in the sources is not lost. The fundamental measure in this context is the mutual information

I(x, y) ≝ H(x) − H(x|y) ≡ H(y) − H(y|x),    (1)

which indicates the decrease in uncertainty about the pattern x due to the knowledge of the underlying cluster label y (e.g. Cover and Thomas (1991)). Here H(y) ≡ −⟨log p(y)⟩_{p(y)} and H(y|x) ≡ −⟨log p(y|x)⟩_{p(x,y)} are the marginal and conditional entropies respectively, and the brackets ⟨. . .⟩_p represent averages over p. In our case the encoder model is defined as

p(x, y) ∝ ∑_{m=1}^{M} δ(x − x^{(m)}) p(y|x),    (2)

where {x^{(m)} | m = 1, . . . , M} is a set of training patterns.
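Under the encoder model (2), the mutual information (1) reduces to quantities computable directly from the M × |y| matrix of responsibilities p(y_j|x^{(m)}): I(x, y) = H(y) − H(y|x), with p(y) obtained by averaging the responsibilities over the training set. A minimal sketch (the toy responsibility matrices are illustrative):

```python
import numpy as np

def mutual_information(resp, eps=1e-12):
    """I(x, y) = H(y) - H(y|x) for responsibilities resp[m, j] = p(y_j | x^(m))."""
    p_y = resp.mean(axis=0)                                  # marginal p(y)
    H_y = -np.sum(p_y * np.log(p_y + eps))
    H_y_given_x = -np.mean(np.sum(resp * np.log(resp + eps), axis=1))
    return H_y - H_y_given_x

# Hard, balanced assignments achieve the maximum I = log |y| ...
hard = np.repeat(np.eye(3), 10, axis=0)                      # 30 patterns, 3 clusters
# ... while uniform (maximally soft) assignments give I = 0.
soft = np.full((30, 3), 1.0 / 3.0)
print(mutual_information(hard), mutual_information(soft))
```

This makes concrete the two effects discussed below: degenerate (uniform or collapsed) allocations depress I, while hard assignments to equiprobable regions maximize it.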
Our goal is to maximize (1) with respect to parameters of a constrained encoding distribution p(y|x). In contrast to most applications of the infomax principle (Linsker (1988)) in stochastic channels (e.g. Brunel and Nadal (1998); Fisher and Principe (1998); Torkkola and Campbell (2000)), optimization of the objective (1) is computationally tractable since the cardinality of the code space |y| (the number of clusters) will typically be low. Indeed, had the code space been high-dimensional, computation of I(x, y) would have required evaluation of the generally intractable entropy of the mixture H(y), and approximations would have needed to be considered (e.g. Barber and Agakov (2003); Agakov and Barber (2006)). Maximization of the mutual information with respect to parameters of the encoder model effectively defines a discriminative unsupervised optimization framework, where the model is parameterized similarly to a conditionally trained classifier, but where the cluster allocations are generally unknown. Training such models p(y|x) by maximizing the likelihood p(x) would be meaningless, as the cluster variables would marginalize out, which also motivates our information-theoretic approach. In this way we may extract soft cluster allocations directly from the training set, with no additional information about class labels, relevance patterns, etc. required. This is an important difference from other clustering techniques that make recourse to information theory, which consider different channels and generally require additional information about relevance or irrelevance variables (cf. Tishby et al. (1999); Chechik and Tishby (2002); Dhillon and Guan (2003)). Our infomax approach is in contrast with probabilistic methods based on likelihood maximization.
There the task of finding an optimal cluster allocation y for an observed pattern x may be viewed as an inference problem in generative models y → x, where the probability of the data p(x) = ∑_y p(y) p(x|y) is defined as a mixture of |y| processes. The key idea of fitting such models to data is to find a constrained probability distribution p(x) which would be likely to generate the visible patterns {x^{(1)}, . . . , x^{(M)}} (this is commonly achieved by maximizing the marginal likelihood for deterministic parameters of the constrained distribution). The unknown clusters y corresponding to each pattern x may then be assigned according to the posterior p(y|x) ∝ p(y)p(x|y). Such generative approaches are well-known but suffer from the constraint that p(x|y) is a correctly normalised distribution in x. In high dimensions |x| this restricts the class of generative distributions usually to (mixtures of) Gaussians whose mean is dependent (in a linear or non-linear way) on the latent cluster y. Typically data will lie on low-dimensional curved manifolds embedded in the high-dimensional x-space. If we are restricted to using mixtures of Gaussians to model this curved manifold, typically a very large number of mixture components will be required. No such restrictions apply in the infomax case, so that the mappings p(y|x) may be very complex, subject only to sensible clustering constraints.

2 Clustering in Nonlinear Encoder Models

Arguably, there are at least two requirements which a meaningful cluster allocation procedure should satisfy. Firstly, clusters should be, in some sense, locally smooth. For example, each pair of source vectors should have a high probability of being assigned to the same cluster if the vectors satisfy specific geometric constraints. Secondly, we may wish to avoid assigning unique cluster labels to outliers (or other constrained regions in the data space), so that under-represented regions in the data space are not over-represented in the code space.
Note that degenerate cluster allocations are generally suboptimal under the objective (1), as they would lead to a reduction in the marginal entropy H(y). On the other hand, it is intuitive that maximization of the mutual information I(x, y) favors hard assignments of cluster labels to equiprobable data regions, as this would result in growth of H(y) and reduction of H(y|x).

2.1 Learning Optimal Parameters

Local smoothness and "softness" of the clusters may be enforced by imposing appropriate constraints on p(y|x). A simple choice of the encoder is

p(y_j|x^{(i)}) ∝ exp{−‖x^{(i)} − w_j‖²/s_j + b_j},    (3)

where the cluster centers w_j ∈ ℝ^{|x|}, the dispersions s_j, and the biases b_j are the encoder parameters to be learned. Clearly, under the encoding distribution (3), patterns x lying close to specific centers w_j in the data space will tend to be clustered similarly. In principle, we could consider other choices of p(y|x); however, (3) will prove to be particularly convenient for the kernelized extensions. Learning the optimal cluster allocations corresponds to maximizing (1) with respect to the encoder parameters (3). The gradients are given by

∂I(x, y)/∂w_j = (1/M) ∑_{m=1}^{M} p(y_j|x^{(m)}) ((x^{(m)} − w_j)/s_j) α_j^{(m)},    (4)

∂I(x, y)/∂s_j = (1/M) ∑_{m=1}^{M} p(y_j|x^{(m)}) (‖x^{(m)} − w_j‖²/(2s_j²)) α_j^{(m)}.    (5)

Analogously, we get ∂I(x, y)/∂b_j = (1/M) ∑_{m=1}^{M} p(y_j|x^{(m)}) α_j^{(m)}. Expressions (4) and (5) have the form of the weighted EM updates for isotropic Gaussian mixtures, with the weighting coefficients α_j^{(m)} defined as

α_j^{(m)} ≝ α_j(x^{(m)}) ≝ log( p(y_j|x^{(m)}) / p(y_j) ) − KL( p(y|x^{(m)}) ‖ ⟨p(y|x)⟩_{p̃(x)} ),    (6)

where KL denotes the Kullback-Leibler divergence (e.g. Cover and Thomas (1991)), and p̃(x) ∝ ∑_m δ(x − x^{(m)}) is the empirical distribution. Clearly, if α_j^{(m)} is kept fixed for all m = 1, . . . , M and j = 1, . . . , |y|, the gradients (4) are identical to those obtained by maximizing the log-likelihood of a Gaussian mixture model (up to irrelevant constant pre-factors).
Generally, however, the coefficients α_j^{(m)} will be functions of w_l, s_l, and b_l for all cluster labels l = 1, . . . , |y|. In practice, we may impose a simple construction ensuring that s_j > 0, for example by assuming that s_j = exp{s̃_j} where s̃_j ∈ ℝ. For this case, we may re-express the gradients for the variances as ∂I(x, y)/∂s̃_j = s_j ∂I(x, y)/∂s_j. Expressions (4) and (5) may then be used to perform gradient ascent on I(x, y) for w_j, s̃_j, and b_j, where j = 1, . . . , |y|. After training, the optimal cluster allocations may be assigned according to the encoding distribution p(y|x).

2.2 Infomax Clustering with Kernelized Encoder Models

We now extend (3) by considering a kernelized parameterization of a nonlinear encoder. Let us assume that the source patterns x^{(i)}, x^{(j)} have a high probability of being assigned to the same cluster if they lie close to a specific cluster center in some feature space. One choice of the encoder distribution for this case is

p(y_j|x^{(i)}) ∝ exp{−‖φ(x^{(i)}) − w_j‖²/s_j + b_j},    (7)

where φ(x^{(i)}) ∈ ℝ^{|φ|} is the feature vector corresponding to the source pattern x^{(i)}, and w_j ∈ ℝ^{|φ|} is the (unknown) cluster center in the feature space. The feature space may be very high- or even infinite-dimensional. Since each cluster center w_j ∈ ℝ^{|φ|} lives in the same space as the projected sources φ(x^{(i)}), it is representable in the basis of the projections as

w_j = ∑_{m=1}^{M} α_{mj} φ(x^{(m)}) + w_j^⊥,    (8)

where w_j^⊥ ∈ ℝ^{|φ|} is orthogonal to the span of φ(x^{(1)}), . . . , φ(x^{(M)}), and {α_{mj}} is a set of coefficients (here j and m index the |y| codes and M patterns respectively). Then we may transform the encoder distribution (7) to

p(y_j|x^{(m)}) ∝ exp{ −( K_{mm} − 2 k^T(x^{(m)}) a_j + a_j^T K a_j + c_j )/s_j } ≝ exp{−f_j(x^{(m)})},    (9)

where k(x^{(m)}) corresponds to the mth column (or row) of the Gram matrix K ≝ {K_{ij}} ≝ {φ(x^{(i)})^T φ(x^{(j)})} ∈ ℝ^{M×M}, a_j ∈ ℝ^M is the jth column of the matrix of coefficients A ≝ {α_{mj}} ∈ ℝ^{M×|y|}, and c_j = (w_j^⊥)^T w_j^⊥ − s_j b_j.
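Putting the pieces of Section 2.1 together, gradient ascent on I(x, y) can be sketched as below for the encoder (3). For brevity, only the centers w_j are updated (dispersions and biases held fixed); the learning rate, data, and initialization are illustrative assumptions rather than the settings used in the paper:

```python
import numpy as np

def responsibilities(X, W, s, b):
    """p(y_j | x^(m)) for the encoder (3)."""
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1)   # M x |y| squared distances
    logits = -d2 / s[None, :] + b[None, :]
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def empirical_mi(p, eps=1e-12):
    """I(x, y) = H(y) - H(y|x) under the empirical distribution."""
    p_y = p.mean(axis=0)
    return (-(p_y * np.log(p_y + eps)).sum()
            + (p * np.log(p + eps)).sum(axis=1).mean())

def infomax_step(X, W, s, b, lr=0.5, eps=1e-12):
    """One gradient-ascent step on I(x, y) w.r.t. the centers, using (4) and (6)."""
    p = responsibilities(X, W, s, b)                      # M x |y|
    p_y = p.mean(axis=0)
    log_ratio = np.log(p + eps) - np.log(p_y + eps)[None, :]
    kl = (p * log_ratio).sum(axis=1, keepdims=True)       # KL(p(y|x^m) || p(y))
    alpha = log_ratio - kl                                # coefficients (6)
    diff = X[:, None, :] - W[None, :, :]                  # M x |y| x d
    grad = ((p * alpha)[:, :, None] * diff / s[None, :, None]).mean(axis=0)
    return W + lr * grad

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.3, (20, 2)), rng.normal(2.0, 0.3, (20, 2))])
W = np.array([[-0.1, -0.1], [0.1, 0.1]])                  # slightly broken symmetry
s, b = np.ones(2), np.zeros(2)
I_before = empirical_mi(responsibilities(X, W, s, b))
for _ in range(200):
    W = infomax_step(X, W, s, b)
I_after = empirical_mi(responsibilities(X, W, s, b))
print(I_before, I_after)
```

On this well-separated toy set the centers drift toward the two clusters, the assignments harden, and I(x, y) approaches its maximum log 2, mirroring the behavior described above.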
Without loss of generality, we may assume that c = {c_j} ∈ ℝ^{|y|} is a free unconstrained parameter. Additionally, we will ensure positivity of the dispersions s_j by considering the construction constraint s_j = exp{s̃_j}, where s̃_j ∈ ℝ.

Learning Optimal Parameters. First we will assume that the Gram matrix K ∈ ℝ^{M×M} is fixed and known (which effectively corresponds to considering a fixed affinity matrix, see e.g. Dhillon et al. (2004)). Objective (1) should be optimized with respect to the log-dispersions s̃_j ≡ log(s_j), biases c_j, and coordinates A ∈ ℝ^{M×|y|} in the space spanned by the feature vectors {φ(x^{(i)}) | i = 1, . . . , M}. From (9) we get

∂I(x, y)/∂a_j = (1/s_j) ⟨ p(y_j|x) (k(x) − K a_j) α_j(x) ⟩_{p̃(x)} ∈ ℝ^M,    (10)

∂I(x, y)/∂s̃_j = (1/(2s_j)) ⟨ p(y_j|x) f_j(x) α_j(x) ⟩_{p̃(x)},    (11)

where p̃(x) ∝ ∑_{m=1}^{M} δ(x − x^{(m)}) is the empirical distribution. Analogously, we obtain

∂I(x, y)/∂c_j = ⟨ α_j(x) ⟩_{p̃(x)},    (12)

where the coefficients α_j(x) are given by (6). For a known Gram matrix K ∈ ℝ^{M×M}, the gradients (10)–(12) may be used in numerical optimization of the model parameters. Note that the matrix multiplication in (10) is performed once for each a_j, so that the complexity of computing the gradient is O(M²|y|) per iteration. We also note that one could potentially optimize (1) by applying the iterative Arimoto-Blahut algorithm for maximizing the channel capacity (see e.g. Cover and Thomas (1991)). However, for any given constrained encoder it is generally difficult to derive closed-form updates for the parameters of p(y|x), which motivates a numerical optimization.

Learning Optimal Kernels. Since we presume that explicit computations in ℝ^{|φ|} are expensive, we cannot compute the Gram matrix by trivially applying its definition K = {φ(x^{(i)})^T φ(x^{(j)})}. Instead, we may interpret scalar products in feature spaces as kernel functions

φ(x^{(i)})^T φ(x^{(j)}) = K_Θ(x^{(i)}, x^{(j)}; Θ),   ∀x^{(i)}, x^{(j)} ∈ ℝ^{|x|},    (13)

where K_Θ : ℝ^{|x|} × ℝ^{|x|} → ℝ satisfies Mercer's kernel properties (e.g.
Scholkopf and Smola (2002)). We may now apply our unsupervised framework to implicitly learn the optimal nonlinear features by optimizing I(x, y) with respect to the parameters Θ of the kernel function K_Θ. After some algebraic manipulations, we get

M ∂I(x, y)/∂Θ = Σ_{m=1}^{M} KL( p(y|x^{(m)}) ‖ p(y) ) Σ_{k=1}^{|y|} [∂f_k(x^{(m)})/∂Θ] p(y_k|x^{(m)}) − Σ_{m=1}^{M} Σ_{j=1}^{|y|} [∂f_j(x^{(m)})/∂Θ] p(y_j|x^{(m)}) log[ p(y_j|x^{(m)}) / p(y_j) ], (14)

where f_k(x^{(m)}) is given by (9). The computational complexity of the updates for Θ is O(M|y|²), where M is the number of training patterns and |y| is the number of clusters (which is assumed to be small). Note that, in contrast to spectral methods (see e.g. Shi and Malik (2000); Ng et al. (2001)), neither the objective (1) nor its gradients require inversion of the Gram matrix K ∈ R^{M×M} or computation of its eigenvalue decomposition. In the special case of radial basis function (RBF) kernels

K_β(x^{(i)}, x^{(j)}) = exp{ −β ‖x^{(i)} − x^{(j)}‖² }, (15)

the gradients of the encoder potentials are simply given by

∂f_j(x^{(m)})/∂β = (1/s_j) ( a_jᵀ K̃ a_j − 2 k̃ᵀ(x^{(m)}) a_j ), (16)

where K̃ := {K̃_{ij}} := { K(x^{(i)}, x^{(j)}) (1 − δ(x^{(i)} − x^{(j)})) } and δ is the Kronecker delta. By substituting (16) into the general expression (14), we obtain the gradient of the mutual information with respect to the RBF kernel parameter.

3 Demonstrations

We have empirically compared our kernelized information-theoretic clustering approach with Gaussian mixtures, k-means, feature-space k-means, non-kernelized information-theoretic clustering (see Section 2.1), and a multi-class spectral clustering method optimizing the normalized cuts. We illustrate the methods on datasets that are particularly easy to visualize. Figure 1 shows a typical application of the methods to the spiral data, where x₁(t) = t cos(t)/4 and x₂(t) = t sin(t)/4 correspond to the coordinates of x ∈ R^{|x|}, |x| = 2, and t ∈ [0, 3.(3)π].
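For the RBF kernel (15), the Gram matrix and its dependence on β are easy to verify numerically. The sketch below is our own illustrative code: it uses the elementwise identity ∂K_ij/∂β = −‖x^{(i)} − x^{(j)}‖² K_ij, which is the basic fact underlying gradient expressions such as (16), and checks it against a finite difference.

```python
import numpy as np

def rbf_gram(X, beta):
    """K_ij = exp(-beta * ||x_i - x_j||^2), the RBF kernel of Eq. (15).
    Returns the Gram matrix and the squared-distance matrix D2."""
    sq = np.sum(X * X, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-beta * D2), D2

def rbf_gram_dbeta(K, D2):
    """Elementwise derivative dK_ij/dbeta = -||x_i - x_j||^2 * K_ij."""
    return -D2 * K
```

Checking an analytic kernel derivative against a central finite difference like this is a cheap guard before plugging it into the full gradient (14).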
The kernel parameter β of the RBF-kernelized encoding distribution was initialized at β₀ = 2.5 and learned according to (16). The initial settings of the coefficients A ∈ R^{M×|y|} in the feature space were sampled from A_ij ~ N(0, 0.1). The log-variances s̃₁, …, s̃_{|y|} were initialized at zero. The encoder parameters A and {s̃_j | j = 1, …, |y|} (along with the RBF kernel parameter β) were optimized by applying scaled conjugate gradients. We found that Gaussian mixtures trained by maximizing the likelihood usually resulted in highly stochastic cluster allocations; additionally, they led to a large variation in cluster sizes. The Gaussian mixtures were initialized using k-means; other choices usually led to worse performance. We also see that k-means effectively breaks down: the similarly clustered points lie close to each other in R² (according to the L2-norm), but the allocated clusters are not locally smooth in t. On the other hand, our method with the RBF-kernelized encoders typically led to locally smooth cluster allocations.

Figure 1: Cluster allocations (top) and the corresponding responsibilities p(y_j|x^{(m)}) (bottom) for |x| = 2, |y| = 3, M = 70 (the patterns are sorted to indicate local smoothness in the phase parameter). Left: Gaussian mixtures; middle: k-means; right: information maximization for the (RBF-)kernelized encoder (the learned parameter β ≈ 0.825).
Light, medium, and dark-gray squares show the cluster colors corresponding to deterministic cluster allocations. The color intensity of each training point x(m) is the average of the pure cluster intensities, weighted by the responsibilities p(yj|x(m)). Nearly indistinguishable dark colors of the Gaussian mixture clustering indicate soft cluster assignments. Figure 2 shows typical results for spatially translated letters with |x| = 2, M = 150, and |y| = 2 (or |y| = 3), where we compare Gaussian mixture, feature-space k-means, the spectral method of Ng et al. (2001), and our information-theoretic clustering method. The initializations followed the same procedure as the previous experiment. The results produced by our kernelized infomax method were generally stable under different initializations, provided that β0 was not too large or too small. In contrast to Gaussian mixture, spectral, and feature-space k-means clustering, the clusters produced by kernelized infomax for the cases considered are arguably more anthropomorphically appealing. Note that feature-space k-means, as well as the spectral method, presume that the kernel matrix K ∈RM×M is fixed and known (in the latter case, the Gram matrix defines the edge weights of the graph). For illustration purposes, we show the results for the fixed Gram matrices with kernel parameters β set to the initial values β0 = 1 or the learned values β ≈ 0.604 of the kernelized infomax method for |y| = 2. One may potentially improve the performance of these methods by running the algorithms several times (with different kernel parameters β), and choosing β which results in tightest clusters (Ng et al. (2001)). We were indeed able to apply the spectral method to obtain clusters for TA and T (for β ≈1.1). While being useful in some situations, the procedure generally requires multiple runs. 
In contrast, the kernelized infomax method typically resulted in meaningful cluster allocations (TT and A) after a single run of the algorithm (see Figure 2 (c)), with the results qualitatively consistent under a variety of initializations. Additionally, we note that in situations when we used simpler encoder models (see expression (3)) or did not adapt the parameters of the kernel functions, the extracted clusters were often more intuitive than those produced by rival methods, but inferior to the ones produced by (7) with the optimal learned β.

Figure 2: Learning cluster allocations for |y| = 2 and |y| = 3. Where appropriate, the stars show the cluster centers. (a) two-component Gaussian mixture trained by the EM algorithm; (b) feature-space k-means with β = 1.0 and β ≈ 0.604 (the only pattern clustered differently under identical initializations is shown by ⊚); (c) kernelized infomax clustering for |y| = 2 (the inverse variance β of the RBF kernel varied from β₀ = 1 at the initialization to β ≈ 0.604 after convergence); (d) spectral clustering for |y| = 2 and β ≈ 0.604; (e) kernelized infomax clustering for |y| = 3 with a fixed Gram matrix; (f) kernelized infomax clustering for |y| = 3 started at β₀ = 1 and reaching β ≈ 0.579 after convergence.
Our results suggest that by learning kernel parameters we may often obtain higher values of the objective I(x, y), as well as more appealing cluster labeling (e.g. for the examples shown on Figure 2 (e), (f) we get I(x, y) ≈1.03 and I(x, y) ≈1.10 respectively). Undoubtedly, a careful choice of the kernel function could potentially lead to an even better visualization of the locally smooth, non-degenerate structure. 4 Discussion The proposed information-theoretic clustering framework is fundamentally different from the generative latent variable clustering approaches. Instead of explicitly parameterizing the data-generating process, we impose constraints on the encoder distributions, transforming the clustering problem to learning optimal discrete encodings of the unlabeled data. Many possible parameterizations of such distributions may potentially be considered. Here we discussed one such choice, which implicitly utilizes projections of the data to high-dimensional feature spaces. Our method suggests a formal information-theoretic procedure for learning optimal cluster allocations. One potential disadvantage of the method is a potentially large number of local optima; however, our empirical results suggest that the method is stable under different initializations, provided that the initial variances are sufficiently large. Moreover, the results suggest that in the cases considered the method favorably compares with the common generative clustering techniques, k-means, feature-space k-means, and the variants of the method which do not use nonlinearities or do not learn parameters of kernel functions. A number of interesting interpretations of clustering approaches in feature spaces are possible. Recently, it has been shown (Bach and Jordan (2003); Dhillon et al. (2004)) that spectral clustering methods optimizing normalized cuts (Shi and Malik (2000); Ng et al. (2001)) may be viewed as a form of weighted feature-space k-means, for a specific fixed similarity matrix. 
We are currently relating our method to the common spectral clustering approaches and a form of annealed weighted feature-space k-means. We stress, however, that our information-maximizing framework suggests a principled way of learning optimal similarity matrices by adapting parameters of the kernel functions. Additionally, our method does not require computations of eigenvalues of the similarity matrix, which may be particularly beneficial for large datasets. Finally, we expect that the proper information-theoretic interpretation of the encoder framework may facilitate extensions of the information-theoretic clustering method to richer families of encoder distributions.

References

Agakov, F. V. and Barber, D. (2006). Auxiliary Variational Information Maximization for Dimensionality Reduction. In Proceedings of the PASCAL Workshop on Subspace, Latent Structure and Feature Selection Techniques. Springer. To appear.
Bach, F. R. and Jordan, M. I. (2003). Learning spectral clustering. In NIPS. MIT Press.
Barber, D. and Agakov, F. V. (2003). The IM Algorithm: A Variational Approach to Information Maximization. In NIPS. MIT Press.
Brunel, N. and Nadal, J.-P. (1998). Mutual Information, Fisher Information and Population Coding. Neural Computation, 10:1731–1757.
Chechik, G. and Tishby, N. (2002). Extracting relevant structures with side information. In NIPS, volume 15. MIT Press.
Cover, T. M. and Thomas, J. A. (1991). Elements of Information Theory. Wiley, NY.
Dhillon, I. S. and Guan, Y. (2003). Information Theoretic Clustering of Sparse Co-Occurrence Data. In Proceedings of the 3rd IEEE International Conf. on Data Mining.
Dhillon, I. S., Guan, Y., and Kulis, B. (2004). Kernel k-means, Spectral Clustering and Normalized Cuts. In KDD. ACM.
Fisher, J. W. and Principe, J. C. (1998). A methodology for information theoretic feature extraction. In Proc. of the IEEE International Joint Conference on Neural Networks.
Linsker, R. (1988).
Towards an Organizing Principle for a Layered Perceptual Network. In Advances in Neural Information Processing Systems. American Institute of Physics.
Ng, A. Y., Jordan, M., and Weiss, Y. (2001). On spectral clustering: Analysis and an algorithm. In NIPS, volume 14. MIT Press.
Scholkopf, B. and Smola, A. (2002). Learning with Kernels. MIT Press.
Shi, J. and Malik, J. (2000). Normalized Cuts and Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905.
Tishby, N., Pereira, F. C., and Bialek, W. (1999). The information bottleneck method. In Proceedings of the 37th Annual Allerton Conference on Communication, Control and Computing. Kluwer Academic Publishers.
Torkkola, K. and Campbell, W. M. (2000). Mutual Information in Learning Feature Transformations. In ICML. Morgan Kaufmann.
Active Learning for Misspecified Models Masashi Sugiyama Department of Computer Science, Tokyo Institute of Technology 2-12-1, O-okayama, Meguro-ku, Tokyo, 152-8552, Japan sugi@cs.titech.ac.jp

Abstract

Active learning is the problem in supervised learning of designing the locations of training input points so that the generalization error is minimized. Existing active learning methods often assume that the model used for learning is correctly specified, i.e., the learning target function can be expressed by the model at hand. In many practical situations, however, this assumption may not be fulfilled. In this paper, we first show that the existing active learning method can be theoretically justified under a slightly weaker condition: the model does not have to be correctly specified, and slightly misspecified models are also allowed. However, it turns out that the weakened condition is still restrictive in practice. To cope with this problem, we propose an alternative active learning method which can be theoretically justified for a wider class of misspecified models. Thus, the proposed method has a broader range of applications than the existing method. Numerical studies show that the proposed active learning method is robust against the misspecification of models and is thus reliable.

1 Introduction and Problem Formulation

Let us discuss the regression problem of learning a real-valued function f(x) defined on R^d from training examples {(x_i, y_i) | y_i = f(x_i) + ε_i}_{i=1}^n, where {ε_i}_{i=1}^n are i.i.d. noise terms with mean zero and unknown variance σ². We use the following linear regression model for learning:

f̂(x) = Σ_{i=1}^p θ_i φ_i(x),

where {φ_i(x)}_{i=1}^p are fixed, linearly independent functions and θ = (θ₁, θ₂, …, θ_p)ᵀ are parameters to be learned. We evaluate the goodness of the learned function f̂(x) by the expected squared test error over test input points and noise (i.e., the generalization error).
When the test input points are drawn independently from a distribution with density p_t(x), the generalization error is expressed as

G = E ∫ ( f̂(x) − f(x) )² p_t(x) dx,

where E denotes the expectation over the noise {ε_i}_{i=1}^n. In the following, we suppose that p_t(x) is known¹. In a standard setting of regression, the training input points are provided by the environment, i.e., {x_i}_{i=1}^n independently follow the distribution with density p_t(x). On the other hand, in some cases, the training input points can be designed by users. In such cases, it is expected that the accuracy of the learning result can be improved if the training input points are chosen appropriately, e.g., by densely locating training input points in the regions of high uncertainty. Active learning, also referred to as experimental design, is the problem of optimizing the location of training input points so that the generalization error is minimized. In active learning research, it is often assumed that the regression model is correctly specified [2, 1, 3], i.e., the learning target function f(x) can be expressed by the model. In practice, however, this assumption is often violated. In this paper, we first show that the existing active learning method can still be theoretically justified when the model is approximately correct in a strong sense. Then we propose an alternative active learning method which can also be theoretically justified for approximately correct models, but the condition on the approximate correctness of the models is weaker than that for the existing method. Thus, the proposed method has a wider range of applications. In the following, we suppose that the training input points {x_i}_{i=1}^n are independently drawn from a user-defined distribution with density p_x(x), and discuss the problem of finding the optimal density function.
2 Existing Active Learning Method

The generalization error G can be decomposed as G = B + V, where B is the (squared) bias term and V is the variance term, given by

B = ∫ ( E f̂(x) − f(x) )² p_t(x) dx   and   V = E ∫ ( f̂(x) − E f̂(x) )² p_t(x) dx.

A standard way to learn the parameters in the regression model is ordinary least-squares (OLS) learning, i.e., the parameter vector θ is determined as

θ̂_OLS = argmin_θ Σ_{i=1}^n ( f̂(x_i) − y_i )².

It is known that θ̂_OLS is given by θ̂_OLS = L_OLS y, where L_OLS = (XᵀX)⁻¹Xᵀ, X_{i,j} = φ_j(x_i), and y = (y₁, y₂, …, y_n)ᵀ. Let G_OLS, B_OLS and V_OLS be G, B and V for the learned function obtained by ordinary least-squares learning, respectively. Then the following proposition holds.

¹In some application domains such as web page analysis or bioinformatics, a large number of unlabeled samples (input points without output values, independently drawn from the distribution with density p_t(x)) are easily gathered. In such cases, a reasonably good estimate of p_t(x) may be obtained by some standard density estimation method. Therefore, the assumption that p_t(x) is known may not be so restrictive.

Proposition 1 ([2, 1, 3]) Suppose that the model is correctly specified, i.e., the learning target function f(x) is expressed as

f(x) = Σ_{i=1}^p θ_i φ_i(x).

Then B_OLS and V_OLS are expressed as B_OLS = 0 and V_OLS = σ² J_OLS, where

J_OLS = tr(U L_OLS L_OLSᵀ)   and   U_{i,j} = ∫ φ_i(x) φ_j(x) p_t(x) dx.

Therefore, for the correctly specified model, the generalization error G_OLS is expressed as G_OLS = σ² J_OLS. Based on this expression, the existing active learning method determines the location of training input points {x_i}_{i=1}^n (or the training input density p_x(x)) so that J_OLS is minimized [2, 1, 3].
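The quantities above are straightforward to compute once U is approximated, e.g., by a Monte Carlo average over samples from p_t(x). The following sketch is our own illustrative code (names are ours): it builds L_OLS and the criterion J_OLS = tr(U L_OLS L_OLSᵀ) for a polynomial basis.

```python
import numpy as np

def design_matrix(x, p):
    """X_ij = phi_j(x_i) for the polynomial basis phi_j(x) = x^(j-1)."""
    return np.vander(x, p, increasing=True)

def j_ols(x_train, x_test, p=3):
    """Existing active-learning criterion J_OLS = tr(U L_OLS L_OLS^T),
    with U approximated by a Monte Carlo average over test inputs."""
    X = design_matrix(x_train, p)
    Phi_t = design_matrix(x_test, p)
    U = Phi_t.T @ Phi_t / len(x_test)      # U_ij ~ integral of phi_i phi_j p_t
    L = np.linalg.solve(X.T @ X, X.T)      # L_OLS = (X^T X)^{-1} X^T
    return float(np.trace(U @ L @ L.T))
```

Since G_OLS = σ² J_OLS for a correctly specified model, J_OLS should shrink roughly like 1/n as the training set grows, which gives a quick sanity check.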
3 Analysis of Existing Method under Misspecification of Models

In this section, we investigate the validity of the existing active learning method for misspecified models. Suppose the model does not exactly include the learning target function f(x), but it approximately includes it, i.e., for a scalar δ such that |δ| is small, f(x) is expressed as

f(x) = g(x) + δ r(x),

where g(x) is the orthogonal projection of f(x) onto the span of {φ_i(x)}_{i=1}^p and the residual r(x) is orthogonal to {φ_i(x)}_{i=1}^p:

g(x) = Σ_{i=1}^p θ_i φ_i(x)   and   ∫ r(x) φ_i(x) p_t(x) dx = 0   for i = 1, 2, …, p.

In this case, the bias term B is expressed as

B = ∫ ( E f̂(x) − g(x) )² p_t(x) dx + C,   where   C = ∫ ( g(x) − f(x) )² p_t(x) dx.

Since C is a constant which does not depend on the training input density p_x(x), we subtract C in the following discussion. Then we have the following lemma².

Lemma 2 For the approximately correct model, we have

B_OLS − C = δ² ⟨U L_OLS z_r, L_OLS z_r⟩ = O(δ²),
V_OLS = σ² J_OLS = O_p(n⁻¹),

where z_r = (r(x₁), r(x₂), …, r(x_n))ᵀ.

²Proofs of lemmas are provided in an extended version [6].

Note that the asymptotic order of V_OLS is in probability, since V_OLS is a random variable that includes {x_i}_{i=1}^n. The above lemma implies that

G_OLS − C = σ² J_OLS + o_p(n⁻¹)   if   δ = o_p(n^{−1/2}).

Therefore, the existing active learning method of minimizing J_OLS is still justified if δ = o_p(n^{−1/2}). However, when δ ≠ o_p(n^{−1/2}), the existing method may not work well because the bias term B_OLS − C is not smaller than the variance term V_OLS, so it cannot be neglected.

4 New Active Learning Method

In this section, we propose a new active learning method based on weighted least-squares learning.

4.1 Weighted Least-Squares Learning

When the model is correctly specified, θ̂_OLS is an unbiased estimator of θ. However, for misspecified models, θ̂_OLS is generally biased even asymptotically if δ = O_p(1).
The bias of θ̂_OLS is actually caused by the covariate shift [5]: the training input density p_x(x) is different from the test input density p_t(x). For correctly specified models, the influence of the covariate shift can be ignored, as the existing active learning method does. However, for misspecified models, we should explicitly cope with the covariate shift. Under the covariate shift, it is known that the following weighted least-squares (WLS) learning is asymptotically unbiased even if δ = O_p(1) [5]:

θ̂_WLS = argmin_θ Σ_{i=1}^n [ p_t(x_i) / p_x(x_i) ] ( f̂(x_i) − y_i )².

Asymptotic unbiasedness of θ̂_WLS may be intuitively understood from the following identity, which is similar in spirit to importance sampling:

∫ ( f̂(x) − f(x) )² p_t(x) dx = ∫ ( f̂(x) − f(x) )² [ p_t(x) / p_x(x) ] p_x(x) dx.

In the following, we assume that p_x(x) is strictly positive for all x. Let D be the diagonal matrix with the i-th diagonal element D_{i,i} = p_t(x_i) / p_x(x_i). Then it can be confirmed that θ̂_WLS is given by θ̂_WLS = L_WLS y, where L_WLS = (XᵀDX)⁻¹XᵀD.

4.2 Active Learning Based on Weighted Least-Squares Learning

Let G_WLS, B_WLS and V_WLS be G, B and V for the learned function obtained by the above weighted least-squares learning, respectively. Then we have the following lemma.

Lemma 3 For the approximately correct model, we have

B_WLS − C = δ² ⟨U L_WLS z_r, L_WLS z_r⟩ = O_p(δ² n⁻¹),
V_WLS = σ² J_WLS = O_p(n⁻¹),   where   J_WLS = tr(U L_WLS L_WLSᵀ).

This lemma implies that

G_WLS − C = σ² J_WLS + o_p(n⁻¹)   if   δ = o_p(1).

Based on this expression, we propose determining the training input density p_x(x) so that J_WLS is minimized. The use of the proposed criterion J_WLS can be theoretically justified when δ = o_p(1), while the existing criterion J_OLS requires δ = o_p(n^{−1/2}). Therefore, the proposed method has a wider range of applications. The effect of this extension is experimentally investigated in the next section.
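The estimator θ̂_WLS = (XᵀDX)⁻¹XᵀDy is a one-liner. The sketch below is our own illustrative code, not the paper's: it fits a deliberately misspecified linear model to f(x) = x³ under covariate shift, where importance weighting moves the fit toward the p_t-optimal solution; the densities and target are assumptions chosen for the demonstration.

```python
import numpy as np

def wls_fit(X, y, w):
    """Weighted least squares: theta = (X^T D X)^{-1} X^T D y, D = diag(w),
    with importance weights w_i = p_t(x_i) / p_x(x_i)."""
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

def gauss_pdf(x, mu, sd):
    return np.exp(-(x - mu) ** 2 / (2 * sd ** 2)) / (sd * np.sqrt(2 * np.pi))

rng = np.random.RandomState(0)
n = 20000
x = rng.randn(n)                        # training inputs ~ p_x = N(0, 1)
y = x ** 3 + 0.1 * rng.randn(n)         # target f(x) = x^3: misspecified
X = np.column_stack([np.ones(n), x])    # linear model f_hat(x) = a + b*x
w = gauss_pdf(x, 1.0, 0.5) / gauss_pdf(x, 0.0, 1.0)   # p_t / p_x
theta_ols = np.linalg.solve(X.T @ X, X.T @ y)
theta_wls = wls_fit(X, y, w)
```

Under p_t = N(1, 0.5²), the WLS solution approaches the p_t-optimal linear fit of x³ (slope 3.75, intercept −2), whereas OLS converges to the p_x-optimal fit 3x; the WLS fit therefore achieves a smaller test error under p_t.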
5 Numerical Examples

We evaluate the usefulness of the proposed active learning method through experiments.

Toy Data Set: We first illustrate how the proposed method works under a controlled setting. Let d = 1 and the learning target function f(x) be f(x) = 1 − x + x² + δx³. Let n = 100 and {ε_i}_{i=1}^{100} be i.i.d. Gaussian noise with mean zero and standard deviation 0.3. Let p_t(x) be the Gaussian density with mean 0.2 and standard deviation 0.4, which is assumed to be known here. Let p = 3 and the basis functions be φ_i(x) = x^{i−1} for i = 1, 2, 3. Let us consider the three cases δ = 0, 0.04, 0.5, which correspond to "correctly specified", "approximately correct", and "misspecified" (see Figure 1). We choose the training input density p_x(x) from the Gaussian densities with mean 0.2 and standard deviation 0.4c, where c = 0.8, 0.9, 1.0, …, 2.5. We compare the accuracy of the following three methods:

(A) Proposed active learning criterion + WLS learning: The training input density is determined so that J_WLS is minimized. Following the determined input density, training input points {x_i}_{i=1}^{100} are created and the corresponding output values {y_i}_{i=1}^{100} are observed. Then WLS learning is used for estimating the parameters.
(B) Existing active learning criterion + OLS learning [2, 1, 3]: The training input density is determined so that J_OLS is minimized. OLS learning is used for estimating the parameters.
(C) Passive learning + OLS learning: The test input density p_t(x) is used as the training input density. OLS learning is used for estimating the parameters.

First, we evaluate the accuracy of J_WLS and J_OLS as approximations of G_WLS and G_OLS. The means and standard deviations of G_WLS, J_WLS, G_OLS, and J_OLS over 100 runs are depicted as functions of c in Figure 2. These graphs show that when δ = 0 ("correctly specified"), both J_WLS and J_OLS give accurate estimates of G_WLS and G_OLS.
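The toy setting described above can be reproduced in a few lines. The sketch below is our own code with illustrative names: it generates training data for a given δ and width factor c, matching the setup of the toy experiment.

```python
import numpy as np

def make_toy(delta, c=1.0, n=100, seed=0):
    """Toy data of the experiment: f(x) = 1 - x + x^2 + delta * x^3,
    training inputs ~ N(0.2, (0.4 * c)^2), noise standard deviation 0.3."""
    rng = np.random.RandomState(seed)
    x = 0.2 + 0.4 * c * rng.randn(n)
    y = 1.0 - x + x ** 2 + delta * x ** 3 + 0.3 * rng.randn(n)
    return x, y
```

With δ = 0 the model with basis 1, x, x² is correctly specified, so an OLS fit should recover the coefficients (1, −1, 1) and leave residuals with standard deviation close to 0.3.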
When δ = 0.04 ("approximately correct"), J_WLS again works well, while J_OLS tends to be negatively biased for large c. This result is surprising since, as illustrated in Figure 1, the learning target functions with δ = 0 and δ = 0.04 are visually quite similar. Therefore, it intuitively seems that the result for δ = 0.04 is not much different from that for δ = 0. However, the simulation result shows that this slight difference makes J_OLS unreliable. When δ = 0.5 ("misspecified"), J_WLS is still reasonably accurate, while J_OLS is heavily biased. These results show that, as an approximation of the generalization error, J_WLS is more robust against the misspecification of models than J_OLS, which is in good agreement with the theoretical analyses given in Section 3 and Section 4.

Figure 1: Learning target function and input density functions.

Table 1: The means and standard deviations of the generalization error for the Toy data set. The best method and comparable ones by the t-test at the significance level 5% are shown in boldface. The value of method (B) for δ = 0.5 is extremely large, but it is not a typo. All values in the table are multiplied by 10³.

          δ = 0         δ = 0.04      δ = 0.5
(A)   1.99 ± 0.07   2.02 ± 0.07    5.94 ± 0.80
(B)   1.34 ± 0.04   3.27 ± 1.23    303 ± 197
(C)   2.60 ± 0.44   2.62 ± 0.43    6.87 ± 1.15
Figure 2: The means and error bars of G_WLS, J_WLS, G_OLS, and J_OLS over 100 runs as functions of c, for δ = 0 ("correctly specified"), δ = 0.04 ("approximately correct"), and δ = 0.5 ("misspecified").

In Table 1, the mean and standard deviation of the generalization error obtained by each method are given. When δ = 0, the existing method (B) works better than the proposed method (A). Actually, in this case, training input densities that approximately minimize G_WLS and G_OLS were found by J_WLS and J_OLS. Therefore, the difference of the errors is caused by the difference between WLS and OLS: WLS generally has larger variance than OLS. Since the bias is zero for both WLS and OLS if δ = 0, OLS would be more accurate than WLS. Although the proposed method (A) is outperformed by the existing method (B), it still works better than the passive learning scheme (C). When δ = 0.04 and δ = 0.5, the proposed method (A) gives significantly smaller errors than the other methods. Overall, we found that for all three cases the proposed method (A) works reasonably well and outperforms the passive learning scheme (C). On the other hand, the existing method (B) works excellently in the correctly specified case, although it tends to perform poorly once the correctness of the model is violated. Therefore, the proposed method (A) is found to be robust against the misspecification of models and is thus reliable.
Table 2: The means and standard deviations of the test error for DELVE data sets. All values in the table are multiplied by 10³.

          Bank-8fm       Bank-8fh       Bank-8nm        Bank-8nh
(A)   0.31 ± 0.04    2.10 ± 0.05    24.66 ± 1.20    37.98 ± 1.11
(B)   0.44 ± 0.07    2.21 ± 0.09    27.67 ± 1.50    39.71 ± 1.38
(C)   0.35 ± 0.04    2.20 ± 0.06    26.34 ± 1.35    39.84 ± 1.35

           Kin-8fm        Kin-8fh        Kin-8nm         Kin-8nh
(A)   1.59 ± 0.07    5.90 ± 0.16    0.72 ± 0.04     3.68 ± 0.09
(B)   1.49 ± 0.06    5.63 ± 0.13    0.85 ± 0.06     3.60 ± 0.09
(C)   1.70 ± 0.08    6.27 ± 0.24    0.81 ± 0.06     3.89 ± 0.14

Figure 3: Mean relative performance of (A) and (B) compared with (C). For each run, the test errors of (A) and (B) are normalized by the test error of (C), and the values are then averaged over 100 runs. Note that the error bars were reasonably small, so they were omitted.

Realistic Data Set: Here we use eight practical data sets provided by DELVE [4]: Bank-8fm, Bank-8fh, Bank-8nm, Bank-8nh, Kin-8fm, Kin-8fh, Kin-8nm, and Kin-8nh. Each data set includes 8192 samples, consisting of 8-dimensional input and 1-dimensional output values. For convenience, every attribute is normalized into [0, 1]. Suppose we are given all 8192 input points (i.e., unlabeled samples); note that the output values are unknown. From the pool of unlabeled samples, we choose n = 1000 input points {x_i}_{i=1}^{1000} for training and observe the corresponding output values {y_i}_{i=1}^{1000}. The task is to predict the output values of all unlabeled samples. In this experiment, the test input density p_t(x) is unknown, so we estimate it using the independent Gaussian density

p_t(x) = (2π σ̂²_MLE)^{−d/2} exp( −‖x − μ̂_MLE‖² / (2 σ̂²_MLE) ),

where μ̂_MLE and σ̂_MLE are the maximum likelihood estimates of the mean and standard deviation obtained from all unlabeled samples. Let p = 50 and the basis functions be

φ_i(x) = exp( −‖x − t_i‖² / 2 )   for i = 1, 2, …, 50,

where {t_i}_{i=1}^{50} are template points randomly chosen from the pool of unlabeled samples. We select the training input density p_x(x) from the independent Gaussian densities with mean μ̂_MLE and standard deviation c σ̂_MLE, where c = 0.7, 0.75, 0.8, …, 2.4. In this simulation, we cannot create training input points at arbitrary locations because we only have the 8192 samples. Therefore, we first create temporary input points following the determined training input density, and then choose the input points from the pool of unlabeled samples that are closest to the temporary input points. For each data set, we repeat this simulation 100 times, changing the template points {t_i}_{i=1}^{50} in each run. The means and standard deviations of the test error over 100 runs are given in Table 2. The proposed method (A) outperforms the existing method (B) for five data sets, while it is outperformed by (B) for the other three data sets. We conjecture that the model used for learning is almost correct for these three data sets. This result implies that the proposed method (A) is slightly better than the existing method (B). Figure 3 depicts the relative performance of the proposed method (A) and the existing method (B) compared with the passive learning scheme (C). It shows that (A) outperforms (C) for all eight data sets, while (B) is comparable to or outperformed by (C) for five data sets. Therefore, the proposed method (A) is overall shown to work better than the other schemes.

6 Conclusions

We argued that active learning is essentially a situation under covariate shift: the training input density is different from the test input density. When the model used for learning is correctly specified, the covariate shift does not matter. However, for misspecified models, we have to explicitly cope with the covariate shift. In this paper, we proposed a new active learning method based on weighted least-squares learning. The numerical study showed that the existing method works better than the proposed method if the model is correctly specified. However, the existing method tends to perform poorly once the correctness of the model is violated.
On the other hand, the proposed method overall worked reasonably well and consistently outperformed the passive learning scheme. Therefore, the proposed method is robust against the misspecification of models and is thus reliable. The proposed method can be theoretically justified if the model is approximately correct in a weak sense. However, it is no longer valid for totally misspecified models. A natural future direction would therefore be to devise an active learning method which has a theoretical guarantee for totally misspecified models. It is also important to notice that when the model is totally misspecified, even learning with optimal training input points would not be successful anyway. In such cases, it is of course important to carry out model selection. In active learning research, however (including the present paper), the location of training input points is designed for a single model at hand. That is, the model should have been chosen before performing active learning. Devising a method for simultaneously optimizing models and the location of training input points would be a more important and promising future direction.

Acknowledgments: The author would like to thank MEXT (Grant-in-Aid for Young Scientists 17700142) for partial financial support.

References

[1] D. A. Cohn, Z. Ghahramani, and M. I. Jordan. Active learning with statistical models. Journal of Artificial Intelligence Research, 4:129–145, 1996.
[2] V. V. Fedorov. Theory of Optimal Experiments. Academic Press, New York, 1972.
[3] K. Fukumizu. Statistical active learning in multilayer perceptrons. IEEE Transactions on Neural Networks, 11(1):17–26, 2000.
[4] C. E. Rasmussen, R. M. Neal, G. E. Hinton, D. van Camp, M. Revow, Z. Ghahramani, R. Kustra, and R. Tibshirani. The DELVE manual, 1996.
[5] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227–244, 2000.
[6] M. Sugiyama. Active learning for misspecified models. Technical report, Department of Computer Science, Tokyo Institute of Technology, 2005.
| 2005 | 8 |
2,899 |
Temporally changing synaptic plasticity Minija Tamosiunaite1,2, Bernd Porr3, and Florentin Wörgötter1,4 1 Department of Psychology, University of Stirling Stirling FK9 4LA, Scotland 2 Department of Informatics, Vytautas Magnus University Kaunas, Lithuania 3 Department of Electronics & Electrical Engineering, University of Glasgow Glasgow, GT12 8LT, Scotland 4 Bernstein Centre for Computational Neuroscience, University of Göttingen, Germany {minija,worgott}@cn.stir.ac.uk; b.porr@elec.gla.ac.uk Abstract Recent experimental results suggest that dendritic and back-propagating spikes can influence synaptic plasticity in different ways [1]. In this study we investigate how these signals could temporally interact at dendrites, leading to changing plasticity properties at local synapse clusters. Similar to a previous study [2], we employ a differential Hebbian plasticity rule to emulate spike-timing dependent plasticity. We use dendritic (D-) and back-propagating (BP-) spikes as post-synaptic signals in the learning rule and investigate how their interaction will influence plasticity. We analyze a situation where synaptic plasticity characteristics change in the course of time, depending on the type of post-synaptic activity momentarily elicited. Starting with weak synapses, which only elicit local D-spikes, a slow, unspecific growth process is induced. As soon as the soma begins to spike, this process is replaced by fast synaptic changes as the consequence of the much stronger and sharper BP-spike, which now dominates the plasticity rule. This way a winner-take-all mechanism emerges in a two-stage process, enhancing the best-correlated inputs. These results suggest that synaptic plasticity is a temporally changing process by which the computational properties of dendrites or complete neurons can be substantially augmented. 1 Introduction The traditional view of Hebbian plasticity is that the correlation between pre- and postsynaptic events drives learning.
This view ignores the fact that synaptic plasticity is driven by a whole sequence of events and that some of these events are causally related. For example, the postsynaptic spike is usually triggered by synaptic activity at a cluster of synapses. This signal can then travel retrogradely into the dendrite (as a so-called back-propagating or BP-spike, [3]), leading to a depolarization at this and other clusters of synapses by which their plasticity will be influenced. More locally, something similar can happen if a cluster of synapses is able to elicit a dendritic spike (D-spike, [4, 5]), which may not travel far, but which certainly leads to a local depolarization “under” these and adjacent synapses, triggering synaptic plasticity of one kind or another. Hence synaptic plasticity seems to be influenced to some degree by recurrent processes. In this study, we will use a differential Hebbian learning rule [2, 6] to emulate spike-timing dependent plasticity (STDP, [7, 8]). With one specifically chosen example architecture we will investigate how the temporal relation between dendritic and back-propagating spikes could influence plasticity. Specifically, we will report how learning could change during the course of network development, and how that could enrich the computational properties of the affected neuronal compartments. Figure 1: Basic learning scheme with x1, ..., xn representing inputs to cluster 1; h_AMPA, h_NMDA - filters shaping AMPA and NMDA signals; h_DS, h̃_DS, h_BP - filters shaping D- and BP-spikes; q1, q2 - differential thresholds; τ - a delay. Weight impact is saturated. Only the first of m clusters is shown explicitly; clusters 2, 3, ..., m would be employing the same BP-spike (not shown). The symbol ⊕ represents a summation node and ⊗ multiplication. 2 The Model A block diagram of the model is shown in Fig. 1. The model includes several clusters of synapses located on dendritic branches.
Dendritic spikes are elicited following the summation of several AMPA signals passing threshold q1. NMDA receptor influence on dendritic spike generation was not considered, as the contribution of NMDA potentials to the total membrane potential is substantially smaller than that of AMPA channels at a mixed synapse. Inputs to the model arrive in groups, but each input line gets only one pulse in a given group (Fig. 2 C). Each synaptic cluster is limited to generating one dendritic spike from one arriving pulse group. Cell firing is not explicitly modelled but assumed to occur when the summation of several dendritic spikes at the cell soma has passed threshold q2. This leads to a BP-spike. Progression of signals along a dendrite is not modelled explicitly, but expressed by means of delays. Since we do not model biophysical processes, all signal shapes are obtained by appropriate filters h, where u = x ∗ h is the convolution of spike train x with filter h. A differential Hebbian-type learning rule is used to drive synaptic plasticity [2, 6], ρ̇ = μ u v̇, where ρ denotes the synaptic weight, u the synaptic input, v the output, and μ the learning rate (see, e.g., the u and v̇ annotations in Fig. 1, top left). NMDA signals are used as the pre-synaptic signals; dendritic spikes, or dendritic spikes complemented by back-propagating spikes, define the post-synaptic signals for the learning rule. In addition, synaptic weights were sigmoidally saturated with limits zero and one. The filter shapes forming AMPA and NMDA channel responses, as well as back-propagating spikes and some forms of dendritic spikes used in this study, were described by: h(t) = (e^{−2πt/τ} − e^{−8πt/τ}) / (6π/τ) (1) where τ determines the total duration of the pulse. The ratio between rise and fall time is 1 : 4. We use for AMPA channels: τ = 6 ms, for NMDA channels: τ = 120 ms, for dendritic spikes: τ = 235 ms, and for BP-spikes: τ = 40 ms.
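As a concrete numerical illustration, the filter of Eq. (1) and the differential Hebbian update ρ̇ = μ u v̇ can be discretized as below. This is a minimal sketch under assumed signal placements (a single presynaptic spike with a coincident postsynaptic BP-spike shape, and a hard clip standing in for the sigmoidal saturation); it is not the authors' implementation.

```python
import numpy as np

def h(t, tau):
    # Filter of Eq. (1): h(t) = (exp(-2*pi*t/tau) - exp(-8*pi*t/tau)) / (6*pi/tau),
    # defined to be zero for t < 0; the 1:4 exponent ratio gives rise:fall = 1:4.
    t = np.maximum(np.asarray(t, dtype=float), 0.0)
    return (np.exp(-2 * np.pi * t / tau) - np.exp(-8 * np.pi * t / tau)) / (6 * np.pi / tau)

dt = 0.1                                   # time step in ms
t = np.arange(0.0, 600.0, dt)

x = np.zeros_like(t)
x[500] = 1.0                               # one presynaptic spike at t = 50 ms

u = np.convolve(x, h(t, 120.0))[:len(t)]   # presynaptic NMDA trace, u = x * h
v = np.convolve(x, h(t, 40.0))[:len(t)]    # postsynaptic signal (here a BP-spike shape)

mu, rho = 0.2, 0.5                         # learning rate and initial weight (assumed)
drho = mu * np.sum(u[:-1] * np.diff(v))    # discretized rho_dot = mu * u * v_dot
rho = float(np.clip(rho + drho, 0.0, 1.0)) # crude stand-in for the sigmoidal saturation
```

Swapping the τ value used for v switches between the D-spike (τ = 235 ms) and BP-spike (τ = 40 ms) regimes of the rule.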
Note, we approximate the NMDA characteristic by a non-voltage-dependent filter function. In conjunction with STDP, this simplification is justified by Saudargiene et al. [2, 9], who showed that voltage dependency induces only a second-order effect on the shape of the STDP curve. Individual input timings are drawn from a uniform distribution within a pre-specified interval, which can vary under different conditions. We distinguish three basic input groups: strongly correlated inputs (several inputs over an interval of up to 10 ms), less correlated (dispersed over an interval of 10-100 ms) and uncorrelated (dispersed over an interval of more than 100 ms). Figure 2: Example STDP curves (A,B), input pulse distribution (C), and model setup (D). A) STDP curve obtained with a D-spike using Eq. 1 with τ = 235 ms, B) from a BP-spike with τ = 40 ms. C) Example input pulse distribution for two pulse groups. D) Model neuron with two dendritic branches (left and right), consisting of two sub-branches which get inputs X or Y, which are similar for either side. DS stands for a D-spike, BP for a BP-spike. 3 Results 3.1 Experimental setup Fig. 2 A,B shows two STDP curves, one obtained with a wide D-spike, the other with a much sharper BP-spike. The study investigates interactions of such post-synaptic signals in time. Though the signals interact linearly, the much stronger BP signal dominates learning when elicited. In the absence of a BP-spike the D-spike dominates plasticity. This seems to correspond to new physiological observations concerning the relations between post-synaptic signals and the actually expressed form of plasticity [10]. We specifically investigate a two-phase process, where plasticity is first dominated by the D-spike and later by a BP-spike. Fig. 2 D shows a setup in which two-phase plasticity could arise. We assume that inputs to compact clusters of synapses are similar (e.g. all left branches in Fig.
2 D) but dissimilar over larger distances (between left and right branches). First, e.g. early in development, synapses may be weak and only the conjoint action of many synchronous inputs will lead to a local D-spike. Local plasticity from these few D-spikes (indicated by the circular arrow under the dendritic branches in Fig. 2) strengthens these synapses, and at some point D-spikes are elicited more reliably at conjoint branches. This could finally also lead to spiking at the soma and, hence, to a BP-spike, changing the plasticity of the individual synapses. To emulate such a multi-cluster system we actually model only one left and one right branch. Plasticity in both branches is driven by D-spikes in the first part of the experiment. Assuming that at some point the cell will be driven into spiking, a BP-spike is added after several hundred pulse groups (second part of the experiment). Figure 3: Temporal weight development for the setup shown in Fig. 2, with one sub-branch for the driving cluster (A) and one for the non-driving cluster (B). Initially all weights grow gradually until the driving cluster leads to a BP-spike after 200 pulse groups. Thus only the weights of its group x1 − x3 will continue to grow, now at an increased rate. 3.2 An emerging winner-take-all mechanism In Fig. 3 we have simulated two clusters, each with nine synapses. For both clusters, we assume that the input activity for three synapses is closely correlated and that they occur in a temporal interval of 6 ms (group x, y: 1 − 3). Three other inputs are more widely dispersed (interval of 35 ms, group x, y: 4 − 6) and the three remaining ones arrive uncorrelated in an interval of 150 ms (group x, y: 7 − 9). The activity of the second cluster is determined by the same parameters. Pulse groups arriving at the second cluster, however, were randomly shifted by at most ±20 ms relative to the centre of the pulse group of the first cluster.
All synapses start with weights of 0.5, which do not suffice to drive the soma of the cell into spiking. Hence initially plasticity can only take place via D-spikes, and we assume that D-spikes will not reach the other cluster. Hence, learning is local. The wide D-spike leads to a broad learning curve with a span of about ±17.5 ms around zero, covering the dispersion of input groups 1 − 3 as well as 4 − 6. Furthermore it has a slightly bigger area under the LTP part as compared to the LTD part. As a consequence, in both diagrams (Fig. 3 A,B) we see that all weights 1 − 6 grow; only for the least correlated inputs 7 − 9 do the weights remain close to their origin. The correlated group 1 − 3, however, benefits most strongly, because it is more likely that a D-spike will be elicited by this group than by any other combination. Conjoint growth at a whole cluster of such synapses would at some point drive the cell into somatic firing. Here we just assume that this happens for one cluster (Fig. 3 A) at a certain time point. This can, for example, be the case when the input properties of the two input groups are different, leading to (slightly) less weight growth in the other cluster. As soon as this happens a BP-spike is triggered and the STDP curve takes a narrow shape similar to that in Fig. 2 B, now strongly enhancing all causally driving synapses, hence group x1 − x3 (Fig. 3 A). This group grows at an increased rate while all other synapses shrink. Hence, in general this system exhibits two-phase plasticity. This result was reproduced in a model with 100 synapses in each input group (data not shown), and in the next sections we will show that a system with two growth phases is rather robust against parameter variations. Figure 4: Robustness of the observed effects. Plotted are the average weights of the less correlated group (ordinate) against the correlated group (abscissa).
Simulation with three correlated and three less correlated inputs, for AMPA: τ = 6 ms, for NMDA: τ = 117 ms, for D-spike: τ = 235 ms, for BP-spike: τ = 6 − 66 ms, q1 = 0.14. D/BP-spike amplitude relation from 1/1.5 to 1/15, depending on BP-spike width, keeping the area under the BP-spike constant; µ = 0.2. For further explanation see text. 3.3 Robustness This system is not readily suited for analytical investigation like the simpler ones in [9]. However, a fairly exhaustive parameter analysis was performed. Fig. 4 shows a plot of 350 experiments with the same basic architecture, using only one synapse cluster and the same chain of events as before, but with different parameter settings. Only "strongly correlated" (< 10 ms) and "less correlated" (10 − 100 ms) inputs were used in this experiment. Each point represents one experiment consisting of 600 pulse groups. On the abscissa we plot the average weight of the three correlated synapses; on the ordinate the average weight of the three less correlated synapses after these 600 pulse groups. We assume, as in the last experiment, that a BP-spike is triggered as soon as q2 is passed, which happens around pulse group 200 in all cases. Four parameters were varied to obtain this plot. (1) The width of the BP-spike was varied between 5 ms and 50 ms. (2) The interval width for the temporal distribution of the three correlated spikes was varied between 1 ms and 10 ms. Hence 1 ms amounts to three synchronously elicited spikes. (3) The interval width for the temporal distribution of the three less correlated spikes was varied between 1 ms and 100 ms. (4) The shift of the BP-spike with respect to the beginning of the D-spike was varied in an interval of ±80 ms. Mainly parameters 3 and 4 have an effect on the results. The first parameter, BP-spike width, shows some small interference with the spike shift for the widest spikes. The second parameter has almost no influence, due to the small parameter range (10 ms).
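A sweep over two of the four parameters above (BP-spike width and BP-spike shift) can be sketched with the rule from Section 2, Δρ = μ ∫ u v̇ dt. This toy harness only probes the sign structure of a single pre/post pairing; the grid values are illustrative and the full weight dynamics of the 350 experiments are deliberately omitted.

```python
import numpy as np
from itertools import product

def h(t, tau):
    # Eq. (1) filter, zero for t < 0.
    t = np.maximum(np.asarray(t, dtype=float), 0.0)
    return (np.exp(-2 * np.pi * t / tau) - np.exp(-8 * np.pi * t / tau)) / (6 * np.pi / tau)

ts = np.arange(-300.0, 600.0, 0.5)          # time axis in ms
u = h(ts, 120.0)                             # presynaptic NMDA trace starting at t = 0

# Vary BP-spike width (parameter 1) and BP-spike shift (parameter 4);
# these grid values are assumptions, not the paper's 350 settings.
dweight = {}
for width, shift in product([5.0, 20.0, 50.0], [-80.0, -20.0, 20.0, 80.0]):
    v = h(ts - shift, width)                 # shifted BP-spike shape
    dweight[(width, shift)] = np.sum(u[:-1] * np.diff(v))   # proportional to delta-rho

# In this toy pairing, a BP-spike arriving after the trace onset (positive
# shift) potentiates, while one arriving well before it depresses.
```

Extending the harness with the correlated and less correlated input distributions (parameters 2 and 3) would reproduce a sweep of the kind summarized in Fig. 4.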
Symbol coding is used in Fig. 4 to better depict the influence of parameters 3 and 4 in their different ranges. The symbols refer to BP-spike shifts of less than −5 ms (dots), between −5 ms and +5 ms (diamonds), and larger than +5 ms (circles and plusses). Circles in the latter region show cases with the less correlated dispersion interval below 40 ms, and plusses the cases with a dispersion of 40 ms or higher. The “dot” region (< −5 ms) shows cases where correlated synapses grow, while less correlated synapses can grow or shrink. This happens because the BP-spike is too early to influence plasticity in the strongly correlated group, which grows by the DS-mechanism only, but the BP-spike still falls in the dispersion range of the less correlated group, influencing its weights. At a shift of −5 ms a fast transition in the weight development occurs. The reason for this transition is that the BP-spike, being very close to the D-spike, overrules the effect of the D-spike. The randomness of whether an input falls into the pre- or post-output zone is large enough in both the correlated and less correlated groups, and leads to weights staying close to their origin or shrinking. The circles and plusses encode the dispersion of the wide, less correlated spike distributions in the case when time shifts of the BP-spike are positive (> 5 ms, hence BP-spike after D-spike). Dispersions get wider essentially from top to bottom (circle to dot). Clearly this shows that there are many cases corresponding to the example depicted in Fig. 3 (horizontal tail of Fig. 4), but there are also many conventional situations where both weight groups just grow in a similar way (diagonal). The data points show a certain regularity as the BP-spike shift moves from large values towards the borderline of +5 ms, where the weights stop growing. For large shifts, points cluster on the upper, diagonal tail in or near the dot region.
With a smaller BP-spike shift, points move up this tail and then drop down to the horizontal tail, which occurs for shifts of about 20 ms. This pattern is typical for the bigger dispersions in the range of 20 − 60 ms, and the data points essentially follow the circle drawn in the figure. This happens because as soon as the BP-spike gets closer to the D-spike, it will start to exert its influence. But this will at first only affect the less correlated group, as there are almost always some inputs so late that they “collide” with the BP-spike. The time of collision, however, is random, and sometimes these inputs are “pre” while sometimes they are “post” with respect to the BP-spike. Hence LTP and LTD will be essentially balanced in the less correlated group, leading on average to zero weight growth. This effect is most pronounced when the less correlated group has an intermediate dispersion (see the circles from the upper tail dropping to the lower tail in the range of dispersions 20 − 40 ms), while it does not occur if the dispersions of the correlated and less correlated groups are similar (1 − 20 ms). Furthermore, the clear separation into the top tail (circles, 1 − 40 ms) and bottom tail (plusses, 61 − 100 ms) indicates that it is possible to let the parameters drift quite a bit without leaving the respective regions. Hence, while the moment-to-moment weight growth might change, the general pattern will stay the same. 4 Discussion Just like the famous Baron von Münchausen, who was able to pull himself out of a swamp by his own hair, the current study suggests that plasticity change as a consequence of itself might lead to specific functional properties. In order to arrive at this conclusion, we have used a simplified model of STDP and combined it with a custom-designed and also simplified dendritic architecture. Hence, can the conclusions of this study be valid, and where are the limitations?
We believe that the answer to the first question is affirmative because the degree of abstraction used in this model and the complexity of the results match. This model never attempted to address the difficult issues of the biophysics of synaptic plasticity (for a discussion see [2]) and it was also not our goal to investigate the mechanisms of signal propagation in a dendrite [11]. Both aspects were reduced to a few basic descriptors, and this way we were able to show for the first time that a useful synaptic selection process can develop over time. The system consisted of a first “pre-growth” phase (until the BP-spike sets in) followed by a second phase where only one group of synapses grows strongly, while the others shrink again. In general this example describes a scenario where groups of synapses first undergo less selective, classical Hebbian-like growth, while later more pronounced STDP sets in, selecting only the main driving group. We believe that in the early development of a real brain such a two-phase system might be beneficial for the stable selection of those synapses that are better correlated. It is conceivable that at early developmental stages correlations are in general weaker, while the number of inputs to a cell is probably much higher than in the adult stage, where many have been pruned. Hence highly selective and strong STDP-like plasticity employed too early might lead to a noise-induced growth of “the wrong” synapses. This, however, might be prevented by just such a soft pre-selection mechanism, which would gradually drive clusters of synapses apart by a local dendritic process before the stronger influence of the back-propagating spike sets in. This is supported by recent results from Holthoff et al. [1, 12], who have shown that D-spikes lead to a different type of plasticity than BP-spikes in layer 5 pyramidal cells in mouse cortex.
Many more complications exist; for example, the assumed chain of events of D- and BP-spikes may be very different in different neurons, and the interactions between these signals may be far more non-linear (but see [10]). This will require re-addressing these issues in greater detail when dealing with a specific given neuron, but the general conclusions about the self-influencing and local [2, 13] character of synaptic plasticity and their possible functional use should hopefully remain valid. 5 Acknowledgements The authors acknowledge the support from SHEFC INCITE and IBRO. We are grateful to B. Graham, L. Smith and D. Sterratt for their helpful comments on this work. The authors wish to especially express their thanks to A. Saudargiene for her help at many stages in this project. References [1] K. Holthoff, Y. Kovalchuk, R. Yuste, and A. Konnerth. Single-shock plasticity induced by local dendritic spikes. In Proc. Göttingen NWG Conference, page 245B, 2005. [2] A. Saudargiene, B. Porr, and F. Wörgötter. How the shape of pre- and postsynaptic signals can influence STDP: a biophysical model. Neural Comp., 16:595–626, 2004. [3] N. L. Golding, W. L. Kath, and N. Spruston. Dichotomy of action-potential backpropagation in CA1 pyramidal neuron dendrites. J. Neurophysiol., 86:2998–3010, 2001. [4] M. E. Larkum, J. J. Zhu, and B. Sakmann. Dendritic mechanisms underlying the coupling of the dendritic with the axonal action potential initiation zone of adult rat layer 5 pyramidal neurons. J. Physiol. (Lond.), 533:447–466, 2001. [5] N. L. Golding, P. N. Staff, and N. Spruston. Dendritic spikes as a mechanism for cooperative long-term potentiation. Nature, 418:326–331, 2002. [6] B. Porr and F. Wörgötter. Isotropic sequence order learning. Neural Comp., 15:831–864, 2003. [7] J. C. Magee and D. Johnston. A synaptically controlled, associative signal for Hebbian plasticity in hippocampal neurons. Science, 275:209–213, 1997. [8] H. Markram, J. Lübke, M. Frotscher, and B. Sakmann. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275:213–215, 1997. [9] A. Saudargiene, B. Porr, and F. Wörgötter. Local learning rules: predicted influence of dendritic location on synaptic modification in spike-timing-dependent plasticity. Biol. Cybern., 92:128–138, 2005. [10] H.-X. Wang, R. C. Gerkin, D. W. Nauen, and G.-Q. Bi. Coactivation and timing-dependent integration of synaptic potentiation and depression. Nature Neurosci., 8:187–193, 2005. [11] P. Vetter, A. Roth, and M. Häusser. Propagation of action potentials in dendrites depends on dendritic morphology. J. Neurophysiol., 85:926–937, 2001. [12] K. Holthoff, Y. Kovalchuk, R. Yuste, and A. Konnerth. Single-shock LTD by local dendritic spikes in pyramidal neurons of mouse visual cortex. J. Physiol., 560.1:27–36, 2004. [13] R. C. Froemke, M.-m. Poo, and Y. Dan. Spike-timing-dependent synaptic plasticity depends on dendritic location. Nature, 434:221–225, 2005.
| 2005 | 80 |