Learning to Classify Galaxy Shapes Using the EM Algorithm

Sergey Kirshner, Information and Computer Science, University of California, Irvine, CA 92697-3425, skirshne@ics.uci.edu
Igor V. Cadez, Sparta Inc., 23382 Mill Creek Drive #100, Laguna Hills, CA 92653, igor_cadez@sparta.com
Padhraic Smyth, Information and Computer Science, University of California, Irvine, CA 92697-3425, smyth@ics.uci.edu
Chandrika Kamath, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, Livermore, CA 94551, kamath2@llnl.gov

Abstract

We describe the application of probabilistic model-based learning to the problem of automatically identifying classes of galaxies, based on both morphological and pixel intensity characteristics. The EM algorithm can be used to learn how to spatially orient a set of galaxies so that they are geometrically aligned. We augment this “ordering-model” with a mixture model on objects, and demonstrate how classes of galaxies can be learned in an unsupervised manner using a two-level EM algorithm. The resulting models provide highly accurate classification of galaxies in cross-validation experiments.

1 Introduction and Background

The field of astronomy is increasingly data-driven as new observing instruments permit the rapid collection of massive archives of sky image data. In this paper we investigate the problem of identifying bent-double radio galaxies in the FIRST (Faint Images of the Radio Sky at Twenty-cm) Survey data set [1]. FIRST produces large numbers of radio images of the deep sky using the Very Large Array at the National Radio Astronomy Observatory. It is scheduled to cover more than 10,000 square degrees of the northern and southern caps (skies). Of particular scientific interest to astronomers is the identification and cataloging of sky objects with a “bent-double” morphology, indicating clusters of galaxies ([8], see Figure 1).
Due to the very large number of observed deep-sky radio sources (on the order of 10^6 so far), it is infeasible for the astronomers to label all of them manually. The data from the FIRST Survey (http://sundog.stsci.edu/) is available in both raw image format and in the form of a catalog of features that have been automatically derived from the raw images by an image analysis program [8]. Each entry corresponds to a single detectable “blob” of bright intensity relative to the sky background: these entries are called components.

Figure 1: 4 examples of radio-source galaxy images. The two on the left are labelled as “bent-doubles” and the two on the right are not. The configurations on the left have more “bend” and symmetry than the two non-bent-doubles on the right.

The “blob” of intensities for each component is fitted with an ellipse. The ellipses and intensities for each component are described by a set of estimated features such as the sky position of the centers (RA (right ascension) and Dec (declination)), peak flux density and integrated flux, root mean square noise in pixel intensities, lengths of the major and minor axes, and the position angle of the major axis of the ellipse counterclockwise from north. The goal is to find sets of components that are spatially close and that resemble a bent-double. In the results in this paper we focus on candidate sets of components that have been detected by an existing spatial clustering algorithm [3], where each set consists of three components from the catalog (three ellipses). As of the year 2000, the catalog contained over 15,000 three-component configurations and over 600,000 configurations total. The set which we use to build and evaluate our models consists of a total of 128 examples of bent-double galaxies and 22 examples of non-bent-double configurations. A configuration is labelled as a bent-double if two out of three astronomers agree to label it as such.
The visual identification process is the bottleneck: it requires significant time and effort from the scientists, and is subjective and error-prone, motivating the creation of automated methods for identifying bent-doubles. Three-component bent-double configurations typically consist of a center or “core” component and two other side components called “lobes”. Previous work on automated classification of three-component candidate sets has focused on the use of decision-tree classifiers using a variety of geometric and image intensity features [3]. One limitation of the decision-tree approach is its relative inflexibility in handling uncertainty about the object being classified, e.g., the identification of which of the three components should be treated as the core of a candidate object. A bigger limitation is the fixed size of the feature vector. A primary motivation for the development of a probabilistic approach is to provide a framework that can handle uncertainties in a flexible, coherent manner.

2 Learning to Match Orderings using the EM Algorithm

We denote a three-component configuration by C = (c1, c2, c3), where the ci are the components (or “blobs”) described in the previous section. Each component cx is represented as a feature vector, where the specific features will be defined later. Our approach focuses on building a probabilistic model for bent-doubles: p(C) = p(c1, c2, c3), the likelihood of the observed ci under a bent-double model, where we implicitly condition (for now) on the class “bent-double.” By looking at examples of bent-double galaxies and by talking to the scientists studying them, we have been able to establish a number of potentially useful characteristics of the components, the primary one being geometric symmetry. In bent-doubles, two of the components will look close to being mirror images of one another with respect to a line through the third component.
We will call the mirror-image components lobe components, and the other one the core component.

Figure 2: Possible orderings for a hypothetical bent-double. A good choice of ordering would be either 1 or 2.

It also appears that non-bent-doubles either do not exhibit such symmetry, or the angle formed at the core component is too straight: the configuration is not “bent” enough. Once the core component is identified, we can calculate symmetry-based features. However, identifying the most plausible core component requires either an additional algorithm or human expertise. In our approach we use a probabilistic framework that averages over the different possible orderings, weighted by their probability given the data. In order to define the features, we first need to determine the mapping of the components to the labels “core”, “lobe 1”, and “lobe 2” (c, l1, and l2 for short). We will call such a mapping an ordering. Figure 2 shows an example of the possible orderings for a configuration. We can number the orderings 1, ..., 6 and write

    p(C) = \sum_{k=1}^{6} p(c_c, c_{l1}, c_{l2} | \Omega = k) \, p(\Omega = k),    (1)

i.e., a mixture over all possible orientations. Each ordering is assumed a priori to be equally likely, i.e., p(\Omega = k) = 1/6. Intuitively, for a configuration that clearly looks like a bent-double, the terms in the mixture corresponding to the correct ordering dominate, while the other orderings have much lower probability.

We represent each component cx by M features (we used M = 3). Note that the features can only be calculated conditioned on a particular mapping, since they rely on properties of the (assumed) core and lobe components. We denote by f_{mk}(C) the values of the mth feature for configuration C under the ordering \Omega = k, and by f_{mkj}(C) the feature value of component j, so that f_{mk}(C) = (f_{mk1}(C), ..., f_{mkB_m}(C)) (in our case, B_m = 3 is the number of components). Conditioned on a particular mapping \Omega = k, where x ∈ {c, l1, l2} and c, l1, l2 are defined in a cyclical order, our features are defined as:

- f_{1k}(C): log-transformed angle, the angle formed at the center of the component (a vertex of the configuration) mapped to label x;
- f_{2k}(C): logarithm of the side ratio, |center of x to center of next(x)| / |center of x to center of prev(x)|;
- f_{3k}(C): logarithm of the intensity ratio, (peak flux of next(x)) / (peak flux of prev(x)),

so that (C | \Omega = k) = (f_{1k}(C), f_{2k}(C), f_{3k}(C)) is a 9-dimensional feature vector in total. Other features are of course also possible; for our purposes in this paper this particular set appears to capture the more obvious visual properties of bent-double galaxies. For a set D = {d_1, ..., d_N} of configurations, under an i.i.d. assumption for configurations, we can write the likelihood as

    P(D) = \prod_{i=1}^{N} \sum_{k=1}^{K} P(\Omega_i = k) \, P(f_{1k}(d_i), ..., f_{Mk}(d_i)),

where \Omega_i is the ordering for configuration d_i. While in the general case one can model P(f_{1k}(d_i), ..., f_{Mk}(d_i)) as a full joint distribution, for the results reported in this paper we make a number of simplifying assumptions, motivated by the fact that we have relatively little labelled training data available for model building. First, we assume that the f_{mk}(d_i) are conditionally independent. Second, we are also able to reduce the number of components of each f_{mk}(d_i) by noting functional dependencies: for example, given two angles of a triangle, we can uniquely determine the third one. We also assume that the remaining components of each feature are conditionally independent. Under these assumptions, the multivariate joint distribution P(f_{1k}(d_i), ..., f_{Mk}(d_i)) is factored into a product of simple distributions, which (for the purposes of this paper) we model using Gaussians.
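The three features for one configuration under one ordering can be sketched as follows. This is a hypothetical implementation: the exact angle transform and coordinate conventions are assumptions on my part, not taken from the paper.

```python
import numpy as np

def config_features(centers, fluxes, ordering):
    """9-dim feature vector for a 3-component configuration under one ordering.

    centers: (3, 2) array of component sky positions; fluxes: 3 peak fluxes.
    ordering: tuple mapping the roles (core, lobe1, lobe2) to component indices.
    For each role x we compute (in cyclic order) the log angle at x, the log
    side ratio |x->next(x)| / |x->prev(x)|, and the log flux ratio next/prev.
    """
    feats = []
    for pos in range(3):
        x = ordering[pos]
        nxt = ordering[(pos + 1) % 3]   # cyclic next(x)
        prv = ordering[(pos - 1) % 3]   # cyclic prev(x)
        v1 = centers[nxt] - centers[x]
        v2 = centers[prv] - centers[x]
        cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angle = np.arccos(np.clip(cosang, -1.0, 1.0))
        feats += [np.log(angle),                                    # f1: log angle at vertex
                  np.log(np.linalg.norm(v1) / np.linalg.norm(v2)),  # f2: log side ratio
                  np.log(fluxes[nxt] / fluxes[prv])]                # f3: log intensity ratio
    return np.array(feats)
```

For an equilateral configuration with equal fluxes, all angles are π/3 and both ratios are 1, so the log-ratio features vanish under every ordering.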
If we knew, for every training example, which component should be mapped to label c, we could unambiguously estimate the parameters for each of these distributions. In practice, however, the identity of the core component is unknown for each object. Thus, we use the EM algorithm to automatically estimate the parameters of the above model. We begin by randomly assigning an ordering to each object. In each subsequent iteration, the E-step estimates a probability distribution over the possible orderings for each object, and the M-step estimates the parameters of the feature distributions using the probabilistic ordering information from the E-step. In practice we have found that the algorithm converges relatively quickly (in 20 to 30 iterations) on both simulated and real data. It is somewhat surprising that this algorithm can reliably “learn” how to align a set of objects without using any explicit objective function for alignment, based instead on the fact that feature values for certain orderings exhibit a certain self-consistency relative to the model. Intuitively, it is this self-consistency that leads to higher-likelihood solutions and that allows EM to effectively align the objects by maximizing the likelihood. After the model has been estimated, the likelihood of new objects can also be calculated under the model, where the likelihood now averages over all possible orderings weighted by their probability given the observed features. The problem described above is a specific instance of a more general feature-unscrambling problem. In our case, we assume that configurations of three 3-dimensional components (i.e., 3 features each) are generated by some distribution. Once the objects are generated, the orders of their components are permuted or scrambled. The task is then to simultaneously learn the parameters of the original distributions and the scrambling for each object.
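The E-step/M-step loop described above can be sketched as follows. This is a simplified, hypothetical implementation, not the paper's exact model: it fits a single diagonal Gaussian over the full feature vector rather than the factored per-feature model, and assumes the features have been precomputed for every object under every ordering.

```python
import numpy as np

def em_orderings(F, n_iter=30, seed=0):
    """EM over orderings: F[i, k] is the D-dim feature vector of object i
    under ordering k (K=6 in the paper).  Fits a shared diagonal Gaussian
    while inferring a posterior R[i, k] over orderings for each object."""
    rng = np.random.default_rng(seed)
    N, K, D = F.shape
    # initialize with a random hard ordering per object
    R = np.zeros((N, K))
    R[np.arange(N), rng.integers(K, size=N)] = 1.0
    for _ in range(n_iter):
        # M-step: responsibility-weighted Gaussian mean and variance
        w = R.reshape(N * K)[:, None]
        X = F.reshape(N * K, D)
        mu = (w * X).sum(0) / w.sum()
        var = (w * (X - mu) ** 2).sum(0) / w.sum() + 1e-9
        # E-step: posterior over orderings under a uniform 1/K prior
        logp = -0.5 * (((F - mu) ** 2 / var) + np.log(2 * np.pi * var)).sum(-1)
        logp -= logp.max(1, keepdims=True)
        R = np.exp(logp)
        R /= R.sum(1, keepdims=True)
    return mu, var, R
```

On synthetic data where one ordering per object yields self-consistent (low-variance) features and the others do not, the posterior concentrates on the consistent ordering, mirroring the alignment behavior described in the text.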
In the more general form, each configuration consists of L M-dimensional components. Since there are L! possible orderings of L components, the problem becomes computationally intractable if L is large. One solution is to restrict the types of possible scrambles (to cyclic shifts, for example).

3 Automatic Galaxy Classification

We used the algorithm described in the previous section to estimate the parameters of features and orderings of the bent-double class from labelled training data, and then to rank candidate objects according to their likelihood under the model. We used leave-one-out cross-validation to test the classification ability of this supervised model: for each of the 150 examples we build a model using the positive examples from the set of 149 “other” examples, and then score the “left-out” example with this model. The examples are then sorted in decreasing order by their likelihood score (averaging over the different possible orderings) and the results are analyzed using a receiver operating characteristic (ROC) methodology.

Figure 3: ROC plot (true positive rate versus false positive rate) for a model using angle, ratio of sides, and ratio of intensities as features, learned using ordering-EM with labelled data.

We use A_ROC, the area under the curve, as a measure of goodness of the model, where a perfect model would have A_ROC = 1 and random performance corresponds to A_ROC = 0.5. The supervised model, using EM for learning ordering models, has a cross-validated A_ROC score of 0.9336 (Figure 3) and appears to be quite useful at detecting bent-double galaxies.

4 Model-Based Galaxy Clustering

A useful technique in understanding astronomical image data is to cluster image objects based on their morphological and intensity properties.
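The area under the ROC curve used for evaluation here can be computed directly from the likelihood scores via its rank-statistic (Mann-Whitney) formulation; a small sketch, not code from the paper:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve: the probability that a randomly chosen
    positive outscores a randomly chosen negative (ties count 1/2)."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties
```

A perfect ranking (every bent-double scored above every non-bent-double) gives 1.0, a fully inverted ranking gives 0.0, and chance-level scoring gives 0.5.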
For example, consider how one might cluster the image objects in Figure 1, where we have features on angles, intensities, and so forth. Just as with classification, clustering of the objects is impeded by not knowing which of the “blobs” corresponds to the true “core” component. From a probabilistic viewpoint, clustering can be treated as introducing another level of hidden variables, namely the unknown class (or cluster) identity of each object. We can generalize the EM algorithm for orderings (Section 2) to handle this additional hidden level. The model is now a mixture of clusters, where each cluster is modelled as a mixture of orderings. This leads to a more complex two-level EM algorithm than that presented in Section 2: at the inner level the algorithm is learning how to orient the objects, and at the outer level it is learning how to group the objects into C classes. Space does not permit a detailed presentation of this algorithm; however, the derivation is straightforward and produces intuitive update rules such as

    \hat\mu_{cmj} = \frac{1}{N \hat P(cl = c | \Theta)} \sum_{i=1}^{N} \sum_{k=1}^{K} P(cl_i = c | \Omega_i = k, D, \Theta) \, P(\Omega_i = k | D, \Theta) \, f_{mkj}(d_i),

where \mu_{cmj} is the mean for the cth cluster (1 ≤ c ≤ C), the mth feature (1 ≤ m ≤ M), and the jth component of f_{mk}(d_i), and \Omega_i = k corresponds to ordering k for the ith object. We applied this algorithm to the data set of 150 sky objects, where, unlike the results in Section 3, the algorithm now had no access to the class labels. We used the Gaussian conditional-independence model as before, and grouped the data into K = 2 clusters. Figures 4 and 5 show the highest-likelihood objects, out of 150 total, under the models for the larger cluster and the smaller cluster respectively.

Figure 4: The 8 objects with the highest likelihood conditioned on the model for the larger of the two clusters learned by the unsupervised algorithm (all 8 are labelled bent-double).

Figure 5: The 8 objects with the highest likelihood conditioned on the model for the smaller of the two clusters learned by the unsupervised algorithm (6 of the 8 are labelled non-bent-double).

Figure 6: A scatter plot of the ranking from the unsupervised model versus that of the supervised model.

The larger cluster is clearly a bent-double cluster: 89 of the 150 objects are more likely to belong to this cluster under the model, and 88 of those 89 objects have the bent-double label. In other words, the unsupervised algorithm has discovered a cluster that corresponds to “strong examples” of bent-doubles relative to the particular feature space and model. In fact, the non-bent-double that is assigned to this group may well have been mislabelled (image not shown here). The objects in Figure 5 are clearly inconsistent with the general visual pattern of bent-doubles, and this cluster consists of a mixture of non-bent-double and “weaker” bent-double galaxies. The objects in Figure 5 that are labelled as bent-doubles seem quite atypical compared to the bent-doubles in Figure 4. A natural hypothesis is that cluster 1 (88 bent-doubles) in the unsupervised model is in fact very similar to the supervised model learned using the labelled set of 128 bent-doubles in Section 3. Indeed, the parameters of the two Gaussian models agree quite closely, and the similarity of the two models is illustrated clearly in Figure 6, where we plot the likelihood-based ranks of the unsupervised model versus those of the supervised model. The two models are in close agreement, and both are clearly performing well in terms of separating the objects by their class labels.
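The mean-update rule of the two-level EM can be expressed compactly once the two posteriors have been computed; a sketch in which the array names are mine, not the paper's:

```python
import numpy as np

def cluster_means(Pcls, Pord, F):
    """Two-level EM mean update (sketch of the update rule in the text).

    Pcls[i, k, c] = P(cl_i = c | Omega_i = k, D, Theta)
    Pord[i, k]    = P(Omega_i = k | D, Theta)
    F[i, k, :]    = feature values f_{mkj}(d_i), flattened over (m, j)
    Returns mu[c, :], the updated cluster means \\hat mu_{cmj}.
    """
    N = F.shape[0]
    w = Pcls * Pord[:, :, None]           # joint posterior over (cluster, ordering)
    Pc = w.sum((0, 1)) / N                # \hat P(cl = c | Theta)
    mu = np.einsum('ikc,ikd->cd', w, F) / (N * Pc[:, None])
    return mu
```

With hard (one-hot) posteriors, this reduces to the ordinary per-cluster feature average, which is a quick way to sanity-check the weighting.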
5 Related Work and Future Directions

A related earlier paper is Kirshner et al. [6], where we presented a heuristic algorithm for solving the orientation problem for galaxies. The generalization to an EM framework in this paper is new, as is the two-level EM algorithm for clustering objects in an unsupervised manner. There is a substantial body of work in computer vision on solving a variety of object matching problems using probabilistic techniques; see Mjolsness [7] for early ideas and Chui et al. [2] for a recent application in medical imaging. Our work here differs in a number of respects. One important difference is that we use EM to learn a model for the simultaneous correspondence of N objects, using both geometric and intensity-based features, whereas prior work in vision has primarily focused on matching one object to another (essentially the N = 2 case). An exception is the recent work of Frey and Jojic [4, 5], who used a similar EM-based approach to simultaneously cluster images and estimate a variety of local spatial deformations. The work described in this paper can be viewed as an extension and application of this general methodology to a real-world problem in galaxy classification. Earlier work on bent-double galaxy classification used decision-tree classifiers based on a variety of geometric and intensity-based features [3]. In future work we plan to compare the performance of this decision-tree approach with the probabilistic model-based approach proposed in this paper. The model-based approach has some inherent advantages over a decision-tree model for these types of problems. For example, it can directly handle objects in the catalog with only 2 blobs, or with 4 or more blobs, by integrating over missing intensities and over missing correspondence information using mixture models that allow for missing or extra “blobs”. Being able to classify such configurations automatically is of significant interest to the astronomers.
Acknowledgments

This work was performed under a sub-contract from the ASCI Scientific Data Management Project of the Lawrence Livermore National Laboratory. The work of S. Kirshner and P. Smyth was also supported by research grants from NSF (award IRI-9703120), the Jet Propulsion Laboratory, IBM Research, and Microsoft Research. I. Cadez was supported by a Microsoft Graduate Fellowship. The work of C. Kamath was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48. We gratefully acknowledge our FIRST collaborators, in particular Robert H. Becker, for sharing his expertise on the subject.

References

[1] R. H. Becker, R. L. White, and D. J. Helfand. The FIRST Survey: Faint Images of the Radio Sky at Twenty-cm. Astrophysical Journal, 450:559, 1995.
[2] H. Chui, L. Win, R. Schultz, J. S. Duncan, and A. Rangarajan. A unified feature registration method for brain mapping. In Proceedings of Information Processing in Medical Imaging, pages 300–314. Springer-Verlag, 2001.
[3] I. K. Fodor, E. Cantú-Paz, C. Kamath, and N. A. Tang. Finding bent-double radio galaxies: A case study in data mining. In Proceedings of the Interface: Computer Science and Statistics Symposium, volume 33, 2000.
[4] B. J. Frey and N. Jojic. Estimating mixture models of images and inferring spatial transformations using the EM algorithm. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1999.
[5] N. Jojic and B. J. Frey. Topographic transformation as a discrete latent variable. In Advances in Neural Information Processing Systems 12. MIT Press, 2000.
[6] S. Kirshner, I. V. Cadez, P. Smyth, C. Kamath, and E. Cantú-Paz. Probabilistic model-based detection of bent-double radio galaxies. In Proceedings of the 16th International Conference on Pattern Recognition, volume 2, pages 499–502, 2002.
[7] E. Mjolsness. Bayesian inference on visual grammars by neural networks that optimize. Technical Report YALEU/DCS/TR-854, Department of Computer Science, Yale University, May 1991.
[8] R. L. White, R. H. Becker, D. J. Helfand, and M. D. Gregg. A catalog of 1.4 GHz radio sources from the FIRST Survey. Astrophysical Journal, 475:479, 1997.
Information Regularization with Partially Labeled Data

Martin Szummer, MIT AI Lab & CBCL, Cambridge, MA 02139, szummer@ai.mit.edu
Tommi Jaakkola, MIT AI Lab, Cambridge, MA 02139, tommi@ai.mit.edu

Abstract

Classification with partially labeled data requires using a large number of unlabeled examples (or an estimated marginal P(x)) to further constrain the conditional P(y|x) beyond the few available labeled examples. We formulate a regularization approach to linking the marginal and the conditional in a general way. The regularization penalty measures the information that is implied about the labels over covering regions. No parametric assumptions are required, and the approach remains tractable even for continuous marginal densities P(x). We develop algorithms for solving the regularization problem for finite covers, establish a limiting differential equation, and exemplify the behavior of the new regularization approach in simple cases.

1 Introduction

Many modern classification problems are rife with unlabeled examples. To benefit from such examples, we must exploit, either implicitly or explicitly, the link between the marginal density P(x) over examples x and the conditional P(y|x) representing the decision boundary for the labels y. High-density regions or clusters in the data, for example, can be expected to fall solely in one or another class. Most discriminative methods do not attempt to explicitly model or incorporate information from the marginal density P(x). However, many discriminative algorithms such as SVMs exploit the notion of margin, which effectively relates P(x) to P(y|x): the decision boundary is biased to fall preferentially in low-density regions of P(x), so that only a few points fall within the margin band. The assumptions relating P(x) to P(y|x) are seldom made explicit. In this paper we appeal to information theory to explicitly constrain P(y|x) on the basis of P(x) in a regularization framework.
The idea is in broad terms related to a number of previous approaches, including maximum entropy discrimination [1], data clustering by information bottleneck [2], and minimum entropy data partitioning [3]. See also [4].

Figure 1: Mutual information I(x; y), measured in bits, for four regions with different configurations of labels y ∈ {+, −}; the four configurations yield I(x; y) = 0, 0.65, 1, and 1 bits. The marginal P(x) is discrete and uniform across the points. The mutual information is low when the labels are homogeneous in the region, and high when labels vary. The mutual information is invariant to the spatial configuration of points within the neighborhood.

2 Information Regularization

We begin by showing how to regularize a small region of the domain X. We will subsequently cover the domain (or any chosen subset) with multiple small regions, and describe criteria that ensure regularization of the whole domain on the basis of the individual regions.

2.1 Regularizing a Single Region

Consider a small contiguous region Q in the domain X (e.g., an ε-ball). We will regularize the conditional probability P(y|x) by penalizing the amount of information the conditionals imply about the labels within the region. The regularizer is a function of both P(y|x) and P(x), and will penalize changes in P(y|x) more in regions with high P(x). Let L be the set of labeled points (of size N_L) and L ∪ U be the set of labeled and unlabeled points (of size N_LU). The marginal P(x) is assumed to be given, and may be available directly in terms of a continuous density, or as an empirical density

    P(x) = \frac{1}{N_{LU}} \sum_{i \in L \cup U} \delta(x - x_i)

corresponding to a set of points {x_i} that may not have labels (δ(·) is the Dirac delta function, integrating to 1). As a measure of information, we employ mutual information [5], which is the average number of bits that x contains about the label in region Q (see Figure 1).
The measure depends both on the marginal density P(x) (specifically its restriction to x ∈ Q, namely P(x|Q) = P(x) / ∫_Q P(x) dx) and on the conditional P(y|x). Equivalently, we can interpret mutual information as a measure of disagreement among the P(y|x), x ∈ Q; the measure is zero for any constant P(y|x). More precisely, the mutual information in region Q is

    I_Q(x; y) = \sum_y \int_{x \in Q} P(x|Q) P(y|x) \log \frac{P(y|x)}{P(y|Q)} \, dx,    (1)

where P(y|Q) = ∫_{x∈Q} P(x|Q) P(y|x) dx. The densities conditioned on Q are normalized to integrate to 1 within the region Q. Note that the mutual information is invariant to permutations of the elements of X within Q, which suggests that the regions must be small enough to preserve locality. The regularization penalty has to further scale with the number of points in the region (or the probability mass). We introduce the following regularization principle:

Information regularization: penalize (M_Q / V_Q) · I_Q(x; y), which is the information about the labels within a local region Q, weighted by the overall probability mass M_Q in the region, and normalized by a measure of variability V_Q (variance) of x in the region.

Here M_Q = ∫_{x∈Q} P(x) dx. The mutual information I_Q(x; y) measures the information per point, and to obtain the total mutual information contained in a region, we must multiply by the probability mass M_Q. The regularization will thus be stronger in regions with high P(x). V_Q is a measure of the variance of x restricted to the region, and is introduced to remove the overall dependence on the size of the region. In one dimension, V_Q = var(x|Q). When the region is small, the marginal will be close to uniform over the region and V_Q ∝ R², where R is, e.g., the radius for spherical regions. We omit here the analysis of the d-dimensional case and only note that we may choose V_Q = tr Σ_Q, where the covariance Σ_Q = ∫_{x∈Q} (x − E_Q(x))(x − E_Q(x))^T P(x|Q) dx. The choice of V_Q is based on the limiting argument discussed next.
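For the discrete uniform marginal of Figure 1, eq. (1) reduces to a finite sum over the points in Q; a small sketch (base-2 logarithms, so the result is in bits):

```python
import numpy as np

def region_mutual_information(cond):
    """I_Q(x; y) in bits for a region Q with a discrete uniform marginal.

    cond: (n, 2) array of conditionals P(y | x_i) for the n points in Q.
    Implements eq. (1) as (1/n) * sum_i sum_y P(y|x_i) log2(P(y|x_i)/P(y|Q)).
    """
    cond = np.asarray(cond, float)
    n = len(cond)
    py = cond.mean(0)   # P(y | Q) under the uniform P(x | Q)
    with np.errstate(divide='ignore', invalid='ignore'):
        terms = cond * np.log2(cond / py)   # 0 * log 0 handled as 0 below
    return np.nansum(terms) / n
```

With deterministic labels this reduces to the entropy of the label proportions in the region: homogeneous labels give 0 bits, an even split gives 1 bit, matching the behavior described in the Figure 1 caption.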
2.2 Limiting Behavior for Regions of Vanishing Size

When the size of the region is scaled down, the mutual information goes to zero for any continuous P(y|x). We derive here the appropriate regularization penalty in the limit of vanishing regions. For simplicity, we only consider the one-dimensional case. Within a small region Q we can (under mild continuity assumptions) approximate P(y|x) by a Taylor expansion around the mean point x_0 ∈ Q, obtaining P(y|Q) ≈ P(y|x_0) to first order. By using log(1 + z) ≈ z − z²/2 and substituting the approximate P(y|x) and P(y|Q) into I_Q(x; y), we get the following first-order expression for the mutual information:

    I_Q(x; y) ≈ \underbrace{\frac{1}{2} \mathrm{var}(x|Q)}_{\text{size-dependent}} \; \underbrace{\sum_y P(y|x_0) \left( \left. \frac{d \log P(y|x)}{dx} \right|_{x_0} \right)^2}_{\text{size-independent}}    (2)

var(x|Q) depends on the size (and, more generally, the shape) of region Q, while the remaining part is independent of the size (and shape). The regularization penalty should not scale with the resolution at which we penalize information, and we thus divide out the size-dependent part. The size-independent part is the Fisher information [5], where we think of P(y|x) as parameterized by x. The expression d log P(y|x)/dx is known as the Fisher score.

2.3 Regularizing the Domain

We want to regularize the conditional P(y|x) across the domain X (or any subset of interest). Since individual regions must be relatively small to preserve locality, we need multiple regions to cover the domain. The cover is the set C of these regions. Since the regularization penalty is assigned to each region, the regions must overlap to ensure that the conditionals in different regions become functionally dependent. See Figure 2. In general, all areas with significant marginal density P(x) should be included in the cover, or they will not be regularized (areas of zero marginal need not be considered).
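The first-order expression (2) can be checked numerically. The following worked example is mine, not from the paper: it takes a logistic conditional P(y=1|x) = sigmoid(x) and a uniform marginal on a small interval, for which var(x|Q) = eps²/3 and the Fisher term sums to sigmoid(x0)(1 − sigmoid(x0)).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def iq_numeric(x0, eps, n=20001):
    """Numerically evaluate I_Q(x; y) (in nats) for P(y=1|x) = sigmoid(x)
    with a uniform marginal on Q = [x0 - eps, x0 + eps]."""
    x = np.linspace(x0 - eps, x0 + eps, n)
    p1 = sigmoid(x)
    cond = np.stack([p1, 1 - p1], axis=1)
    py = cond.mean(0)   # P(y | Q) under the uniform marginal
    return (cond * np.log(cond / py)).sum(1).mean()

def iq_limit(x0, eps):
    """First-order prediction from eq. (2): 0.5 * var(x|Q) * Fisher(x0)."""
    var = eps ** 2 / 3.0            # variance of Uniform(x0 - eps, x0 + eps)
    s = sigmoid(x0)
    return 0.5 * var * s * (1 - s)  # logistic Fisher term: s(1 - s)
```

For a small radius such as eps = 0.01, the exact and first-order values agree to well under one percent, confirming the vanishing-region analysis.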
The cover should generally be connected (with respect to the neighborhood relations of the regions) so that labeled points have the potential to influence all conditionals. The amount of overlap between any two regions in the cover determines how strongly the corresponding conditionals are tied to each other. On the other hand, the regions should be small to preserve locality. The limit of a large number of small overlapping regions can be defined, and we ensure continuity of P(y|x) when the offset between regions vanishes relative to their size (in all dimensions).

3 Classification with Information Regularization

Information regularization across multiple regions can be performed, for example, by minimizing the maximum information per region, subject to correct classification of the labeled points. Specifically, we constrain each region in the cover (Q ∈ C) to carry at most γ units of information:

    \min_{P(y|x_k),\, \gamma} \; \gamma    (3a)
    \text{s.t.}\; (M_Q / V_Q) \cdot I_Q(x; y) \le \gamma \quad \forall Q \in C,    (3b)
    P(y|x_k) = \delta(y, \tilde y_k) \quad \forall k \in L,    (3c)
    0 \le P(y|x_k) \le 1, \;\; \sum_y P(y|x_k) = 1 \quad \forall k \in L \cup U, \; \forall y.    (3d)

We have incorporated the labeled points by constraining their conditionals to the observed values (eq. 3c); see below for other ways of incorporating labeled information. The solution P(y|x) to this optimization problem is unique in regions that achieve the information constraint with equality (as long as P(x) > 0); uniqueness follows from the strict convexity of mutual information as a function of P(y|x) for nonzero P(x). Define an atomic subregion as a non-empty intersection of regions that cannot be further intersected by any region (Figure 2). All unlabeled points in an atomic subregion belong to the same set of regions, and therefore participate in exactly the same constraints. They will be regularized in the same way, and since mutual information is a convex function, it is minimized when the conditionals P(y|x) are equal throughout the atomic subregion.
We can therefore parsimoniously represent the conditionals of atomic subregions, instead of individual points, merely by treating such atomic subregions as “merged points” and weighting the associated constraint by the probability mass contained in the subregion.

3.1 Incorporating Noisy Labels

Labeled points participate in the information regularization in the same way as unlabeled points. However, their conditionals have additional constraints, which incorporate the label information. In equation 3c we used the constraint P(y|x_k) = δ(y, ỹ_k) for all labeled points. This constraint does not permit noise in the labels (and cannot be used when two points at the same location have disagreeing labels). Alternatively, we can apply either of the constraints

    (fix-lbl): P(y|x_i) = (1 - b)^{\delta(y, \tilde y_i)} \, b^{1 - \delta(y, \tilde y_i)}, \quad \forall i \in L
    (exp-lbl): E_{P(i)}[P(\tilde y_i | x_i)] \ge 1 - b.

The expectation is over the labeled set L, where P(i) = 1/N_L. The parameter b ∈ [0, 0.5) models the amount of label noise, and is determined from prior knowledge or can be optimized via cross-validation. Constraint (fix-lbl) is written out for the binary case for simplicity. Under it, the conditionals of the labeled points are directly determined by their labels and are treated as fixed constants; since b < 0.5, the thresholded conditional classifies labeled points into the observed class. Under constraint (exp-lbl), the conditionals for labeled points can have an average error of at most b, where the average is over all labeled points. Thus a few points may have conditionals that deviate significantly from their observed labels, giving robustness against mislabeled points and outliers. To obtain classification decisions, we simply choose the class with the maximum posterior, y_k = argmax_y P(y|x_k). Working with binary-valued P(y|x) ∈ {0, 1} directly would yield a more difficult combinatorial optimization problem.

3.2 Continuous Densities

Information regularization is also computationally feasible for continuous marginal densities, known or estimated.
For example, we may be given a continuous unlabeled data distribution P(x) and a few discrete labeled points, and regularize across a finite set of covering regions. The conditionals are uniform inside atomic subregions (except at labeled points), requiring estimates of only a finite number of conditionals.

3.3 Implementation

Firstly, we choose appropriate regions forming a cover, and find the atomic subregions. The choices differ depending on whether the data is all discrete or whether continuous marginals P(x) are given. Secondly, we perform a constrained optimization to find the conditionals. If the data is all discrete, we create a spherical region centered at every labeled and unlabeled point (or over some reduced set still covering all the points). We have used regions of fixed radius R, but the radius could also be set adaptively at each point to the distance to its K-th nearest neighbor. The union of such regions is our cover, and we choose the radius R (or K) large enough to create a connected cover. The cover induces a set of atomic subregions, and we merge the parameters P(y|x) of points inside individual atomic subregions (atomic subregions with no observed points can be ignored). The marginal of each atomic subregion is proportional to the number of (merged) points it contains. If continuous marginals are given, they will put probability mass in all atomic subregions where the marginal is non-zero. To avoid considering an exponential number of subregions, we can limit the overlap between the regions by creating a sparser cover. Given the cover, we now regularize the conditionals P(y|x) in the regions, according to eq. 3a. This is a convex minimization problem with a global minimum, since mutual information is convex in P(y|x). It can be solved directly in the given primal form, using a quasi-Newton BFGS method. For eq.
3a, the required gradients of the constraints for the binary-class case (y ∈ {±1}; region Q, atomic subregion r) are

(M_Q / V_Q) · dI_Q(x; y) / dP(y=1|x_r) = (M_Q / V_Q) · P(x_r|Q) log [ (P(y=1|x_r) / P(y=−1|x_r)) · (P(y=−1|Q) / P(y=1|Q)) ].   (4)

The Matlab BFGS implementation fmincon can solve 100-subregion problems in a few minutes.

3.4 Minimizing Average Information

An alternative regularization criterion minimizes the average mutual information across regions. When calculating the average, we must correct for the overlaps of intersecting regions to avoid double-counting (in contrast, the previous regularization criterion (eq. 3b) avoided double-counting by restricting the information in each region individually). The influence of a region is proportional to the probability mass M_Q contained in it. However, a point x may belong to N(x) regions. We define an adjusted density P*(x) = P(x)/N(x) to calculate an adjusted probability mass M*_Q which discounts overlap. We can then minimize average mutual information according to

min_{P(y|x_k)}  Σ_Q (M*_Q / V_Q) I_Q(x; y)   (5a)
s.t.  P(y|x_k) = δ(y, ỹ_k)   ∀k ∈ L,   (5b)
0 ≤ P(y|x_k) ≤ 1,  Σ_y P(y|x_k) = 1   ∀k ∈ L ∪ U, ∀y,   (5c)

with similar adjustments as before to incorporate noisy labels.

3.4.1 Limiting Behavior

The above average-information criterion is a discrete version of a continuous regularization criterion. In the limit of a large number of small regions in the cover (where the spacing of the regions vanishes relative to their size), we obtain a well-defined regularization criterion resulting in continuous P(y|x):

min_{P(y|x)}  ∫ Σ_y P(x′) P(y|x′) [ (d/dx) log P(y|x) |_{x=x′} ]² dx′,   s.t. P(y|x_k) = δ(y, ỹ_k) ∀k ∈ L.   (6)

The regularizer can also be seen as the average Fisher information (see section 2.2). More generally, we can formulate the regularization problem as a Tikhonov regularization, where the loss is the negative log-probability of the labels:

min_{P(y|x)}  (1/N_L) Σ_{k∈L} −log P(ỹ_k|x_k) + λ ∫ Σ_y P(x′) P(y|x′) [ (d/dx) log P(y|x) |_{x=x′} ]² dx′.
(7)

3.4.2 Differential Equation Characterizing the Solution

The optimization problem (eq. 6) can be solved using the calculus of variations. Consider the one-dimensional binary-class case and write the problem as min_{P(y=1|x)} ∫ f(x, P(y=1|x), P′(y=1|x)) dx, where

f(·) = P(x) P′(y=1|x)² / [ P(y=1|x) (1 − P(y=1|x)) ].

Necessary conditions for the solution P(y=1|x) are provided by the Euler-Lagrange equations [6]

∂f / ∂P(y=1|x) − (d/dx) ∂f / ∂P′(y=1|x) = 0   ∀x   (8)

(natural boundary conditions apply since we can assume P(x) = 0 and P′(y|x) = 0 at the boundary of the domain X). After substituting f and simplifying, we have

P′′(y=1|x) = P′(y=1|x)² (1 − 2P(y=1|x)) / [ 2 P(y=1|x) (1 − P(y=1|x)) ] − P′(x) P′(y=1|x) / P(x).   (9)

This differential equation governs the solution, and we solve it numerically. The labeled points provide boundary conditions, e.g. P(y = ỹ_k | x_k) = 1 − b for some small fixed b ≥ 0. We must search for initial values of P′(ỹ_k|x_k) to match the boundary conditions on P(ỹ_k|x_k). The solution is continuous and piecewise differentiable.

4 Results and Discussion

We have experimentally studied the behavior of the regularizer with different marginal densities P(x).

Figure 2: (Left) Three intersecting regions, and their atomic subregions (numbered). P(y|x) for unlabeled points will be constant in atomic subregions.

Figure 3: (Right) The conditional (solid line) for a continuous marginal P(x) (dotted line) consisting of a mixture of two Gaussians, and two labeled points at (x=−0.8, y=−1) and (x=0.8, y=1). The row of circles at the top depicts the region structure used (a rendering of overlapping one-dimensional intervals.)

Figure 3 shows the one-dimensional case with a continuous marginal density
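The shooting search for the initial slope described above can be illustrated numerically. This is a toy sketch (our own Euler integration and bisection on the initial slope, not the authors' solver), shown for a uniform marginal on [−1, 1] with boundary values 1 − b = 0.95:

```python
def solve_ode(p0, s0, px, dpx, x0=-1.0, x1=1.0, n=2000):
    """Euler-integrate eq. (9): p'' = p'^2 (1-2p)/(2p(1-p)) - P'(x) p'/P(x),
    starting from p(x0) = p0 with slope p'(x0) = s0; returns p(x1)."""
    h = (x1 - x0) / n
    x, p, s = x0, p0, s0
    for _ in range(n):
        p = min(max(p, 1e-9), 1.0 - 1e-9)   # keep p inside (0, 1)
        dds = s * s * (1.0 - 2.0 * p) / (2.0 * p * (1.0 - p)) - dpx(x) * s / px(x)
        p, s, x = p + h * s, s + h * dds, x + h
    return p

def shoot(p_left, p_right, px, dpx, lo=0.0, hi=5.0, iters=60):
    """Bisect on the initial slope until the right boundary value is matched
    (the terminal value grows with the initial slope)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if solve_ode(p_left, mid, px, dpx) < p_right:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Uniform marginal on [-1, 1]; boundary values 1 - b with b = 0.05.
px = lambda x: 0.5
dpx = lambda x: 0.0
s = shoot(0.05, 0.95, px, dpx)
mid_val = solve_ode(0.05, s, px, dpx, x1=0.0, n=1000)
print(mid_val)   # close to 0.5: the exact solution is symmetric about x = 0
```

With a uniform marginal the second term of eq. (9) vanishes, so only the curvature induced by the first term remains.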
(mixture of two Gaussians), and two discrete labeled points. We choose N_Q = 40 regions centered at uniform intervals over [−1, 1], overlapping each other half-way, creating N_Q + 1 atomic subregions. There are two labeled points. We show the solution attained by minimizing the maximum information (eq. 3a), using the (fix-lbl) constraint with label noise b = 0.05. The conditional varies smoothly between the labeled points of opposite classes. Note the dependence on the marginal density P(x): the conditional is smoother in high-density regions, and changes more rapidly in low-density regions, as expected. Figure 4 shows more examples, and Figure 5 illustrates solutions obtained via the differential equation (eq. 6).

Figure 4: Conditionals (solid lines) for two continuous marginals (dotted lines) plus two labeled points. Left: the marginal is uniform, and the conditional approaches a straight line. Right: the marginal is a mixture of two Gaussians (with lower variance and shifted compared to Figure 3). The conditional changes slowly in regions of high density.

Figure 5: Conditionals for two other continuous marginals plus two labeled points (marked as crosses, located at x = −1, 2 in the left figure and x = −2, 2 in the right), solved via the differential equation (eq. 6). The conditionals are continuous but non-differentiable at the two labeled points.

5 Conclusion

We have presented an information-theoretic regularization framework for combining conditional and marginal densities in a semi-supervised estimation setting. The framework admits both discrete and continuous (known or estimated) densities.
The tractability is largely a function of the number of nonempty intersections of the chosen covering regions. The principle extends beyond the presented scope, providing flexible means of tailoring the regularizer to particular needs. The shape and structure of the regions give direct ways of imposing relations between particular variables or values of those variables. The regions can also be easily defined on low-dimensional data manifolds. In future work we will apply the regularizer to large high-dimensional datasets and explore theoretical connections to network information theory.

Acknowledgements

The authors gratefully acknowledge support from Nippon Telegraph & Telephone (NTT) and NSF ITR grant IIS-0085836. Tommi Jaakkola also acknowledges support from the Sloan Foundation in the form of the Sloan Research Fellowship. Martin Szummer would like to thank Thomas Minka for valuable comments.

References

[1] Tommi Jaakkola, Marina Meila, and Tony Jebara. Maximum entropy discrimination. Technical Report AITR-1668, Mass. Inst. of Technology AI lab, 1999. http://www.ai.mit.edu/.
[2] Naftali Tishby and Noam Slonim. Data clustering by Markovian relaxation and the information bottleneck method. In Advances in Neural Information Processing Systems (NIPS), volume 13, pages 640–646. MIT Press, 2001.
[3] Stephen Roberts, C. Holmes, and D. Denison. Minimum-entropy data partitioning using reversible jump Markov chain Monte Carlo. IEEE Trans. Pattern Analysis and Mach. Intell. (PAMI), 23(8):909–914, 2001.
[4] Matthias Seeger. Input-dependent regularization of conditional density models. Unpublished. http://www.dai.ed.ac.uk/homes/seeger/, 2001.
[5] Thomas Cover and Joy Thomas. Elements of Information Theory. Wiley, 1991.
[6] Robert Weinstock. Calculus of Variations. Dover, 1974.
FloatBoost Learning for Classification

Stan Z. Li, Microsoft Research Asia, Beijing, China
ZhenQiu Zhang, Institute of Automation, CAS, Beijing, China
Heung-Yeung Shum, Microsoft Research Asia, Beijing, China
HongJiang Zhang, Microsoft Research Asia, Beijing, China

Abstract

AdaBoost [3] minimizes an upper error bound which is an exponential function of the margin on the training set [14]. However, the ultimate goal in applications of pattern classification is always minimum error rate. On the other hand, AdaBoost needs an effective procedure for learning weak classifiers, which by itself is difficult, especially for high-dimensional data. In this paper, we present a novel procedure, called FloatBoost, for learning a better boosted classifier. FloatBoost uses a backtrack mechanism after each iteration of AdaBoost to remove weak classifiers which cause higher error rates. The resulting float-boosted classifier consists of fewer weak classifiers yet achieves lower error rates than AdaBoost in both training and test. We also propose a statistical model for learning weak classifiers, based on a stagewise approximation of the posterior using an overcomplete set of scalar features. Experimental comparisons of FloatBoost and AdaBoost are provided through a difficult classification problem, face detection, where the goal is to learn from training examples a highly nonlinear classifier to differentiate between face and nonface patterns in a high-dimensional space. The results clearly demonstrate the promise of FloatBoost over AdaBoost.

1 Introduction

Nonlinear classification of high-dimensional data is a challenging problem. While designing such a classifier is difficult, AdaBoost learning methods, introduced by Freund and Schapire [3], provide an effective stagewise approach: they learn a sequence of more easily learnable "weak classifiers", and boost them into a single strong classifier by a linear combination of them.
It is shown that AdaBoost learning minimizes an upper error bound which is an exponential function of the margin on the training set [14]. Boosting learning originated from the PAC (probably approximately correct) learning theory [17, 6]. Given that weak classifiers can perform slightly better than random guessing on every distribution over the training set, AdaBoost can provably achieve arbitrarily good bounds on its training and generalization errors [3, 15]. (The work presented in this paper was carried out at Microsoft Research Asia; see http://research.microsoft.com/ szli.) It is shown that such simple weak classifiers, when boosted, can capture complex decision boundaries [1]. Relationships of AdaBoost [3, 15] to functional optimization and statistical estimation have been established recently. A number of gradient-boosting algorithms have been proposed [4, 8, 21]. A significant advance was made by Friedman et al. [5], who show that the AdaBoost algorithms minimize an exponential loss function which is closely related to the Bernoulli likelihood. In this paper, we address the following problems associated with AdaBoost: 1. AdaBoost minimizes an exponential (or some other) function of the margin over the training set. This is for convenience of theoretical and numerical analysis. However, the ultimate goal in applications is always minimum error rate, and a strong classifier learned by AdaBoost may not necessarily be best by this criterion. This problem has been noted, e.g. by [2], but no solutions have been found in the literature. 2. An effective and tractable algorithm for learning weak classifiers is needed. Learning the optimal weak classifier, such as the log posterior ratio given in [15, 5], requires estimation of densities in the input data space; when the dimensionality is high, this is a difficult problem by itself. We propose a method, called FloatBoost (Section 3), to overcome the first problem.
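To make the forward-inclusion / conditional-exclusion idea concrete before the formal description, here is a toy sketch (ours, not the paper's implementation) using unweighted ±1 decision stumps, the exponential loss as the selection criterion, and the training error rate as the backtracking criterion:

```python
import math, random

def stump(j, t, s):
    """Decision stump: s if x[j] > t else -s, with s in {+1, -1}."""
    return lambda x: s if x[j] > t else -s

def exp_loss(ensemble, data):
    """Upper bound on training error: sum_i exp(-y_i * H(x_i))."""
    return sum(math.exp(-y * sum(h(x) for h in ensemble)) for x, y in data)

def err_rate(ensemble, data):
    return sum(1 for x, y in data
               if (sum(h(x) for h in ensemble) >= 0) != (y > 0)) / len(data)

def floatboost(data, candidates, m_max=5):
    ensemble, best = [], {}      # best[m] = lowest error ever seen with m members
    while len(ensemble) < m_max:
        # Forward inclusion: greedily add the candidate minimizing the loss.
        h = min(candidates, key=lambda c: exp_loss(ensemble + [c], data))
        ensemble.append(h)
        m = len(ensemble)
        best[m] = min(best.get(m, 1.0), err_rate(ensemble, data))
        # Conditional exclusion: drop the least useful member whenever doing
        # so beats the best error ever achieved with one fewer member.
        while len(ensemble) > 1:
            drop = min(range(len(ensemble)),
                       key=lambda i: err_rate(ensemble[:i] + ensemble[i+1:], data))
            reduced = ensemble[:drop] + ensemble[drop + 1:]
            if err_rate(reduced, data) < best.get(len(reduced), 1.0):
                ensemble = reduced
                best[len(ensemble)] = err_rate(ensemble, data)
            else:
                break
    return ensemble

random.seed(0)
pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(80)]
data = [(p, 1 if p[0] + p[1] > 0 else -1) for p in pts]
cands = [stump(j, t / 4.0, s) for j in (0, 1) for t in range(-3, 4) for s in (1, -1)]
model = floatboost(data, cands)
print(len(model), err_rate(model, data))
```

Exclusions only fire when they strictly improve on the best error recorded for the smaller ensemble size, so the loop terminates; this is the essence of the backtrack mechanism.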
FloatBoost incorporates into AdaBoost the idea of Floating Search, originally proposed in [11] for feature selection. A backtrack mechanism therein allows deletion of those weak classifiers that are non-effective or unfavorable in terms of the error rate. This leads to a strong classifier consisting of fewer weak classifiers. Because deletions in the backtrack are performed according to the error rate, an improvement in classification error is also obtained. To solve the second problem above, we provide a statistical model (Section 4) for learning weak classifiers and effective feature selection in a high-dimensional feature space. A base set of weak classifiers, defined as the log posterior ratio, is derived based on an overcomplete set of scalar features. Experimental results are presented in Section 5 using a difficult classification problem, face detection. Comparisons are made between FloatBoost and AdaBoost in terms of the error rate and complexity of the boosted classifier. Results clearly show that FloatBoost yields a strong classifier consisting of fewer weak classifiers yet achieving lower error rates.

2 AdaBoost Learning

In this section, we give a brief description of the AdaBoost algorithm, in the notation of RealBoost [15, 5], as opposed to the original discrete AdaBoost [3]. For two-class problems, a set of labelled training examples is given as
{(x_1, y_1), . . . , (x_N, y_N)}, where y_i ∈ {+1, −1} is the class label associated with example x_i ∈ R^d. A strong classifier is a linear combination of M weak classifiers

H_M(x) = Σ_{m=1}^{M} h_m(x).   (1)

In this real version of AdaBoost, the weak classifiers can take a real value, h_m(x) ∈ R, and have absorbed the coefficients needed in the discrete version (there, h_m(x) ∈ {−1, +1}). The class label for x is obtained as y(x) = sign[H_M(x)], while the magnitude |H_M(x)| indicates the confidence. Every training example is associated with a weight. During the learning process, the weights are updated dynamically in such a way that more emphasis is placed on hard examples which were erroneously classified previously. This is important for the original AdaBoost. However, recent studies [4, 8, 21] show that the artificial operation of explicit re-weighting is unnecessary and can be incorporated into a functional optimization procedure of boosting.

0. (Input) (1) Training examples {(x_1, y_1), . . . , (x_N, y_N)}, of which a examples have y_i = +1 and b examples have y_i = −1; (2) the maximum number M_max of weak classifiers to be combined.
1. (Initialization) w_i^(0) = 1/(2a) for those examples with y_i = +1, or w_i^(0) = 1/(2b) for those examples with y_i = −1; M = 0.
2. (Forward Inclusion) While M < M_max: (1) M ← M + 1; (2) choose h_M according to Eq. (4); (3) update w_i^(M) ∝ exp[−y_i H_M(x_i)], and normalize so that Σ_i w_i^(M) = 1.
3. (Output) H(x) = sign[Σ_{m=1}^{M} h_m(x)].

Figure 1: RealBoost algorithm.

An error occurs when y(x) ≠ y, or y H_M(x) < 0. The "margin" of an example (x, y) achieved by H_M(x) on the training set is defined as y H_M(x). This can be considered as a measure of the confidence of the prediction. The upper bound on classification error achieved by H_M can be derived as the following exponential loss function [14]

J(H_M) = Σ_i exp[−y_i H_M(x_i)].   (2)

AdaBoost constructs h_M(x) by stagewise minimization of Eq. (2). Given the current H_{M−1}(x) = Σ_{m=1}^{M−1} h_m(x), the best h_M for the new strong classifier H_M(x) = H_{M−1}(x) + h_M(x) is the one which leads to the minimum cost

h_M = arg min_h J(H_{M−1} + h).   (3)

It is shown in [15, 5] that the minimizer is

h_M(x) = (1/2) log [ P(y = +1 | x, w^(M−1)) / P(y = −1 | x, w^(M−1)) ],   (4)

where w^(M−1) are the weights given at time M. Using P(y | x, w) ∝ p(x | y, w) P(y) and letting

L_M(x) = (1/2) log [ p(x | y = +1, w^(M−1)) / p(x | y = −1, w^(M−1)) ],   (5)
T = (1/2) log [ P(y = +1) / P(y = −1) ],   (6)

we arrive at

h_M(x) = L_M(x) − T.   (7)

The half log-likelihood ratio L_M(x) is learned from the training examples of the two classes, and the threshold T is determined by the log ratio of the prior probabilities. T can be adjusted to balance between detection rate and false alarm (ROC curve). The algorithm is shown in Fig. 1. (Note: the re-weighting formula in this description is equivalent to the multiplicative rule in the original form of AdaBoost [3, 15].) In Section 4, we will present a model for approximating p(x | y, w^(M−1)).

3 FloatBoost Learning

FloatBoost backtracks after the newest weak classifier h_M is added and deletes unfavorable weak classifiers h_m from the ensemble (1), following the idea of Floating Search [11]. Floating Search [11] was originally aimed at dealing with the non-monotonicity of straight sequential feature selection, non-monotonicity meaning that adding an additional feature may lead to a drop in performance. When a new feature is added, backtracks are performed to delete those features that cause performance drops. Limitations of sequential feature selection are thus amended, with the improvement gained at the cost of increased computation due to the extended search.

0. (Input) (1) Training examples {(x_1, y_1), . . . , (x_N, y_N)}, of which a examples have y_i = +1 and b examples have y_i = −1; (2) the maximum number M_max of weak classifiers; (3) the cost function J(H_M) and the acceptance threshold J*.
1. (Initialization) (1) w_i^(0) = 1/(2a) for those examples with y_i = +1, or w_i^(0) = 1/(2b) for those examples with y_i = −1; (2) J_min^m = max-value (for m = 1, . . . , M_max); M = 0, H_0 = {}.
2. (Forward Inclusion) (1) M ← M + 1; (2) choose h_M according to Eq. (4); (3) update w_i^(M) ∝ exp[−y_i H_M(x_i)], and normalize so that Σ_i w_i^(M) = 1; (4) H_M = H_{M−1} ∪ {h_M}; if J_min^M > J(H_M), then J_min^M = J(H_M).
3. (Conditional Exclusion) (1) h′ = arg min_{h ∈ H_M} J(H_M − h); (2) if J(H_M − h′) < J_min^{M−1}, then (a) H_{M−1} = H_M − h′; J_min^{M−1} = J(H_M − h′); M ← M − 1; (b) adjust the weights w_i^(M) accordingly; (c) go to 3.(1); (3) else (a) if M = M_max or J(H_M) < J*, then go to 4; (b) go to 2.(1).
4. (Output) H(x) = sign[Σ_{h_m ∈ H_M} h_m(x)].

Figure 2: FloatBoost algorithm.

The FloatBoost procedure is shown in Fig. 2. Let H_M = {h_1, . . . , h_M} be the so-far-best set of M weak classifiers; J(H_M) be the error rate achieved by H_M(x) = Σ_{m=1}^{M} h_m(x) (or a weighted sum of missing rate and false alarm rate, which is usually the criterion in one-class detection problems); J_min^m be the minimum error rate achieved so far with an ensemble of m weak classifiers. In Step 2 (forward inclusion), given the weak classifiers already selected, the best weak classifier is added one at a time, which is the same as in AdaBoost. In Step 3 (conditional exclusion), FloatBoost removes the least significant weak classifier from H_M, subject to the condition that the removal leads to an error rate lower than J_min^{M−1}. These removals are repeated until no more can be done. The procedure terminates when the risk on the training set is below J* or the maximum number M_max is reached. By incorporating the conditional exclusion, FloatBoost renders both effective feature selection and classifier learning. It usually needs fewer weak classifiers than AdaBoost to achieve the same error rate.

4 Learning Weak Classifiers

This section presents a method for computing the log-likelihood ratio in Eq. (5) required for learning optimal weak classifiers. Since deriving a weak classifier in a high-dimensional space is a non-trivial task, we provide a statistical model for stagewise learning of weak classifiers based on scalar features. A scalar feature of x is computed by a transform from the d-dimensional data space to the real line, z_j = φ_j(x) ∈ R. A feature can be the coefficient of, say, a wavelet transform in signal and image processing. If projection pursuit is used as the transform, z_j is simply the j-th coordinate of x. A dictionary of candidate scalar features can be created, Φ = {φ_j}. In the following, we use z^(m) to denote the feature selected in the m-th stage, while z_j is the feature computed from x using the j-th transform.

Assuming that Φ is an over-complete basis, a set of candidate weak classifiers for the optimal weak classifier (7) can be designed in the following way. At stage M, where M − 1 features z^(1), z^(2), . . . , z^(M−1) have been selected and the weights are given as w^(M−1), we can approximate p(x | y, w^(M−1)) by using the distributions of M features:

p(x | y, w^(M−1)) ≈ p(z^(1), . . . , z^(M−1), z_j | y, w^(M−1))   (8)
= p(z^(1) | y, w^(M−1)) · p(z^(2) | z^(1), y, w^(M−1)) · · · p(z_j | z^(1), . . . , z^(M−1), y, w^(M−1)).   (9)

Because Φ is an over-complete basis set, the approximation is good enough for large enough M and when the M features are chosen appropriately. Note that p(z^(m) | z^(1), . . . , z^(m−1), y, w^(M−1)) is actually p(z^(m) | y, w^(m−1)), because w^(m−1) contains the information about the entire history of the weights and accounts for the dependencies on z^(1), . . . , z^(m−1). Therefore, we have

p(z^(m) | z^(1), . . . , z^(m−1), y, w^(M−1)) = p(z^(m) | y, w^(m−1)),   (10)
p(x | y, w^(M−1)) ≈ [ ∏_{m=1}^{M−1} p(z^(m) | y, w^(m−1)) ] · p(z_j | y, w^(M−1)).   (11)

On the right-hand side of the above equation, all conditional densities are fixed except the last one, p(z_j | y, w^(M−1)). Learning the best weak classifier at stage M is to choose the best feature z_j such that J is minimized according to Eq. (3). The conditional probability densities p(z_j | y, w^(M−1)) for the positive class (y = +1) and the negative class (y = −1) can be estimated using histograms computed from the weighted voting of the training examples using the weights w^(M−1). Let

L_j^(M)(x) = (1/2) log [ p(z_j | y = +1, w^(M−1)) / p(z_j | y = −1, w^(M−1)) ]   (12)

and h_j^(M)(x) = L_j^(M)(x) − T. We can derive the set of candidate weak classifiers as

H^(M) = { h_j^(M) : j = 1, . . . , |Φ| }.   (13)

Recall that the best h_M among all candidates in H^(M) for the new strong classifier H_M(x) = H_{M−1}(x) + h_M(x) is given by Eq. (3), for which the optimal weak classifier has been derived as (7). According to the theory of gradient-based boosting [4, 8, 21], we can also choose the optimal weak classifier by finding the h_M ∈ H^(M) that best fits the gradient

−∂J(H) / ∂H |_{H = H_{M−1}}.   (14)

In our stagewise approximation formulation, this can be done by first finding the h_j^(M) that best fits the gradient in direction, and then scaling it so that the two have the same (re-weighted) norm. An alternative selection scheme is simply to choose j so that the error rate (or some risk), computed from the two histograms p(z_j | y = ±1, w^(M−1)), is minimized.

5 Experimental Results

Face Detection. The face detection problem here is to classify an image of standard size (e.g. 20x20 pixels) into either face or nonface (imposter). This is essentially a one-class problem in that everything not a face is a nonface. It is a very hard problem. Learning-based methods have been the main approach for solving it, e.g. [13, 16, 9, 12]. Experiments here follow the framework of Viola and Jones [19, 18].
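Before turning to the experiments, the histogram-based weak classifiers of Eq. (12) can be illustrated with a toy sketch (ours; the prior-ratio threshold T is omitted, and the smoothing constant is our own choice):

```python
import math

def histogram_llr(z_pos, w_pos, z_neg, w_neg, bins=8, lo=0.0, hi=1.0, eps=1e-3):
    """Build h(z) = 0.5 * log[ p(z | y=+1, w) / p(z | y=-1, w) ] from weighted
    histograms of a scalar feature z (cf. eq. (12); threshold T omitted)."""
    def hist(zs, ws):
        counts = [eps] * bins                 # smoothing avoids log(0)
        for z, w in zip(zs, ws):
            i = min(bins - 1, max(0, int((z - lo) / (hi - lo) * bins)))
            counts[i] += w
        total = sum(counts)
        return [c / total for c in counts]

    hp = hist(z_pos, w_pos)                   # weighted histogram, y = +1
    hn = hist(z_neg, w_neg)                   # weighted histogram, y = -1

    def h(z):
        i = min(bins - 1, max(0, int((z - lo) / (hi - lo) * bins)))
        return 0.5 * math.log(hp[i] / hn[i])
    return h

# Positive examples concentrate at high z, negatives at low z.
h = histogram_llr([0.8, 0.9, 0.7], [1, 1, 1], [0.1, 0.2, 0.3], [1, 1, 1])
print(h(0.85) > 0, h(0.15) < 0)   # True True
```

Re-estimating the histograms with the current weights w^(M−1) at every stage is what makes the weak learner depend on the boosting history.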
There, AdaBoost is used for learning face detection; it performs two important tasks: feature selection from a large collection of features, and constructing classifiers using the selected features.

Data Sets. A set of 5000 face images was collected from various sources. The faces were cropped and re-scaled to the size of 20x20. Another set of 5000 nonface examples of the same size was collected from images containing no faces. The 5000 examples in each set are divided into a training set of 4000 examples and a test set of 1000 examples. See Fig. 3 for a random sample of 10 face and 10 nonface examples.

Figure 3: Face (top) and nonface (bottom) examples.

Scalar Features. Three basic types of scalar features are derived from each example, as shown in Fig. 4, for constructing weak classifiers. These block differences are an extended set of the steerable filters used in [10, 20]. There are hundreds of thousands of different features for the admissible block sizes and positions. Each candidate weak classifier is constructed as the log-likelihood ratio (12) computed from the two histograms of a scalar feature for the face (y = +1) and nonface (y = −1) examples (cf. the last part of the previous section).

Figure 4: The three types of simple Haar-wavelet-like features defined on a sub-window. The rectangles are of given sizes and distances apart; each feature takes a value calculated by a weighted sum of the pixels in the rectangles.

Performance Comparison. The same data sets are used for evaluating FloatBoost and AdaBoost. The performance is measured by the false alarm error rate given the detection rate fixed at 99.5%. While a cascade of strong classifiers is needed to achieve a very low false alarm rate [19, 7], here we present the learning curves for the first strong classifier, composed of up to one thousand weak classifiers. This is because what we aim to evaluate here is the contrast between the FloatBoost and AdaBoost learning algorithms, rather than the complete system.
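Block-difference features of this kind are typically evaluated in constant time with an integral image; a generic sketch (ours, not the paper's exact feature set):

```python
def integral_image(img):
    """ii[y][x] = sum of img over rows < y and cols < x (exclusive prefix sums)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, y, x, hh, ww):
    """Pixel sum over the rectangle with top-left corner (y, x), size hh x ww,
    using four lookups in the integral image."""
    return ii[y + hh][x + ww] - ii[y][x + ww] - ii[y + hh][x] + ii[y][x]

def two_rect_feature(ii, y, x, hh, ww):
    """Difference of two horizontally adjacent blocks (a Haar-like feature)."""
    return rect_sum(ii, y, x, hh, ww) - rect_sum(ii, y, x + ww, hh, ww)

img = [[1, 1, 0, 0],
       [1, 1, 0, 0]]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 2, 2))          # 4
print(two_rect_feature(ii, 0, 0, 2, 2))  # 4 - 0 = 4
```

Once the integral image is built, every candidate feature costs only a handful of additions, which is what makes searching hundreds of thousands of features feasible.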
The interested reader is referred to [7] for a complete system, which achieved a very low false alarm rate at a detection rate of 95%. (A live demo of the multi-view face detection system, the first real-time system of its kind, is being submitted to the conference.)

Figure 5: Error rates of FloatBoost vs. AdaBoost for frontal face detection (training and test curves for both methods, plotted against the number of weak classifiers).

The training and testing error curves for FloatBoost and AdaBoost are shown in Fig. 5, with the detection rate fixed at 99.5%. The following conclusions can be made from these curves: (1) Given the same number of learned features or weak classifiers, FloatBoost always achieves lower training error and lower test error than AdaBoost. For example, on the test set, combining 1000 weak classifiers, the false alarm rate of FloatBoost is 0.427 versus 0.485 for AdaBoost. (2) FloatBoost needs many fewer weak classifiers than AdaBoost in order to achieve the same false alarm rates. For example, the lowest test error for AdaBoost is 0.481 with 800 weak classifiers, whereas FloatBoost needs only 230 weak classifiers to achieve the same performance. This clearly demonstrates the strength of FloatBoost in learning to achieve a lower error rate.

6 Conclusion and Future Work

By incorporating the idea of Floating Search [11] into AdaBoost [3, 15], FloatBoost effectively improves the learning results. It needs fewer weak classifiers than AdaBoost to achieve a similar error rate, or achieves a lower error rate with the same number of weak classifiers. Such a performance improvement is achieved at the cost of longer training time, about 5 times longer for the experiments reported in this paper. The boosting algorithm may need substantial computation for training. Several methods can be used to make the training more efficient with little drop in the training performance.
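One common speed-up (the weight trimming of Friedman et al. [5], discussed below) keeps only the highest-weight examples for the next round; a minimal sketch, with names of our own choosing:

```python
def trim_by_weight(examples, weights, mass=0.99):
    """Keep the highest-weight examples that together carry `mass`
    of the total weight (weight trimming in the sense of Friedman et al.)."""
    order = sorted(range(len(weights)), key=lambda i: -weights[i])
    total, kept, acc = sum(weights), [], 0.0
    for i in order:
        kept.append(examples[i])
        acc += weights[i]
        if acc >= mass * total:
            break
    return kept

w = [0.5, 0.3, 0.1, 0.05, 0.03, 0.02]
ex = list("abcdef")
print(trim_by_weight(ex, w, mass=0.85))   # ['a', 'b', 'c']
```

Because boosting concentrates weight on hard examples, a small subset often carries nearly all of the mass, so the weak-learner training set shrinks sharply over the rounds.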
Noticing that only examples with large weight values are influential, Friedman et al. [5] propose to select examples with large weights, i.e. those which in the past have been wrongly classified by the learned weak classifiers, for training the weak classifier in the next round. Top examples within a fraction of the total weight mass are used.

References

[1] L. Breiman. "Arcing classifiers". The Annals of Statistics, 26(3):801–849, 1998.
[2] P. Buhlmann and B. Yu. "Invited discussion on 'Additive logistic regression: a statistical view of boosting (Friedman, Hastie and Tibshirani)'". The Annals of Statistics, 28(2):377–386, April 2000.
[3] Y. Freund and R. Schapire. "A decision-theoretic generalization of on-line learning and an application to boosting". Journal of Computer and System Sciences, 55(1):119–139, Aug 1997.
[4] J. Friedman. "Greedy function approximation: A gradient boosting machine". The Annals of Statistics, 29(5), October 2001.
[5] J. Friedman, T. Hastie, and R. Tibshirani. "Additive logistic regression: a statistical view of boosting". The Annals of Statistics, 28(2):337–374, April 2000.
[6] M. J. Kearns and U. Vazirani. An Introduction to Computational Learning Theory. MIT Press, Cambridge, MA, 1994.
[7] S. Z. Li, L. Zhu, Z. Q. Zhang, A. Blake, H. Zhang, and H. Shum. "Statistical learning of multi-view face detection". In Proceedings of the European Conference on Computer Vision, page ???, Copenhagen, Denmark, May 28 - June 2 2002.
[8] L. Mason, J. Baxter, P. Bartlett, and M. Frean. Functional gradient techniques for combining hypotheses. In A. Smola, P. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 221–247. MIT Press, Cambridge, MA, 1999.
[9] E. Osuna, R. Freund, and F. Girosi. "Training support vector machines: An application to face detection". In CVPR, pages 130–136, 1997.
[10] C. P. Papageorgiou, M. Oren, and T. Poggio. "A general framework for object detection". In Proceedings of IEEE International Conference on Computer Vision, pages 555–562, Bombay, India, 1998.
[11] P. Pudil, J. Novovicova, and J. Kittler. "Floating search methods in feature selection". Pattern Recognition Letters, (11):1119–1125, 1994.
[12] D. Roth, M. Yang, and N. Ahuja. "A SNoW-based face detector".
In Proceedings of Neural Information Processing Systems, 2000. [13] H. A. Rowley, S. Baluja, and T. Kanade. “Neural network-based face detection”. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1):23–28, 1998. [14] R. Schapire, Y. Freund, P. Bartlett, and W. S. Lee. “Boosting the margin: A new explanation for the effectiveness of voting methods”. The Annals of Statistics, 26(5):1651–1686, October 1998. [15] R. E. Schapire and Y. Singer. “Improved boosting algorithms using confidence-rated predictions”. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 80–91, 1998. [16] K.-K. Sung and T. Poggio. “Example-based learning for view-based human face detection”. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1):39–51, 1998. [17] L. Valiant. “A theory of the learnable”. Communications of ACM, 27(11):1134–1142, 1984. [18] P. Viola and M. Jones. “Asymmetric AdaBoost and a detector cascade”. In Proceedings of Neural Information Processing Systems, Vancouver, Canada, December 2001. [19] P. Viola and M. Jones. “Rapid object detection using a boosted cascade of simple features”. In Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, Hawaii, December 12-14 2001. [20] P. Viola and M. Jones. “Robust real time object detection”. In IEEE ICCV Workshop on Statistical and Computational Theories of Vision, Vancouver, Canada, July 13 2001. [21] R. Zemel and T. Pitassi. “A gradient-based boosting algorithm for regression problems”. In Advances in Neural Information Processing Systems, volume 13, Cambridge, MA, 2001. MIT Press.
Adaptive Scaling for Feature Selection in SVMs

Yves Grandvalet, Heudiasyc, UMR CNRS 6599, Université de Technologie de Compiègne, Compiègne, France (Yves.Grandvalet@utc.fr)
Stéphane Canu, PSI, INSA de Rouen, St Etienne du Rouvray, France (Stephane.Canu@insa-rouen.fr)

Abstract

This paper introduces an algorithm for the automatic relevance determination of input variables in kernelized Support Vector Machines. Relevance is measured by scale factors defining the input-space metric, and feature selection is performed by assigning zero weights to irrelevant variables. The metric is automatically tuned by the minimization of the standard SVM empirical risk, where scale factors are added to the usual set of parameters defining the classifier. Feature selection is achieved by constraints encouraging the sparsity of scale factors. The resulting algorithm compares favorably to state-of-the-art feature selection procedures and demonstrates its effectiveness on a demanding facial expression recognition problem.

1 Introduction

In pattern recognition, the problem of selecting relevant variables is difficult. Optimal subset selection is attractive as it yields simple and interpretable models, but it is a combinatorial and acknowledgedly unstable procedure [2]. In some problems, it may be better to resort to stable procedures penalizing irrelevant variables. This paper introduces such a procedure applied to Support Vector Machines (SVMs). The relevance of input features may be measured by continuous weights or scale factors, which define a diagonal metric in input space. Feature selection then consists in determining a sparse diagonal metric, and sparsity can be encouraged by constraining an appropriate norm on the scale factors. Our approach can be summarized as the setting of a global optimization problem pertaining to 1) the parameters of the SVM classifier, and 2) the parameters of the feature-space mapping defining the metric in input space.
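To illustrate how scale factors define the metric seen by a kernel classifier (an illustration of the idea, not the paper's algorithm), consider an RBF kernel on scaled inputs; a zero scale factor makes the corresponding feature irrelevant:

```python
import math

def scaled_rbf(sigma):
    """RBF kernel on scaled inputs x~ = diag(sigma) x:
    k(x, x') = exp(-sum_j (sigma_j * (x_j - x'_j))**2)."""
    def k(x, xp):
        return math.exp(-sum((s * (a - b)) ** 2
                             for s, a, b in zip(sigma, x, xp)))
    return k

# A zero scale factor removes the feature from the metric entirely.
k = scaled_rbf([1.0, 0.0])
print(k((0.0, 0.0), (0.0, 5.0)))        # 1.0 -- the second feature is ignored
print(k((0.0, 0.0), (1.0, 5.0)) < 1.0)  # True -- the first feature still counts
```

Driving a scale factor to zero is therefore equivalent to deleting the feature, which is why sparsity constraints on the scale factors perform feature selection.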
As in standard SVMs, only two tunable hyper-parameters are to be set: the penalization of training errors, and the magnitude of kernel bandwidths. In this formalism we derive an efficient algorithm to monitor slack variables when optimizing the metric. The resulting algorithm is fast and stable. After presenting previous approaches to hard and soft feature selection procedures in the context of SVMs, we present our algorithm. This exposition is followed by an experimental section illustrating its performance, and by concluding remarks. 2 Feature Selection via adaptive scaling Scaling is a usual preprocessing step, which has important outcomes in many classification methods including SVM classifiers [9, 3]. It is defined by a linear transformation within the input space: x ↦ Dx, where D = diag(σ₁, …, σ_d) is a diagonal matrix of scale factors. Adaptive scaling consists in letting σ be adapted during the estimation process with the explicit aim of achieving a better recognition rate. For kernel classifiers, σ is a set of hyper-parameters of the learning process. According to the structural risk minimization principle [8], σ can be tuned in two ways: 1. estimate the parameters of the classifier by empirical risk minimization for several values of σ to produce a structure of classifiers multi-indexed by σ; select one element of the structure by finding the σ minimizing some estimate of generalization error. 2. estimate the parameters of the classifier and the hyper-parameters σ by empirical risk minimization, while a second-level hyper-parameter, say σ₀, constrains σ in order to avoid overfitting. This procedure produces a structure of classifiers indexed by σ₀, whose value is computed by minimizing some estimate of generalization error. The usual paradigm consists in computing the estimate of generalization error for regularly spaced hyper-parameter values and picking the best solution among all trials. Hence, the first approach requires intensive computation, since the trials should be completed over a d-dimensional grid of σ values. Several authors suggested to address this problem by optimizing an estimate of generalization error with respect to the hyper-parameters. For SVM classifiers, Cristianini et al. [4] first proposed to apply an iterative optimization scheme to estimate a single kernel width hyper-parameter. Weston et al. [9] and Chapelle et al. [3] generalized this approach to multiple hyper-parameters in order to perform adaptive scaling and variable selection. The experimental results in [9, 3] show the benefits of this optimization. However, relying on the optimization of generalization error estimates over many hyper-parameters is hazardous. Once optimized, the unbiased estimates become down-biased, and the bounds provided by VC-theory usually hold for kernels defined a priori (see the proviso on the radius/margin bound in [8]). Optimizing these criteria may thus result in overfitting. In the second solution considered here, the estimate of generalization error is minimized with respect to σ₀, a single (second-level) hyper-parameter, which constrains σ. The role of this constraint is twofold: control the complexity of the classifier, and encourage variable selection in input space. This approach is related to some successful soft-selection procedures, such as lasso and bridge [5] in the frequentist framework and Automatic Relevance Determination (ARD) [7] in the Bayesian framework. Note that this type of optimization procedure has been proposed for linear SVMs in both frequentist [1] and Bayesian [6] frameworks. Our method generalizes this approach to nonlinear SVMs. 3 Algorithm 3.1 Support Vector Machines The decision function provided by the SVM is f(x) = sign(h(x)), where the function h is defined as

h(x) = wᵀφ_σ(x) + b = Σᵢ αᵢ yᵢ k_σ(x, xᵢ) + b,   (1)

where k_σ(x, x') = φ_σ(x)ᵀφ_σ(x') and the parameters (w, b) are obtained by solving the following optimization problem:

min over (w, b, ξ) of (1/2)‖w‖² + C Σᵢ ξᵢ subject to yᵢ(wᵀφ_σ(xᵢ) + b) ≥ 1 − ξᵢ, ξᵢ ≥ 0, i = 1, …, n,   (2)

with φ_σ defined as φ_σ(x) = φ(Dx). In this problem setting, the error penalization C
and the parameters of the feature space mapping (typically a kernel bandwidth) are tunable hyper-parameters which need to be determined by the user. 3.2 A global optimization problem In [9, 3], adaptive scaling is performed by iteratively finding the parameters (w, b) of the SVM classifier for a fixed value of σ and minimizing a bound on the estimate of generalization error with respect to the hyper-parameters (σ, C). The algorithm minimizes 1) the SVM empirical criterion with respect to parameters and 2) an estimate of generalization error with respect to hyper-parameters. In the present approach, we avoid the enlargement of the set of hyper-parameters by letting σ be standard parameters of the classifier. Complexity is controlled by C and by constraining the magnitude of σ. The latter defines the single hyper-parameter of the learning process related to scaling variables. The learning criterion is defined as follows:

min over (w, b, ξ, σ) of (1/2)‖w‖² + C Σᵢ ξᵢ subject to yᵢ(wᵀφ_σ(xᵢ) + b) ≥ 1 − ξᵢ, ξᵢ ≥ 0, Σⱼ σⱼ^p ≤ σ₀^p, σⱼ ≥ 0.   (3)

Like in standard SVM classification, the minimization of an estimate of generalization error is postponed to a later step, which consists in picking the best solution among all trials on the two-dimensional grid of hyper-parameters (σ₀, C). In (3), the constraint on σ should favor sparse solutions. To allow σⱼ to go to zero, p should be positive. To encourage sparsity, zeroing a small σⱼ should allow a high increase of the other σ_k, k ≠ j, hence p should be small. In the limit p → 0, the constraint counts the number of non-zero scale parameters, resulting in a hard selection procedure. This choice might seem appropriate for our purpose, but it amounts to attempting to solve a highly non-convex optimization problem, where the number of local minima grows exponentially with the input dimension d. To avoid this problem, we suggest using p = 1, which is the smallest value for which the problem is convex with the linear mapping φ_σ(x) = Dx. Indeed, for linear kernels, the constraint on σ amounts to minimizing the standard SVM criterion where the ℓ₂ penalization is replaced by an ℓ₁ penalization. Hence, setting p = 1 provides the solution of the ℓ₁ SVM classifier described in [1]. For non-linear kernels however, the two solutions differ notably, since the present algorithm modifies the metric in input space, while the ℓ₁ SVM classifier modifies the metric in feature space. Finally, note that unicity can be guaranteed for p = 1 and Gaussian kernels with large bandwidths (small σ₀). 3.3 An alternated optimization scheme Problem (3) is complex; we propose to iteratively solve a series of simpler problems. The function h is first optimized with respect to the parameters (w, b) for a fixed mapping φ_σ (standard SVM problem). Then, the parameters σ of the feature space mapping are optimized while some characteristics of h are kept fixed: at step t, starting from a given σ value, the optimal (wᵗ, bᵗ) are computed. Then σ is determined by a descent algorithm. In this scheme, (wᵗ, bᵗ) are computed by solving the standard quadratic optimization problem (2). Our implementation, based on an interior point method, will not be detailed here. Several SVM retrainings are necessary, but they are faster than the usual training since the algorithm is initialized appropriately with the solutions of the preceding round.
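The constraint Σⱼ σⱼ ≤ σ₀, σⱼ ≥ 0 used for p = 1 can, in a descent scheme like the one above, be enforced by Euclidean projection onto the positive ℓ₁ ball. A sketch (our own helper, not the paper's interior-point code) using the standard sort-based projection:

```python
import numpy as np

def project_l1_ball_pos(v, z):
    """Euclidean projection of v onto {s : s >= 0, sum(s) <= z}.

    Standard sort-based simplex projection; it zeroes out small
    coordinates, mimicking the sparsifying effect of the l1 constraint.
    """
    v = np.maximum(v, 0.0)
    if v.sum() <= z:
        return v
    u = np.sort(v)[::-1]                       # sorted descending
    css = np.cumsum(u)
    # Largest index rho with u_rho * rho > css_rho - z (1-based rho).
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - z))[0][-1]
    theta = (css[rho] - z) / (rho + 1.0)       # uniform shift
    return np.maximum(v - theta, 0.0)

sigma = project_l1_ball_pos(np.array([0.9, 0.5, 0.05]), 1.0)
# -> [0.7, 0.3, 0.0]: the budget is met and the smallest factor is zeroed
```

Note how the projection drives the weakest scale factor exactly to zero while keeping the total budget σ₀, which is the soft-selection behavior the p = 1 constraint is designed to produce.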
For solving the minimization problem with respect to σ, we use a reduced conjugate gradient technique. The optimization problem was simplified by assuming that some of the other variables are fixed. We tried several versions: 1) w fixed; 2) Lagrange multipliers α fixed; 3) set of support vectors fixed. For the three versions, the optimal value of b, or at least the optimal value of the slack variables ξ, can be obtained by solving a linear program, whose optimum is computed directly (in a single iteration). We do not detail our first version here, since the two last ones performed much better. The main steps of the two last versions are sketched below. 3.4 Scaling parameters update Starting from an initial solution (σ, w, b), our goal is to update σ by solving a simple intermediate problem providing an improved solution to the global problem (3). We first assume that the Lagrange multipliers α defining w are not affected by updates of σ, so that w is defined as w = Σᵢ αᵢ yᵢ φ_σ(xᵢ). Regarding problem (3), w is sub-optimal when σ varies; nevertheless w is guaranteed to be an admissible solution. Hence, we minimize an upper bound of the original primal cost, which guarantees that any admissible update (providing a decrease of the cost) of the intermediate problem will provide a decrease of the cost of the original problem. The intermediate optimization problem is stated as follows:

min over (σ, b, ξ) of (1/2) Σᵢⱼ αᵢ αⱼ yᵢ yⱼ k_σ(xᵢ, xⱼ) + C Σᵢ ξᵢ subject to yᵢ(Σⱼ αⱼ yⱼ k_σ(xᵢ, xⱼ) + b) ≥ 1 − ξᵢ, ξᵢ ≥ 0, Σⱼ σⱼ ≤ σ₀, σⱼ ≥ 0.   (4)

Solving this problem is still difficult, since the cost is a complex non-linear function of the scale factors. Hence, as stated above, σ will be updated by a descent algorithm. The latter requires the evaluation of the cost and its gradient with respect to σ. In particular, this means that we should be able to compute ξ and ∂ξ/∂σ for any value of σ. For given values of σ and α, ξ is the solution of the following problem:

min over (b, ξ) of Σᵢ ξᵢ subject to yᵢ(Σⱼ αⱼ yⱼ k_σ(xᵢ, xⱼ) + b) ≥ 1 − ξᵢ, ξᵢ ≥ 0,   (5)

whose dual formulation, writing hᵢ = Σⱼ αⱼ yⱼ k_σ(xᵢ, xⱼ), is

max over η of Σᵢ ηᵢ(1 − yᵢ hᵢ) subject to Σᵢ ηᵢ yᵢ = 0, 0 ≤ ηᵢ ≤ 1.   (6)

This linear problem is solved directly by the following algorithm: 1) sort the outputs hᵢ in descending order for all positive examples on the one side and for all negative examples on the other side; 2) compute the pairwise sums of the sorted values; 3) set the slacks ξᵢ for all positive and negative examples whose pairwise sum is positive. With ξ known, Σᵢ ξᵢ and its derivative with respect to σ are easily computed. Parameters σ are then updated by a reduced conjugate gradient technique, i.e. a conjugate gradient algorithm ensuring that the set of constraints on σ is always verified. 3.5 Updating Lagrange multipliers Assume now that only the support vectors remain fixed while optimizing σ. This assumption is used to derive a rule to update, at reasonable computing cost, the Lagrange multipliers α together with σ, by computing ∂α/∂σ. At (σ, α, b), the following holds [3]: 1. for support vectors of the first category,
yᵢ(Σⱼ αⱼ yⱼ k_σ(xᵢ, xⱼ) + b) = 1;   (7)

2. for support vectors of the second category (such that αᵢ = C), ξᵢ > 0. From these equations, and the assumption that support vectors remain support vectors (and that their category does not change), one derives a system of linear equations defining the derivatives of α and b with respect to σ [3]: 1. for support vectors of the first category,

Σⱼ (∂αⱼ/∂σ) yⱼ k_σ(xᵢ, xⱼ) + Σⱼ αⱼ yⱼ ∂k_σ(xᵢ, xⱼ)/∂σ + ∂b/∂σ = 0;   (8)

2. for support vectors of the second category, ∂αᵢ/∂σ = 0. 3. Finally, the system is completed by stating that the Lagrange multipliers should obey the constraint Σᵢ αᵢ yᵢ = 0, hence

Σᵢ (∂αᵢ/∂σ) yᵢ = 0.   (9)

The value of α is updated from these equations, and the step size is limited to ensure that 0 ≤ αᵢ ≤ C for support vectors of the first category. Hence, in this version, w is also an admissible sub-optimal solution regarding problem (3). 4 Experiments In the experiments reported below, we used p = 1 for the constraint on σ in (3). The scale parameters were optimized with the last version, where the set of support vectors is assumed to be fixed. Finally, the hyper-parameters (σ₀, C)
were chosen using the span bound [3]. Although the value of the bound itself was not a faithful estimate of test error, the average loss induced by using the minimizer of these bounds was quite small. 4.1 Toy experiment In [9], Weston et al. compared two versions of their feature selection algorithm to standard SVMs and filter methods (i.e. preprocessing methods selecting features based on Pearson correlation coefficients, the Fisher criterion score, or the Kolmogorov-Smirnov statistic). Their artificial data benchmarks provide a basis for comparing our approach with theirs, which is based on the minimization of error bounds. Two types of distributions are provided, whose detailed characteristics are not given here. In the linear problem, 6 dimensions out of 202 are relevant. In the nonlinear problem, two features out of 52 are relevant. For each distribution, 30 experiments are conducted, and the average test recognition rate measures the performance of each method. For both problems, standard SVMs achieve a 50% error rate in the considered range of training set sizes. Our results are shown in Figure 1. Figure 1: Results obtained on the benchmarks of [9]. Left: linear problem; right: nonlinear problem. The number of training examples is represented on the x-axis, and the average test error rate on the y-axis. Our test performances are qualitatively similar to the ones obtained by gradient descent on the radius/margin bound in [9], which are only improved by the forward selection algorithm minimizing the span bound. Note however that Weston et al.'s results are obtained after a correct number of features was specified by the user, whereas the present results were obtained fully automatically. Knowing the number of features that should be selected by the algorithm is somewhat similar to selecting the optimal value of the parameter p for each (σ₀, C).
In the non-linear problem, an average of 26.5 features is selected for the smallest training sets, and an average of 6.6 features for the largest. These figures show that although our feature selection scheme is effective, it should be more stringent: a smaller value of p would be more appropriate for this type of problem. The two relevant variables are selected in a majority of cases for the smallest sample sizes, more reliably for n = 50, and for the two largest sample sizes they are even always ranked first and second. Regarding training times, the optimization of σ required an average of over 100 times more computing time than standard SVM fitting for the linear problem and 40 times for the nonlinear problem. These increases scale less than linearly with the number of variables, and can certainly still be improved. 4.2 Expression recognition We also tested our algorithm on a more demanding task to test its ability to handle a large number of features. The considered problem consists in recognizing the happiness expression among the five other facial expressions corresponding to universal emotions (disgust, sadness, fear, anger, and surprise). The data sets are made of gray-level images of frontal faces, with standardized positions of eyes, nose and mouth. The training set comprises both positive and negative images, as does the test set. We used the raw pixel representation of images, resulting in 4200 highly correlated features. For this task, the accuracy of standard SVMs is 92.6% (11 test errors). The recognition rate is not significantly affected by our feature selection scheme (10 errors), but more than 1300 pixels are considered to be completely irrelevant at the end of the iterative procedure (estimating σ required about 80 times more computing time than standard SVM training). This selection brings some important clues for building relevant attributes for the facial expression recognition task.
Figure 2 represents the scaling factors σ, where black is zero and white represents the highest value. We see that, according to the classifier, the relevant areas for recognizing the happiness expression are mainly in the mouth area, especially on the mouth wrinkles, and to a lesser extent in the white of the eyes (which detects open eyes) and the outer eyebrows. On the right-hand side of this figure, we display masked support faces, i.e. support faces scaled by the expression mask. Although many important features regarding the identity of people are lost, the expression is still visible on these faces. Areas irrelevant for the recognition task (forehead, nose, and upper cheeks) have been erased or softened by the expression mask. 5 Conclusion We have introduced a method to perform automatic relevance determination and feature selection in nonlinear SVMs. Our approach considers that the metric in input space defines a set of parameters of the SVM classifier. The update of the scale factors is performed by iteratively minimizing an approximation of the SVM cost. The latter is efficiently minimized with respect to slack variables when the metric varies. The approximation of the cost function is tight enough to allow large updates of the metric when necessary. Furthermore, because at each step our algorithm guarantees the global cost to decrease, it is stable. Figure 2: Left: expression mask of happiness provided by the scaling factors σ; Right, top row: two positive masked support faces; Right, bottom row: four negative masked support faces. Preliminary experimental results show that the method provides sensible results in a reasonable time, even in very high dimensional spaces, as illustrated on a facial expression recognition task. In terms of test recognition rates, our method is comparable with [9, 3]. Further comparisons are still needed to demonstrate the practical merits of each paradigm.
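The masked support faces of Figure 2 are simply the elementwise product of a face image with the scale-factor mask; a sketch with made-up 2×2 "images" (the paper uses 4200-pixel faces):

```python
import numpy as np

# Hypothetical tiny "face" and relevance mask (values are illustrative).
face = np.array([[0.8, 0.2],
                 [0.5, 0.9]])
mask = np.array([[1.0, 0.0],    # top-right pixel judged irrelevant (sigma = 0)
                 [0.5, 1.0]])   # bottom-left pixel softened

# Irrelevant pixels are erased, weakly relevant ones are attenuated.
masked_face = face * mask
```

Pixels with a zero scale factor vanish entirely, which is why identity cues disappear from the masked faces while the expression-bearing regions survive.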
Finally, it may also be beneficial to mix the two approaches: the method of Cristianini et al. [4] could be used to determine σ₀ and C. The resulting algorithm would differ from [9, 3], since the relative relevance of each feature (as measured by σ) would be estimated by empirical risk minimization, instead of being driven by an estimate of generalization error. References [1] P. S. Bradley and O. L. Mangasarian. Feature selection via concave minimization and support vector machines. In Proc. 15th International Conf. on Machine Learning, pages 82–90. Morgan Kaufmann, San Francisco, CA, 1998. [2] L. Breiman. Heuristics of instability and stabilization in model selection. The Annals of Statistics, 24(6):2350–2383, 1996. [3] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee. Choosing multiple parameters for support vector machines. Machine Learning, 46(1):131–159, 2002. [4] N. Cristianini, C. Campbell, and J. Shawe-Taylor. Dynamically adapting kernels in support vector machines. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11. MIT Press, 1999. [5] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics. Springer, 2001. [6] T. Jebara and T. Jaakkola. Feature selection and dualities in maximum entropy discrimination. In Uncertainty in Artificial Intelligence, 2000. [7] R. M. Neal. Bayesian Learning for Neural Networks, volume 118 of Lecture Notes in Statistics. Springer, 1996. [8] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer Series in Statistics. Springer, 1995. [9] J. Weston, S. Mukherjee, O. Chapelle, M. Pontil, T. Poggio, and V. Vapnik. Feature selection for SVMs. In Advances in Neural Information Processing Systems 13. MIT Press, 2000.
|
2002
|
191
|
2,204
|
Adaptive Classification by Variational Kalman Filtering Peter Sykacek Department of Engineering Science University of Oxford Oxford, OX1 3PJ, UK psyk@robots.ox.ac.uk Stephen Roberts Department of Engineering Science University of Oxford Oxford, OX1 3PJ, UK sjrob@robots.ox.ac.uk Abstract We propose in this paper a probabilistic approach for adaptive inference of generalized nonlinear classification that combines the computational advantage of a parametric solution with the flexibility of sequential sampling techniques. We regard the parameters of the classifier as latent states in a first order Markov process and propose an algorithm which can be regarded as a variational generalization of standard Kalman filtering. The variational Kalman filter is based on two novel lower bounds that enable us to use a non-degenerate distribution over the adaptation rate. An extensive empirical evaluation demonstrates that the proposed method is capable of inferring competitive classifiers both in stationary and non-stationary environments. Although we focus on classification, the algorithm is easily extended to other generalized nonlinear models. 1 Introduction The demand for adaptive learning methods, e.g. for use in brain computer interfaces (BCIs) [15], has recently triggered a considerable interest in such algorithms. We may approach adaptive learning with algorithms that were designed for stationary environments and use learning rates to make these methods adaptive. These approaches can be traced back to early work on learning algorithms (e.g. [1]). A more recent account of this approach is [17], which combines the probabilistic method of sequential variational inference ([9]) and a forgetting factor to obtain an adaptive learning method. Probabilistic or Bayesian methods also allow for a completely different interpretation of adaptive learning. We may regard the model coefficients as latent (i.e. unobserved) states of a first order Markov process.
w_t = w_{t−1} + ε_t,  ε_t ∼ N(0, λ⁻¹ I).   (1)

The posterior distribution, p(w_t | D_t), at state t summarizes all information obtained about the model. This posterior and the conditional distribution, p(w_{t+1} | w_t), represent the prior for the following state. The conditional distribution can be thought of as additive process or state noise with precision λ. Predictions are obtained by a probabilistic observation model p(t_n | x_n, w). Using this model, we obtain an appropriate adaptation rate by hierarchical Bayesian inference of the process noise precision λ. Equation (1) suggests that we may interpret adaptive Bayesian inference as a generalization of the well known Kalman filter ([12]). This view of adaptive learning has been used by [6], who use extended Kalman filtering to obtain a Laplace approximation of the posterior over w and maximum likelihood II ([3]) for inference of the adaptation rate. Another generalization of Kalman filtering are the recently quite popular particle filters (e.g. [7]). Being Monte Carlo methods, particle filters have over Laplace approximations the advantage of much greater flexibility. This comes however at the expense of a higher representational and computational complexity. To combine the flexibility of particle filtering with the computational advantage of parametric methods, we propose a variational approximation (e.g. [11], [2] and [8]) for inference of the Markov process in Equation (1). Unlike maximum likelihood II, the variational Kalman filter allows us to have a non-degenerate distribution over the process noise precision. We derive in this paper a variational Kalman filter classifier and show with an extensive empirical evaluation that the resulting classifiers obtain excellent generalization accuracies both in stationary and non-stationary domains. 2 Methods 2.1 A generalized nonlinear classifier Classification is a prediction problem, where some regressor, x, predicts the expectation of a response variable t.
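Before turning to the classifier, the latent-state view of Equation (1) can be illustrated with a minimal sketch (our own toy example, not the paper's algorithm): a scalar random-walk Kalman filter tracking a drifting coefficient from observations, with the process precision λ and observation noise assumed known here, whereas the paper infers λ.

```python
def kalman_step(mu, var, y, lam, obs_var):
    """One predict/update step of a scalar random-walk Kalman filter.

    State model:  w_t = w_{t-1} + N(0, 1/lam)   (cf. Equation (1))
    Observation:  y_t = w_t + N(0, obs_var)
    """
    var_pred = var + 1.0 / lam              # predict: state noise inflates variance
    gain = var_pred / (var_pred + obs_var)  # Kalman gain for a direct observation
    mu_new = mu + gain * (y - mu)
    var_new = (1.0 - gain) * var_pred
    return mu_new, var_new

# Track a coefficient that jumps from 0 to 5 halfway through.
mu, var = 0.0, 1.0
for step in range(200):
    truth = 0.0 if step < 100 else 5.0
    mu, var = kalman_step(mu, var, truth, lam=10.0, obs_var=1.0)
```

A large λ (small state noise) makes the tracker sluggish but accurate in stationary phases; a small λ lets it follow abrupt drifts quickly, which is exactly the trade-off the adaptation-rate inference below is meant to resolve automatically.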
Since a categorical polytomous solution is easily recovered from dichotomous solutions ([16], pages 44-45), we restrict all further discussions to dichotomous classification with binary responses t ∈ {0, 1}. We thus have only one degree of freedom and predict the binary probability, P(t = 1 | x, w), which depends on the model parameters w. To obtain a flexible discriminant, we use a generalized nonlinear model, i.e. a radial basis function (RBF) network ([14] and [5]), with logistic output transformation (Equation (3)):

a = wᵀφ_Λ(x),   (2)
P(t = 1 | x, w) = g(a) = 1 / (1 + exp(−a)).   (3)

The classifier has a nonlinear feature space φ_Λ, which for reasons of adaptivity depends on the basis parameters Λ, and a linear mapping w into the latent space a. We allow for Gaussian basis functions, i.e. φⱼ(x) = exp(−κ ‖x − cⱼ‖²), or thin plate splines, i.e. φⱼ(x) = ‖x − cⱼ‖² log ‖x − cⱼ‖. Both basis functions are parameterized by their center locations cⱼ. Since we want to have a simple unimodal posterior over model parameters, we update the coefficients of the basis set randomly according to a Metropolis Hastings kernel ([13]) and solve for the conditional posterior over w analytically. 2.2 The variational Kalman filter In order to ease discussion of adaptive inference, we illustrate the dependencies implied by Equation (1) in Figure 1 as a directed acyclic graph (DAG). Figure 1: This figure illustrates adaptive inference as a directed acyclic graph. The coefficients of the classifier, w, are assumed to be Gaussian, following a first order Markov process. The hyper-parameter λ is given a Gamma prior specified by parameters a and b. In accordance with Kalman filtering, we assume a Gaussian posterior at time t − 1 with mean μ_{t−1} and precision Λ_{t−1}, and zero mean Gaussian state noise with isotropic precision λ. Inference of λ is based on a "flat" proper Gamma prior specified by parameters a and b. In order to obtain reasonable posteriors over λ, we follow [10] and assume constant adaptation within a window of size W. The proposed variational Bayesian approach ignores the anti-causal information flow and is thus based on maximizing a lower bound on the logarithmic model evidence of a windowed Kalman filter. Following these assumptions, we obtain the expression for the log evidence in Equation (4) by substituting the generalized nonlinear model (Equations (2) to (3)) into the formulation of adaptive Bayesian learning (1). We then have to make all distributions explicit and integrate over all model coefficients, which is done analytically over all prior states:

log p(t₁, …, t_W) = log ∫∫ [∏ₙ P(tₙ | xₙ, w)] N(w; μ_{t−1}, Λ_{t−1}⁻¹ + λ⁻¹ I) Γ(λ; a, b) dw dλ.   (4)

The structure of Equation (4) suggests that the approximate posterior q(w) can be chosen to be Gaussian and the approximate posterior q(λ) can be chosen to be a Gamma distribution. These functional forms do however not simply result from a mean field approximation of the posterior as q(w) q(λ). In order to obtain the required conjugacy, we have to use lower bounds for the probability of the target label, P(tₙ | xₙ, w), and for both the determinant and the quadratic exponent of the marginal Gaussian in (4). 2.3 Variational lower bounds In order to achieve conjugacy with a Gaussian distribution, we use the lower bound for the logistic sigmoid proposed in [9]:

log g(a) ≥ log g(ξ) + (a − ξ)/2 − λ(ξ)(a² − ξ²),  with λ(ξ) = tanh(ξ/2) / (4ξ),   (5)

in which the ξₙ are the variational parameters of a locally linear expansion in a² of every prediction contained in the window. In order to get expressions that are conjugate with a Gamma distribution over the process noise precision λ, we derive two novel lower bounds. Assuming a d-dimensional parameter vector w, the log-normalization term is bounded by an expression linear in log λ and λ,

−(1/2) log |Λ_{t−1}⁻¹ + λ⁻¹ I| ≥ (d/2) log λ + c₁(ν) + c₂(ν) λ,   (6)

and the quadratic exponent by an expression linear in λ,

−(1/2)(w − μ_{t−1})ᵀ(Λ_{t−1}⁻¹ + λ⁻¹ I)⁻¹(w − μ_{t−1}) ≥ c₃(w, ν) + c₄(w, ν) λ,   (7)

which are expressions in λ and log λ and thus conjugate with a Gamma distribution. Both bounds are expanded in the identical parameter ν, which is justified since both are linear expansions in λ, and maximization must thus lead to identical values. Using these lower bounds together with a mean field assumption, q(w, λ) = q(w) q(λ), and the usual Jensen inequalities, we immediately obtain a negative free energy as a lower bound of the log evidence in Equation (4). For reasons of brevity we do not include this expression here. 2.4 Parameter updates In order to distinguish between the parameters of the prior and posterior distributions, we henceforth denote the latter with a superscript ∗. Inference requires maximizing the negative free energy with respect to all variational parameters.
These are the coefficients of the Gaussian posterior q(w), the W parameters ξₙ in the bounds of the logistic sigmoid, the coefficients of the Gamma posterior q(λ) over the noise process precision, and the parameter ν in the Gamma conjugacy bounds. Maximization with respect to q(w) results in a Gaussian distribution with precision Λ* and mean μ*:

Λ* = Λ̃ + 2 Σₙ λ(ξₙ) φ_Λ(xₙ) φ_Λ(xₙ)ᵀ,  μ* = Λ*⁻¹ (Λ̃ μ_{t−1} + Σₙ (tₙ − 1/2) φ_Λ(xₙ)),   (8)

where Λ̃ denotes the effective prior precision obtained from the bounds (6) and (7). Maximization with respect to q(λ) results in a Gamma distribution with location parameter a* and scale parameter b*:

a* = a + Wd/2,   (9)
b* = b + (1/2)[(μ* − μ_{t−1})ᵀ(μ* − μ_{t−1}) + tr(Λ*⁻¹) + tr(Λ_{t−1}⁻¹)].

According to [9], maximization with respect to ξₙ leads to

ξₙ² = φ_Λ(xₙ)ᵀ (Λ*⁻¹ + μ* μ*ᵀ) φ_Λ(xₙ).   (10)

Maximization with respect to the variational parameter ν leads for both bounds to

ν = a*/b*.   (11)

In order to allow the basis mapping in Equation (2) to track modifications in the input data distributions, we propose the perturbation Λ' = Λ + ε, where ε is drawn from a Gaussian, and accept the proposal according to the probability

A(Λ', Λ) = min{1, exp(F(Λ') − F(Λ))},   (12)

where F denotes the negative free energy. If we assume that the negative free energy describes the log evidence exactly, this is a Metropolis Hastings kernel ([13]) that leaves the marginal posterior over Λ invariant. We could thus represent the marginal posterior with random samples. For computational reasons however, we use the scheme only for random updates of Λ. An algorithm for parameter inference will first propose a random update of Λ and then iterate maximizations according to Equation (8) to Equation (11) until we observe convergence of the negative free energy. Alternatively we can use a fixed number of iterations, for which our experiments suggest that a few iterations suffice. 2.5 Model predictions Since we do not know the response t when predicting, we have to sum the negative free energy over t. This results in a new expression for μ*, which we obtain from Equation (8) by dropping the term that depends on t. Due to the dependency on ξ, maximization with respect to q(w) has to alternate with maximization with respect to ξ, the latter again being done according to Equation (10). Having reached convergence, we obtain an approximate log probability for t by taking the expectation of the bound of the sigmoid in Equation (5) with respect to q(w) and maximizing with respect to ξ:

log P̃(t | x) = ⟨ log g(ξ) + (a_t − ξ)/2 − λ(ξ)(a_t² − ξ²) ⟩_{q(w)},   (13)

where a_t denotes the latent activation for response t. Exponentiating the approximate log probabilities results in a sub-probability measure over t, with P̃(t = 0 | x) + P̃(t = 1 | x) ≤ 1, the difference to one representing an additional uncertainty about t introduced by the approximation of the logistic sigmoid. 3 Experiments All experiments reported in this section use a model with Gaussian basis functions with fixed precision κ. For updating the basis centers, we use zero mean Gaussian random variates. The initial prior over parameters is a zero mean Gaussian with isotropic precision. For maximizing the negative free energy we use a fixed number of iterations. The first experiment aims at obtaining a parametrization for a, b and the window length, W, that allows us to make inferences of the process noise that are insensitive to the actual "drift" of the problem.
We use for that purpose the test set from the synthetic problem in [16].¹ The samples of this balanced problem are reshuffled such that consecutive class labels differ. In order to get a non-stationarity, we swap the class labels in the second half of the data. The results shown in Figure 2 are obtained with the chosen Gamma prior over λ. We propose these settings together with a moderate window size W, because this is a good compromise between fast tracking and high stationary accuracy. ¹This data set can be obtained at http://www.stats.ox.ac.uk/pub/PRNN/. Figure 2: Results obtained on Ripley's synthetic data set with swapped class labels after sample 500 (both panels titled "Simulations using σλ=1e+003", with curves for window sizes 1, 5, 10, 15 and 20). The top graph shows the expected value of the precision of the noise process, E[λ], for different window sizes (i.e. for different numbers of samples used for inferring the adaptation rate). The bottom graph shows the instantaneous generalization accuracy estimated in a sliding window. The prior over λ is a Gamma distribution specified by its expectation and variance. We are now ready to compare the algorithm with an equivalent static classifier using several public data sets and classification of single trial EEG which, due to learning effects in humans, is known to be non-stationary. In order to avoid that the model has an influence on the results, we compare the generalization accuracy of the variational Kalman filter classifier (vkf) with an identical non-adaptive model. Inference of the static model is based on sequential variational learning ([9]). We obtain sequential variational inference (svi) from our approach by setting λ in Equation (1) to infinity.
The comparisons are evaluated for significance using McNemar's test, a method for analyzing paired results that is suggested in [16]. The comparison uses vehicle data2, satellite image data, Johns Hopkins University ionosphere data, balance scale weight and distance data and the wine recognition database, all taken from the StatLog database, which is available at the UCI repository ([4]). The satellite image data set is used as provided, with 4435 samples in the training set and 2000 samples in the test set. The vehicle data are merged such that we have 500 samples in the training set and 252 in the test set. The other data sets were split into two equally sized parts, each used in turn as training set and independent test set. We also use the Pima diabetes data set from [16]3. Table 1 compares the generalization accuracies (in fractions) obtained with the variational Kalman filter with those obtained with sequential variational inference. The probability p of the null hypothesis that both classifiers are equal suggests that only the differences for the Balance scale and the Pima Indian data sets are significant, with either method being better in one case. Since the generalization accuracies of both methods are almost identical, we conclude that if applied to

2Vehicle data was donated to StatLog by the Turing Institute, Glasgow, Scotland.
3This data set can be obtained at http://www.stats.ox.ac.uk/pub/PRNN/.

Data set             vkf    svi    p
J.H.U. ionosphere    0.87   0.88   0.41
Satellite image      0.81   0.81   0.29
Balance scale        0.89   0.87   0.03
Pima diabetes        0.76   0.80   0.03
Vehicle              0.77   0.77   0.42
Wine                 0.97   0.95   0.25

Table 1: Generalization accuracies obtained with the variational Kalman filter (vkf) and sequential variational inference (svi); p is the probability of the null hypothesis that both classifiers perform equally.
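McNemar's test compares two classifiers on the same test set through the counts of discordant predictions (points that exactly one of the two classifiers gets right). A minimal sketch of the exact binomial version follows; the paper does not state which variant of the test was used, so take this as one reasonable instantiation.

```python
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact McNemar test.

    b: number of test points classifier A gets right and classifier B wrong.
    c: number of test points B gets right and A wrong.
    Under the null hypothesis of equal accuracy, the discordant counts are
    Binomial(b + c, 1/2); the p-value is the doubled smaller tail.
    """
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)
```

For example, 10 discordant points all favoring one classifier give a p-value of about 0.002, while an even 5-5 split is maximally non-significant.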
Cognitive task              vkf    svi    p
rest/move, no feedback      0.69   0.61   0.00
rest/move, feedback         0.71   0.70   0.39
move/math, no feedback      0.69   0.62   0.00
move/math, feedback         0.64   0.60   0.00

Table 2: Generalization accuracies obtained for classification of single-trial EEG show that the variational Kalman filter significantly improves the results in three out of four cases.

stationary problems, we may expect the variational Kalman filter to obtain generalization accuracies that are similar to those of static methods. In order to assess the variational Kalman filter on a non-stationary problem, we apply it to classification of single-trial EEG, a problem arising in brain-computer interfaces (BCIs). The data for this experiment have been obtained from eight untrained subjects that perform two different task combinations (rest EEG vs. imagined movements, and imagined movements vs. a mathematical task), once without and once with visual feedback. For each cognitive experiment, every pair of tasks is repeated ten times. We classify on a one-second basis and thus have a fixed number of samples per subject and task combination. The regressors in this experiment are three reflection coefficients (a parametrization of autoregressive models, see e.g. [18]). The comparison in Table 2 reports within-subject results obtained by two-fold cross testing. Using half of the data, we allow for convergence of the methods before estimating the generalization accuracy on the other half of the data. The generalization accuracies in Table 2 are averaged across subjects. We obtain a significant improvement with the variational Kalman filter in three out of four experiments. 4 Discussion We propose in this paper a parametric approach for adaptive inference of nonlinear classification. Our algorithm can be regarded as a variational generalization of Kalman filtering, which we obtain by using two novel lower bounds that allow us to have a non-degenerate distribution over the adaptation rate.
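The two-fold cross testing protocol (train on one half, test on the other, then swap the roles of the halves and average) can be sketched as follows; the majority-class "classifier" in the demonstration is only a placeholder for the actual models.

```python
import numpy as np

def two_fold_cross_test(X, y, fit, score):
    """Train on each half of the data, test on the other, return mean score."""
    half = len(y) // 2
    folds = [(slice(0, half), slice(half, None)),
             (slice(half, None), slice(0, half))]
    accs = []
    for tr, te in folds:
        model = fit(X[tr], y[tr])
        accs.append(score(model, X[te], y[te]))
    return float(np.mean(accs))

# Placeholder "classifier": predict the majority class of the training half.
fit = lambda X, y: int(round(float(y.mean())))
score = lambda m, X, y: float((y == m).mean())

X_demo = np.arange(8).reshape(-1, 1)
y_demo = np.array([1, 1, 1, 0, 0, 0, 0, 0])
acc = two_fold_cross_test(X_demo, y_demo, fit, score)
```

Per-subject accuracies obtained this way would then be averaged across subjects, as done for Table 2.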
Inference is done by iteratively maximizing a lower bound of the log evidence. As a result, we obtain an approximate posterior that is a product of a multivariate Gaussian and a Gamma distribution. Our simulations have shown that the approach is capable of inferring classifiers that have good generalization performance in both stationary and non-stationary domains. In situations with moderately sized latent spaces, e.g. in the BCI experiments reported above, prediction and parameter updates can be done in real time on conventional PCs. Although we focus on classification, the algorithm is based on general ideas and is thus easily applicable to other generalized nonlinear models. Acknowledgements We would like to express our gratitude to the anonymous reviewers of this paper for their valuable suggestions for improving it. Peter Sykacek is currently supported by grant Nr. F46/399, kindly provided by the BUPA foundation. References [1] S.-I. Amari. A theory of adaptive pattern classifiers. IEEE Transactions on Electronic Computers, 16:299–307, 1967. [2] H. Attias. Inferring parameters and structure of latent variable models by variational Bayes. In Proc. 15th Conf. on Uncertainty in AI, 1999. [3] J. O. Berger. Statistical Decision Theory and Bayesian Analysis. Springer, New York, 1985. [4] C.L. Blake and C.J. Merz. UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html, 1998. University of California, Irvine, Dept. of Information and Computer Sciences. [5] D. S. Broomhead and D. Lowe. Multivariable functional interpolation and adaptive networks. Complex Systems, 2:321–355, 1988. [6] J.F.G. de Freitas, M. Niranjan, and A.H. Gee. Regularisation in sequential learning algorithms. In M. Jordan, M. Kearns, and S. Solla, editors, Advances in Neural Information Processing Systems (NIPS 10), pages 458–464, 1998. [7] A. Doucet, J. F. G. de Freitas, and N. Gordon, editors. Sequential Monte Carlo Methods in Practice.
Springer-Verlag, 2001. [8] Z. Ghahramani and M. J. Beal. Variational inference for Bayesian mixtures of factor analysers. In Advances in Neural Information Processing Systems 12, pages 449–455, 2000. [9] T. S. Jaakkola and M. I. Jordan. Bayesian parameter estimation via variational methods. Statistics and Computing, 10:25–37, 2000. [10] A.H. Jazwinski. Adaptive filtering. Automatica, pages 475–485, 1969. [11] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. In M. I. Jordan, editor, Learning in Graphical Models. MIT Press, Cambridge, MA, 1999. [12] R. E. Kalman. A new approach to linear filtering and prediction problems. Trans. ASME, J. Basic Eng., 82:35–45, 1960. [13] N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller. Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21:1087–1091, 1953. [14] J. Moody and C. J. Darken. Fast learning in networks of locally-tuned processing units. Neural Computation, 1:281–294, 1989. [15] W. Penny, S. Roberts, E. Curran, and M. Stokes. EEG-based communication: a pattern recognition approach. IEEE Trans. Rehab. Eng., pages 214–216, 2000. [16] B. D. Ripley. Pattern Recognition and Neural Networks. Cambridge University Press, Cambridge, 1996. [17] Masa-aki Sato. Online model selection based on the variational Bayes. Neural Computation, pages 1649–1681, 2001. [18] P. Sykacek and S. Roberts. Bayesian time series classification. In T.G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 937–944. MIT Press, 2002.
A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages Dörthe Malzahn Manfred Opper Informatics and Mathematical Modelling, Technical University of Denmark, R.-Petersens-Plads, Building 321, DK-2800 Lyngby, Denmark Neural Computing Research Group, School of Engineering and Applied Science, Aston University, Birmingham B4 7ET, United Kingdom dm@imm.dtu.dk opperm@aston.ac.uk Abstract We apply the replica method of Statistical Physics, combined with a variational method, to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages obtained by Monte-Carlo sampling. 1 Introduction The application of tools from Statistical Mechanics to analyzing the average-case performance of learning algorithms has a long tradition in the Neural Computing and Machine Learning community [1, 2]. When data are generated from a highly symmetric distribution and the dimension of the data space is large, methods of the statistical mechanics of disordered systems allow for the computation of learning curves for a variety of interesting and nontrivial models, ranging from simple perceptrons to Support Vector Machines. Unfortunately, the specific power of this approach, which is able to give explicit distribution-dependent results, also represents a major drawback for practical applications. In general, data distributions are unknown and their replacement by simple model distributions might only reveal some qualitative behavior of the true learning performance. In this paper we suggest a novel application of Statistical Mechanics techniques to a topic within Machine Learning for which the distribution over data is well known and controlled by the experimenter: it is given by the resampling of an existing dataset in the so-called bootstrap approach [3].
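As a concrete reminder of what the Monte-Carlo bootstrap computes, here is a minimal sketch estimating the standard error of a statistic by resampling with replacement; the function names and settings are illustrative.

```python
import numpy as np

def bootstrap_se(data, stat, n_boot=2000, seed=0):
    """Monte-Carlo bootstrap estimate of the standard error of `stat`.

    Each replicate resamples len(data) points with replacement and
    re-evaluates the statistic; the spread of the replicates estimates
    the sampling variability of the statistic.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = [stat(data[rng.integers(0, n, size=n)]) for _ in range(n_boot)]
    return float(np.std(reps))

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=2.0, size=400)
se = bootstrap_se(x, np.mean)
# For the sample mean, theory gives SE = sigma / sqrt(n) = 2 / 20 = 0.1.
```

The analytical approach developed in this paper aims to replace exactly this kind of repeated retraining loop.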
Creating bootstrap samples of the original dataset by random resampling with replacement and retraining the statistical model on the bootstrap sample is a widely applicable statistical technique. By replacing averages over the true unknown distribution of data with suitable averages over the bootstrap samples, one can estimate various properties such as the bias, the variance and the generalization error of a statistical model. While in general bootstrap averages can be approximated by Monte-Carlo sampling, it is useful to have analytical approximations as well, which avoid the time-consuming retraining of the model for each sample. Existing analytical approximations (based on asymptotic techniques), such as the delta method and the saddle-point method (see e.g. [5]), usually require explicit analytical formulas for the estimators of the parameters of a trained model. These may not be easily obtained for more complex models in Machine Learning. In this paper, we discuss an application of the replica method of Statistical Physics [4] which, combined with a variational method [6], can produce approximate averages over the random drawings of bootstrap samples. Explicit formulas for parameter estimates are avoided and replaced by the implicit condition that such estimates are expectations with respect to a certain Gibbs distribution, to which the methods of Statistical Physics can be well applied. We demonstrate the method for the case of regression with Gaussian processes (GP), a kernel method that has gained high popularity in the Machine Learning community in recent years [7], and compare our analytical results with results obtained by Monte-Carlo sampling. 2 Basic setup and Gibbs distribution We will keep the notation in this section fairly general, indicating that most of the theory can be developed for a broader class of models. We assume that a fixed set of data
is modeled by a likelihood of the type

$P(D \mid \theta) = \prod_{i=1}^{N} e^{-E(\theta;\, z_i)}$  (1)

where the "training error" $E(\theta; z_i)$ is parametrized by a parameter $\theta$ (which can be a finite or even infinite dimensional object) which must be estimated from the data. We will later specialize to supervised learning problems where each data point $z_i = (x_i, y_i)$ consists of an input $x_i$ (usually a finite dimensional vector) and a real label $y_i$. In this case, $\theta$ stands for a function $f(x)$ which models the outputs, or for the parameters (like the weights of a neural network) which parameterize such functions. We will later apply our approach to the mean square error given by

$E(\theta; z_i) = \frac{1}{2\sigma^2}\,(y_i - f(x_i))^2.$  (2)

The first basic ingredient of our approach is the assumption that the estimator for the unknown "true" function can be represented as the mean with respect to a posterior distribution over all possible $\theta$'s. This avoids the problem of writing down explicit, complicated formulas for estimators. To be precise, we assume that the statistical estimator $\hat\theta(D)$ (which is based on the training set $D$) can be represented as the expectation of $\theta$ with respect to the measure

$P(\theta \mid D) = \frac{1}{Z}\, \mu(\theta) \prod_{i=1}^{N} e^{-E(\theta;\, z_i)},$  (3)

which is constructed from a suitable prior distribution $\mu(\theta)$ and the likelihood (1). Here

$Z = \int d\mu(\theta) \prod_{i=1}^{N} e^{-E(\theta;\, z_i)}$  (4)

denotes a normalizing partition function. Our choice of (3) does not mean that we restrict ourselves to Bayesian estimators. By introducing specific ("temperature" like) parameters in the prior and the likelihood, the measure (3) can be strongly concentrated at its mean, such that maximum likelihood/MAP estimators can be included in our framework. 3 Bootstrap averages We will explain our analytical approximation to resampling averages for the case of supervised learning problems.
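For the mean square training error (2) with a Gaussian process prior, the posterior mean in (3) has a well-known closed form, which makes the "estimator as Gibbs average" concrete. A minimal sketch with an RBF kernel; the kernel width and noise level are illustrative, not the settings used later in the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Squared-exponential covariance between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def gp_fit_predict(X, y, X_test, noise=0.1, gamma=1.0):
    """Posterior mean and variance of GP regression with square error.

    mean(x*) = k(x*)^T (K + noise*I)^{-1} y
    var(x*)  = k(x*,x*) - k(x*)^T (K + noise*I)^{-1} k(x*)
    """
    K = rbf_kernel(X, X, gamma) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    Ks = rbf_kernel(X_test, X, gamma)
    mean = Ks @ alpha
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

X_train = np.array([[0.0], [1.0], [2.0]])
y_train = np.array([0.0, 1.0, 0.0])
mean, var = gp_fit_predict(X_train, y_train, X_train, noise=1e-6)
```

With negligible noise the posterior mean interpolates the training labels, and the posterior variance shrinks toward zero at observed inputs while returning to the prior variance far away from the data.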
If we are interested in, say, estimating the expected error on test points1 which are not contained in the training set $D$ of size $N$, and if we have no hold-out data, we can create artificial data sets $D^*$ by resampling (with replacement) $m$ data points from the original set $D$, where each data point is taken with equal probability $1/N$. Hence, some of the $z_i$'s will appear several times in the bootstrap sample and others not at all. A proxy for the true average test error can be obtained by retraining the model on each bootstrap training set $D^*$, calculating the test error only on those points which are not contained in $D^*$, and finally averaging over many sets $D^*$. In practice, the case $m = N$ may be of main importance, but we will also allow for estimating a larger part of the "learning curve" by allowing for $m < N$ and $m > N$. We will not discuss the statistical properties of such bootstrap estimates and their refinements (such as Efron's .632 estimate) in this paper, but refer the reader to the standard literature [3, 5]. For any given set $D$, we represent a bootstrap sample $D^*$ by the vector of "occupation" numbers
$s = (s_1, \ldots, s_N)$ with $\sum_{i=1}^{N} s_i = m$, where $s_i$ is the number of times example $z_i$ appears in the set $D^*$. Denoting the expectation over random bootstrap samples by $E_{D^*}[\cdots]$, Efron's estimator for the bootstrap generalization error is

$\varepsilon(m, N) = \frac{1}{N} \sum_{i=1}^{N} \frac{E_{D^*}\big[\delta_{s_i,0}\, (\hat f_{D^*}(x_i) - y_i)^2\big]}{E_{D^*}[\delta_{s_i,0}]}$  (5)

where we specialized to the square error for testing. Eq. (5) computes the average bootstrap test error at each data point $(x_i, y_i) \in D$. The Kronecker symbol, defined by $\delta_{s,0} = 1$ for $s = 0$ and $\delta_{s,0} = 0$ else, guarantees that only realizations of bootstrap training sets contribute which do not contain the test point. Introducing the abbreviation

$u_i(f) = f(x_i) - y_i$  (6)

(which is a linear function of $f$), and using the definition of the estimator $\hat f_{D^*}$ as an average of $f$'s over the Gibbs distribution (3), the bootstrap estimate (5) can be rewritten as

$\varepsilon(m, N) = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{E_{D^*}[\delta_{s_i,0}]}\; E_{D^*}\Big[\delta_{s_i,0}\, \frac{1}{Z_{D^*}^2} \int d\mu(f^1)\, d\mu(f^2)\; u_i(f^1)\, u_i(f^2) \prod_{j=1}^{N} e^{-s_j\,[E(f^1; z_j) + E(f^2; z_j)]}\Big]$  (7)

which involves two copies (or replicas) $f^1$ and $f^2$ of the variable $f$. More complicated types of test errors which are polynomials, or can be approximated by polynomials, in $\hat f_{D^*}$ can be rewritten in a similar way, involving more replicas of the variable $f$. 4 Analytical averages using the "replica trick" For fixed $m$, the distribution of the $s_i$'s is multinomial. It is simpler (and does not make a big difference when $N$ is sufficiently large) to work with a Poisson distribution for the size of the set $D^*$, with $m$ as the mean number of data points in the sample. In this case we get the simpler, factorizing joint distribution

$P(s_1, \ldots, s_N) = \prod_{i=1}^{N} \frac{e^{-m/N} (m/N)^{s_i}}{s_i!}$  (8)

for the occupation numbers. With Eq. (8) follows $E_{D^*}[\delta_{s_i,0}] = e^{-m/N}$.

1The average is over the unknown distribution of training data sets.

To enable the analytical average over the vector $s$ (which is the "quenched disorder" in the language of Statistical Physics) it is necessary to introduce an auxiliary quantity (Eq. (9)) which contains the replicated Gibbs factors together with the partition function raised to a power $n - 2$, for real $n$; this allows us to write the bootstrap average (7) as a limit of expressions that are tractable for integer $n$.
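The Poisson approximation (8) can be checked numerically: sampling occupation numbers from the actual multinomial bootstrap, the fraction of points left out of a sample of size m = N concentrates around E[δ_{s_i,0}] = e^{-m/N} ≈ 0.368. A small Monte-Carlo check:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
N = 500            # size of the original data set
m = N              # bootstrap sample size (the usual case m = N)
trials = 2000

left_out_frac = 0.0
for _ in range(trials):
    # occupation numbers s_i: how often each point appears in the sample
    s = np.bincount(rng.integers(0, N, size=m), minlength=N)
    left_out_frac += (s == 0).mean()
left_out_frac /= trials
# Eq. (8) predicts E[delta_{s_i,0}] = exp(-m/N), about 0.368 for m = N.
```

The exact multinomial value is (1 - 1/N)^m, which differs from the Poisson value e^{-m/N} only at order 1/N, illustrating why the factorizing approximation is harmless for large N.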
The advantage of this definition is that, for integer $n$, the auxiliary quantity can be represented in terms of $n$ replicas $f^1, \ldots, f^n$ of the original variable, for which an explicit average over the occupation numbers $s$ is possible. At the end of all calculations, an analytical continuation to arbitrary real $n$ and the limit $n \to 0$ must be performed. Using the definition of the partition function (4), we get for integer $n$ an expression (Eq. (10)) in which the Gibbs factors of all $n$ replicas appear under a single average over bootstrap samples. Exchanging the expectation over datasets with the expectation over the $f$'s and using the explicit form of the distribution (8), we obtain Eq. (11), where the brackets
$\langle \cdots \rangle$ denote an average with respect to a Gibbs measure for $n$ replicas which is given by

$P(f^1, \ldots, f^n) = \frac{1}{Z_n} \prod_{a=1}^{n} \mu(f^a)\; e^{-H}$  (12)

where

$H = \frac{m}{N} \sum_{i=1}^{N} \Big(1 - e^{-\sum_{a=1}^{n} E(f^a;\, z_i)}\Big)$  (13)

and where the partition function $Z_n$ has been introduced for convenience to normalize the measure. In most nontrivial cases, averages with respect to the measure (12) can not be calculated exactly. Hence, we have to apply a sensible approximation. Our idea is to use techniques which have been frequently applied to probabilistic models [10], such as the variational approximation, the mean field approximation and the TAP approach. In this paper, we restrict ourselves to a variational Gaussian approximation. More advanced approximations will be given elsewhere. 5 Variational approximation A method frequently used in Statistical Physics, which has also attracted considerable interest in the Machine Learning community, is the variational approximation [8]. Its goal is to replace an intractable distribution like (12) by a different, sufficiently close distribution from a tractable class, which we will write in the form

$Q(f^1, \ldots, f^n) = \frac{1}{Z_0} \prod_{a=1}^{n} \mu(f^a)\; e^{-H_0}.$  (14)

$Q$ will be used in (11) instead of (12) to approximate the average. $H_0$ will be chosen (see e.g. [10]) to minimize the relative entropy between $Q$ and $P$, resulting in a minimization of the variational free energy

$F_{var} = \langle H - H_0 \rangle_0 - \ln Z_0$  (15)

which is an upper bound to the true free energy $-\ln Z_n$ for any integer $n$. The brackets
$\langle \cdots \rangle_0$ denote averages with respect to the variational distribution (14). For our application to Gaussian process models, we will now specialize to Gaussian priors $\mu$. For $H_0$, we choose the quadratic expression

$H_0 = \sum_{i=1}^{N} \Big( \frac{1}{2} \sum_{a,b} \Lambda_{ab}(x_i)\, f^a(x_i)\, f^b(x_i) - \sum_{a} \gamma_a(x_i)\, f^a(x_i) \Big)$  (16)

as a suitable trial Hamiltonian, leading to a Gaussian distribution (14). The functions $\Lambda_{ab}(x_i)$ and $\gamma_a(x_i)$ are the variational parameters to be optimized. To continue the variational solutions to arbitrary real $n$, we assume that the optimal parameters should be replica symmetric, i.e. we set $\gamma_a(x_i) = \gamma(x_i)$ for all $a$, as well as $\Lambda_{ab}(x_i) = \Lambda(x_i)$ for $a \neq b$ and $\Lambda_{aa}(x_i) = \Lambda_d(x_i)$. The variational free energy can then be expressed by the local moments ("order parameters" in the language of Statistical Physics) $\langle f^a(x_i) \rangle_0$, $\langle f^a(x_i) f^b(x_i) \rangle_0$ for $a \neq b$, and $\langle (f^a(x_i))^2 \rangle_0$, which have the same replica symmetric structure. Since each of the matrices (such as $\Lambda$) is assumed to have only two types of entries, it is possible to obtain variational equations which contain the number of replicas $n$ as a simple parameter, for which the limit $n \to 0$ can be explicitly performed (see appendix). In this limit, the limiting order parameters are found to have simple interpretations as the (approximate) mean and variance of the predictor $\hat f_{D^*}(x_i)$ with respect to the average over bootstrap data sets, while the off-diagonal second moment becomes the (approximate) bootstrap averaged posterior covariance. 6 Explicit results for regression with Gaussian processes We consider a GP model for regression with training energy given by Eq. (2). In this case, the prior measure $\mu$ can be simply represented by an $N$-dimensional Gaussian distribution for the vector
$(f(x_1), \ldots, f(x_N))$, having zero mean and covariance matrix $K(x_i, x_j)$, where $K(x, x')$ is the covariance kernel of the GP. Using the limiting (for $n \to 0$) values of the order parameters, and by approximating the measure (12) by the variational distribution (14) in Eq. (11), an explicit result (Eq. (17)) is obtained for the bootstrap mean square generalization error. The entire analysis can be repeated for testing (keeping the training energy fixed) with a general loss function of the type $g(\hat f_{D^*}(x), y)$; the corresponding result is Eq. (18).

Figure 1: Average bootstrapped generalization error on Abalone data using square error loss (left) and epsilon insensitive loss (right), plotted against the size m of the bootstrap sample (0 to 2000); the point m = N is marked on each curve. Simulation (circles) and theory (lines) based on the same data set with N = 1000 data points.

The GP model uses an RBF kernel with fixed width on whitened inputs, and the data noise variance is set to a fixed value. We have applied our theory to the Abalone data set [11], where we have computed the approximate bootstrapped generalization errors for the square error loss and the so-called $\epsilon$-insensitive loss, which is defined by

$L(\Delta) = 0$ for $|\Delta| \le \epsilon$, and $L(\Delta) = |\Delta| - \epsilon$ otherwise,  (19)

with $\Delta = \hat f_{D^*}(x) - y$. The bootstrap average from our theory is obtained from Eq. (18). Figure 1 shows the generalization error measured by the square error loss (Eq. (17), left panel) as well as the one measured by the $\epsilon$-insensitive loss (right panel). Our theory (lines) is compared with simulations (circles) which were based on Monte-Carlo sampling averages computed using the same data set with N = 1000. The Monte-Carlo training sets of size m are obtained by sampling from D with replacement. We find a good agreement between theory and simulations in the region where m does not exceed N. When we oversample the data set, however, the agreement is not so good and corrections to our variational Gaussian approximation would be required. Figure 2 shows the bootstrap average of the posterior variance
over the whole data set $D$ ($N = 1000$), and compares our theory (line) with simulations (circles) based on Monte-Carlo sampling averages. The overall approximation looks better than for the bootstrap generalization error. Finally, it is important to note that all displayed theoretical learning curves have been obtained computationally much faster than their respective simulated learning curves.

Figure 2: Bootstrap averaged posterior variance for Abalone data, plotted against the size m of the bootstrap sample (0 to 2000, logarithmic vertical axis). Simulation (circles) and theory (line) based on the same data set D with N = 1000 data points.

7 Outlook The replica approach to bootstrap averages can be extended in a variety of different directions. Besides the average generalization error, one can compute its bootstrap sample fluctuations by introducing more complicated replica expressions. It is also straightforward to apply the approach to more complex problems in supervised learning which are related to Gaussian processes, such as GP classifiers or Support Vector Machines. Since our method requires the solution of a set of variational equations of the size of the original training set, we can expect its computational complexity to be similar to that needed for making the actual predictions with the basic model. This also applies to the problem of very large datasets, where one may use a variety of well-known sparse approximations (see e.g. [9] and references therein). It will also be important to assess the quality of the approximation introduced by the variational method and to compare it to alternative approximation techniques in the computation of the replica average (11), such as the mean field method and its more complex generalizations (see e.g. [10]). Acknowledgement We would like to thank Lars Kai Hansen for stimulating discussions.
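The Monte-Carlo sampling baseline behind the simulation circles can be sketched as follows: draw a bootstrap sample of size m with replacement, refit the GP on it, and score only the points not contained in the sample, under both the square loss and the ε-insensitive loss. Synthetic data stands in for Abalone here, and the kernel and noise settings are illustrative rather than the paper's.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    return np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

def eps_insensitive(r, eps=0.1):
    """Zero inside the epsilon tube, linear outside (Eq. (19))."""
    return np.maximum(np.abs(r) - eps, 0.0)

def mc_bootstrap_gp(X, y, m, n_boot=30, noise=0.1, seed=0):
    """Monte-Carlo bootstrap of GP test error on left-out points."""
    rng = np.random.default_rng(seed)
    N = len(y)
    sq, ei = [], []
    for _ in range(n_boot):
        idx = rng.integers(0, N, size=m)          # sample with replacement
        out = np.setdiff1d(np.arange(N), idx)     # points not in the sample
        K = rbf(X[idx], X[idx]) + noise * np.eye(m)
        pred = rbf(X[out], X[idx]) @ np.linalg.solve(K, y[idx])
        r = pred - y[out]
        sq.append(np.mean(r ** 2))
        ei.append(np.mean(eps_insensitive(r)))
    return float(np.mean(sq)), float(np.mean(ei))

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=200)
err20 = mc_bootstrap_gp(X, y, m=20)
err200 = mc_bootstrap_gp(X, y, m=200)
```

The decreasing "learning curve" in m that this loop traces out is exactly what the analytical theory reproduces without any retraining.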
DM thanks the Copenhagen Image and Signal Processing Graduate School for financial support. Appendix: Variational equations For reference, we give the explicit form of the equations for the variational and order parameters in the limit $n \to 0$. The derivations will be given elsewhere. We obtain the order parameter equations (20) and (21), which express the bootstrap averaged mean and covariance in terms of the kernel matrix $K(x_i, x_j)$ and the variational parameters, with the matrix appearing in them given by Eq. (22). The order parameter equations (20)-(22) must be solved together with the variational equations (23)-(25). Combining Eqs. (22) and (23), a self-consistent matrix equation is obtained which depends on the diagonal elements of the covariance. Its iterative solution (based on a good initial guess for these diagonal elements) usually requires only a few iterations. The order parameters can then be computed subsequently using Eqs. (20, 21) with (24, 25). References [1] A. Engel and C. Van den Broeck, Statistical Mechanics of Learning (Cambridge University Press, 2001). [2] H. Nishimori, Statistical Physics of Spin Glasses and Information Processing (Oxford Science Publications, 2001). [3] B. Efron and R. J. Tibshirani, An Introduction to the Bootstrap, Monographs on Statistics and Applied Probability 57 (Chapman & Hall, 1993). [4] M. Mézard, G. Parisi, and M. A. Virasoro, Spin Glass Theory and Beyond, Lecture Notes in Physics 9 (World Scientific, 1987). [5] J. Shao and D. Tu, The Jackknife and Bootstrap, Springer Series in Statistics (Springer Verlag, 1995). [6] D. Malzahn and M. Opper, A variational approach to learning curves, NIPS 14, Editors: T.G. Dietterich, S. Becker, Z. Ghahramani (MIT Press, 2002). [7] R. Neal, Bayesian Learning for Neural Networks, Lecture Notes in Statistics 118 (Springer, 1996). [8] R. P. Feynman and A. R. Hibbs, Quantum Mechanics and Path Integrals (McGraw-Hill Inc., 1965). [9] L. Csató and M. Opper, Sparse Gaussian Processes, Neural Computation 14, No 3, 641-668 (2002). [10] M. Opper and D. Saad (editors), Advanced Mean Field Methods: Theory and Practice (MIT Press, 2001). [11] From http://www1.ics.uci.edu/~mlearn/MLSummary.html.
The data set contains 4177 examples. We used a representative fraction (the fourth block of 1000 data points from the list).
Developing Topography and Ocular Dominance Using two aVLSI Vision Sensors and a Neurotrophic Model of Plasticity Terry Elliott Dept. Electronics & Computer Science University of Southampton Highfield Southampton, SO17 1BJ United Kingdom te@ecs.soton.ac.uk Jörg Kramer Institute of Neuroinformatics University of Zürich and ETH Zürich Winterthurerstrasse 190 8057 Zürich Switzerland kramer@ini.phys.ethz.ch Abstract A neurotrophic model for the co-development of topography and ocular dominance columns in the primary visual cortex has recently been proposed. In the present work, we test this model by driving it with the output of a pair of neuronal vision sensors stimulated by disparate moving patterns. We show that the temporal correlations in the spike trains generated by the two sensors elicit the development of refined topography and ocular dominance columns, even in the presence of significant amounts of spontaneous activity and fixed-pattern noise in the sensors. 1 Introduction A large body of evidence suggests that the development of the retinogeniculocortical pathway, which leads in higher vertebrates to the emergence of eye-specific laminae in the lateral geniculate nucleus (LGN), the formation of ocular dominance columns (ODCs) in the striate cortex and the establishment of retinotopic representations in both structures, is a competitive, activity-dependent process (see Ref. [1] for a review). Experimental findings indicate that, at least in the case of ODC formation, this competition may be mediated by retrograde neurotrophic factors (NTFs) [2]. A computational model for synaptic plasticity based on this hypothesis has recently been proposed [1]. This model has successfully been applied to the development and refinement of retinotopic representations in the LGN and striate cortex, and to the formation of ODCs in the striate cortex due to competition between the eye-specific laminae of the LGN.
In this model, the activity within the afferent cell sheets was simulated either as interocularly uncorrelated spontaneous retinal waves or, as a coarse model of visually evoked activity, as interocularly correlated Gaussian noise. Gaussian noise, however, is not a realistic model of evoked retinal activity, nor do the interocular correlations introduced adequately capture the correlations that arise due to the spatial disparity between the two retinas. For this study, we tested the ability of the plasticity model to generate topographic refinement and ODCs in response to afferent activity provided by a pair of biologically-inspired artificial vision sensors. These sensors capture some of the properties of biological retinas. They convert optical images into analog electrical signals and perform brightness adaptation and logarithmic contrast-encoding. Their output is encoded in asynchronous, binary spike trains, as provided by the retinal ganglion cells of biological retinas. Mismatch of processing elements and temporal noise are a natural by-product of biological retinas and such vision sensors alike. One goal of this work was to determine the robustness of the model towards such nonidealities. While the refinement of topography from the temporal correlations provided by one vision sensor in response to moving stimuli has already been explored [3], the present work focuses on the co-development of topography and ODCs in response to the correlations between the signals from two vision sensors stimulated by disparate moving bars. In particular, the dependence of ODC formation on disparity and noise is considered. 2 Vision Sensor The vision sensor used in the experiments is a two-dimensional array of 16 × 16 pixels fabricated with standard CMOS technology, where each pixel performs a two-way rectified temporal high-pass filtering operation on the incoming visual signal in the focal plane [4, 5].
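The pixel operation (two-way rectified temporal high-pass filtering with a contrast threshold) can be caricatured digitally on a frame sequence. This is only a sketch of the signal-processing idea, not of the analog circuit: the high-pass is approximated by a frame difference, and the two rectified branches become ON and OFF events.

```python
import numpy as np

def pixel_events(frames, threshold=10.0):
    """Emit (t, x, y, polarity) events from a frame sequence.

    Crude digital caricature of the pixel: the temporal high-pass is
    approximated by a frame difference, two-way rectified into ON events
    (polarity 1, positive transients) and OFF events (polarity 0, negative
    transients); transients below `threshold` are suppressed.
    """
    events = []
    for t in range(1, len(frames)):
        diff = frames[t].astype(float) - frames[t - 1].astype(float)
        for yy, xx in zip(*np.nonzero(np.abs(diff) > threshold)):
            events.append((t, int(xx), int(yy), 1 if diff[yy, xx] > 0 else 0))
    return events

f0 = np.zeros((4, 4))
f1 = f0.copy(); f1[1, 2] = 50.0; f1[3, 0] = 5.0   # one strong, one sub-threshold transient
f2 = f1.copy(); f2[1, 2] = 0.0                     # the bright pixel goes dark again
events = pixel_events([f0, f1, f2])
```

A static scene produces no events at all, which mirrors the sensor's insensitivity to unchanging input.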
The sensor adapts to background illuminance and responds to local positive and negative illuminance transients at separately coded terminals. The transients are converted into a stream of asynchronous binary pulses, which are multiplexed onto a common, arbitrated address bus, where the address encodes the location of the sending pixel and the sign of the transient. In the absence of any activity on the communication bus for a few hundred milliseconds, the bus address decays to zero. A block diagram of a reduced-resolution array of pixels with peripheral arbitration and communication circuitry is shown in Fig. 1. Handshaking with external data acquisition circuitry is provided via the request (REQ) and acknowledge (ACK) terminals.

Figure 1: Block diagram of the sensor architecture (reduced resolution), showing the pixel array with ON/OFF outputs, the arbiter trees, the X and Y address encoders, and the handshaking (REQ/ACK) circuitry.

If the array is used for imaging purposes under constant or slowly-varying ambient lighting conditions, it only responds to boundaries or edges of moving objects or shadows of sufficient contrast and not to static scenes. Depending on the settings of different bias controls, the imager can be used in different modes. Separate gain controls for ON and OFF transients permit the imager to respond to only one type of transient or to both types with adjustable weighting. Together with these gain controls, a threshold bias sets the contrast response threshold and the rate of spontaneous activity. For sufficiently large thresholds, spontaneous activity is completely suppressed. Another bias control sets a refractory period that limits the maximum spike rate of each pixel. For short refractory periods, each contrast transient at a given pixel triggers a burst of spikes; for long refractory periods, a typical transient only triggers a single spike in the pixel, resulting in a very efficient, one-bit edge coding.
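The address-event idea (each bus word encodes the sending pixel's location and the sign of its transient) can be illustrated with a hypothetical bit packing for a 16 × 16 array. The actual chip's address layout is not specified here, so the packing below is an assumption for illustration only.

```python
def encode_event(x, y, polarity):
    """Pack (x, y, polarity) into one bus address.

    Hypothetical layout for a 16x16 array: bit 0 = ON/OFF polarity,
    bits 1-4 = x address, bits 5-8 = y address. The real chip's bit
    assignment may differ; this only illustrates address-event coding.
    """
    return (y << 5) | (x << 1) | polarity

def decode_event(addr):
    """Recover (x, y, polarity) from a bus address (same assumed layout)."""
    polarity = addr & 0x1          # 1 = ON transient, 0 = OFF transient
    x = (addr >> 1) & 0xF
    y = (addr >> 5) & 0xF
    return x, y, polarity
```

Because only active pixels emit addresses, bus traffic scales with scene activity rather than with frame rate, which is the point of the asynchronous readout.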
3 Sensor-Computer Interface The two vision sensors were coupled to a computer via two parallel ports. The handshaking terminals of each chip were shorted, so that the sensors could operate at their own speed without being artificially slowed down by the computer. This avoided the risk of overloading the multiplexer and thereby distorting the data. Furthermore, this scheme was simpler to implement than a handshaking scheme. The lack of synchronization entailed several problems: missing out on events, reading events more than once, and reading spurious zero addresses in the absence of recent activity in the sensors. The first two problems could satisfactorily be solved by choosing a long refractory period, so that each moving-edge stimulus only evoked a single spike per pixel. For a typical stimulus this resulted in interspike intervals on the multiplexed bus of a few milliseconds, which made it unlikely that events would be missed. Furthermore, the refractory period prevented any given pixel from spiking more than once in a row in response to a moving edge, so that multiple reads of the same address were always due to the same event being read several times and therefore could be discarded. The ambiguity of the (0,0) address readings, namely whether such a reading meant that the (0,0) pixel was active or that the address on the bus had decayed to zero due to lack of activity, could not be resolved. It was therefore decided to ignore the (0,0) address and to exclude the (0,0) cell from each map. Using this strategy it was found that the data read by the computer reflected the optical stimuli with a small error rate. 4 Visual Stimulation Two separate windows within the display of the LCD monitor of the computer used for data acquisition were each imaged onto one of the vision chips via a lens to provide the optical stimulation. 
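The polling strategy described above (discard repeated reads of the same address, ignore the ambiguous (0,0) address) amounts to a small filter over the raw address stream:

```python
def clean_events(raw_addresses):
    """Filter a polled address stream as described in the text.

    Drops immediate repeats (the same event read more than once, which the
    long refractory period makes safe to discard) and the ambiguous (0,0)
    address, which cannot be distinguished from a decayed bus.
    """
    events = []
    prev = None
    for addr in raw_addresses:
        if addr == 0:          # (0,0) pixel or decayed bus: ignore
            prev = addr
            continue
        if addr == prev:       # multiple reads of the same event: discard
            continue
        events.append(addr)
        prev = addr
    return events
```

Note that the same pixel can still legitimately appear twice in the cleaned stream when its repeats are separated by other activity or a bus decay, matching the assumption that a pixel never fires twice in a row for one edge.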
The stimuli in each window consisted of eight separate sequences of images that were played without interruption, each new sequence being selected randomly after the completion of the previous one. Each sequence simulated a white bar sweeping across a black background. The sequences were distinguished only by the orientation and direction of motion of the bar, while the speed, as measured perpendicularly to the bar’s orientation, was constant and identical for each sequence. The bar could have four different orientations, aligned to the rows or columns of the vision sensor or to one of the two diagonals, and move in either direction. The bars had a finite width of 20 pixels on the LCD display, corresponding to about 8 pixel periods on the image sensors, and they were sufficiently long to entirely fill the field of view of the chips. The displays in the two windows stimulating the two chips were identical save for a fixed relative displacement between the bars along the direction of motion during the entire run, simulating the disparity seen by two eyes looking at the same object. The displacements used were 0, 10, and 15 pixels on the LCD display, corresponding to no disparity and disparities of 1/2 the bar width (4 sensor pixels) and 3/4 of the bar width (6 sensor pixels), respectively. The speed of the bar was largely unimportant, because the output spikes of the chip were sampled into bins of fixed sizes, rather than bins representing fixed time windows. The chosen stimulus, a white bar on a black background, stimulated the vision sensor with a leading ON edge and a trailing OFF edge. However, because the spurious activity of the chip, mainly in the form of crosstalk, was increased if both ON and OFF responses were activated, and because we required only the response to one edge type for this work, the ON responses from the chip were suppressed.
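A minimal sketch of how such a disparate stimulus pair could be generated (one of the four orientations only; sizes, names and the frame format are illustrative, not the authors' display code):

```python
import numpy as np

def bar_frame(t, size=16, width=8, offset=0):
    """Binary image of a vertical white bar (`width` pixels) whose leading
    edge is at column (t - offset), sweeping left to right across a black
    background. `offset` shifts the bar along the motion direction to
    simulate binocular disparity. Only one of the four orientations used
    in the experiments is sketched here."""
    img = np.zeros((size, size), dtype=np.uint8)
    lead = t - offset
    left, right = max(0, lead - width + 1), min(size, lead + 1)
    if right > left:
        img[:, left:right] = 1
    return img

# Two "eyes" viewing the same bar with a 4-pixel disparity along the
# direction of motion:
left_eye  = bar_frame(t=10, offset=0)
right_eye = bar_frame(t=10, offset=4)
assert (right_eye == bar_frame(t=6)).all()   # disparity = pure shift in time
```

The assertion makes the key property explicit: a fixed displacement along the motion direction is equivalent to a fixed temporal lag between the two eyes' views of the same moving edge.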
5 Neurotrophic Model of Plasticity Let the letters i and i' label afferent cells within an afferent sheet, letters x and y label the afferent sheets, and letters j and j' label target cells. The two afferent sheets represent the two chips' arrays of pixels and are therefore 16 x 16 square arrays of cells. For convenience, the target array is also a 16 x 16 square array of cells. Let a_i^x denote an afferent cell's activity. For each time step of simulated development, we capture a fixed number of spikes from each chip. A pixel that has not spiked gives a_i^x = 0, while one that has gives a_i^x = 1. If s_{ij}^x represents the number of synapses projected from cell i in afferent sheet x to target cell j, then s_{ij}^x evolves according to the equation

\Delta s_{ij}^x = \epsilon \, s_{ij}^x f(\bar{a}_i^x) \Big[ \sum_{j'} \Delta(j, j') \, \frac{T_1 + T_2 \sum_{y,i'} s_{i'j'}^y a_{i'}^y / \sum_{y,i'} s_{i'j'}^y}{\sum_{y,i'} s_{i'j'}^y f(\bar{a}_{i'}^y)} - 1 \Big]    (1)

Here, T_1 and T_2 represent, respectively, an activity-independent and a maximum activity-dependent release of NTF from target cells; the parameter a, a resting NTF uptake capacity by afferent cells; \Delta(j, j'), a function characterising NTF diffusion between target cells, which we take for convenience to be a Gaussian of width \sigma. The function f(\bar{a}) = a + b\bar{a} is a simple model for the number of NTF receptors supported by an afferent cell, where \bar{a} denotes average afferent activity. The parameter \epsilon sets the overall rate of development. Consistent with previous work [3], the parameters a, b, \sigma, T_1, T_2 and \epsilon are set to the values used there. Although this model appears complex, it can be shown to be equivalent to a non-linear Hebbian rule with competition implemented via multiplicative synaptic normalisation [6]. For a full discussion, derivation and justification of the model, see Ref. [7]. Both afferent sheets initially project roughly equally to all cells in the target sheet. The initial pattern of connectivity between the sheets is established following Goodhill's method [8]. For a given afferent cell, let d be the distance between some target cell and the target cell to which the afferent cell would project were topography perfect; let d_max be the maximum such distance. Then the number of synapses projected by the afferent cell to this target cell is initially set to be proportional to

(1 - p)(1 - d/d_max) + p r    (2)

where r \in [0, 1] is a randomly selected number for each such pair of afferent and target cells. The parameter p \in [0, 1] determines the quality of the projections, with p = 0 giving initially greatest topographical bias, so that an afferent cell projects maximally to its topographically preferred target cell, and p = 1 giving initially completely random projections. Here we set p to a fixed intermediate value; the impact of decreasing p on the final structure of the topographic map has been thoroughly explored elsewhere [3]. The topographic representation of an afferent sheet on the target sheet is depicted using standard methods [1, 8]: the centres of mass of afferent projections to all target cells are calculated, and these are then connected by lines that preserve the neighbourhood relations among the target cells.

6 Results For each iteration step of the algorithm a fixed number of spikes was captured. The bin size determines the correlation space constants of the afferent cell sheets and therefore influences the final quality of the topographic mapping [3]. Unless otherwise noted the bin size was 32 spikes per sensor, which corresponds to about two successive pixel rows stimulated by a moving contrast boundary. The presented simulations were performed for 15,000 to 20,000 iteration steps, sufficient for map development to be largely complete. Figure 2: Distribution of ODCs in the target cell sheet for different disparities between the bar stimuli driving the two afferent sheets. The gray level of each target cell indicates the relative strengths of projections from the two afferent sheets, where ‘black’ represents one and ‘white’ the other afferent sheet. (a) No disparity; (b) disparity: 50% of bar width (4 sensor pixels); (c) disparity: 75% of bar width (6 sensor pixels). Several runs were performed for the three different disparities of the stimuli presented to the two sensors. Since the results for a given disparity were all qualitatively similar, we only show the results of one representative run for each value. The distribution of the formed ODCs in the target sheet is shown in Fig. 2, where the shading of each neuron indicates the relative numbers of projections from the two afferent sheets. In the absence of any disparity the formation of ODCs was suppressed.
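Since the model is equivalent to a non-linear Hebbian rule with competition implemented via multiplicative synaptic normalisation [6], its competitive character can be illustrated with a schematic stand-in. This is not the NTF dynamics of Eq. (1), only the rule family it reduces to; all sizes and constants below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_aff, n_tgt = 32, 16            # toy sizes; the paper uses two 16x16 sheets
s = rng.uniform(0.5, 1.5, (n_aff, n_tgt))
s *= 100.0 / s.sum(axis=0)       # each target starts with a fixed synapse budget

def hebb_step(s, a, eps=0.01):
    """One step of a non-linear Hebbian update with competition enforced by
    multiplicative normalisation (a schematic stand-in for the NTF model,
    not the authors' exact dynamics)."""
    post = a @ s                               # target activity from afferent spikes
    s = s * (1.0 + eps * np.outer(a, post))    # Hebbian growth where pre and post co-occur
    s *= 100.0 / s.sum(axis=0)                 # multiplicative normalisation per target
    return s

a = (rng.random(n_aff) < 0.2).astype(float)    # binary spike vector for one bin
s = hebb_step(s, a)
assert np.allclose(s.sum(axis=0), 100.0)       # total synapses per target conserved
```

The normalisation step is what makes the rule competitive: an afferent can only gain synapses onto a target at the expense of the other afferents projecting there, which is the mechanism behind both ODC segregation and topographic refinement.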
The residual ocular dominance modulations may be attributed to a small misalignment of the two chips with respect to the display. With the introduction of a disparity a very clear structure of ODCs emerges. The distribution of ODCs strongly depends on the disparity and does not vary significantly between runs for a given disparity. With increasing disparity the boundaries between ODCs become more distinct [9, 10]. The obtained maps are qualitatively similar to those obtained with simulated afferent inputs [1]. Figure 3: Power spectra of the spatial frequency distribution of ODCs in the target cell sheet for different disparities and data sets. A solid line denotes data with disparity of 75% of bar width (6 sensor pixels); a dashed line denotes a disparity of 50% of bar width (4 sensor pixels); a dotted line denotes no disparity. The power spectra obtained from two-dimensional Fourier transforms of the ODC distributions, represented in Fig. 3, show that the spatial frequency content of the ODCs is a function of disparity, consistent with experimental findings in the cat [8, 11, 12, 13], and that its variability between different runs of the same disparity is significantly smaller than between different disparities. The principal spatial frequency along each dimension of the target sheet is mainly determined by the NTF diffusion parameter [1] and the disparity. For the NTF diffusion parameter used here, it ranges between two and four cycles; increasing (decreasing) the diffusion parameter decreases (increases) the spatial frequency. The heights of the peaks show the degree of segregation, which increases with disparity, as already mentioned. Figure 4: Topographic mapping between afferent sheets and target sheet for different disparities between the stimuli driving the two afferent sheets. The data are from the same runs as the ODC data of Fig. 2.
(a) No disparity; (b) disparity: 50% of bar width (4 sensor pixels); (c) disparity: 75% of bar width (6 sensor pixels). The resulting topographic maps for the same runs are shown in Fig. 4. In the absence of disparity the topographic map is almost perfect, with nearly one-to-one mapping between the afferent sheets and the target sheet, apart from remaining edge effects. However, disruptions appear at ODC boundaries in the runs with disparate stimuli, these disruptions becoming more distinct with increasing disparity due to the increasing sharpness of ODC boundaries. The data presented above were obtained under suppression of spontaneous firing, so that each pixel generated exactly one spike in response to each moving bright-to-dark contrast boundary, with an error rate of about 5%. By turning up the spontaneous firing rate we can test the robustness of the system to increased noise levels. We set the spontaneous firing rate to approximately 50%, so that roughly half of all spikes are not associated with an edge event. We also increased the bin size from 32 to 48 spikes per chip to compensate for the reduced intraocular correlations as a result of increased noise [3]. Fig. 5 shows a typical pattern of ODCs and the corresponding topographic map in the presence of 50% spontaneous activity. Although there are some distortions in the topographic map, in general it compares very favourably to maps developed in the absence of spontaneous activity. At a noise level of approximately 60%, major disruptions of topographic map formation and attenuated ODC development are observed. Increasing the level of noise still further causes a complete breakdown of topographic and ODC map formation (data not shown). Figure 5: The pattern of ODCs and the topographic map that develop in the presence of approximately 50% noise. (a) The OD map; (b) the topographic map. The disparity is 50% of the bar width (4 sensor pixels).
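The gray-scale ocular dominance plots of Figs. 2 and 5 reduce each target cell to the relative strength of its projections from the two afferent sheets. That reduction can be sketched as follows (the function name and toy data are ours):

```python
import numpy as np

def ocular_dominance_map(s_left, s_right):
    """Per-target-cell ocular dominance index in [0, 1]: 0 ('black') means
    all synapses come from one afferent sheet, 1 ('white') from the other,
    as in the gray-scale plots of Fig. 2. s_left / s_right hold synapse
    counts from each sheet (rows: afferents, columns: target cells)."""
    tot_left = s_left.sum(axis=0)
    tot_right = s_right.sum(axis=0)
    return tot_right / (tot_left + tot_right)

# Toy example: 2 afferents per sheet projecting to 2 target cells.
s_l = np.array([[3.0, 0.0], [1.0, 0.0]])
s_r = np.array([[1.0, 2.0], [3.0, 2.0]])
od = ocular_dominance_map(s_l, s_r)
assert np.allclose(od, [0.5, 1.0])   # balanced target vs. fully right-dominated
```

Segregation then shows up as the index clustering near 0 and 1 across the target sheet, while its spatial period is what the Fourier spectra of Fig. 3 quantify.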
7 Discussion The refinement of topography and the development of ODCs can be robustly simulated with the considered hybrid system, consisting of an integrated analog visual sensing system that captures some of the key features of retinal processing and a mathematical model of activity-dependent synaptic competition. Although both the structure of the input stimuli and the noise characteristics of the real sensors differ from those used in the pure simulations [1], the results are comparable. Several parameters of the vision sensors, such as refractory period and spontaneous firing rate, can be continuously varied with input bias voltages. This facilitates the evaluation of the performance of the model under different input conditions. The sensors were operated at long refractory periods, so that each pixel responded with a single spike to a contrast boundary moving across it. In this non-bursting mode the coding of the stimulus is very sparse, which makes the topographic refinement process more efficient [3]. The noise induced by the vision sensors manifests itself in occasionally missing responses of some pixels to a moving edge, in temporal jitter, and in a tunable level of spontaneous activity. With an optimal suppression of spontaneous firing, the error rate (number of missed and spurious events divided by total number of events) can be reduced to approximately 5%. Increased spontaneous activity levels show a strongly anisotropic distribution across the sensing arrays because of the inherent fixed-pattern noise present in the integrated sensors due to random mismatches in the fabricated circuits. This type of inhomogeneity has not been modeled in previous work. Spontaneous activity and mismatches between cells with the same functional role are prominent features of biological neural systems, and biological information processing systems therefore have to deal with these nonidealities.
The plasticity algorithm proves to be sufficiently robust with respect to these types of noise. The developed ODC and topographic maps depend quite strongly on the disparity between the two sensors. At zero disparity, the formation of ODCs is practically suppressed and topography becomes very smooth. As the disparity increases, the period of the resulting ODCs increases, consistent with experimental results in the cat [8, 11, 12, 13], and, as expected, the degree of segregation also increases [9, 10]. In the presence of high levels of spontaneous activity in the afferent pathways, with as much as half of all spikes not being stimulus-related, the maps continue to exhibit well developed ODCs and topography. Although there are indications of distortions in the topographic maps in the presence of approximately 50% spontaneous activity, the maps remain globally well structured. As spontaneous activity is increased further, map development becomes increasingly disrupted until it breaks down completely. 8 Conclusions We examined the refinement of topographic mappings and the formation of ocular dominance columns by coupling a pair of integrated vision sensors to a neurotrophic model of synaptic plasticity. We have shown that the afferent input from real sensors looking at moving bar stimuli yields results similar to those obtained with simulated, partially randomized input, and that these results are insensitive to the presence of significant noise levels. Acknowledgments Tragically, Jörg Kramer died in July 2002. TE dedicates this work to his memory. TE thanks the Royal Society for the support of a University Research Fellowship. JK was supported in part by the Swiss National Foundation Research SPP grant. We thank David Lawrence of the Institute of Neuroinformatics for his invaluable help with interfacing the chip to the PC. References [1] T. Elliott and N. R.
Shadbolt, “A neurotrophic model of the development of the retinogeniculocortical pathway induced by spontaneous retinal waves,” Journal of Neuroscience, vol. 19, pp. 7951–7970, 1999. [2] A.K. McAllister, L.C. Katz, and D.C. Lo, “Neurotrophins and synaptic plasticity,” Annual Review of Neuroscience, vol. 22, pp. 295–318, 1999. [3] T. Elliott and J. Kramer, “Coupling an aVLSI neuromorphic vision chip to a neurotrophic model of synaptic plasticity: the development of topography,” Neural Computation, vol. 14, pp. 2353–2370, 2002. [4] J. Kramer, “An integrated optical transient sensor,” IEEE Trans. Circuits and Systems II: Analog and Digital Signal Processing, 2002, submitted. [5] J. Kramer, “An on/off transient imager with event-driven, asynchronous read-out,” in Proc. 2002 IEEE Int. Symp. on Circuits and Systems, Phoenix, AZ, May 2002, vol. II, pp. 165–168, IEEE Press. [6] T. Elliott and N. R. Shadbolt, “Multiplicative synaptic normalization and a nonlinear Hebb rule underlie a neurotrophic model of competitive synaptic plasticity,” Neural Computation, vol. 14, pp. 1311–1322, 2002. [7] T. Elliott and N. R. Shadbolt, “Competition for neurotrophic factors: Mathematical analysis,” Neural Computation, vol. 10, pp. 1939–1981, 1998. [8] G.J. Goodhill, “Topography and ocular dominance: a model exploring positive correlations,” Biological Cybernetics, vol. 69, pp. 109–118, 1993. [9] D.H. Hubel and T.N. Wiesel, “Binocular interaction in striate cortex of kittens reared with artificial squint,” Journal of Neurophysiology, vol. 28, pp. 1041–1059, 1965. [10] C.J. Shatz, S. Lindström, and T.N. Wiesel, “The distribution of afferents representing the right and left eyes in the cat’s visual cortex,” Brain Research, vol. 131, pp. 103–116, 1977. [11] S. Löwel, “Ocular dominance column development: Strabismus changes the spacing of adjacent columns in cat visual cortex,” Journal of Neuroscience, vol. 14, pp. 7451–7468, 1994. [12] G.J. Goodhill and S. Löwel, “Theory meets experiment: correlated neural activity helps determine ocular dominance column periodicity,” Trends in Neurosciences, vol. 18, pp. 437–439, 1995. [13] S.B. Tieman and N. Tumosa, “Alternating monocular exposure increases the spacing of ocularity domains in area 17 of cats,” Visual Neuroscience, vol. 14, pp. 929–938, 1997.
Gaussian Process Priors With Uncertain Inputs Application to Multiple-Step Ahead Time Series Forecasting Agathe Girard Department of Computing Science University of Glasgow Glasgow, G12 8QQ agathe@dcs.gla.ac.uk Carl Edward Rasmussen Gatsby Unit University College London London, WC1N 3AR edward@gatsby.ucl.ac.uk Joaquin Quiñonero Candela Informatics and Mathematical Modelling Technical University of Denmark Richard Petersens Plads, Building 321 DK-2800 Kongens Lyngby, Denmark jqc@imm.dtu.dk Roderick Murray-Smith Department of Computing Science University of Glasgow, Glasgow, G12 8QQ & Hamilton Institute National University of Ireland, Maynooth rod@dcs.gla.ac.uk Abstract We consider the problem of multi-step ahead prediction in time series analysis using the non-parametric Gaussian process model. k-step ahead forecasting of a discrete-time non-linear dynamic system can be performed by doing repeated one-step ahead predictions. For a state-space model of the form y_{t+k} = f(y_{t+k-1}, ..., y_{t+k-L}), the prediction of y at time t + k is based on the point estimates of the previous outputs. In this paper, we show how, using an analytical Gaussian approximation, we can formally incorporate the uncertainty about intermediate regressor values, thus updating the uncertainty on the current prediction. 1 Introduction One of the main objectives in time series analysis is forecasting and, in many real life problems, one has to predict ahead in time, up to a certain time horizon (sometimes called lead time or prediction horizon). Furthermore, knowledge of the uncertainty of the prediction is important. Currently, the multiple-step ahead prediction task is achieved by either explicitly training a direct model to predict k steps ahead, or by doing repeated one-step ahead predictions up to the desired horizon, which we call the iterative method. There are a number of reasons why the iterative method might be preferred to the ‘direct’ one. Firstly, the direct method makes predictions for a fixed horizon only, making it computationally demanding if one is interested in different horizons. Furthermore, the larger k, the more training data we need in order to achieve a good predictive performance, because of the larger number of ‘missing’ data between t and t + k. On the other hand, the iterated method provides any k-step ahead forecast, up to the desired horizon, as well as the joint probability distribution of the predicted points. In the Gaussian process modelling approach, one computes predictive distributions whose means serve as output estimates. Gaussian processes (GPs) for regression were first introduced by O’Hagan [1] but became a popular non-parametric modelling approach after the publication of [7]. In [10], it is shown that GPs can achieve a predictive performance comparable to (if not better than) other modelling approaches like neural networks or local learning methods.
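The naive iterative method amounts to a simple feedback loop around any one-step predictor. A toy sketch (the one-step model is a stand-in, not a GP):

```python
def iterative_forecast(f, history, k, lag):
    """Naive iterative k-step ahead forecasting: feed each one-step point
    prediction back into the regressor, ignoring its uncertainty.
    `f` maps the last `lag` values to the next one; `history` holds the
    observed series."""
    window = list(history[-lag:])
    preds = []
    for _ in range(k):
        y_next = f(window)
        preds.append(y_next)
        window = window[1:] + [y_next]   # slide the lag window forward
    return preds

# Toy one-step model: the next value is the mean of the last two.
f = lambda w: sum(w) / len(w)
print(iterative_forecast(f, [0.0, 4.0], k=3, lag=2))  # [2.0, 3.0, 2.5]
```

The loop makes the weakness discussed in the paper concrete: from the second step on, point estimates are fed back as if they were observations, so the growing uncertainty of the intermediate regressors is silently discarded.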
We will show that for a k-step ahead prediction which ignores the accumulating prediction variance, the model is not conservative enough, with unrealistically small uncertainty attached to the forecast. An alternative solution is presented for iterative k-step ahead prediction, with propagation of the prediction uncertainty. 2 Gaussian Process modelling We briefly recall some fundamentals of Gaussian processes. For a comprehensive introduction, please refer to [5], [11], or the more recent review [12]. 2.1 The GP prior model Formally, the random function, or stochastic process, f(x) is a Gaussian process, with mean m(x) and covariance function C(x^p, x^q), if its values at a finite number of points, f(x^1), ..., f(x^n), are seen as the components of a normally distributed random vector. If we further assume that the process is stationary, it has a constant mean and a covariance function depending only on the distance between the inputs. For any n, we have

f(x^1), ..., f(x^n) ~ N(0, \Sigma)    (1)

with \Sigma_{pq} = C(x^p, x^q) giving the covariance between the points f(x^p) and f(x^q), which is a function of the inputs corresponding to the same cases p and q. A common choice of covariance function is the Gaussian kernel¹

C(x^p, x^q) = \exp\Big[ -\frac{1}{2} \sum_{d=1}^{D} w_d (x_d^p - x_d^q)^2 \Big]    (2)

where D is the input dimension. The w_d parameters (inverse squared correlation lengths) allow a different distance measure for each input dimension d. For a given problem, these parameters will be adjusted to the data at hand and, for irrelevant inputs, the corresponding w_d will tend to zero. The role of the covariance function in the GP framework is similar to that of the kernels used in the Support Vector Machines community. This particular choice corresponds to a prior assumption that the underlying function f is smooth and continuous. It accounts for a high correlation between the outputs of cases with nearby inputs. (¹This choice was motivated by the fact that, in [8], we were aiming at unified expressions for the GP and Relevance Vector Machine models, which employ such a kernel. More discussion about possible covariance functions can be found in [5].) 2.2 Predicting with Gaussian Processes Given this prior on the function and a set of data {x^i, t^i}_{i=1}^{N}, our aim, in this Bayesian setting, is to get the predictive distribution of the function value f(x_*) corresponding to a new (given) input x_*. If we assume that an additive uncorrelated Gaussian white noise, with variance v_0, relates the targets (observations) to the function outputs, the distribution over the targets is Gaussian, with zero mean and covariance matrix K such that K_{pq} = C(x^p, x^q) + v_0 \delta_{pq}. We then adjust the vector of hyperparameters \Theta = [w_1, ..., w_D, v_0]^T so as to maximise the log-likelihood \log p(t | \Theta), where t is the vector of observations. In this framework, for a new x_*, the predictive distribution is simply obtained by conditioning on the training data. The joint distribution of the variables being Gaussian, this conditional distribution p(f(x_*) | t) is also Gaussian, with mean and variance

\mu(x_*) = k(x_*)^T K^{-1} t    (3)

\sigma^2(x_*) = C(x_*, x_*) - k(x_*)^T K^{-1} k(x_*)    (4)

where k(x_*) = [C(x^1, x_*), ..., C(x^N, x_*)]^T is the N x 1 vector of covariances between the new point and the training inputs, with C(\cdot, \cdot) as given by (2). The predictive mean serves as a point estimate of the function output, \hat{f}(x_*) = \mu(x_*), with uncertainty \sigma^2(x_*). And it is also a point estimate for the target, \hat{t}_* = \hat{f}(x_*), with variance \sigma^2(x_*) + v_0.
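The predictive equations (3)-(4) are a few lines of linear algebra. The sketch below uses a one-dimensional version of the Gaussian kernel (2) with fixed, illustrative hyperparameters rather than maximum-likelihood estimates:

```python
import numpy as np

def gp_predict(X, t, x_star, lam=1.0, v=1.0, noise=0.1):
    """GP predictive mean and variance at x_star, as in Eqs. (3)-(4),
    with a 1-D Gaussian kernel. Hyperparameters (length scale `lam`,
    signal amplitude `v`, noise variance `noise`) are fixed here, not
    fitted by maximising the log-likelihood."""
    C = lambda a, b: v * np.exp(-0.5 * (a - b) ** 2 / lam ** 2)
    K = C(X[:, None], X[None, :]) + noise * np.eye(len(X))  # training covariance
    k = C(X, x_star)                                        # covariances with x_star
    alpha = np.linalg.solve(K, t)
    mu = k @ alpha                                          # predictive mean, Eq. (3)
    var = C(x_star, x_star) - k @ np.linalg.solve(K, k)     # predictive variance, Eq. (4)
    return mu, var

X = np.array([-1.0, 0.0, 1.0])
t = np.sin(X)
mu, var = gp_predict(X, t, 0.0)
assert abs(mu - np.sin(0.0)) < 0.1 and 0.0 < var < 0.1   # small residual uncertainty at a training input
```

A Cholesky factorisation of K would be preferred over repeated solves for larger training sets; the direct solves keep the correspondence with Eqs. (3)-(4) obvious.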
3 Prediction at a random input If we now assume that the input distribution is Gaussian, x_* ~ N(u, \Sigma_x), the predictive distribution is now obtained by integrating over x_*:

p(f(x_*) | u, \Sigma_x) = \int p(f(x_*) | x_*) \, p(x_* | u, \Sigma_x) \, dx_*    (5)

where p(f(x_*) | x_*) is Normal, as specified by (3) and (4). 3.1 Gaussian approximation Given that this integral is analytically intractable (p(f(x_*) | x_*) is a complicated function of x_*), we opt for an analytical Gaussian approximation and only compute the mean m(u, \Sigma_x) and variance v(u, \Sigma_x) of p(f(x_*) | u, \Sigma_x). Using the law of iterated expectations and conditional variance, the ‘new’ mean and variance are given by

m(u, \Sigma_x) = E_{x_*}[\mu(x_*)]    (6)

v(u, \Sigma_x) = E_{x_*}[\sigma^2(x_*)] + \mathrm{Var}_{x_*}(\mu(x_*))    (7)

where E_{x_*} indicates the expectation under x_* ~ N(u, \Sigma_x). In our initial development, we made additional approximations ([2]). First and second order Taylor expansions of \mu(x_*) and \sigma^2(x_*) respectively, around u, led to

m(u, \Sigma_x) \approx \mu(u)    (8)

v(u, \Sigma_x) \approx \sigma^2(u) + \frac{1}{2} \mathrm{Tr}\Big[ \frac{\partial^2 \sigma^2(x_*)}{\partial x_* \partial x_*^T}\Big|_{x_* = u} \Sigma_x \Big] + \frac{\partial \mu(x_*)}{\partial x_*}\Big|_{x_* = u}^{T} \Sigma_x \frac{\partial \mu(x_*)}{\partial x_*}\Big|_{x_* = u}    (9)

The detailed calculations can be found in [2]. In [8], we derived the exact expressions of the first and second moments. Rewriting the predictive mean \mu(x_*) as a linear combination of the covariances between the new point and the training points (as suggested in [12]), \mu(x_*) = \sum_{i=1}^{N} \beta_i C(x_*, x^i) with \beta = K^{-1} t, the calculation of m(u, \Sigma_x) with our choice of covariance function then involves the product of two Gaussian functions:

m(u, \Sigma_x) = \sum_{i=1}^{N} \beta_i \int C(x_*, x^i) \, p(x_*) \, dx_*    (10)

This leads to (refer to [9] for details)

m(u, \Sigma_x) = \sum_{i=1}^{N} \beta_i l_i    (11)

with l_i = |I + W^{-1} \Sigma_x|^{-1/2} \exp[ -\frac{1}{2} (u - x^i)^T (W + \Sigma_x)^{-1} (u - x^i) ], where W^{-1} = \mathrm{diag}(w_1, ..., w_D) and I is the D x D identity matrix. In the same manner, we obtain for the variance

v(u, \Sigma_x) = 1 - \mathrm{Tr}[ (K^{-1} - \beta \beta^T) L ] - m(u, \Sigma_x)^2    (12)

with

L_{ij} = |I + 2 W^{-1} \Sigma_x|^{-1/2} \exp\Big[ -\frac{1}{2} (u - \bar{x}_{ij})^T \Big(\frac{W}{2} + \Sigma_x\Big)^{-1} (u - \bar{x}_{ij}) - \frac{1}{2} (x^i - x^j)^T (2W)^{-1} (x^i - x^j) \Big]    (13)

where \bar{x}_{ij} = (x^i + x^j)/2. 3.2 Monte-Carlo alternative Equation (5) can be solved by performing a numerical approximation of the integral, using a simple Monte-Carlo approach:

p(f(x_*) | u, \Sigma_x) \approx \frac{1}{S} \sum_{s=1}^{S} p(f(x_*) | x_*^{(s)})    (14)

where the x_*^{(s)} are (independent) samples from p(x_* | u, \Sigma_x). 4 Iterative k-step ahead prediction of time series For the multiple-step ahead prediction task of time series, the iterative method consists of making repeated one-step ahead predictions, up to the desired horizon. Consider the time series y_1, ..., y_t and the state-space model y_t = f(x_t) + \epsilon_t with x_t = [y_{t-1}, ..., y_{t-L}]^T, where x_t is the state at time t (we assume that the lag L is known) and the (white) noise has variance
v_0. Then, the “naive” iterative k-step ahead prediction method works as follows: it predicts only one time step ahead, using the estimate of the output of the current prediction, as well as previous outputs (up to the lag L), as the input to the prediction of the next time step, until the prediction k steps ahead is made. That way, only the output estimates are used and the uncertainty induced by each successive prediction is not accounted for. Using the results derived in the previous section, we suggest formally incorporating the uncertainty information about the intermediate regressors. That is, as we predict ahead in time, we now view the lagged outputs as random variables. In this framework, the input at time t + k is a random vector with mean formed by the predicted means of the lagged outputs y_{t+k-1}, ..., y_{t+k-L}, given by (11). The L x L input covariance matrix has the different predicted variances on its diagonal (with the estimated noise variance v_0 added to them), computed with (12); the off-diagonal elements are given by, in the case of the exact solution, the cross-covariances between the predicted outputs, which can again be computed analytically from the product of Gaussians, with \beta as defined previously.

4.1 Illustrative examples The first example is intended to provide a basis for comparing the approximate and exact solutions, within the Gaussian approximation of (5), to the numerical solution (Monte-Carlo sampling from the true distribution), when the uncertainty is propagated as we predict ahead in time. We use the second example, inspired by real-life problems, to show that iteratively predicting ahead in time without taking account of the uncertainties induced by each successive prediction leads to inaccurate results, with unrealistically small error bars. We then assess the predictive performance of the different methods by computing the average absolute error (L_1), the average squared error (L_2) and the average minus log predictive density (L_3), which measures the density of the actual true test output under the Gaussian predictive distribution; we use its negative log as a measure of loss. (To evaluate these losses in the case of Monte-Carlo sampling, we use the sample mean and sample variance.)

4.1.1 Forecasting the Mackey-Glass time series The Mackey-Glass chaotic time series constitutes a well-known benchmark and a challenge for the multiple-step ahead prediction task, due to its strong non-linearity [4]:

\frac{dz(t)}{dt} = -b z(t) + a \frac{z(t - \tau)}{1 + z(t - \tau)^{10}}

with a = 0.2, b = 0.1 and \tau = 17. The series is re-sampled and normalized. We choose L = 16 for the number of lagged outputs in the state vector, x_t = [y_{t-1}, ..., y_{t-16}]^T, and the targets, y_t, are corrupted by a white noise with variance 0.001. We train a GP model with a Gaussian kernel such as (2) on a set of points taken at random from a longer simulated series. Figure 1 shows the mean predictions with their uncertainties, given by the exact and approximate methods, and samples from the Monte-Carlo numerical approximation, from 1 to 100 steps ahead, for different starting points. Figure 2 shows the plot of the 100-step ahead mean predictions (left) and their 2\sigma uncertainties (right), given by the exact and approximate methods, as well as the sample mean and sample variance obtained with the numerical solution (averaged over the test points). These figures show the better performance of the exact method over the approximate one. Also, they allow us to validate the Gaussian approximation, noticing that the error bars encompass the samples from the true distribution. Table 1 provides a quantitative confirmation. Table 1: Average (over the test points) absolute error (L_1), squared error (L_2) and minus log predictive density (L_3) of the 100-step ahead predictions obtained using the exact method, the approximate one, and sampling from the true distribution; the exact method achieves the smallest losses. Figure 1: Iterative method in action: simulation from 1 to 100 steps ahead for different starting points in the test series. Mean predictions with 2\sigma error bars given by the exact (dash) and approximate (dot) methods. Also plotted, samples obtained using the numerical approximation. Figure 2: 100-step ahead mean predictions (left) and uncertainties (right) obtained using the exact method (dash), the approximate one (dot) and the sample mean and variance of the numerical solution (dash-dot).

4.1.2 Prediction of a pH process simulation We now compare the iterative k-step ahead prediction results obtained when propagating the uncertainty (using the approximate method) and when using the output estimates only (the naive approach). For doing so, we use the pH neutralisation process benchmark presented in [3]. The training and test data consist of pH values (outputs of the process) and a control input signal (u). With a model whose regressor contains lagged values of both the pH output and the control input, we train our GP and consider a separate test set (all data have been normalized). Figure 3 shows the 10-step ahead predicted means and variances obtained when propagating the uncertainty and when using information on the past predicted means only. The losses computed for the approximate method are substantially smaller than those for the naive one, the difference being most dramatic for the minus log predictive density, which heavily penalises the unrealistically small predictive variances of the naive approach. Figure 3: Predictions from 1 to 10 steps ahead (left). 10-step ahead mean predictions with the corresponding variances, when propagating the uncertainty (dot) and when using the previous point estimates only (dash).

5 Conclusions We have presented a novel approach which allows us to use knowledge of the variance on inputs to Gaussian process models to achieve more realistic prediction variance in the case of noisy inputs. Iterating this approach allows us to use it as a method for efficient propagation of uncertainty in the multi-step ahead prediction task of non-linear time-series. In experiments on simulated dynamic systems, comparing our Gaussian approximation to Monte Carlo simulations, we found that the propagation method is comparable to Monte Carlo simulations, and that both approaches achieved more realistic error bars than a naive approach which ignores the uncertainty on the current state. This method can help in understanding the underlying dynamics of a system, as well as being useful, for instance, in a model predictive control framework where knowledge of the accuracy of the model predictions over the whole prediction horizon is required (see [6] for a model predictive control law based on Gaussian processes taking account of the prediction uncertainty).
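The comparison between the Gaussian approximation and Monte-Carlo simulations rests on the decomposition m = E[\mu(x_*)], v = E[\sigma^2(x_*)] + Var[\mu(x_*)] of equations (6)-(7). A minimal Monte-Carlo sketch of that decomposition, using a toy predictive model in place of a GP (all names illustrative):

```python
import numpy as np

def predict_uncertain_input(predict, u, s2, n=2000, rng=np.random.default_rng(1)):
    """Monte-Carlo version of Eq. (5): sample the uncertain input
    x* ~ N(u, s2), push each sample through a one-step predictive model
    (`predict` returns a mean and a variance per sample), and combine
    by the law of total variance:
        m = E[mu(x*)],  v = E[sigma2(x*)] + Var[mu(x*)]."""
    xs = rng.normal(u, np.sqrt(s2), n)
    mus, vars_ = predict(xs)
    return mus.mean(), vars_.mean() + mus.var()

# Toy predictive model: linear mean 2x, constant noise variance 0.25.
predict = lambda x: (2.0 * x, np.full_like(x, 0.25))
m, v = predict_uncertain_input(predict, u=1.0, s2=0.09)
# Analytic values for this toy model: m = 2.0, v = 0.25 + 4 * 0.09 = 0.61
assert abs(m - 2.0) < 0.1 and abs(v - 0.61) < 0.1
```

The second term of v is exactly the contribution a naive iterated predictor throws away, which is why its error bars stay unrealistically small as the horizon grows.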
Note that this method is also useful in its own right in the case of noisy model inputs, assuming they have a Gaussian distribution. Acknowledgements Many thanks to Mike Titterington for his useful comments. The authors gratefully acknowledge the support of the Multi-Agent Control Research Training Network (EC TMR grant HPRN-CT-1999-00107), and RM-S is grateful for EPSRC grant "Modern statistical approaches to off-equilibrium modelling for nonlinear system control" GR/M76379/01. References [1] O'Hagan, A. (1978) Curve fitting and optimal design for prediction. Journal of the Royal Statistical Society B 40:1-42. [2] Girard, A., Rasmussen, C. E. & Murray-Smith, R. (2002) Gaussian Process Priors With Uncertain Inputs: Multiple-Step Ahead Prediction. Technical Report TR-2002-119, Dept. of Computing Science, University of Glasgow. [3] Henson, M. A. & Seborg, D. E. (1994) Adaptive nonlinear control of a pH neutralisation process. IEEE Trans. Control Systems Technology 2:169-183. [4] Mackey, M. C. & Glass, L. (1977) Oscillation and Chaos in Physiological Control Systems. Science 197:287-289. [5] MacKay, D. J. C. (1997) Gaussian Processes: A Replacement for Supervised Neural Networks? Lecture notes for a tutorial at NIPS 1997. [6] Murray-Smith, R. & Sbarbaro-Hofer, D. (2002) Nonlinear adaptive control using non-parametric Gaussian process prior models. 15th IFAC World Congress on Automatic Control, Barcelona. [7] Neal, R. M. (1995) Bayesian Learning for Neural Networks. PhD thesis, Dept. of Computer Science, University of Toronto. [8] Quiñonero Candela, J., Girard, A. & Larsen, J. (2002) Propagation of Uncertainty in Bayesian Kernel Models: Application to Multiple-Step Ahead Forecasting. Submitted to ICASSP 2003. [9] Quiñonero Candela, J. & Girard, A. (2002) Prediction at an Uncertain Input for Gaussian Processes and Relevance Vector Machines: Application to Multiple-Step Ahead Time-Series Forecasting. Technical Report, IMM, Danish Technical University. [10] Rasmussen, C. E. (1996) Evaluation of Gaussian Processes and other Methods for Non-Linear Regression. PhD thesis, Dept. of Computer Science, University of Toronto. [11] Williams, C. K. I. & Rasmussen, C. E. (1996) Gaussian Processes for Regression. Advances in Neural Information Processing Systems 8, MIT Press. [12] Williams, C. K. I. (2002) Gaussian Processes. To appear in The Handbook of Brain Theory and Neural Networks, Second edition, MIT Press.
| 2002 | 195 |
2,208 |
Maximum Likelihood and the Information Bottleneck Noam Slonim Yair Weiss School of Computer Science & Engineering, Hebrew University, Jerusalem 91904, Israel {noamm,yweiss}@cs.huji.ac.il Abstract The information bottleneck (IB) method is an information-theoretic formulation for clustering problems. Given a joint distribution p(x, y), this method constructs a new variable T that defines partitions over the values of X that are informative about Y. Maximum likelihood (ML) of mixture models is a standard statistical approach to clustering problems. In this paper, we ask: how are the two methods related? We define a simple mapping between the IB problem and the ML problem for the multinomial mixture model. We show that under this mapping the problems are strongly related. In fact, for uniform input distribution over X or for large sample size, the problems are mathematically equivalent. Specifically, in these cases, every fixed point of the IB-functional defines a fixed point of the (log) likelihood and vice versa. Moreover, the values of the functionals at the fixed points are equal under simple transformations. As a result, in these cases, every algorithm that solves one of the problems induces a solution for the other. 1 Introduction Unsupervised clustering is a central paradigm in data analysis. Given a set of objects X, one would like to find a partition which optimizes some score function. Tishby et al. [1] proposed a principled information-theoretic approach to this problem. In this approach, given the joint distribution p(x, y), one looks for a compact representation of X which preserves as much information as possible about Y (see [2] for a detailed discussion). The mutual information I(X; Y) between the random variables X and Y is given by [3] I(X; Y) = Σ_{x,y} p(x, y) log [p(x, y) / (p(x) p(y))]. In [1] it is argued that both the compactness of the representation and the preserved relevant information are naturally measured by mutual information, hence the above principle can be formulated as a trade-off between these quantities. Specifically, Tishby et al. [1] suggested to introduce a compressed representation T of X by defining a stochastic mapping q(t|x). The compactness of the representation is then determined by I(T; X), while the quality of the clusters T is measured by the fraction of information they capture about Y, I(T; Y) / I(X; Y).
The IB problem can be stated as finding a (stochastic) mapping q(t|x) such that the IB-functional L_IB = I(T; X) − β I(T; Y) is minimized, where β is a positive Lagrange multiplier that determines the trade-off between compression and precision. It was shown in [1] that this problem has an exact optimal (formal) solution without any assumption about the origin of the joint distribution p(x, y). The standard statistical approach to clustering is mixture modeling. We assume the measurements y for each x come from one of |T| possible statistical sources, each with its own parameters Θ (e.g. means and covariances in Gaussian mixtures). Clustering corresponds to first finding the maximum likelihood estimates of Θ and then using these parameters to calculate the posterior probability that the measurements at x were generated by each source. These posterior probabilities define a "soft" clustering of X values. While both approaches try to solve the same problem, the viewpoints are quite different. In the information-theoretic approach no assumption is made regarding how the data was generated, but we assume that the joint distribution p(x, y) is known exactly. In the maximum-likelihood approach we assume a specific generative model for the data and assume we have samples n(x, y), not the true probability. In spite of these conceptual differences we show that under a proper choice of the generative model, these two problems are strongly related. Specifically, we use the multinomial mixture model (a.k.a. the one-sided [4] or the asymmetric clustering model [5]), and provide a simple "mapping" between the concepts of one problem and those of the other. Using this mapping we show that in general, searching for a solution of one problem induces a search in the solution space of the other. Furthermore, for a uniform input distribution p(x) or for large sample sizes, we show that the problems are mathematically equivalent. Hence, in these cases, any algorithm which solves one problem induces a solution for the other.
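To fix notation before the review sections, the two ingredients of the IB-functional can be computed directly from a joint p(x, y) and a candidate assignment q(t|x); the following numpy helper is a minimal sketch (the function and variable names are ours, not the paper's):

```python
import numpy as np

def mutual_information(pxy):
    # I(X;Y) = sum_{x,y} p(x,y) log[ p(x,y) / (p(x) p(y)) ]
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0                         # 0 log 0 contributes nothing
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px * py)[mask])))

def ib_functional(pxy, q_t_given_x, beta):
    # L_IB = I(T;X) - beta * I(T;Y), with T defined by q(t|x) and the
    # Markov relation T - X - Y (rows of q_t_given_x index x, columns t).
    px = pxy.sum(axis=1)
    qxt = q_t_given_x * px[:, None]        # joint q(x,t) = p(x) q(t|x)
    qty = q_t_given_x.T @ pxy              # joint q(t,y) = sum_x p(x,y) q(t|x)
    return mutual_information(qxt) - beta * mutual_information(qty)
```

For example, with a perfectly correlated pxy = np.eye(2)/2 and the identity assignment q(t|x), the functional at beta = 1 evaluates to I(T;X) − I(T;Y) = log 2 − log 2 = 0.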
2 Short review of the IB method In the IB framework, one is given as input a joint distribution p(x, y). Given this distribution, a compressed representation T of X is introduced through the stochastic mapping q(t|x). The goal is to find q(t|x) such that the IB-functional L_IB = I(T; X) − β I(T; Y) is minimized for a given value of β. The joint distribution over X, Y and T is defined through the IB Markovian independence relation, T ↔ X ↔ Y. Specifically, every choice of q(t|x) defines a specific joint probability q(x, y, t) = p(x, y) q(t|x). Therefore, the distributions q(t) and q(y|t) that are involved in calculating the IB-functional are given by

q(t) = Σ_x p(x) q(t|x),   q(y|t) = (1 / q(t)) Σ_x p(x, y) q(t|x).   (1)

In principle every choice of q(t|x) is possible, but as shown in [1], if q(t) and q(y|t) are given, the choice that minimizes L_IB is defined through

q(t|x) = (q(t) / Z(x, β)) exp(−β D_KL[p(y|x) || q(y|t)]),   (2)

where Z(x, β) is the normalization (partition) function and D_KL[p || q] = Σ_y p(y) log [p(y) / q(y)] is the Kullback-Leibler divergence. Iterating over this equation and the IB-step defined in Eq. (1) defines an iterative algorithm that is guaranteed to converge to a (local) fixed point of L_IB [1]. 3 Short review of ML for mixture models In a multinomial mixture model, we assume that y takes on discrete values and is sampled from a multinomial distribution θ(y|t_x), where t_x denotes x's label. In the one-sided clustering model [4][5] we further assume that there can be multiple observations corresponding to a single x, but they are all sampled from the same multinomial distribution. This model can be described through the following generative process: For each x choose a unique label t_x by sampling from π(t). For i = 1, ..., N: – choose x_i by sampling from a prior over X. – choose y_i by sampling from θ(y | t_{x_i}) and increase n(x_i, y_i) by one. Let t = (t_{x_1}, ..., t_{x_{|X|}}) denote the random vector that defines the (typically hidden) labels, or topics, for all x. The complete likelihood is given by:

p(n(x, y), t | Θ) = Π_x π(t_x) Π_{i=1}^{N} p(x_i) θ(y_i | t_{x_i})   (3)
= Π_x [ π(t_x) p(x)^{n(x)} Π_y θ(y | t_x)^{n(x, y)} ],   (4)

where n(x, y) is a count matrix and n(x) = Σ_y n(x, y). The (true) likelihood is defined through summing over all the possible choices of t,

L(n(x, y) | Θ) = Σ_t p(n(x, y), t | Θ).   (5)

Given n(x, y), the goal of ML estimation is to find an assignment for the parameters π(t) and θ(y|t) such that the likelihood is (at least locally) maximized. Since it is easy to show that the ML estimate for p(x) is just the empirical counts n(x)/N, we further focus only on estimating π and θ. A standard algorithm for this purpose is the EM algorithm [6]. Informally, in the E-step we replace the missing value of t_x by its distribution p(t_x | x), which we denote by q_x(t). In the M-step we use that distribution to re-estimate π and θ. Using standard derivations it is easy to verify that in our context the E-step is defined through

q_x(t) = (1/Z_x) π(t) Π_y θ(y|t)^{n(x, y)}   (6)
= (1/Z'_x) π(t) exp(−n(x) D_KL[n̄(y|x) || θ(y|t)]),   (7)-(8)

where Z_x and Z'_x are normalization factors and n̄(y|x) = n(x, y)/n(x). The M-step is simply given by

π̂(t) ∝ Σ_x q_x(t),   θ̂(y|t) ∝ Σ_x n(x, y) q_x(t).   (9)

Iterating over these EM steps is guaranteed to converge to a local fixed point of the likelihood. Moreover, every fixed point of the likelihood defines a fixed point of this algorithm. An alternative derivation [7] is to define the free energy functional:

F(n(x, y); q, Θ) = −Σ_x Σ_t q_x(t) [log π(t) + Σ_y n(x, y) log θ(y|t)] + Σ_x Σ_t q_x(t) log q_x(t).   (10)-(11)

The E-step then involves minimizing F with respect to q while the M-step minimizes it with respect to Θ.
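The E- and M-steps just reviewed can be sketched in a few lines of numpy. This is a generic implementation of the standard recursions (the small smoothing constant added to θ to avoid log 0 is an implementation detail not in the paper), not the authors' exact code:

```python
import numpy as np

def em_multinomial_mixture(n, n_clusters, n_iter=50, seed=0):
    """EM for the one-sided multinomial mixture.

    n is the |X| x |Y| count matrix n(x, y); returns pi(t), theta(y|t)
    (rows indexed by t) and the posteriors q_x(t)."""
    rng = np.random.default_rng(seed)
    nx, ny = n.shape
    pi = np.full(n_clusters, 1.0 / n_clusters)
    theta = rng.dirichlet(np.ones(ny), size=n_clusters)
    for _ in range(n_iter):
        # E-step (Eq. 6): q_x(t) proportional to pi(t) prod_y theta(y|t)^n(x,y)
        logq = np.log(pi)[None, :] + n @ np.log(theta).T
        logq -= logq.max(axis=1, keepdims=True)
        q = np.exp(logq)
        q /= q.sum(axis=1, keepdims=True)
        # M-step (Eq. 9): pi(t) ~ sum_x q_x(t); theta(y|t) ~ sum_x n(x,y) q_x(t)
        pi = q.sum(axis=0) / nx
        theta = q.T @ n + 1e-12            # smooth away exact zeros
        theta /= theta.sum(axis=1, keepdims=True)
    return pi, theta, q

def log_likelihood(n, pi, theta):
    # log L = sum_x log sum_t pi(t) prod_y theta(y|t)^n(x,y),
    # up to terms that do not depend on pi or theta.
    logp = np.log(pi)[None, :] + n @ np.log(theta).T
    mx = logp.max(axis=1, keepdims=True)
    return float(np.sum(mx.squeeze(1) + np.log(np.exp(logp - mx).sum(axis=1))))
```

Running more EM iterations never decreases `log_likelihood`, which is the monotonicity property cited in the text.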
Since this functional is bounded (under mild conditions), the EM algorithm will converge to a local fixed point of F, which corresponds to a fixed point of the likelihood. At these fixed points, F becomes identical to −log L(n(x, y) | Θ). 4 The ML ↔ IB mapping As already mentioned, the IB problem and the ML problem stem from different motivations and involve different "settings". Hence, it is not entirely clear what the purpose of "mapping" between these problems is. Here, we define this mapping to achieve two goals. The first is theoretically motivated: using the mapping we show some mathematical equivalence between both problems. The second is practically motivated: we show that algorithms designed for one problem are (in some cases) suitable for solving the other. A natural mapping would be to identify each distribution with its corresponding one. However, this direct mapping is problematic. Assume that we are mapping from ML to IB. If we directly map q_x(t), π(t), θ(y|t) to q(t|x), q(t), q(y|t), respectively, obviously there is no guarantee that the IB Markovian independence relation will hold once we complete the mapping. Specifically, using this relation to extract q(t) through Eq. (1) will in general result in a different prior over T than simply defining q(t) = π(t). However, we notice that once we have defined q(t|x) and p(x, y), the other distributions can be extracted by performing the IB-step defined in Eq. (1). Moreover, as already shown in [1], performing this step can only improve (decrease) the corresponding IB-functional. A similar phenomenon is present once we map from IB to ML. Although in principle there are no "consistency" problems in mapping directly, we know that once we have defined q_x(t) and n(x, y), we can extract π and θ by a simple M-step. This step, by definition, will only improve the likelihood, which is our goal in this setting. The only remaining issue is to define a corresponding component in the ML setting for the trade-off parameter β. As we will show in the next section, the natural choice for this purpose is the sample size, N = Σ_{x,y} n(x, y). Therefore, to summarize, we define the ML ↔ IB mapping by

q_x(t) ↔ q(t|x),   n(x, y) ↔ r p(x, y),   (12)

where r is a positive (scaling) constant (so that N ↔ r), and the mapping is completed by performing an IB-step or an M-step according to the mapping direction. Notice that under this mapping, every search in the solution space of the IB problem induces a search in the solution space of the ML problem, and vice versa (see Figure 2). Observation 4.1 When X is uniformly distributed (i.e., n(x) or p(x) are constant), the ML ↔ IB mapping is equivalent to a direct mapping of each distribution to its corresponding one. This observation is a direct result of the fact that if X is uniformly distributed, then the IB-step defined in Eq. (1) and the M-step defined in Eq. (9) are mathematically equivalent. Observation 4.2 When X is uniformly distributed, the EM algorithm is equivalent to the IB iterative optimization algorithm under the ML ↔ IB mapping with β = N/|X|. Again, this observation is a direct result of the equivalence of the IB-step and the M-step for a uniform prior over X. Additionally, we notice that in this case n(x) = N/|X| = β, hence Eq. (6) and Eq. (2) are also equivalent. It is important to emphasize, though, that this equivalence holds only for the specific choice β = n(x). While clearly the IB iterative algorithm (and problem) are meaningful for any value of β, there is no such freedom (for good or worse) in the ML setting, and the exponential factor in EM must be n(x). 5 Comparing ML and IB Claim 5.1 When X is uniformly distributed and β = N/|X|, all the fixed points of the likelihood L are mapped to all the fixed points of the IB-functional L_IB with β = n(x). Moreover, at the fixed points, −log L and L_IB are equal up to a linear transformation with constant coefficients. Corollary 5.2 When X is uniformly distributed, every algorithm which finds a fixed point of L induces a fixed point of L_IB with β = n(x), and vice versa.
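Observation 4.2 can be checked numerically: starting from the same assignment q, one iIB update and one EM E-step on the mapped counts produce identical posteriors when p(x) is uniform and β plays the role of the per-object count n(x). A minimal numpy sketch (the dimensions and the value of β are arbitrary choices for the check):

```python
import numpy as np

rng = np.random.default_rng(1)
nx, ny, nt = 6, 5, 3
beta = 4.0                                   # plays the role of n(x) = N/|X|

p_y_given_x = rng.dirichlet(np.ones(ny), size=nx)
pxy = p_y_given_x / nx                       # uniform p(x) = 1/|X|

q = rng.dirichlet(np.ones(nt), size=nx)      # current q(t|x), rows indexed by x

# Shared statistics (the IB-step of Eq. 1, equivalently the M-step).
qt = q.T @ np.full(nx, 1.0 / nx)             # q(t) = sum_x p(x) q(t|x)
q_y_given_t = (q.T @ pxy) / qt[:, None]      # q(y|t)

# iIB update (Eq. 2): q(t|x) proportional to q(t) exp(-beta KL[p(y|x)||q(y|t)])
kl = np.array([[np.sum(p_y_given_x[x] * np.log(p_y_given_x[x] / q_y_given_t[t]))
                for t in range(nt)] for x in range(nx)])
q_ib = qt[None, :] * np.exp(-beta * kl)
q_ib /= q_ib.sum(axis=1, keepdims=True)

# EM E-step (Eq. 6) on mapped counts n(x,y) = beta * p(y|x), prior pi = q(t).
n = beta * p_y_given_x
q_em = qt[None, :] * np.exp(n @ np.log(q_y_given_t).T)
q_em /= q_em.sum(axis=1, keepdims=True)

print(np.allclose(q_ib, q_em))
```

The two updates differ only by a factor exp(−β Σ_y p(y|x) log p(y|x)) that does not depend on t, so it cancels in the normalization and the printed check succeeds.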
When the algorithm finds several fixed points, the solution that maximizes L is mapped to the one that minimizes L_IB. Proof: We prove the direction from ML to IB; the opposite direction is similar. We assume that we are given observations n(x, y) where n(x) is constant, and parameters Θ that define a fixed point of the likelihood L. As a result, this is also a fixed point of the EM algorithm (where q_x(t) is defined through an E-step). Using Observation 4.2 it follows that this fixed point is mapped to a fixed point of L_IB with β = n(x), as required. Since at the fixed point −log L = F, it is enough to show the relationship between F and L_IB. Rewriting F from Eq. (10) we get

F(n(x, y); q, Θ) = Σ_x Σ_t q_x(t) log [q_x(t) / π(t)] − Σ_{t,y} log θ(y|t) Σ_x n(x, y) q_x(t).   (13)

Using the ML ↔ IB mapping and Observation 4.1 we get

F = Σ_{x,t} q(t|x) log [q(t|x) / q(t)] − r Σ_{t,y} log q(y|t) Σ_x p(x, y) q(t|x).   (14)

Multiplying both sides by p(x) = 1/|X| and using the IB Markovian independence relation, we find that

(1/|X|) F = Σ_{x,t} p(x) q(t|x) log [q(t|x) / q(t)] − β Σ_{t,y} q(t) q(y|t) log q(y|t) = I(T; X) − β Σ_{t,y} q(t) q(y|t) log q(y|t).   (15)

Subtracting the (constant) β H(Y), with H(Y) = −Σ_y q(y) log q(y) and q(y) = Σ_t q(t) q(y|t), from both sides gives:

(1/|X|) F − β H(Y) = I(T; X) − β I(T; Y) = L_IB,   (16)

as required. We emphasize again that this equivalence is for the specific value β = n(x). Corollary 5.3 When X is uniformly distributed and β = n(x), every algorithm decreases F iff it decreases L_IB. This corollary is a direct result of the above proof, which showed the equivalence of the free energy of the model and the IB-functional (up to linear transformations). The previous claims dealt with the special case of a uniform prior over X. The following claims provide similar results for the general case, when N (or β) is large enough. Claim 5.4 For N → ∞ (or β → ∞), all the fixed points of L are mapped to all the fixed points of L_IB, and vice versa. Moreover, at the fixed points, −log L and L_IB are equal up to a linear transformation. Corollary 5.5 When N → ∞, every algorithm which finds a fixed point of L induces a fixed point of L_IB with β → ∞, and vice versa. When the algorithm finds several different fixed points, the solution that maximizes L is mapped to the solution that minimizes L_IB. A similar result was recently obtained independently in [8] for the special case of "hard" clustering. It is also important to keep in mind that in many clustering applications a uniform prior over X is "forced" during the pre-processing to avoid undesirable bias. In particular this was done in several previous applications of the IB method (see [2] for details).

Figure 1: Progress of L_IB and F for different β and N values, while running iIB and EM. Panels: small β (iIB) and large β (iIB), tracking L_IB together with the correspondingly transformed free energy; small N (EM) and large N (EM), tracking F together with the correspondingly transformed L_IB.

Proof: Again, we prove only the direction from ML to IB as the opposite direction is similar. We are given n(x, y), where N = Σ_{x,y} n(x, y), and Θ that define a fixed point of L. Using the E-step in Eq. (6) we extract q_x(t), ending up with a fixed point of the EM algorithm. We notice that from N → ∞ it follows that n(x) → ∞. Therefore, the mapping q_x(t) becomes deterministic:

q_x(t) = 1 if t = argmin_{t'} D_KL[n̄(y|x) || θ(y|t')], and 0 otherwise.   (17)

Performing the ML ↔ IB mapping (including the IB-step), it is easy to verify that we get q(y|t) = θ(y|t) (but q(t) ≠ π(t) if the prior over X is not uniform). After completing the mapping we try to update q(t|x) through Eq. (2). Since now β → ∞, it follows that q(t|x) will remain deterministic. Specifically,

q(t|x) = 1 if t = argmin_{t'} D_KL[p(y|x) || q(y|t')], and 0 otherwise,   (18)

which is equal to its previous value. Therefore, we are at a fixed point of the IB iterative algorithm, and thereby at a fixed point of the IB-functional L_IB, as required. To relate −log L and L_IB we notice again that at the fixed point F = −log L. From Eq. (13) we see that for large counts the first term is negligible relative to the second, hence

(1/N) F ≈ −(1/N) Σ_{t,y} log θ(y|t) Σ_x n(x, y) q_x(t).   (19)

Using the ML ↔ IB mapping and similar algebra as above, we find that

(1/N) F ≈ −Σ_{t,y} q(t, y) log q(y|t) = (1/β)(−β I(T; Y) + β H(Y)) ≈ (1/β)(L_IB + β H(Y)),   (20)

where the last approximation holds since for β → ∞ the compression term I(T; X) in L_IB is negligible relative to β I(T; Y). Corollary 5.6 When N → ∞, every algorithm decreases F iff it decreases L_IB with β → ∞. How large must N (or β) be? We address this question through numerical simulations. Yet, roughly speaking, we notice that the value of N for which the above claims (approximately) hold is related to the "amount of uniformity" in n(x). Specifically, a crucial step in the above proof assumed that each n(x) is large enough such that q_x(t) becomes deterministic. Clearly, when n(x) is less uniform, achieving this situation requires larger N values. 6 Simulations We performed several different simulations using different IB and ML algorithms. Due to the lack of space, only one example is reported below.

Figure 2: In general, ML (for mixture models) and IB operate in different solution spaces: the IB "real" world, IB ~ min D_KL[q(x, y, t) || Q(x, y, t)] with T ↔ X ↔ Y, versus the ML "ideal" world, ML ~ min D_KL[p(x, y) || L(n(x, y); π, θ)] with X ↔ T ↔ Y. Nonetheless, a sequence of probabilities that is obtained through some optimization routine (e.g., EM) in the "ML space" can be mapped, via the ML ↔ IB mapping, to a sequence of probabilities in the "IB space", and vice versa. The main result of this paper is that under some conditions these two sequences are completely equivalent.

In this example we used a subset of the 20-Newsgroups corpus [9], consisting of documents randomly chosen from different discussion groups. Denoting the documents by X and the words by Y, after pre-processing [10] we are left with the document set X, the word set Y, and the total word count N.
Since our main goal was to check the differences between IB and ML for different values of N (or β), we further produced another dataset. In this data we randomly chose only a fraction of the word occurrences for every document x, ending up with a much smaller total count N. For both datasets we clustered the documents into |T| clusters using both EM and the iterative IB (iIB) algorithm (where we took p(x, y) = n(x, y)/N and β = N/|X|). For each algorithm we used the ML ↔ IB mapping to calculate F and L_IB during the process (e.g., for iIB, after each iteration we mapped from IB to ML, including the M-step, and calculated F). We repeated this procedure for different initializations for each dataset. In these runs we found that usually both algorithms improved both functionals monotonically. Comparing the functionals during the process, we see that for the smaller sample size the differences are indeed more evident (Figure 1). Comparing the final values of the functionals (after a number of iterations that typically yielded convergence), we found that in some runs iIB converged to a smaller value of F than EM, while in other runs EM converged to a smaller value of L_IB. Thus, occasionally, iIB finds a better ML solution or EM finds a better IB solution. This phenomenon was much more common for the large sample size case. 7 Discussion While we have shown that the ML and IB approaches are equivalent under certain conditions, it is important to keep in mind the different assumptions both approaches make regarding the joint distribution over X, Y and T. The mixture model (1) assumes that Y is independent of X given T, and (2) assumes that p(y|x) is one of a small number (|T|) of possible conditional distributions. For this reason, the marginal probability over X, Y implied by the model is usually different from the empirical p(x, y) = n(x, y)/N. Indeed, an alternative view of ML estimation is as minimizing D_KL[p(x, y) || L(n(x, y); π, θ)]. On the other hand, in the IB framework, q(x, y, t) is defined through the IB Markovian independence relation T ↔ X ↔ Y. Therefore, the solution space is the family of distributions for which this relation holds and the marginal distribution over X, Y is consistent with the input.
Interestingly, it is possible to give an alternative formulation of the IB problem which also involves KL minimization [11]. In this formulation the IB problem is related to minimizing D_KL[q(x, y, t) || Q(x, y, t)], where Q(x, y, t) denotes the family of distributions for which the mixture model assumption X ↔ T ↔ Y holds.* In this sense, we may say that while solving the IB problem, one tries to minimize the KL with respect to the "ideal" world, in which T separates X from Y. On the other hand, while solving the ML problem, one assumes an "ideal" world and tries to minimize the KL with respect to the given marginal distribution p(x, y). Our theoretical analysis shows that under the ML ↔ IB mapping, these two procedures are in some cases equivalent (see Figure 2). Once we are able to map between ML and IB, it should be interesting to try to adopt additional concepts from one approach to the other. In the following we provide two such examples. In the IB framework, for large enough β, the quality of a given solution is measured through I(T; Y) / I(X; Y) [1]. This measure provides a theoretical upper bound, which can be used for purposes of model selection and more. Using the ML ↔ IB mapping, we can now adopt this measure for the ML estimation problem (for large enough N). In EM, the exponential factor n(x) in general depends on x. However, its analogous component in the IB framework, β, obviously does not. Nonetheless, in principle it is possible to reformulate the IB problem while defining an x-dependent β(x) (without changing the form of the optimal solution). We leave this issue for future research. We have shown that for the multinomial mixture model, ML and IB are equivalent in some cases. It is worth noting that in principle, by choosing a different generative model, one may find further equivalences. Additionally, the IB method was recently extended to the multivariate case, where a new family of IB-like variational problems was presented and solved [11]. A natural question is to look for further generative models that can be mapped to these multivariate IB problems, and we are working in this direction. Acknowledgments Insightful discussions with Nir Friedman, Naftali Tishby and Gal Elidan are greatly appreciated. References [1] N. Tishby, F. Pereira, and W. Bialek. The Information Bottleneck method. In Proc. 37th Allerton Conference on Communication and Computation, 1999. [2] N. Slonim. The Information Bottleneck: theory and applications. Ph.D. thesis, The Hebrew University, 2002. [3] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, New York, 1991. [4] T. Hofmann, J. Puzicha, and M. I. Jordan. Learning from dyadic data. In Proc. of NIPS-11, 1998. [5] J. Puzicha, T. Hofmann, and J. M. Buhmann. Histogram clustering for unsupervised segmentation and image retrieval. Pattern Recognition Letters 20(9):899-909, 1999. [6] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum Likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, vol. 39, pp. 1-38, 1977. [7] R. M. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan (editor), Learning in Graphical Models, pp. 355-368, 1998. [8] L. Hermes, T. Zöller, and J. M. Buhmann. Parametric distributional clustering for image segmentation. In Proc. of European Conference on Computer Vision (ECCV), 2002. [9] K. Lang. Learning to filter netnews. In Proc. of the 12th Int. Conf. on Machine Learning, 1995. [10] N. Slonim, N. Friedman, and N. Tishby. Unsupervised document classification using sequential information maximization. In Proc. of SIGIR-25, 2002. [11] N. Friedman, O. Mosenzon, N. Slonim, and N. Tishby. Multivariate Information Bottleneck. In Proc. of UAI-17, 2001. * The KL with respect to a family is defined as the minimum over all the members of the family. Therefore, here, both arguments of the KL change during the process, and the distributions involved in the minimization are over all three random variables.
| 2002 | 196 |
2,209 |
The Stability of Kernel Principal Components Analysis and its Relation to the Process Eigenspectrum John Shawe-Taylor Royal Holloway, University of London john@cs.rhul.ac.uk Christopher K. I. Williams School of Informatics University of Edinburgh c.k.i.williams@ed.ac.uk Abstract In this paper we analyze the relationships between the eigenvalues of the m x m Gram matrix K for a kernel k(·, ·) corresponding to a sample x_1, ..., x_m drawn from a density p(x) and the eigenvalues of the corresponding continuous eigenproblem. We bound the differences between the two spectra and provide a performance bound on kernel PCA. 1 Introduction Over recent years there has been a considerable amount of interest in kernel methods for supervised learning (e.g. Support Vector Machines and Gaussian Process prediction) and for unsupervised learning (e.g. kernel PCA, Schölkopf et al. (1998)). In this paper we study the stability of the subspace of feature space extracted by kernel PCA with respect to the sample of size m, and relate this to the feature space that would be extracted in the infinite sample-size limit. This analysis essentially "lifts" into (a potentially infinite dimensional) feature space an analysis which can also be carried out for PCA, comparing the k-dimensional eigenspace extracted from a sample covariance matrix and the k-dimensional eigenspace extracted from the population covariance matrix, and comparing the residuals from the k-dimensional compression for the m-sample and the population. Earlier work by Shawe-Taylor et al. (2002) discussed the concentration of spectral properties of Gram matrices and of the residuals of fixed projections. However, these results gave deviation bounds on the sampling variability of the eigenvalues of the Gram matrix, but did not address the relationship of sample and population eigenvalues, or the estimation problem of the residual of PCA on new data. The structure of the remainder of the paper is as follows.
In section 2 we provide background on the continuous kernel eigenproblem, and the relationship between the eigenvalues of certain matrices and the expected residuals when projecting into spaces of dimension k. Section 3 provides inequality relationships between the process eigenvalues and the expectation of the Gram matrix eigenvalues. Section 4 presents some concentration results and uses these to develop an approximate chain of inequalities. In section 5 we obtain a performance bound on kernel PCA, relating the performance on the training sample to the expected performance with respect to p(x). 2 Background 2.1 The kernel eigenproblem For a given kernel function k(·, ·) the m x m Gram matrix K has entries k(x_i, x_j), i, j = 1, ..., m, where {x_i : i = 1, ..., m} is a given dataset. For Mercer kernels K is symmetric positive semi-definite. We denote the eigenvalues of the Gram matrix as λ̂_1 ≥ λ̂_2 ≥ ... ≥ λ̂_m ≥ 0 and write its eigendecomposition as K = ZΛ̂Z′, where Λ̂ is a diagonal matrix of the eigenvalues and Z′ denotes the transpose of the matrix Z. The eigenvalues are also referred to as the spectrum of the Gram matrix. We now describe the relationship between the eigenvalues of the Gram matrix and those of the underlying process. For a given kernel function and density p(x) on a space X, we can also write down the eigenfunction problem

∫_X k(x, y) p(x) φ_i(x) dx = λ_i φ_i(y).   (1)

Note that the eigenfunctions are orthonormal with respect to p(x), i.e. ∫_X φ_i(x) p(x) φ_j(x) dx = δ_ij. Let the eigenvalues be ordered so that λ_1 ≥ λ_2 ≥ .... This continuous eigenproblem can be approximated in the following way. Let {x_i : i = 1, ..., m} be a sample drawn according to p(x). Then, as pointed out in Williams and Seeger (2000), we can approximate the integral with weight function p(x) by an average over the sample points, and then plug in y = x_j for j = 1, ..., m to obtain the matrix eigenproblem. Thus we see that μ_i := (1/m) λ̂_i is an obvious estimator for the ith eigenvalue of the continuous problem.
The theory of the numerical solution of eigenvalue problems (Baker 1977, Theorem 3.4) shows that for a fixed k, μ_k will converge to λ_k in the limit as m → ∞. For the case that X is one dimensional, p(x) is Gaussian and k(x, y) = exp(−b(x − y)²), there are analytic results for the eigenvalues and eigenfunctions of equation (1), as given in section 4 of Zhu et al. (1998). A plot in Williams and Seeger (2000) for m = 500 with b = 3 and p(x) ∼ N(0, 1/4) shows good agreement between μ_i and λ_i for small i, but that for larger i the matrix eigenvalues underestimate the process eigenvalues. One of the by-products of this paper will be bounds on the degree of underestimation for this estimation problem in a fully general setting. Koltchinskii and Giné (2000) discuss a number of results including rates of convergence of the μ-spectrum to the λ-spectrum. The measure they use compares the whole spectrum rather than individual eigenvalues or subsets of eigenvalues. They also do not deal with the estimation problem for PCA residuals. 2.2 Projections, residuals and eigenvalues The approach adopted in the proofs of the next section is to relate the eigenvalues to the sums of squares of residuals. Let x be a random variable in d dimensions, and let X be a d x m matrix containing m sample vectors x_1, ..., x_m. Consider the m x m matrix M = X′X with eigendecomposition M = ZΛ̂Z′. Then taking X̃ = ZΛ̂^{1/2}, so that M = X̃X̃′, we obtain a finite dimensional version of Mercer's theorem. To set the scene, we now present a short description of the residuals viewpoint. The starting point is the singular value decomposition of X = UΣZ′, where U and Z are orthonormal matrices and Σ is a diagonal matrix containing the singular values (in descending order). We can now reconstruct the eigenvalue decomposition of M = X′X = ZΣU′UΣZ′ = ZΛ̂Z′, where Λ̂ = Σ². But equally we can construct a d x d matrix N = XX′ = UΣZ′ZΣU′ = UΛ̂U′, with the same eigenvalues as M.
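The estimator μ_i = λ̂_i/m and the Gaussian-input example just cited can be reproduced in a few lines. The closed-form process eigenvalues below use the constants a = 1/(4σ²), c = √(a² + 2ab), A = a + b + c, B = b/A from Zhu et al. (1998); the exact form of these constants should be treated as an assumption of this sketch rather than as quoted from the present paper.

```python
import numpy as np

rng = np.random.default_rng(0)
b, sigma2, m = 3.0, 0.25, 500               # k(x,y) = exp(-b (x-y)^2), p = N(0, 1/4)

x = rng.normal(0.0, np.sqrt(sigma2), size=m)
K = np.exp(-b * (x[:, None] - x[None, :]) ** 2)      # Gram matrix
mu = np.sort(np.linalg.eigvalsh(K))[::-1] / m        # mu_i = lambda_hat_i / m

# Closed-form process eigenvalues for this Gaussian/RBF pair (assumed form).
a = 1.0 / (4.0 * sigma2)
c = np.sqrt(a * a + 2.0 * a * b)
A, B = a + b + c, b / (a + b + c)
lam = np.sqrt(2.0 * a / A) * B ** np.arange(10)      # lambda_0, lambda_1, ...

for i in range(5):
    print(f"i={i}: mu_i={mu[i]:.4f}  lambda_i={lam[i]:.4f}")
```

With m = 500 the leading estimates agree well, while for larger i the matrix eigenvalues fall below the process values, which is the underestimation effect discussed in the text. Note that Σ_i μ_i = trace(K)/m = 1 exactly, since k(x, x) = 1 for this kernel.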
We have made a slight abuse of notation by using Λ̂ to represent two matrices of potentially different dimensions, but the larger is simply an extension of the smaller with 0's. Note that N = mC_x, where C_x is the sample correlation matrix. Let V be a linear space spanned by k linearly independent vectors. Let P_V(x) (P_V^⊥(x)) be the projection of x onto V (onto the space perpendicular to V), so that ‖x‖² = ‖P_V(x)‖² + ‖P_V^⊥(x)‖². Using the Courant-Fischer minimax theorem it can be proved (Shawe-Taylor et al., 2002, equation 4) that

Σ_{i=k+1}^m λ̂_i(M) = Σ_{j=1}^m ‖x_j‖² − Σ_{i=1}^k λ̂_i(M) = min_{dim(V)=k} Σ_{j=1}^m ‖P_V^⊥(x_j)‖².   (2)

Hence the subspace spanned by the first k eigenvectors is characterised as that for which the sum of the squares of the residuals is minimal. We can also obtain similar results for the population case, e.g. Σ_{i=1}^k λ_i = max_{dim(V)=k} E[‖P_V(x)‖²]. 2.3 Residuals in feature space Frequently, we consider all of the above as occurring in a kernel defined feature space, so that wherever we have written a vector x we should have put ψ(x), where ψ is the corresponding feature map ψ : x ∈ X ↦ ψ(x) ∈ F to a feature space F. Hence, the matrix M has entries M_ij = ⟨ψ(x_i), ψ(x_j)⟩. The kernel function computes the composition of the inner product with the feature maps, k(x, z) = ⟨ψ(x), ψ(z)⟩ = ψ(x)′ψ(z), which can in many cases be computed without explicitly evaluating the mapping ψ. We would also like to evaluate the projections into eigenspaces without explicitly computing the feature mapping ψ. This can be done as follows. Let u_i be the i-th singular vector in the feature space, that is the i-th eigenvector of the matrix N, with the corresponding singular value being σ_i = √λ̂_i and the corresponding eigenvector of M being z_i. The projection of an input x onto u_i is given by ψ(x)′u_i = (ψ(x)′U)_i = (ψ(x)′XZΣ⁻¹)_i = (k′ZΣ⁻¹)_i, where we have used the fact that X = UΣZ′ and k_j = ψ(x)′ψ(x_j) = k(x, x_j).
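The projection formula ψ(x)′u_i = (k′ZΣ⁻¹)_i can be exercised without ever forming ψ explicitly. The following sketch (Gaussian kernel and 1-D inputs are arbitrary choices for brevity) verifies the formula on a training point, where it must reproduce √λ̂_i z_{ji}:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    # Gaussian kernel k(x, z) = exp(-gamma (x - z)^2) on 1-D inputs.
    return np.exp(-gamma * (np.atleast_1d(a)[:, None] - np.atleast_1d(b)[None, :]) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(size=30)
K = rbf(x, x)

lam, Z = np.linalg.eigh(K)
lam, Z = lam[::-1], Z[:, ::-1]               # eigenvalues descending, columns z_i

def kpca_project(xnew, k_dims=3):
    # psi(xnew)' u_i = (k' Z Sigma^{-1})_i with Sigma_ii = sqrt(lambda_hat_i):
    # the projection onto the i-th feature-space singular vector, computed
    # from kernel evaluations only.
    kvec = rbf(xnew, x)
    return (kvec @ Z[:, :k_dims]) / np.sqrt(lam[:k_dims])

# On training point j the projection equals sqrt(lambda_hat_i) * Z[j, i],
# since K z_i = lambda_hat_i z_i.
j = 0
print(np.allclose(kpca_project(x[j]), np.sqrt(lam[:3]) * Z[j, :3]))
```

Only the top few components are used here; for the tail of the spectrum the Gram matrix of a smooth kernel is numerically near-singular, so dividing by √λ̂_i is only well conditioned for the leading directions.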
Our final background observation concerns the kernel operator and its eigenspaces. The operator in question is

K(f)(x) = ∫_X k(x, z) f(z) p(z) dz.

Provided the operator is positive semi-definite, by Mercer's theorem we can decompose k(x, z) as a sum of eigenfunctions,

k(x, z) = Σ_{i=1}^∞ λ_i φ_i(x) φ_i(z) = ⟨ψ(x), ψ(z)⟩,

where the functions (φ_i(x))_{i=1}^∞ form a complete orthonormal basis with respect to the inner product ⟨f, g⟩_p = ∫_X f(x) g(x) p(x) dx, and ψ(x) is the feature space mapping ψ : x → (ψ_i(x))_{i=1}^∞ = (√λ_i φ_i(x))_{i=1}^∞ ∈ F. Note that φ_i(x) has norm 1 and satisfies λ_i φ_i(x) = ∫_X k(x, z) φ_i(z) p(z) dz (equation 1), so that

λ_i = ∫_{X²} k(y, z) φ_i(y) φ_i(z) p(z) p(y) dy dz.  (3)

If we let φ(x) = (φ_i(x))_{i=1}^∞ ∈ F, we can define the unit vector u_i ∈ F corresponding to λ_i by u_i = ∫_X φ_i(x) φ(x) p(x) dx. For a general function f(x) we can similarly define the vector f = ∫_X f(x) φ(x) p(x) dx. Now the expected square of the norm of the projection P_f(ψ(x)) onto the vector f (assumed to be of norm 1) of an input ψ(x) drawn according to p(x) is given by

E[||P_f(ψ(x))||²] = ∫_X ||P_f(ψ(x))||² p(x) dx = ∫_X (f'ψ(x))² p(x) dx
= ∫_{X³} f(y) φ(y)'ψ(x) p(y) dy f(z) φ(z)'ψ(x) p(z) dz p(x) dx
= ∫_{X³} f(y) f(z) Σ_j √λ_j φ_j(y) φ_j(x) p(y) dy Σ_ℓ √λ_ℓ φ_ℓ(z) φ_ℓ(x) p(z) dz p(x) dx
= ∫_{X²} f(y) f(z) Σ_{j,ℓ} √λ_j φ_j(y) p(y) dy √λ_ℓ φ_ℓ(z) p(z) dz ∫_X φ_j(x) φ_ℓ(x) p(x) dx
= ∫_{X²} f(y) f(z) Σ_j λ_j φ_j(y) φ_j(z) p(y) p(z) dy dz
= ∫_{X²} f(y) f(z) k(y, z) p(y) p(z) dy dz,

using the orthonormality of the φ_j in the second-to-last step. Since all vectors f in the subspace spanned by the image of the input space in F can be expressed in this fashion, it follows using (3) that the finite case characterisation of eigenvalues and eigenvectors carries over with the sum replaced by an expectation:

λ_k = max_{dim(V)=k} min_{0≠v∈V} E[||P_v(ψ(x))||²],  (4)

where V is a linear subspace of the feature space F.
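For a finite input space the integrals above become sums, so the Mercer relation (3) can be checked exactly. A small sketch follows; the symmetrization via P^{1/2}KP^{1/2} is a standard numerical device, not taken from the text:

```python
import numpy as np

def mercer_check(n=6, seed=2):
    """Discrete check of equation (3): for a finite input space with point
    probabilities p, the operator eigenvalues satisfy
    lambda_i = sum_{y,z} k(y,z) phi_i(y) phi_i(z) p(y) p(z)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    p = rng.random(n); p /= p.sum()
    K = np.exp(-(x[:, None] - x[None, :]) ** 2)   # RBF kernel matrix
    Ph = np.diag(np.sqrt(p))
    lam, W = np.linalg.eigh(Ph @ K @ Ph)          # symmetrized eigenproblem
    lam, W = lam[::-1], W[:, ::-1]
    Phi = np.diag(1.0 / np.sqrt(p)) @ W           # phi_i, orthonormal in (.,.)_p
    rhs = np.array([(p * Phi[:, i]) @ K @ (p * Phi[:, i]) for i in range(n)])
    return lam, rhs

lam, rhs = mercer_check()
```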
Similarly,

Σ_{i=1}^k λ_i = max_{dim(V)=k} E[||P_V(ψ(x))||²] = E[||ψ(x)||²] − min_{dim(V)=k} E[||P_V⊥(ψ(x))||²],  (5)

where P_V(ψ(x)) (P_V⊥(ψ(x))) is the projection of ψ(x) into the subspace V (into the space orthogonal to V).

2.4 Plan of campaign

We are now in a position to motivate the main results of the paper. We consider the general case of a kernel defined feature space with input space X and probability density p(x). We fix a sample size m and a draw of m examples S = (x_1, x_2, ..., x_m) according to p. Further we fix a feature dimension k. Let V̂_k be the space spanned by the first k eigenvectors of the sample kernel matrix K with corresponding eigenvalues λ̂_1, λ̂_2, ..., λ̂_k, while V_k is the space spanned by the first k process eigenvectors with corresponding eigenvalues λ_1, λ_2, ..., λ_k. Similarly, let Ê[f(x)] denote expectation with respect to the sample, Ê[f(x)] = (1/m) Σ_{i=1}^m f(x_i), while as before E[·] denotes expectation with respect to p. We are interested in the relationships between the following quantities:

(i) Ê[||P_V̂k(x)||²] = (1/m) Σ_{i=1}^k λ̂_i = Σ_{i=1}^k μ_i,
(ii) E[||P_Vk(x)||²] = Σ_{i=1}^k λ_i,
(iii) E[||P_V̂k(x)||²], and
(iv) Ê[||P_Vk(x)||²].

Bounding the difference between the first and second will relate the process eigenvalues to the sample eigenvalues, while the difference between the first and third will bound the expected performance of the space identified by kernel PCA when used on new data. Our first two observations follow simply from equation (5):

Ê[||P_V̂k(x)||²] = (1/m) Σ_{i=1}^k λ̂_i ≥ Ê[||P_Vk(x)||²],  (6)

and

E[||P_Vk(x)||²] = Σ_{i=1}^k λ_i ≥ E[||P_V̂k(x)||²].  (7)

Our strategy will be to show that the right hand side of inequality (6) and the left hand side of inequality (7) are close in value, making the two inequalities approximately a chain of inequalities. We then bound the difference between the first and last entries in the chain.
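The four quantities (i)-(iv) and the inequalities (6) and (7) can be illustrated for a linear kernel with a known population covariance. In the sketch below, the diagonal population covariance and all names are our own choices; with a diagonal covariance the population subspace V_k and the population expectations are available in closed form:

```python
import numpy as np

def pca_inequalities(m=20, k=3, seed=3):
    """Compute (i)-(iv) for a linear kernel and a known diagonal population
    covariance, so inequalities (6) and (7) can be checked exactly."""
    rng = np.random.default_rng(seed)
    lam_pop = np.array([6.0, 4.0, 3.0, 2.0, 1.0, 0.5])   # population eigenvalues
    d = lam_pop.size
    X = rng.normal(size=(m, d)) * np.sqrt(lam_pop)        # rows ~ N(0, diag(lam_pop))
    C_emp = X.T @ X / m
    w, U = np.linalg.eigh(C_emp)
    Uk_hat = U[:, ::-1][:, :k]                            # empirical top-k subspace
    Vk = np.eye(d)[:, :k]                                 # population top-k subspace
    emp = lambda B: float(np.trace(B.T @ C_emp @ B))      # hat-E ||P_B x||^2
    pop = lambda B: float(np.trace(B.T @ np.diag(lam_pop) @ B))  # E ||P_B x||^2
    return emp(Uk_hat), emp(Vk), pop(Vk), pop(Uk_hat)

e_hat_vhat, e_hat_v, e_v, e_vhat = pca_inequalities()
```

Here (6) and (7) hold exactly, since V̂_k maximizes the empirical projected variance and V_k the population one; e_v equals the top-3 population eigenvalue sum 6 + 4 + 3 = 13.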
3 Averaging over Samples and Population Eigenvalues

The sample correlation matrix is C_x = (1/m)XX' with eigenvalues μ_1 ≥ μ_2 ≥ ... ≥ μ_d. In the notation of section 2, μ_i = (1/m)λ̂_i. The corresponding population correlation matrix has eigenvalues λ_1 ≥ λ_2 ≥ ... ≥ λ_d and eigenvectors u_1, ..., u_d. Again by the observations above these are the process eigenvalues. Let E_m[·] denote averages over random samples of size m. The following proposition describes how E_m[μ_1] is related to λ_1 and E_m[μ_d] is related to λ_d. It requires no assumption of Gaussianity.

Proposition 1 (Anderson, 1963, pp 145-146) E_m[μ_1] ≥ λ_1 and E_m[μ_d] ≤ λ_d.

Proof: By the results of the previous section we have

μ_1 = max_{0≠c} Ê[||P_c(x)||²] ≥ Ê[||P_{u_1}(x)||²].

We now apply the expectation operator E_m to both sides. On the RHS we get E_m Ê[||P_{u_1}(x)||²] = E[||P_{u_1}(x)||²] = λ_1 by equation (5), which completes the proof. Correspondingly, μ_d is characterized by μ_d = min_{0≠c} Ê[||P_c(x)||²] (minor components analysis). □

Interpreting this result, we see that E_m[μ_1] overestimates λ_1, while E_m[μ_d] underestimates λ_d. Proposition 1 can be generalized to give the following result, where we have also allowed for a kernel defined feature space of dimension N_F ≤ ∞.

Proposition 2 Using the above notation, for any k, 1 ≤ k ≤ m, E_m[Σ_{i=1}^k μ_i] ≥ Σ_{i=1}^k λ_i and E_m[Σ_{i=k+1}^m μ_i] ≤ Σ_{i=k+1}^{N_F} λ_i.

Proof: Let V_k be the space spanned by the first k process eigenvectors. Then from the derivations above we have

Σ_{i=1}^k μ_i = max_{dim(V)=k} Ê[||P_V(ψ(x))||²] ≥ Ê[||P_{V_k}(ψ(x))||²].

Again, applying the expectation operator E_m to both sides of this equation and taking equation (5) into account, the first inequality follows. To prove the second we turn max into min, P into P⊥, and reverse the inequality. Again taking expectations of both sides proves the second part. □

Applying the results obtained in this section, it follows that E_m[μ_1] will overestimate λ_1, and the cumulative sum Σ_{i=1}^k E_m[μ_i] will overestimate Σ_{i=1}^k λ_i.
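Proposition 1 is straightforward to see in simulation: averaging sample-correlation eigenvalues over many draws exposes the upward bias of μ_1 and the downward bias of μ_d. A Monte Carlo sketch, with population eigenvalues of our own choosing:

```python
import numpy as np

def eigenvalue_bias(m=8, trials=4000, seed=4):
    """Monte Carlo illustration of Proposition 1: averaged over samples of
    size m, the top sample-correlation eigenvalue overestimates lambda_1 and
    the bottom one underestimates lambda_d."""
    rng = np.random.default_rng(seed)
    lam = np.array([3.0, 1.0, 0.25])                 # population eigenvalues
    d = lam.size
    top = bot = 0.0
    for _ in range(trials):
        X = rng.normal(size=(m, d)) * np.sqrt(lam)   # rows ~ N(0, diag(lam))
        mu = np.linalg.eigvalsh(X.T @ X / m)         # ascending sample eigenvalues
        top += mu[-1]
        bot += mu[0]
    return top / trials, bot / trials, lam

Em_mu1, Em_mud, lam = eigenvalue_bias()
```

With m as small as 8 the bias is substantial in both directions, consistent with the proposition.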
At the other end, clearly for N_F ≥ k > m, μ_k ≡ 0 is an underestimate of λ_k.

4 Concentration of eigenvalues

We now make use of results from Shawe-Taylor et al. (2002) concerning the concentration of the eigenvalue spectrum of the Gram matrix. We have

Theorem 3 Let k(x, z) be a positive semi-definite kernel function on a space X, and let p be a probability density function on X. Fix natural numbers m and 1 ≤ k < m and let S = (x_1, ..., x_m) ∈ X^m be a sample of m points drawn according to p. Then for all t > 0,

P{ |(1/m)λ̂^{≤k}(S) − E_m[(1/m)λ̂^{≤k}(S)]| ≥ t } ≤ 2 exp(−2t²m / R⁴),

where λ̂^{≤k}(S) is the sum of the largest k eigenvalues of the matrix K(S) with entries K(S)_ij = k(x_i, x_j), and R² = max_{x∈X} k(x, x).

This follows by a similar derivation to Theorem 5 in Shawe-Taylor et al. (2002). Our next result concerns the concentration of the residuals with respect to a fixed subspace. For a subspace V and training set S, we introduce the notation F̂_V(S) = Ê[||P_V(ψ(x))||²].

Theorem 4 Let p be a probability density function on X. Fix natural numbers m and a subspace V, and let S = (x_1, ..., x_m) ∈ X^m be a sample of m points drawn according to p. Then for all t > 0,

P{ |F̂_V(S) − E_m[F̂_V(S)]| ≥ t } ≤ 2 exp(−2t²m / R⁴).

This is Theorem 6 in Shawe-Taylor et al. (2002). The concentration results of this section are very tight. In the notation of the earlier sections they show that with high probability

Ê[||P_V̂k(ψ(x))||²] = (1/m) Σ_{i=1}^k λ̂_i ≈ E_m[(1/m) Σ_{i=1}^k λ̂_i]  (8)

and

Σ_{i=1}^k λ_i ≈ Ê[||P_Vk(ψ(x))||²],  (9)

where we have used Theorem 3 to obtain the first approximate equality and Theorem 4 with V = V_k to obtain the second approximate equality. This gives the sought relationship to create an approximate chain of inequalities:

Ê[||P_V̂k(ψ(x))||²] ≥ Ê[||P_Vk(ψ(x))||²] ≈ Σ_{i=1}^k λ_i = E[||P_Vk(ψ(x))||²] ≥ E[||P_V̂k(ψ(x))||²].  (10)

This approximate chain of inequalities could also have been obtained using Proposition 2. It remains to bound the difference between the first and last entries in this chain.
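The concentration claimed by Theorem 3 is visible even at modest sample sizes: across independent draws, the scaled sum of the top-k Gram eigenvalues barely fluctuates. A sketch with an RBF kernel (so R² = 1), using parameters of our own choosing:

```python
import numpy as np

def topk_concentration(m=200, k=5, trials=30, seed=5):
    """Empirical illustration of Theorem 3: the scaled sum of the largest
    k Gram-matrix eigenvalues concentrates around its mean across random
    draws of the sample."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(trials):
        x = rng.normal(size=m)
        K = np.exp(-(x[:, None] - x[None, :]) ** 2)   # RBF kernel, R^2 = 1
        lam = np.sort(np.linalg.eigvalsh(K))[::-1]
        vals.append(lam[:k].sum() / m)
    vals = np.array(vals)
    return vals.mean(), vals.std()

mean_, std_ = topk_concentration()
```

Since the trace of K is m, the scaled top-k sum is at most 1, and its standard deviation across draws is tiny compared with its mean.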
This, together with the concentration results of this section, will deliver the required bounds on the differences between empirical and process eigenvalues, as well as providing a performance bound on kernel PCA.

5 Learning a projection matrix

The key observation that enables the analysis bounding the difference between Ê[||P_V̂k(ψ(x))||²] and E[||P_V̂k(ψ(x))||²] is that we can view the projection norm ||P_V̂k(ψ(x))||² as a linear function of pairs of features from the feature space F.

Proposition 5 The projection norm ||P_V̂k(ψ(x))||² is a linear function f̂ in a feature space F̂ for which the kernel function is given by k̂(x, z) = k(x, z)². Furthermore the 2-norm of the function f̂ is √k.

Proof: Let X = UΣZ' be the singular value decomposition of the sample matrix X in the feature space. The projection norm is then given by

f̂(x) = ||P_V̂k(ψ(x))||² = ψ(x)'U_k U_k'ψ(x),

where U_k is the matrix containing the first k columns of U. Hence we can write

||P_V̂k(ψ(x))||² = Σ_{i,j=1}^{N_F} α_ij ψ(x)_i ψ(x)_j = Σ_{i,j=1}^{N_F} α_ij ψ̂(x)_ij,

where ψ̂ is the projection mapping into the feature space F̂ consisting of all pairs of F features, and α_ij = (U_k U_k')_ij. The standard polynomial construction gives

k̂(x, z) = k(x, z)² = (Σ_{i=1}^{N_F} ψ(x)_i ψ(z)_i)² = Σ_{i,j=1}^{N_F} (ψ(x)_i ψ(x)_j)(ψ(z)_i ψ(z)_j) = ⟨ψ̂(x), ψ̂(z)⟩.

It remains to show that the squared norm of the linear function f̂ is k. The norm satisfies (note that ||·||_F denotes the Frobenius norm and u_i the columns of U)

||f̂||² = Σ_{i,j=1}^{N_F} α_ij² = ||U_k U_k'||_F² = ⟨Σ_{i=1}^k u_i u_i', Σ_{j=1}^k u_j u_j'⟩_F = Σ_{i,j=1}^k (u_i'u_j)² = k,

as required. □

We are now in a position to apply a learning theory bound where we consider a regression problem for which the target output is the square of the norm of the sample point, ||ψ(x)||². We restrict the linear function in the space F̂ to have norm √k. The loss function is then the shortfall between the output of f̂ and the squared norm.
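Proposition 5 can be checked directly in a finite feature space: the projection norm is linear in the paired features x_i x_j with coefficient matrix α = U_k U_k', whose Frobenius norm is √k. A sketch:

```python
import numpy as np

def projection_norm_as_linear(d=4, m=12, k=2, seed=6):
    """Check Proposition 5 in a finite feature space: the projection norm
    ||P_{V_k}(x)||^2 equals <alpha, x x'> with alpha = U_k U_k', and
    ||alpha||_F = sqrt(k)."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(d, m))
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    alpha = U[:, :k] @ U[:, :k].T                 # the linear function f-hat
    x = rng.normal(size=d)                        # a new point
    direct = np.linalg.norm(U[:, :k].T @ x) ** 2  # ||P_{V_k}(x)||^2
    linear = np.sum(alpha * np.outer(x, x))       # f-hat on the paired features
    return direct, linear, np.linalg.norm(alpha, 'fro')

direct, linear, fro = projection_norm_as_linear()
```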
Using Rademacher complexity theory we can obtain the following theorems:

Theorem 6 If we perform PCA in the feature space defined by a kernel k(x, z), then with probability greater than 1 − δ, for all 1 ≤ k ≤ m, if we project new data onto the space V̂_k, the expected squared residual is bounded by

λ^{>k} ≤ E[||P_V̂k⊥(ψ(x))||²] ≤ min_{1≤ℓ≤k} [ (1/m)λ̂^{>ℓ}(S) + ((1 + √ℓ)/√m) √((2/m) Σ_{i=1}^m k(x_i, x_i)²) ] + R² √((18/m) ln(2m/δ)),

where the support of the distribution is in a ball of radius R in the feature space, and λ_i and λ̂_i are the process and empirical eigenvalues respectively.

Theorem 7 If we perform PCA in the feature space defined by a kernel k(x, z), then with probability greater than 1 − δ, for all 1 ≤ k ≤ m, if we project new data onto the space V̂_k, the sum of the largest k process eigenvalues is bounded by

λ^{≤k} ≥ E[||P_V̂k(ψ(x))||²] ≥ max_{1≤ℓ≤k} [ (1/m)λ̂^{≤ℓ}(S) − ((1 + √ℓ)/√m) √((2/m) Σ_{i=1}^m k(x_i, x_i)²) ] − R² √((19/m) ln(2(m + 1)/δ)),

where the support of the distribution is in a ball of radius R in the feature space, and λ_i and λ̂_i are the process and empirical eigenvalues respectively.

The proofs of these results are given in Shawe-Taylor et al. (2003). Theorem 6 implies that if k ≪ m, the expected residual E[||P_V̂k⊥(ψ(x))||²] closely matches the average sample residual Ê[||P_V̂k⊥(ψ(x))||²] = (1/m) Σ_{i=k+1}^m λ̂_i, thus providing a bound for kernel PCA on new data. Theorem 7 implies a good fit between the partial sums of the largest k empirical and process eigenvalues when √(k/m) is small.

References

Anderson, T. W. (1963). Asymptotic Theory for Principal Component Analysis. Annals of Mathematical Statistics, 34(1):122-148.

Baker, C. T. H. (1977). The numerical treatment of integral equations. Clarendon Press, Oxford.

Koltchinskii, V. and Giné, E. (2000). Random matrix approximation of spectra of integral operators. Bernoulli, 6(1):113-167.

Schölkopf, B., Smola, A., and Müller, K.-R. (1998). Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319.
Shawe-Taylor, J., Cristianini, N., and Kandola, J. (2002). On the Concentration of Spectral Properties. In Dietterich, T. G., Becker, S., and Ghahramani, Z., editors, Advances in Neural Information Processing Systems 14. MIT Press.

Shawe-Taylor, J., Williams, C. K. I., Cristianini, N., and Kandola, J. (2003). On the Eigenspectrum of the Gram Matrix and the Generalisation Error of Kernel PCA. Technical Report NC2-TR-2003-143, Dept of Computer Science, Royal Holloway, University of London. Available from http://www.neurocolt.com/archive.html.

Williams, C. K. I. and Seeger, M. (2000). The Effect of the Input Density Distribution on Kernel-based Classifiers. In Langley, P., editor, Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000). Morgan Kaufmann.

Zhu, H., Williams, C. K. I., Rohwer, R. J., and Morciniec, M. (1998). Gaussian regression and optimal finite dimensional linear models. In Bishop, C. M., editor, Neural Networks and Machine Learning. Springer-Verlag, Berlin.
Adaptive Quantization and Density Estimation in Silicon David Hsu Seth Bridges Miguel Figueroa Chris Diorio Department of Computer Science and Engineering University of Washington 114 Sieg Hall, Box 352350 Seattle, WA 98195-2350 USA {hsud, seth, miguel, diorio}@cs.washington.edu Abstract We present the bump mixture model, a statistical model for analog data where the probabilistic semantics, inference, and learning rules derive from low-level transistor behavior. The bump mixture model relies on translinear circuits to perform probabilistic inference, and floating-gate devices to perform adaptation. This system is low power, asynchronous, and fully parallel, and supports various on-chip learning algorithms. In addition, the mixture model can perform several tasks such as probability estimation, vector quantization, classification, and clustering. We tested a fabricated system on clustering, quantization, and classification of handwritten digits and show performance comparable to the E-M algorithm on mixtures of Gaussians. 1 Introduction Many system-on-a-chip applications, such as data compression and signal processing, use online adaptation to improve or tune performance. These applications can benefit from the low-power compact design that analog VLSI learning systems can offer. Analog VLSI learning systems can benefit immensely from flexible learning algorithms that take advantage of silicon device physics for compact layout, and that are capable of a variety of learning tasks. One learning paradigm that encompasses a wide variety of learning tasks is density estimation, learning the probability distribution over the input data. A silicon density estimator can provide a basic template for VLSI systems for feature extraction, classification, adaptive vector quantization, and more. In this paper, we describe the bump mixture model, a statistical model that describes the probability distribution function of analog variables using low-level transistor equations. 
We intend the bump mixture model to be the silicon version of mixture of Gaussians [1], one of the most widely used statistical methods for modeling the probability distribution of a collection of data. Mixtures of Gaussians appear in many contexts from radial basis functions [1] to hidden Markov models [2]. In the bump mixture model, probability computations derive from translinear circuits [3] and learning derives from floating-gate device equations [4]. The bump mixture model can perform different functions such as quantization, probability estimation, and classification. In addition, this VLSI mixture model can implement multiple learning algorithms using different peripheral circuitry. Because the equations for system operation and learning derive from natural transistor behavior, we can build large bump mixture models with millions of parameters on a single chip. We have fabricated a bump mixture model and tested it on clustering, classification, and vector quantization of handwritten digits. The results show that the fabricated system performs comparably to mixtures of Gaussians trained with the E-M algorithm [1]. Our work builds upon several trends of research in the VLSI community. The results in this paper complement recent work on probability propagation in analog VLSI [5-7]. These previous systems, intended for decoding applications in communication systems, model special forms of probability distributions over discrete variables, and do not incorporate learning. In contrast, the bump mixture model performs inference and learning on probability distributions over continuous variables. The bump mixture model significantly extends previous results on floating-gate circuits [4]. Our system is a fully realized floating-gate learning algorithm that can be used for vector quantization, probability estimation, clustering, and classification. Finally, the mixture model’s architecture is similar to many previous VLSI vector quantizers [8, 9].
We can view the bump mixture model as a VLSI vector quantizer with well-defined probabilistic semantics. Computations such as probability estimation and maximum-likelihood classification have a natural statistical interpretation under the mixture model. In addition, because we rely on floating-gate devices, the mixture model does not require a refresh mechanism, unlike previous learning VLSI quantizers.

2 The adaptive bump circuit

The adaptive bump circuit [4], depicted in Fig. 1(a-b), forms the basis of the bump mixture model. This circuit is slightly different from previous versions reported in the literature. Nevertheless, the high level functionality remains the same; the adaptive bump circuit computes the similarity between a stored variable and an input, and adapts to increase the similarity between the stored variable and input. Fig. 1(a) shows the computation portion of the circuit. The bump circuit takes as input a differential voltage signal (+V_in, −V_in) around a DC bias, and computes the similarity between V_in and a stored value, μ. We represent the stored memory μ as a voltage:

μ = (V_w− − V_w+) / 2,  (1)

where V_w+ and V_w− are the gate-offset voltages stored on capacitors C1 and C2. Because C1 and C2 isolate the gates of transistors M1 and M2 respectively, these transistors are floating-gate devices. Consequently, the stored voltages V_w+ and V_w− are nonvolatile. We can express the floating-gate voltages V_fg1 and V_fg2 as V_fg1 = V_in + V_w+ and V_fg2 = V_w− − V_in, and the output of the bump circuit as [10]:

I_out = I_b / cosh²(Sκ(V_fg1 − V_fg2) / 8U_t) = I_b / cosh²(Sκ(V_in − μ) / 4U_t),  (2)

where I_b is the bias current, κ is the gate-coupling coefficient, U_t is the thermal voltage, and S depends on the transistor sizes. Fig. 1(c) shows I_out for three different stored values of μ. As the data show, different μ’s shift the location of the peak response of the circuit. Fig. 1(b) shows the circuit that implements learning in the adaptive bump circuit.
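A quick numeric sketch of the transfer function in equation (2), I_out = I_b / cosh²(Sκ(V_in − μ)/4U_t): the curve peaks at V_in = μ and falls off symmetrically, the Gaussian-like "bump". The device values below (I_b, S, κ, U_t) are illustrative, not measured chip parameters:

```python
import numpy as np

def bump_current(v_in, mu, I_b=10e-9, S=1.0, kappa=0.7, U_t=0.0258):
    """Bump-circuit transfer function of equation (2):
    I_out = I_b / cosh^2(S*kappa*(V_in - mu) / (4*U_t)).
    Parameter values are illustrative assumptions, not chip data."""
    return I_b / np.cosh(S * kappa * (v_in - mu) / (4.0 * U_t)) ** 2

# Sweep V_in over the same range as the measured plot in Fig. 1(c).
v = np.linspace(-0.4, 0.4, 801)
I = bump_current(v, mu=0.1)
```

Shifting μ slides the peak along the V_in axis without changing its height, which is exactly the behavior the measured curves show.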
We implement learning through Fowler-Nordheim tunneling [11] on tunneling junctions M5-M6 and hot electron injection [12] on the floating-gate transistors M3-M4. Transistors M3 and M5 control injection and tunneling on M1’s floating-gate. Transistors M4 and M6 control injection and tunneling on M2’s floating-gate. We activate tunneling and injection by a high V_tun and low V_inj respectively. In the adaptive bump circuit, both processes increase the similarity between V_in and μ. In addition, the magnitude of the update does not depend on the sign of (V_in − μ) because the differential input provides common-mode rejection to the input differential pair. The similarity function, as seen in Fig. 1(c), has a Gaussian-like shape. Consequently, we can equate the output current of the bump circuit with the probability of the input under a distribution parameterized by mean μ:

P(V_in | μ) = I_out.  (3)

In addition, increasing the similarity between V_in and μ is equivalent to increasing P(V_in | μ). Consequently, the adaptive bump circuit adapts to maximize the likelihood of the present input under the circuit’s probability distribution.

3 The bump mixture model

We now describe the computations and learning rule implemented by the bump mixture model. A mixture model is a general class of statistical models that approximates the probability of an analog input as the weighted sum of the probability of the input under several simple distributions. The bump mixture model comprises a set of Gaussian-like probability density functions, each parameterized by a mean vector, μ_i. Denoting the jth dimension of the mean of the ith density as μ_ij, we express the probability of an input vector x as:

Figure 1. (a-b) The adaptive bump circuit.
(a) The original bump circuit augmented by capacitors C1 and C2, and cascode transistors (driven by V_casc). (b) The adaptation subcircuit. M3 and M4 control injection on the floating-gates and M5 and M6 control tunneling. (c) Measured output current of a bump circuit for three programmed memories.

P(x) = (1/N) Σ_{i=1}^N P(x|i) = (1/N) Σ_{i=1}^N Π_j P(x_j | μ_ij),  (4)

where N is the number of densities in the model and i denotes the ith density. P(x|i) is the product of one-dimensional densities P(x_j|μ_ij) that depend on the jth dimension of the ith mean, μ_ij. We derive each one-dimensional probability distribution from the output current of a single bump circuit. The bump mixture model makes two assumptions: (1) the component densities are equally likely, and (2) within each component density, the input dimensions are independent and have equal variance. Despite these restrictions, this mixture model can, in principle, approximate any probability density function [1]. The bump mixture model adapts all μ_i to maximize the likelihood of the training data. Learning in the bump mixture model is based on the E-M algorithm, the standard algorithm for training Gaussian mixture models. The E-M algorithm comprises two steps. The E-step computes the conditional probability of each density given the input, P(i|x). The M-step updates the parameters of each distribution to increase the likelihood of the data, using P(i|x) to scale the magnitude of each parameter update. In the online setting, the learning rule is:

Δμ_ij = η P(i|x) ∂/∂μ_ij log P(x_j | μ_ij),  with  P(i|x) = P(x|i) / Σ_k P(x|k),  (5)

where η is a learning rate and k denotes component densities.
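The mixture probability of equation (4) and the responsibilities used in equation (5) can be sketched in software, with cosh⁻² bumps standing in for the circuit's Gaussian-like component densities; the width parameter s below is illustrative:

```python
import numpy as np

def component_probs(x, means, s=4.0):
    """P(x|i) of equation (4): a product over dimensions of cosh^-2 bumps,
    the Gaussian-like density realized by the bump circuit. The width s is
    an illustrative assumption."""
    bumps = 1.0 / np.cosh(s * (x[None, :] - means)) ** 2   # P(x_j | mu_ij)
    return bumps.prod(axis=1)                              # P(x|i)

def em_responsibilities(x, means, s=4.0):
    """P(i|x) of equation (5): equal priors 1/N cancel in the normalization."""
    p = component_probs(x, means, s)
    return p / p.sum()

means = np.array([[-1.0, -1.0], [1.0, 1.0]])   # two 2-D component means
r = em_responsibilities(np.array([0.9, 1.1]), means)
```

An input near the second mean receives nearly all of the responsibility mass, which is what drives the selective adaptation described next.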
Because the adaptive bump circuit already adapts to increase the likelihood of the present input, we approximate E-M by modulating injection and tunneling in the adaptive bump circuit by the conditional probability:

Δμ_ij = η P(i|x) f(x_j − μ_ij),  (6)

where f() is the parameter update implemented by the bump circuit. We can modulate the learning update in (6) with other competitive factors instead of the conditional probability to implement a variety of learning rules such as online K-means.

4 Silicon implementation

We now describe a VLSI system that implements the silicon mixture model. The high level organization of the system, detailed in Fig. 2, is similar to VLSI vector quantization systems. The heart of the mixture model is a matrix of adaptive bump circuits where the ith row of bump circuits corresponds to the ith component density. In addition, the periphery of the matrix comprises a set of inhibitory circuits for performing probability estimation, inference, quantization, and generating feedback for learning. We send each dimension of an input x down a single column. Unity-gain inverting amplifiers (not pictured) at the boundary of the matrix convert each single-ended voltage input into a differential signal. Each bump circuit computes a current that represents (P(x_j|μ_ij))^σ, where σ is the common variance of the one-dimensional densities. The mixture model computes P(x|i) along the ith row, and inhibitory circuits perform inference, estimation, or quantization. We utilize translinear devices [3] to perform all of these computations. Translinear devices, such as the subthreshold MOSFET and bipolar transistor, exhibit an exponential relationship between the gate voltage and source current. This property allows us to establish a power-law relationship between currents and probabilities (i.e. a linear relationship between gate voltages and log-probabilities).
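The modulated update of equation (6) can be sketched as a single online step; here f() is approximated by a linear pull toward the input, which is only a caricature of the actual injection/tunneling update, and the bump width s is an illustrative assumption:

```python
import numpy as np

def online_em_step(x, means, eta=0.1, s=4.0):
    """One online update of equation (6): each mean moves toward the input
    by an amount scaled by its responsibility P(i|x). The circuit's update
    f() is approximated by a linear pull (x_j - mu_ij)."""
    p = (1.0 / np.cosh(s * (x[None, :] - means)) ** 2).prod(axis=1)  # P(x|i)
    r = p / p.sum()                                                  # P(i|x)
    return means + eta * r[:, None] * (x[None, :] - means)

means = np.array([[-1.0, 0.0], [1.0, 0.0]])
new_means = online_em_step(np.array([1.2, 0.1]), means)
```

Replacing the soft responsibility r with a hard winner-take-all indicator turns this same loop into the online K-means rule mentioned in the text.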
We compute the multiplication of the probabilities in each row of Fig. 2 as addition in the log domain using the circuit in Fig. 3(a). This circuit first converts each bump circuit’s current into a voltage using a diode (e.g. M1). M2’s capacitive divider computes V_avg as the average of the scalar log probabilities, log P(x_j|μ_ij):

V_avg = (σ/N) Σ_j log P(x_j | μ_ij),  (7)

where σ is the variance, N is the number of input dimensions, and voltages are in units of κ/U_t (U_t is the thermal voltage and κ is the transistor-gate coupling coefficient). Transistors M2-M5 mirror V_avg to the gate of M5. We define the drain voltage of M5 as log P(x|i) (up to an additive constant) and compute:

log P(x|i) = ((C1 + C2)/C1) V_avg + k = ((C1 + C2)/C1)(σ/N) Σ_j log P(x_j | μ_ij) + k,  (8)

where k is a constant dependent on V_g (the control gate voltage on M5), and C1 and C2 are capacitances. From eq. 8 we can derive the variance as:

σ = NC1 / (C1 + C2).  (9)

The system computes different output functions and feedback signals for learning by operating on the log probabilities of eq. 8. Fig. 3(b) demonstrates a circuit that computes P(i|x) for each distribution. The circuit is a k-input differential pair where the bias transistor M0 normalizes currents representing the probabilities P(x|i) at the ith leg. Fig. 3(c) demonstrates a circuit that computes P(x). The ith transistor exponentiates log P(x|i), and a single wire sums the currents. We can also apply other inhibitory circuits to the log probabilities, such as winner-take-all circuits (WTA) [13] and resistive networks [14]. In our fabricated chip, we implemented probability estimation, conditional probability computation, and WTA. The WTA outputs the index of the most likely component distribution for the present input, and can be used to implement vector quantization and to produce feedback for an online K-means learning rule.
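Equations (7)-(9) are self-consistent: choosing σ = NC1/(C1 + C2) makes the capacitive scaling (C1 + C2)/C1 recover the full log product Σ_j log P(x_j|μ_ij). A quick numeric check (the capacitor values and log probabilities are arbitrary stand-ins):

```python
import numpy as np

def log_domain_check(C1=1.0, C2=3.0, N=7, seed=7):
    """Consistency of equations (7)-(9): with sigma = N*C1/(C1+C2), scaling
    V_avg by (C1+C2)/C1 recovers sum_j log P(x_j|mu_ij) (taking the additive
    constant k = 0)."""
    rng = np.random.default_rng(seed)
    logP = -rng.random(N)                       # arbitrary per-dimension log probs
    sigma = N * C1 / (C1 + C2)                  # equation (9)
    V_avg = (sigma / N) * logP.sum()            # equation (7)
    logP_joint = ((C1 + C2) / C1) * V_avg       # equation (8), with k = 0
    return logP_joint, logP.sum()

lhs, rhs = log_domain_check()
```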
At each synapse, the system combines a feedback signal, such as the conditional probability P(i|x), computed at the matrix periphery, with the adaptive bump circuit to implement learning. We trigger adaptation at each bump circuit by a rate-coded spike signal generated from the inhibitory circuit’s current outputs. We generate this spike train with a current-to-spike converter based on Lazzaro’s low-powered spiking neuron [15]. This rate-coded signal toggles V_tun and V_inj at each bump circuit. Consequently, adaptation is proportional to the frequency of the spike train, which is in turn a linear function of the inhibitory feedback signal. The alternative to the rate code would be to transform the inhibitory circuit’s output directly into analog V_tun and V_inj signals. Because injection and tunneling are highly nonlinear functions of V_inj and V_tun respectively, implementing updates that are linear in the inhibitory feedback signal is quite difficult using this approach.

Figure 2. Bump mixture model architecture. The system comprises a matrix of adaptive bump circuits where each row computes the probability P(x|μ_i). Inhibitory circuits transform the output of each row into system outputs. Spike generators also transform inhibitory circuit outputs into rate-coded feedback for learning.

5 Experimental Results and Conclusions

We fabricated an 8 x 8 mixture model (8 probability distribution functions with 8 dimensions each) in a TSMC 0.35µm CMOS process available through MOSIS, and tested the chip on synthetic data and a handwritten digits dataset. In our tests, we found that due to a design error, one of the input dimensions coupled to the other inputs. Consequently, we held that input fixed throughout the tests, effectively reducing the input to 7 dimensions.
In addition, we found that the learning rule in eq. 6 produced poor performance because the variance of the bump distributions was too large. Consequently, in our learning experiments, we used the hard winner-take-all circuit to control adaptation, resulting in a K-means learning rule. We trained the chip to perform different tasks on handwritten digits from the MNIST dataset [16]. To prepare the data, we first perform PCA to reduce the 784-pixel images to seven-dimensional vectors, and then sent the data on-chip. We first tested the circuit on clustering handwritten digits. We trained the chip on 1000 examples of each of the digits 1-8. Fig. 4(a) shows reconstructions of the eight means before and after training. We compute each reconstruction by multiplying the means by the seven principal eigenvectors of the dataset. The data show that the means diverge to associate with different digits. The chip learns to associate most digits with a single probability distribution. The lone exception is digit 5, which doesn’t clearly associate with one distribution. We speculate that the reason is that 3’s, 5’s, and 8’s are very similar in our training data’s seven-dimensional representation. Gaussian mixture models trained with the E-M algorithm also demonstrate similar results, recovering only seven out of the eight digits. We next evaluated the same learned means on vector quantization of a set of test digits (4400 examples of each digit). We compare the chip’s learned means with means learned by the batch E-M algorithm on mixtures of Gaussians (with σ=0.01), a mismatch E-M algorithm that models chip nonidealities, and a non-adaptive baseline quantizer. The purpose of the mismatch E-M algorithm was to assess the effect of nonuniform injection and tunneling strengths in floating-gate transistors. Because tunneling and injection magnitudes can vary by a large amount on different floating-gate transistors, the adaptive bump circuits can learn a mean that is somewhat off-center.
We measured the offset of each bump circuit when adapting to a constant input, and constructed the mismatch E-M algorithm by altering the learned means during the M-step by the measured offset. We constructed the baseline quantizer by selecting, at random, an example of each digit for the quantizer codebook. For each quantizer, we computed the reconstruction error on the digit’s seven-dimensional representation when we represent each test digit by the closest mean.

Figure 3. (a) Circuit for computing log P(x|i). (b) Circuit for computing P(i|x). The current through the ith leg represents P(i|x). (c) Circuit for computing P(x).

The results in Fig. 4(b) show that for most of the digits the chip’s learned means perform as well as the E-M algorithm, and better than the baseline quantizer in all cases. The one digit where the chip’s performance is far from the E-M algorithm is the digit “1”. Upon examination of the E-M algorithm’s results, we found that it associated two means with the digit “1”, where the chip allocated two means for the digit “3”. Over all the digits, the E-M algorithm exhibited a quantization error of 9.98, mismatch E-M gives a quantization error of 10.9, the chip’s error was 11.6, and the baseline quantizer’s error was 15.97. The data show that mismatch is a significant factor in the difference between the bump mixture model’s performance and the E-M algorithm’s performance in quantization tasks. Finally, we use the mixture model to classify handwritten digits. If we train a separate mixture model for each class of data, we can classify an input by comparing the probabilities of the input under each model. In our experiment, we train two separate mixture models: one on examples of the digit 7, and the other on examples of the digit 9.
We then apply both mixtures to a set of unseen examples of digits 7 and 9, and record the probability score of each unseen example under each mixture model. We plot the resulting data in Fig. 4(c). Each axis represents the probability under a different class. The data show that the model probabilities provide a good metric for classification. Assigning each test example to the class model that outputs the highest probability results in an accuracy of 87% on 2000 unseen digits. Additional software experiments show that mixtures of Gaussians (σ=0.01) trained by the batch E-M algorithm provide an accuracy of 92.39% on this task. Our test results show that the bump mixture model’s performance on several learning tasks is comparable to standard mixtures of Gaussians trained by E-M. These experiments give further evidence that floating-gate circuits can be used to build effective learning systems even though their learning rules derive from silicon physics instead of statistical methods. The bump mixture model also represents a basic building block that we can use to build more complex silicon probability models over analog variables.

Figure 4. (a) Reconstruction of chip means before and after training with handwritten digits. (b) Comparison of average quantization error on unseen handwritten digits, for the chip’s learned means and mixture models trained by standard algorithms. (c) Plot of probability of unseen examples of 7’s and 9’s under two bump mixture models trained solely on each digit.

This work can be extended in several ways. We can build distributions that have parameterized covariances in addition to means.
In addition, we can build more complex, adaptive probability distributions in silicon by combining the bump mixture model with silicon probability models over discrete variables [5-7] and spike-based floating-gate learning circuits [4]. Acknowledgments This work was supported by NSF under grants BES 9720353 and ECS 9733425, and Packard Foundation and Sloan Fellowships. References [1] C. M. Bishop, Neural Networks for Pattern Recognition. Oxford, UK: Clarendon Press, 1995. [2] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, pp. 257-286, 1989. [3] B. A. Minch, "Analysis, Synthesis, and Implementation of Networks of Multiple-Input Translinear Elements," California Institute of Technology, 1997. [4] C. Diorio, D. Hsu, and M. Figueroa, "Adaptive CMOS: from biological inspiration to systems-on-a-chip," Proceedings of the IEEE, vol. 90, pp. 345-357, 2002. [5] T. Gabara, J. Hagenauer, M. Moerz, and R. Yan, "An analog 0.25 µm BiCMOS tail-biting MAP decoder," IEEE International Solid State Circuits Conference (ISSCC), 2000. [6] J. Dai, S. Little, C. Winstead, and J. K. Woo, "Analog MAP decoder for (8,4) Hamming code in subthreshold CMOS," Advanced Research in VLSI (ARVLSI), 2001. [7] M. Helfenstein, H.-A. Loeliger, F. Lustenberger, and F. Tarkoy, "Probability propagation and decoding in analog VLSI," IEEE Transactions on Information Theory, vol. 47, pp. 837-843, 2001. [8] W. C. Fang, B. J. Sheu, O. Chen, and J. Choi, "A VLSI neural processor for image data compression using self-organization neural networks," IEEE Transactions on Neural Networks, vol. 3, pp. 506-518, 1992. [9] J. Lubkin and G. Cauwenberghs, "A learning parallel analog-to-digital vector quantizer," Journal of Circuits, Systems, and Computers, vol. 8, pp. 604-614, 1998. [10] T. Delbruck, "Bump circuits for computing similarity and dissimilarity of analog voltages," California Institute of Technology, CNS Memo 26, 1993. [11] M. Lenzlinger and E. H. Snow, "Fowler-Nordheim tunneling into thermally grown SiO2," Journal of Applied Physics, vol. 40, pp. 278-283, 1969. [12] E. Takeda, C. Yang, and A. Miura-Hamada, Hot Carrier Effects in MOS Devices. San Diego, CA: Academic Press, 1995. [13] J. Lazzaro, S. Ryckebusch, M. Mahowald, and C. A. Mead, "Winner-take-all networks of O(n) complexity," in Advances in Neural Information Processing Systems, vol. 1, D. Touretzky, Ed.: MIT Press, 1989, pp. 703-711. [14] K. Boahen and A. Andreou, "A contrast sensitive silicon retina with reciprocal synapses," in Advances in Neural Information Processing Systems 4, J. Moody, S. Hanson, and R. Lippmann, Eds.: MIT Press, 1992, pp. 764-772. [15] J. Lazzaro, "Low-power silicon spiking neurons and axons," IEEE International Symposium on Circuits and Systems, 1992. [16] Y. LeCun, "The MNIST database of handwritten digits," http://yann.lecun.com/exdb/mnist.
|
2002
|
198
|
2,211
|
Improving Transfer Rates in Brain Computer Interfacing: A Case Study Peter Meinicke, Matthias Kaper, Florian Hoppe, Manfred Heumann and Helge Ritter University of Bielefeld Bielefeld, Germany {pmeinick, mkaper, fhoppe, helge} @techfak.uni-bielefeld.de Abstract In this paper we present results of a study on brain computer interfacing. We adopted an approach of Farwell & Donchin [4], which we tried to improve in several aspects. The main objective was to improve the transfer rates based on offline analysis of EEG-data, but within a more realistic setup closer to an online realization than in the original studies. The objective was achieved along two different tracks: on the one hand we used state-of-the-art machine learning techniques for signal classification, and on the other hand we augmented the data space by using more electrodes for the interface. For the classification task we utilized SVMs and, as motivated by recent findings on the learning of discriminative densities, we accumulated the values of the classification function in order to combine several classifications, which finally led to significantly improved rates as compared with techniques applied in the original work. In combination with the data space augmentation, we achieved competitive transfer rates at an average of 50.5 bits/min and with a maximum of 84.7 bits/min. 1 Introduction Some neurological diseases result in the so-called locked-in syndrome. People suffering from this syndrome have lost control over their muscles, and therefore are unable to communicate. Consequently, their brain signals have to be used for communication. Besides the clinical application, developing such a brain-computer interface (BCI) is in itself an exciting goal, as indicated by a growing research interest in this field. Several EEG-based techniques have been proposed for realization of BCIs (see [6, 12] for an overview).
There are at least four distinguishable basic approaches, each with its own advantages and shortcomings: 1. In the first approach, participants are trained to control their EEG frequency pattern for binary decisions. Whether specific frequencies (the µ and β rhythms) in the power spectrum are heightened or not results in upward or downward cursor movements. A further version extended this basic approach to 2D-movements. Transfer rates of 20-25 bits/min were reported [12]. 2. Imaginations of movements, resulting in the “Bereitschaftspotential” over sensorimotor cortex areas, are used to transmit information in the device of Pfurtscheller et al. [8], which is in use by a tetraplegic patient. Figure 1: Stimulus matrix with one column highlighted. Blankertz et al. [2] applied sophisticated methods for data-analysis to this approach and reached fast transfer rates of 23 bits/min when classifying brain signals preceding overt muscle activity. 3. The thought translation device by Birbaumer et al. [5, 1] is based on slow cortical potentials, i.e. large shifts in the EEG-signal. They trained people in a biofeedback scenario to control this component. It is rather slow (<6 bits/min) and requires intensively trained participants but is in practical use. 4. Farwell & Donchin [4, 3, 10] developed a BCI-System by utilizing specific positive deflections (P300) in EEG-signals accompanying rare events (as discussed in detail below). It is moderately fast (up to 12 bits/min) and needs no practice of the participant, but requires visual attention. For BCIs, it is very desirable to have fast transfer rates. In our own studies, we therefore tried to accelerate the fourth approach by using state-of-the-art machine learning techniques and fusing data from different electrodes for data-analysis. For that purpose we utilized the basic setup of Farwell & Donchin (referred to as F&D) [4], who used the well-studied P300 component to create a BCI-system. They presented a 6×6 matrix (see Fig.
1), filled with letters and digits, and highlighted all rows and columns sequentially in random order. People were instructed to focus on one symbol in the matrix, and mentally count its highlightings. From EEG-research it is known that counting a rare specific event (oddball stimulus) in a series of background stimuli evokes a P300 for the oddball stimulus. Hence, highlighting the attended symbol in the 6×6 matrix should result in a P300, a characteristic positive deflection with a latency of around 300ms in the EEG-signal. It is therefore possible to infer the selected symbol by detecting the P300 in EEG-signals. Under suitable circumstances, most brains exhibit a P300. Thus, no training of the participants is necessary. For identification of the right column and row associated with a P300, Farwell & Donchin used the model-based techniques Area and Peak picking (both described in section 2) to detect the P300. In addition, as a data-driven approach, they used Stepwise Discriminant Analysis (SWDA). Using SWDA in a later study [3] resulted in transfer rates between 4.8 and 7.8 symbols per minute at an accuracy of 80% with a temporal distance of 125ms between two highlightings. In our work reported here we could improve several aspects of the F&D-approach by utilizing very recent machine learning techniques and a larger number of EEG-electrodes. First of all, we could increase the transfer rate by using Support Vector Machines (SVMs) [11] for classification. Inspired by a recent approach to learning of discriminative densities [7], we utilized the values of the SVM classification function as a measure of confidence which we accumulate over certain classifications in order to speed up the transfer rate. In addition, we enhanced classification rates by augmenting the data-space. While Farwell & Donchin employed only data from a single electrode for classification, we used the data from 10 electrodes simultaneously.
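The row/column selection logic of this paradigm can be sketched as follows. The matrix contents and function names here are illustrative (the paper's matrix holds letters and digits), and the per-stimulus scores would come from a P300 detector such as those described later:

```python
import numpy as np

# Illustrative 6x6 stimulus matrix of letters and digits (not the paper's exact layout).
MATRIX = np.array([list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
                   list("STUVWX"), list("YZ0123"), list("456789")])

def select_symbol(row_scores, col_scores):
    """row_scores and col_scores have shape (n_subtrials, 6): one detector
    score per row/column stimulus per subtrial.  Scores are accumulated
    over subtrials, and the best row and column index the chosen symbol."""
    r = int(np.argmax(row_scores.sum(axis=0)))
    c = int(np.argmax(col_scores.sum(axis=0)))
    return MATRIX[r, c]
```

Summing scores over subtrials before taking the argmax is exactly the accumulation-of-confidence idea the paper exploits.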
2 Methods In the following we describe the techniques used for acquisition, preprocessing and analysis of the EEG-data. Data acquisition. All results of this paper stem from offline analyses of data acquired during EEG-experiments. The experimental setup was the following: participants were seated in front of a computer screen presenting the matrix (see Fig. 1) and user instructions. EEG-data were recorded with 10 Ag/AgCl electrodes at positions of the extended international 10-20 system (Fz, Cz, Pz, C3, C4, P3, P4, Oz, OL, OR1), sampled at 200Hz and low-pass filtered at 30Hz. The participants had to perform a certain number of trials. For the duration of a trial, they were instructed to focus their attention on a target symbol specified by the program, to mentally count the highlightings of the target symbol, and to avoid any body movement (especially eye movements and blinks). Each trial is subdivided into a certain number of subtrials. During each subtrial, 12 stimuli are presented, i.e. the 6 rows and the 6 columns are highlighted in random order. For different BCI-setups, the time between stimulus onsets, the interstimulus interval (ISI), was either 150, 300 or 500ms, while a highlighting always lasts 150ms. To each stimulus corresponds an epoch, a time frame of 600ms after stimulus onset2. During this interval a P300 should be evoked if the stimulus contains the target symbol. There is no pause between subtrials, but between trials. During the pause, the participants had time to focus on the next target symbol, before they initiated the next trial. The target symbol was chosen randomly from the available set of symbols and was presented by the program in order to create a data set of labelled EEG-signals for the subsequent offline analysis. Data preprocessing. To compensate for slow drifts of the DC potential, in a first step the linear trend of the raw data in each electrode over the duration of a trial was eliminated.
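The drift-compensation step can be sketched as a least-squares linear detrend applied per electrode and per trial; this is a numpy sketch of the idea, not the authors' code, and the function name is ours:

```python
import numpy as np

def remove_linear_trend(trace):
    """Subtract the best-fitting straight line from a 1-D voltage trace,
    compensating for slow drift of the DC potential over a trial."""
    t = np.arange(len(trace), dtype=float)
    slope, intercept = np.polyfit(t, trace, 1)  # least-squares line fit
    return trace - (slope * t + intercept)
```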
In a second step, the data was normalized to zero mean and unit standard deviation. This was done separately for each electrode, taking the data of all trials into account. Classification of Epochs. Test and training sets were created by choosing the data according to one symbol as test set, and the data of the other symbols as training set in a crossvalidation scheme. The task of classifying a subtrial for the identification of a target symbol has to be distinguished from the classification of a single epoch for detection of a signal correlated with oddball-stimuli, which we briefly refer to as a “P300 component” in a simplified manner in the following. In case of using a subtrial to select a symbol, two P300 components have to be detected within epochs: one corresponding to a row-, another to a column-stimulus. The detection algorithm works on the data of an epoch and has to compute a score which reflects the presence of a P300 within that epoch. Therefore, 12 epochs have to be evaluated for the selection of one target symbol. For the P300-detection, we utilized two model-based methods which had been proposed by F&D, and one completely data-driven method based on Support Vector Machines (SVMs) [11]. For training of the classifiers, we built up a set of epochs containing an equal number of positive and negative examples, i.e. epochs with and without a P300 component. 1OL denotes the position halfway between O1 and T5, and OR between O2 and T6 respectively. 2With an ISI shorter than 450ms, there is a time overlap of consecutive epochs. Figure 2: Trials, subtrials and epochs in the course of time (left). Model-based methods for analysis. Area calculates surface in the P300-window, Peak picking calculates differences between peaks. The first model-based method uses as its score, as shown in Fig.
2, the area in the P300-window (“Area method”), while the second model-based method uses the difference between the lowest point before and the highest point within the P300-window (“Peak picking method”). Hyperparameters of the model-based methods were the boundaries of the P300-window. They were selected regarding the average of epochs containing the P300 by taking the boundaries of the largest area. For the completely data-driven approach, SVMs were optimized to distinguish between the two classes (with/without P300) implied by the training set. As compared with many traditional classifiers, such as the SWDA method used by F&D, SVMs can realize Bayes-consistent classifiers under very general conditions without requiring any specific assumptions about the underlying data distributions and decision boundaries. Thereby convergence to the Bayes optimum can be achieved by a suitable choice of hyperparameters. When using SVMs, it is not clear what measure to take as the score of an epoch. The problem is that the SVM has first of all been designed to assign binary class labels to its input without any measure of confidence on the resulting decision. However, a recent approach to learning of discriminative densities [7] suggests an interpretation of the usual discrimination function for SVMs with positive kernels in terms of scaled density differences. This finding provides us with a well-motivated score of an epoch: with x as the data vector of an epoch and y_i as the corresponding class label, which is positive/negative for epochs with/without target stimulus, the SVM-score is computed as

s(x) = \sum_i y_i \alpha_i K(x, x_i)   (1)

where K(x, x_i) in our case is a Gaussian kernel function with bandwidth σ (selected, together with the weight C for the soft-margin penalties, by crossvalidation) evaluated at the i-th data example x_i. The mixing weights α_i were estimated by quadratic optimization for an SVM objective with linear soft-margin penalties, where we used the SMO-algorithm [9]. Combination of subtrials. Because EEG-data possess a very poor signal-to-noise ratio (SNR), identification of the target symbol from a single subtrial is usually not reliable enough to achieve a reasonable classification rate. Therefore, several subtrials have to be combined for classification, slowing down the transfer rate. Thus, an important goal is to decrease the number of subtrials which have to be combined for a satisfactory classification rate. An important constraint for the development of the specific offline-analysis programs was to realize a testing scheme which should be as close as possible to a corresponding online evaluation. Therefore, we tested a method for certain combinations of subtrials in the following way: series of successive subtrials were taken out of a test set and the corresponding single classifications were combined as explained below. Thereby, the test series contained only subtrials belonging to identical symbols and these were combined in their original temporal order3. In contrast, Farwell & Donchin randomly chose samples from a test set, built from subtrials taken from different trials and belonging to different symbols. With this procedure, they broke up the time course of the recorded data and did not distinguish between different symbols, i.e. different positions in the matrix on the screen. Based on the data of subtrials, one has to choose a row and a column in order to identify the target symbol, i.e. to classify a trial.
Therefore, in a first step, the single scores4 s_i^(k) of the epochs corresponding to the stimulus associated with the i-th row in the k-th subtrial were summed up to the total score S_i = \sum_k s_i^(k). Then, the target row was chosen as i* = argmax_i S_i, with i = 1, ..., 6. Equivalent steps were performed to choose the target column. Based on these decisions the target symbol was finally selected in accordance to the presented matrix. 3 Experimental Results Before going into details, we outline our investigations about improving the usability of the F&D-BCI. First, the different methods were compared to classify the data of the Pz electrode, which was originally used by Farwell & Donchin. Second, further single electrodes were taken as input source. This revealed information about interesting scalp positions to record a P300 and on the other hand indicated which channels may contain a useful signal. Third, the SVM classification rate with respect to epochs was improved by increasing the data-space. Therefore, the input vector for the classifier was extended by combining data from the same epoch but from different electrodes. These tests indicated that the best classification rates could be achieved using as detection method an SVM with all ten electrodes as input sources. Since the results of the first three steps were established based on the data of one initial experiment with only one participant, we evaluated the generality of these techniques by testing different subjects and BCI parameters. Finally, the BCI performance in terms of attainable communication rates is estimated from these analyses. Method comparison using the Pz electrode as input source. All four methods were applied to the data of one initial experiment with an ISI of 500ms and 3 subtrials per trial. Figure 3 presents the classification rates of up to 10 subtrials. The SVM method achieved best performance; its epoch classification rate was 76.3% (SD=1.0) in a 10-fold crossvalidation with about 380 subtrial samples in the training sets, and about 40 in the test sets.
Of each subtrial in the training set, 4 epochs (2 with, 2 without a P300) were taken as training samples, whereas all 12 epochs of the subtrials of the test set were classified. For each training set, hyperparameters were selected by another 3-fold crossvalidation on this set. 3For a higher number of subtrial combinations, subtrials from different trials had to be combined. However, real-world application of this BCI does not require such combinations with respect to the finally achieved transfer rates reported in section 3. 4The method index is omitted in the following. Figure 3: (left) Method comparison on the Pz electrode: The three techniques were applied to the data of the initial experiment. (right) Classification rates for different numbers of electrodes. Figure 4: Electrode comparison on the data of the initial experiment. Different electrodes as input source. The method comparison tests were repeated for each electrode. The results of the Peak picking and SVM method are shown in Figure 3. The SVM is able to extract useful information from all ten electrodes, whereas the Peak picking performance varies for different scalp positions. Especially, the electrodes over the visual cortex areas OZ, OR and OL are useless for the model-based techniques, as the same characteristics are revealed by tests with the Area method. Higher-dimensional data-space. While Farwell & Donchin used only one electrode for data-analysis, we extended the data-space by using larger numbers of electrodes. We calculated classification rates for Pz alone, three, seven, and ten electrodes.
A signal correlated with oddball-stimuli was classified at rates of 76.8%, 76.8%, 90.9%, and 94.5%, respectively, for the different data-spaces of 120, 360, 840, and 1200 dimensions. These rates were calculated with 850 positive and 850 negative epoch samples and a 3-fold crossvalidation. This classified signal might be more than solely the traditional P300 component. Applying data-space augmentation for classification to infer symbols in the matrix results in the classification rates depicted in Figure 3 (right) for an ISI of 500ms. Using ten electrodes simultaneously, combined in one data vector, outperforms lower-dimensional data-spaces. Figure 5: Mean classification rates (left) and transfer rates (right) for different ISIs. Error bars range from best to worst results. Note that a subtrial takes a specific amount of time; the time-dependent transfer rates therefore decrease with the number of subtrials. Reducing the ISI and using more participants. The improved classification rates encouraged further experiments. To accelerate the system, we reduced the ISI to 300ms and 150ms. Additionally, to generalize the results, we recruited four participants. Means, best and worst classification rates are presented in Figure 5, as well as average and best transfer rates. The latter were calculated according to
B = \log_2 N + P \log_2 P + (1 - P) \log_2 \frac{1 - P}{N - 1}, with transfer rate B / T,

where N is the number of choices (36 here), P the probability for classification, and T the time required for classification. Using an ISI of 300ms results in slower transfer rates than using an ISI of 150ms. The latter ISI results on the average in classifying a symbol after 5.4s with an accuracy of 80% (disregarding delays between trials). The poorest performer needs 9s to reach this criterion; the best performer achieves an accuracy of 95.2% already after 3.6s. The transfer rates, with a maximum of 84.7 bits/min and an average of 50.5 bits/min, outperform the EEG-based BCI-systems we know. 4 Conclusion With an application of the data-driven SVM-method to classification of single-channel EEG-signals, we could improve transfer rates as compared with model-based techniques. Furthermore, by increasing the number of EEG-channels, even higher classification and transfer rates could be achieved. Accumulating the value of the classification function as a measure of confidence proved to be practical to handle series of classifications in order to identify a symbol. This resulted in high transfer rates with a maximum of 84.7 bits/min. 5 Acknowledgements We thank Thorsten Twellmann for supplying the SVM-algorithms and the Department of Cognitive Psychology at the University of Bielefeld for providing the experimental environment. This work was supported by Grant Ne 366/4-1 and the project SFB 360 from the German Research Council (Deutsche Forschungsgemeinschaft). References [1] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler, J. Perelmouter, E. Taub, and H. Flor. A spelling device for the paralysed. Nature, 398:297–298, 1999. [2] B. Blankertz, G. Curio, and K.-R. Müller. Classifying single trial EEG: Towards brain computer interfacing. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press. [3] E. Donchin, K.M. Spencer, and R. Wijeshinghe.
The mental prosthesis: Assessing the speed of a P300-based brain-computer interface. IEEE Transactions on Rehabilitation Engineering, 8(2):174–179, 2000. [4] L.A. Farwell and E. Donchin. Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. Electroencephalography and clinical Neurophysiology, 70(S2):510–523, 1988. [5] A. Kübler, B. Kotchoubey, T. Hinterberger, N. Ghanayim, J. Perelmouter, M. Schauer, C. Fritsch, E. Taub, and N. Birbaumer. The thought translation device: a neurophysiological approach to communication in total motor paralysis. Experimental Brain Research, 124:223–232, 1999. [6] A. Kübler, B. Kotchoubey, J. Kaiser, J.R. Wolpaw, and N. Birbaumer. Brain-computer communication: Unlocking the locked in. Psychological Bulletin, 127(3):358–375, 2001. [7] P. Meinicke, T. Twellmann, and H. Ritter. Maximum contrast classifiers. In Proc. of the Int. Conf. on Artificial Neural Networks, Berlin, 2002. Springer. In press. [8] G. Pfurtscheller, C. Neuper, C. Guger, B. Obermaier, M. Pregenzer, H. Ramoser, and A. Schlögl. Current trends in Graz brain-computer interface (BCI) research. IEEE Transactions on Rehabilitation Engineering, pages 216–219, 2000. [9] J. Platt. Fast training of support vector machines using sequential minimal optimization. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods — Support Vector Learning, pages 185–208, Cambridge, MA, 1999. MIT Press. [10] J.B. Polikoff, H.T. Bunnell, and W.J. Borkowski. Toward a P300-based computer interface. RESNA ’95 Annual Conference, RESNAPRESS, Arlington, VA, pages 178–180, 1995. [11] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, New York, 1995. [12] J.R. Wolpaw, N. Birbaumer, D.J. McFarland, G. Pfurtscheller, and T.M. Vaughan. Brain-computer interfaces for communication and control. Clinical Neurophysiology, 113:767–791, 2002.
|
2002
|
199
|
2,212
|
Nonparametric Representation of Policies and Value Functions: A Trajectory-Based Approach Christopher G. Atkeson Robotics Institute and HCII Carnegie Mellon University Pittsburgh, PA 15213, USA cga@cmu.edu Jun Morimoto ATR Human Information Science Laboratories, Dept. 3 Keihanna Science City Kyoto 619-0288, Japan xmorimo@atr.co.jp Abstract A longstanding goal of reinforcement learning is to develop nonparametric representations of policies and value functions that support rapid learning without suffering from interference or the curse of dimensionality. We have developed a trajectory-based approach, in which policies and value functions are represented nonparametrically along trajectories. These trajectories, policies, and value functions are updated as the value function becomes more accurate or as a model of the task is updated. We have applied this approach to periodic tasks such as hopping and walking, which required handling discount factors and discontinuities in the task dynamics, and using function approximation to represent value functions at discontinuities. We also describe extensions of the approach to make the policies more robust to modeling error and sensor noise. 1 Introduction The widespread application of reinforcement learning is hindered by excessive cost in terms of one or more of representational resources, computation time, or amount of training data. The goal of our research program is to minimize these costs. We reduce the amount of training data needed by learning models, and using a DYNA-like approach to do mental practice in addition to actually attempting a task [1, 2]. This paper addresses concerns about computation time and representational resources. We reduce the computation time required by using more powerful updates that update first and second derivatives of value functions and first derivatives of policies, in addition to updating value function and policy values at particular points [3, 4, 5]. 
We reduce the representational resources needed by representing value functions and policies along carefully chosen trajectories. This non-parametric representation is well suited to the task of representing and updating value functions, providing additional representational power as needed and avoiding interference. This paper explores how the approach can be extended to periodic tasks such as hopping and walking. Previous work has explored how to apply an early version of this approach to tasks with an explicit goal state [3, 6] and how to simultaneously learn a model and use this approach to compute a policy and value function [6]. Handling periodic tasks required accommodating discount factors and discontinuities in the task dynamics, and using function approximation to represent value functions at discontinuities. 2 What is the approach? Represent value functions and policies along trajectories. Our first key idea for creating a more global policy is to coordinate many trajectories, similar to using the method of characteristics to solve a partial differential equation. A more global value function is created by combining value functions for the trajectories. As long as the value functions are consistent between trajectories, and cover the appropriate space, the global value function created will be correct. This representation supports accurate updating, since any updates must occur along densely represented optimized trajectories, and provides an adaptive resolution representation that allocates resources to where optimal trajectories tend to go. Segment trajectories at discontinuities. A second key idea is to segment the trajectories at discontinuities of the system dynamics, to reduce the amount of discontinuity in the value function within each segment, so our extrapolation operations are correct more often.
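One way to read "combining value functions for the trajectories" concretely: store a local quadratic (second-order Taylor) value model at each point along the stored trajectories, and answer a value query from the nearest stored point. This is an illustrative sketch with invented names and shapes, not the authors' implementation:

```python
import numpy as np

def value_query(x, points, V0, Vx, Vxx):
    """Evaluate a global value estimate at state x: pick the nearest stored
    trajectory point i and evaluate its local Taylor model
    V0[i] + Vx[i].dx + 0.5 dx.Vxx[i].dx, where dx = x - points[i].
    Shapes: points (N, n), V0 (N,), Vx (N, n), Vxx (N, n, n)."""
    i = int(np.argmin(((points - x) ** 2).sum(axis=1)))  # nearest stored state
    dx = x - points[i]
    return V0[i] + Vx[i] @ dx + 0.5 * dx @ Vxx[i] @ dx
```

A real implementation would also need to keep the per-trajectory models consistent with each other, as the text requires.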
We assume smooth dynamics and criteria, so that first and second derivatives exist. Unfortunately, in periodic tasks such as hopping or walking the dynamics changes discontinuously as feet touch and leave the ground. The locations in state space at which this happens can be localized to lower dimensional surfaces that separate regions of smooth dynamics. For periodic tasks we apply our approach along trajectory segments which end whenever a dynamics (or criterion) discontinuity is reached. We also search for value function discontinuities not collocated with dynamics or criterion discontinuities. We can use all the trajectory segments that start at the discontinuity and continue through the next region to provide estimates of the value function at the other side of the discontinuity. Use function approximation to represent value function at discontinuities. We use locally weighted regression (LWR) to construct value functions at discontinuities [7]. Update first and second derivatives of the value function as well as first derivatives of the policy (control gains for a linear controller) along the trajectory. We can think of this as updating the first few terms of local Taylor series models of the global value and policy functions. This non-parametric representation is well suited to the task of representing and updating value functions, providing additional representational power as needed and avoiding interference. We will derive the update rules. Because we are interested in periodic tasks, we must introduce a discount factor into Bellman’s equation, so value functions remain finite. Consider a system with dynamics
x_{k+1} = f(x_k, u_k) and a one step cost function L(x_k, u_k), where x is the state of the system and u is a vector of actions or controls. The subscript k serves as a time index, but will be dropped in the equations that follow in cases where all time indices are the same or are equal to k. A goal of reinforcement learning and optimal control is to find a policy that minimizes the total cost, which is the sum of the costs for each time step. One approach to doing this is to construct an optimal value function, V(x). The value of this value function at a state x is the sum of all future costs, given that the system started in state x and followed the optimal policy (chose optimal actions at each time step as a function of the state). A local planner or controller can choose globally optimal actions if it knows the future cost of each action. This cost is simply the sum of the cost of taking the action right now and the discounted future cost of the state that the action leads to, which is given by the value function. Thus, the optimal action is given by

u = argmin_u [ L(x, u) + γ V(f(x, u)) ]

where γ is the discount factor.

Figure 1: Example trajectories where the value function and policy are explicitly represented for a regulator task at goal state G (left), a task with a point goal state G (middle), and a periodic task (right).

Suppose at a point (x_0, u_0) we have 1) a local second order Taylor series approximation of the optimal value function,

V(x) ≈ V_0 + V_x Δx + (1/2) Δx^T V_xx Δx,

where Δx = x − x_0; 2) a local second order Taylor series approximation of the dynamics, which can be learned using local models of the plant (F_x and F_u correspond to the usual A and B of the linear plant model used in linear quadratic regulator (LQR) design),

f(x, u) ≈ f(x_0, u_0) + F_x Δx + F_u Δu + (second order terms),

where Δu = u − u_0; and 3) a local second order Taylor series approximation of the one step cost, which is often known analytically for human specified criteria (L_xx and L_uu correspond to the usual Q and R of LQR design),

L(x, u) ≈ L_0 + L_x Δx + L_u Δu + (1/2) Δx^T L_xx Δx + Δx^T L_xu Δu + (1/2) Δu^T L_uu Δu.

Given a trajectory, one can integrate the value function and its first and second spatial derivatives backwards in time to compute an improved value function and policy. The backward sweep takes the following form (in discrete time):

Z_x(k) = L_x(k) + γ V_x(k+1) F_x(k)    (1)
Z_u(k) = L_u(k) + γ V_x(k+1) F_u(k)    (2)

Z_xx(k) = L_xx(k) + γ F_x^T(k) V_xx(k+1) F_x(k)
Z_ux(k) = L_ux(k) + γ F_u^T(k) V_xx(k+1) F_x(k)
Z_uu(k) = L_uu(k) + γ F_u^T(k) V_xx(k+1) F_u(k)    (3)

K(k) = Z_uu^{-1}(k) Z_ux(k)    (4)

V_x(k) = Z_x(k) − Z_u(k) K(k)
V_xx(k) = Z_xx(k) − Z_xu(k) K(k)    (5)

After the backward sweep, forward integration can be used to update the trajectory itself:

u_new(k) = u(k) − Z_uu^{-1}(k) Z_u(k) − K(k) (x_new(k) − x(k))

As noted above, this approach assumes smooth dynamics and criteria, so that first and second derivatives exist; in periodic tasks such as hopping or walking the dynamics change discontinuously as feet touch and leave the ground, so we apply the approach along trajectory segments which end whenever a dynamics (or criterion) discontinuity is reached, and use the trajectory segments that start at the discontinuity and continue through the next region to estimate the value function on the other side of the discontinuity. Figure 1 shows our approach applied to several types of problems. On the left we see that a task that requires steady state control about a goal point (a regulator task) can be solved with a single trivial trajectory that starts and ends at the goal and provides a value function and a constant linear policy in the vicinity of the goal.

Figure 2: The optimal hopper controller with a range of penalties on usage (each panel plots hopper height against vertical velocity).
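As an illustration, one step of the backward sweep can be written with NumPy as below. This is a sketch, not the paper's code: the names (backward_step, Zx, and so on) are ours, the second order dynamics terms are dropped, and the value gradient Vx is treated as a row vector.

```python
import numpy as np

def backward_step(Lx, Lu, Lxx, Lux, Luu, Fx, Fu, Vx_next, Vxx_next, gamma):
    """One step of the backward sweep (equations (1)-(5)): propagate the
    local quadratic value model (Vx, Vxx) and the linear feedback gains K
    backward through time.  Second order dynamics terms are omitted."""
    Zx = Lx + gamma * Vx_next @ Fx               # (1)
    Zu = Lu + gamma * Vx_next @ Fu               # (2)
    Zxx = Lxx + gamma * Fx.T @ Vxx_next @ Fx     # (3)
    Zux = Lux + gamma * Fu.T @ Vxx_next @ Fx
    Zuu = Luu + gamma * Fu.T @ Vxx_next @ Fu
    K = np.linalg.solve(Zuu, Zux)                # (4) local feedback gains
    Vx = Zx - Zu @ K                             # (5)
    Vxx = Zxx - Zux.T @ K
    du = -np.linalg.solve(Zuu, Zu)               # feedforward term for the forward sweep
    return Vx, Vxx, K, du
```

In a forward sweep one would then apply u_new = u + du − K Δx at each point along the trajectory.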
The middle figure of Figure 1 shows the trajectories used to compute the value function for a swing up problem [3]. In this problem the goal requires regulation about the state where the pendulum is inverted and in an unstable equilibrium. However, the nonlinearities of the problem limit the region of applicability of a linear policy, and non-trivial trajectories have to be created to cover a larger region. In this case the region where the value function is less than a target value is filled with trajectories. The neighboring trajectories have consistent value functions and thus the globally optimal value function and policy are found in the explored region [3]. The right figure of Figure 1 shows the trajectories used to compute the value function for a periodic problem, control of vertical hopping in a hopping robot. In this problem, there is no goal state, but a desired hopping height is specified. This problem has been extensively studied in the robotics literature [8] from the point of view of how to manually design a nonlinear controller with a large stability region. We note that optimal control provides a methodology to design nonlinear controllers with large stability regions and also good performance in terms of explicitly specified criteria. We describe later how to also make these controller designs more robust. In this figure the vertical axis corresponds to the height of the hopper, and the horizontal axis is vertical velocity. The robot moves around the origin in a counterclockwise direction. In the top two quadrants the robot is in the air, and in the bottom two quadrants the robot is on the ground. Thus, the horizontal axis is a discontinuity of the robot dynamics, and trajectory segments end and often begin at the discontinuity. We see that while the robot is in the air it cannot change how much energy it has (how high it goes or how fast it is going when it hits the ground), as the trajectories end with the same pattern they began with.
When the robot is on the ground it thrusts with its leg to “focus” the trajectories so the set of touchdown positions is mapped to a smaller set of takeoff positions. This funneling effect is characteristic of controllers for periodic tasks, and how fast the funnel becomes narrow is controlled by the size of the penalty on usage (Figure 2).

2.1 How are trajectory start points chosen?

In our approach trajectories are refined towards optimality given their fixed starting points. However, an initial trajectory must first be created. For regulator tasks, the trajectory is trivial and simply starts and ends at the known goal point. For tasks with a point goal, trajectories can be extended backwards away from the goal [3]. For periodic tasks, crude trajectories must be created using some other approach before this approach can refine them. We have used several methods to provide initial trajectories. Manually designed controllers sometimes work. In learning from demonstration a teacher provides initial trajectories [6]. In policy optimization (aka “policy search”) a parameterized policy is optimized [9]. Once a set of initial task trajectories is available, the following four methods are used to generate trajectories in new parts of state space. We use all of these methods simultaneously, and locally optimize each of the trajectories produced. The best trajectory of the set is then stored and the other trajectories are discarded. 1) Use the global policy generated by policy optimization, if available. 2) Use the local policy from the nearest point with the same type of dynamics. 3) Use the local value function estimate (and derivatives) from the nearest point with the same type of dynamics. 4) Use the policy from the nearest trajectory, where the nearest trajectory is selected at the beginning of the forward sweep and kept the same throughout the sweep.
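The generate-candidates-and-keep-the-best scheme can be sketched as follows. All names are illustrative stand-ins, not the paper's code; here the candidate trajectories simply come from rolling out a set of candidate policies.

```python
import numpy as np

def best_initial_trajectory(x0, policies, dynamics, cost, horizon=50):
    """Roll out each candidate policy from x0 (e.g., the global parametric
    policy, a nearest neighbor's local policy, ...), score each rollout by
    its summed one step cost, and keep the cheapest trajectory; the other
    candidates are discarded."""
    best_cost, best_traj = np.inf, None
    for policy in policies:
        x, total, traj = x0, 0.0, [x0]
        for _ in range(horizon):
            u = policy(x)
            total += cost(x, u)
            x = dynamics(x, u)
            traj.append(x)
        if total < best_cost:
            best_cost, best_traj = total, traj
    return best_cost, best_traj
```

In the full approach each kept candidate would then be locally optimized before the best one is stored.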
Note that methods 2 and 3 can change which stored trajectories they take points from on each time step, while method 4 uses a policy from a single neighboring trajectory.

3 Control of a walking robot

As another example we will describe the search for a walking policy for a simple planar biped robot that walks along a bar. The simulated robot has two legs and a torque motor between the legs. Instead of revolute or telescoping knees, the robot can grab the bar with its foot as its leg swings past it. This is a model of a robot that walks along the trusses of a large structure such as a bridge, much as a monkey brachiates with its arms. This simple model has also been used in studies of robot passive dynamic walking [10]. This arrangement means the robot has a five dimensional state space: left leg angle, right leg angle, left leg angular velocity, right leg angular velocity, and stance foot location. A simple policy is used to determine when to grab the bar (at the end of a step when the swing foot passes the bar going downwards). The variable to be controlled is the torque at the hip. The criterion we used is quite complex. We are a long way from specifying an abstract or vague criterion such as “cover a fixed distance with minimum fuel or battery usage” or “maximize the amount of your genes in future gene pools” and successfully finding an optimal or reasonable policy. At this stage we need to include several “shaping” terms in the criterion, that reward keeping the hips at the right altitude with minimal vertical velocity, keeping the leg amplitude within reason, maintaining a symmetric gait, and maintaining the desired hip forward velocity:
L(x, u) = w1 (h − 1)^2 + w2 ḣ^2 + w3 (sl + sr) + w4 c + w5 (ẋ − ẋd)^2 + w6 τ^2    (6)

where the wi are weighting factors and the terms involve h, ḣ, sl, sr, c, ẋ, and τ. The leg length is 1 meter (hence the 1 in the hip height term). ẋd is the desired hip velocity. sl (and similarly sr) provides a measure of how far the left or right leg has gone past its limits in the forward or backward direction. c is the product of the leg angles if the legs are both forward or both rearward, and zero otherwise. x is the hip location and h is the hip height. The integration and control time steps are 1 millisecond each. The dynamics of this walker are simulated using a commercial package, SDFAST. Initial trajectories were generated by optimizing the coefficients of a linear policy. When the left leg was in stance:

τ = p1 θl + p2 θ̇l + p3 θb + p4 θ̇b + p5 h + p6 ḣ + p7 ẋ + p8    (7)

where θl is the left (stance) leg angle and θb is the angle between the legs. When the right leg was in stance the same policy was used with the appropriate signs negated.

3.1 Results

The trajectory-based approach was able to find a cheaper and more robust policy than the parametric policy-optimization approach. This is not surprising given the flexible and expandable representational capacity of an adaptive non-parametric representation, but it does provide some indication that our update algorithms can usefully harness the additional representation power. Cost: For example, after training the parametric policy, we measured the undiscounted cost over 1 second (roughly one step of each leg) starting in a state along the lowest cost cyclic trajectory. The cost for the optimized parametric policy was 4316. The corresponding cost for the trajectory-based approach starting from the same state was 3502. Robustness: We did a simple assessment of robustness by adding offsets to the same starting state until the optimized linear policy failed. The offsets were in terms of the stance leg angle and the angle between the legs, and the corresponding angular velocities, and the maximum offsets the linearized optimized parametric policy could handle were measured in each of these four directions. We did a similar test for the trajectory approach. In each direction the maximum offset the trajectory-based approach was able to handle was equal to or greater than that of the parametric policy-based approach, extending the range most in two of the four directions. This is not surprising, since the trajectory-based controller uses the parametric policy as one of the ways to initially generate candidate trajectories for optimization. In cases where the trajectory-based approach is not able to generate an appropriate trajectory, the system will generate a series of trajectories with start points moving from regions it knows how to handle towards the desired start point. Thus, we have not yet discovered situations that are physically possible to recover from that the trajectory-based approach cannot handle, if it is allowed as much computation time as it needs. Interference: To demonstrate interference in the parametric policy approach, we optimized its performance from a distribution of starting states. These states were the original state, and states with positive offsets. The new cost for the original starting position was 14,747, compared to 4316 before retraining. The trajectory approach has the same cost as before, 3502.

4 Robustness to modeling error and imperfect sensing

So far we have addressed robustness in terms of the range of initial states that can be handled. Another form of robustness is robustness to modeling error (changes in masses, friction, and other model parameters) and imperfect sensing, so that the controller does not know exactly what state the robot is in. Since simulations are used to optimize policies, it is relatively easy to include simulations with different model parameters and sensor noise in the training and optimize for a robust parametric controller during policy optimization. How does the trajectory-based approach achieve comparable robustness?
We have developed two approaches: a probabilistic approach which maintains distributional information about unknown states and parameters, and a game-based or minimax approach. The probabilistic approach supports actions by the controller to actively minimize uncertainty as well as achieve goals, which is known as dual control. The game-based approach does not reduce uncertainty with experience, and is somewhat paranoid, assuming the world is populated by evil spirits which choose the worst possible disturbance at each time step for the controller. This results in robust, but often overly conservative, policies. In the probabilistic case, the state is augmented with any unknown parameters such as masses of parts or friction coefficients, and with the covariance of all the original elements of the state as well as the added parameters. An extended Kalman filter is constructed as the new dynamics equation, predicting the new estimates of the means and covariances given the control signals to the system. The one step cost function is restated in terms of the augmented state. The value function is now a function of the augmented state, including covariances of the original state vector elements. These covariances interact with the curvature of the value function, causing additional cost in areas of the value function that have high curvature or second derivatives. Thus the system is rewarded when it moves to areas of the value function that are planar, where uncertainty has no effect on the expected cost. The system is also rewarded when it learns, which reduces the covariances of the estimates, so the system may choose actions that move away from a goal but reduce uncertainty. This probabilistic approach does dramatically increase the dimensionality of the state vector and thus the value function, but in the context of only a quadratic cost on dimensionality this is not as fatal as it would seem.
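The interaction between covariance and value function curvature can be made concrete with a standard identity: for a local quadratic model of V, the expected value under Gaussian state uncertainty picks up a trace term coupling the Hessian to the covariance. A sketch (the function name and setup are ours, not the paper's):

```python
import numpy as np

def expected_quadratic_value(V0, Vx, Vxx, mean_dev, Sigma):
    """Expected value of a local quadratic value model under Gaussian state
    uncertainty with mean deviation m and covariance Sigma:
        E[V] = V0 + Vx m + 0.5 m^T Vxx m + 0.5 tr(Vxx Sigma).
    The trace term is the extra expected cost incurred where the value
    function is curved, so reducing covariance (learning) is rewarded and
    planar regions cost nothing extra."""
    m = np.asarray(mean_dev, dtype=float)
    return float(V0 + Vx @ m + 0.5 * m @ Vxx @ m + 0.5 * np.trace(Vxx @ Sigma))
```

This identity is exact for a quadratic V, which is all the local Taylor series representation carries.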
A less expensive approach is to use a game-based uncertainty model with minimax optimization. In this case, we assume an opponent can pick a disturbance to maximally increase our cost. This is closely related to robust nonlinear controller design techniques based on the idea of H∞ control [11, 12] and to risk sensitive control [13, 14]. We augment the dynamics equation with a disturbance term:
x_{k+1} = f(x_k, u_k, w_k)

where w is a vector of disturbance inputs. To limit the size of the disturbances, we include the disturbance magnitude in a modified one step cost function with a negative sign. The opponent who controls the disturbance wants to increase our cost, so this new term gives an incentive to the opponent to choose the worst direction for the disturbance, and a disturbance magnitude that gives the highest ratio of increased cost to disturbance size:

L(x, u, w) = L(x, u) − λ w^T w

Initially, λ is set to globally approximate the uncertainty of the model. Ultimately, λ should vary with the local confidence in the model. Highly practiced movements or portions of movements should have high λ, and new movements should have lower λ. The optimal action is now given by Isaacs' equation:

u = argmin_u max_w [ L(x, u, w) + γ V(f(x, u, w)) ]
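For intuition, the min-max lookahead of Isaacs' equation can be sketched with discretized action and disturbance sets (a toy sketch; the names and the discretization are ours, and a real implementation would solve the inner optimization analytically for quadratic models):

```python
def isaacs_action(x, actions, disturbances, dynamics, cost, value,
                  gamma=0.9, lam=5.0):
    """Discretized one-step Isaacs lookahead: the controller minimizes over
    u the worst case over disturbances w, with the -lam*w^2 penalty
    limiting the adversary's magnitude."""
    best_u, best_q = None, float("inf")
    for u in actions:
        worst = max(cost(x, u) - lam * w * w + gamma * value(dynamics(x, u, w))
                    for w in disturbances)   # adversary picks the worst w
        if worst < best_q:                   # controller picks best worst case
            best_u, best_q = u, worst
    return best_u
```

A large lam models high confidence: the adversary is heavily penalized, so the policy approaches the non-robust one; a small lam yields more conservative behavior.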
How we solve Isaacs' equation and an application of this method are described in the companion paper [15].

5 How to cover a volume of state space

In tasks with a goal or point attractor, [3] showed that certain key trajectories can be grown backwards from the goal in order to approximate the value function. In the case of a sparse use of trajectories to cover a space, the cost of the approach is dominated by the costs of updating second derivative matrices, and thus the cost of the trajectory-based approach increases quadratically as the dimensionality increases. However, for periodic tasks the approach of growing trajectories backwards from the goal cannot be used, as there is no goal point or set. In this case the trajectories that form the optimal cycle can be used as key trajectories, with each point along them supplying a local linear policy and local quadratic value function. These key trajectories can be computed using any optimization method, and then the corresponding policy and value function estimates along the trajectory computed using the update rules given here. It is important to point out that optimal trajectories need only be placed densely enough to separate regions which have different local optima. The trajectories used in the representation usually follow local valleys of the value function. Also, we have found that natural behavior often lies entirely on a low-dimensional manifold embedded in a high dimensional space. Using these trajectories and creating new trajectories as task demands require it, we expect to be able to handle a range of natural tasks.

6 Contributions

In order to accommodate periodic tasks, this paper has discussed how to incorporate discount factors into the trajectory-based approach, how to handle discontinuities in the dynamics (and equivalently, criteria and constraints), and how to find key trajectories for a sparse trajectory-based approach.
The trajectory-based approach requires less design skill from humans, since it doesn't need a “good” policy parameterization, and it produces cheaper and more robust policies that do not suffer from interference.

References

[1] Richard S. Sutton. Integrated architectures for learning, planning and reacting based on approximating dynamic programming. In Proceedings of the 7th International Conference on Machine Learning, 1990. [2] C. Atkeson and J. Santamaria. A comparison of direct and model-based reinforcement learning, 1997. [3] Christopher G. Atkeson. Using local trajectory optimizers to speed up global optimization in dynamic programming. In Jack D. Cowan, Gerald Tesauro, and Joshua Alspector, editors, Advances in Neural Information Processing Systems, volume 6, pages 663–670. Morgan Kaufmann Publishers, Inc., 1994. [4] P. Dyer and S. R. McReynolds. The Computation and Theory of Optimal Control. Academic Press, New York, NY, 1970. [5] D. H. Jacobson and D. Q. Mayne. Differential Dynamic Programming. Elsevier, New York, NY, 1970. [6] Christopher G. Atkeson and Stefan Schaal. Robot learning from demonstration. In Proc. 14th International Conference on Machine Learning, pages 12–20. Morgan Kaufmann, 1997. [7] C. G. Atkeson, A. W. Moore, and S. Schaal. Locally weighted learning. Artificial Intelligence Review, 11:11–73, 1997. [8] W. Schwind and D. Koditschek. Control of forward velocity for a simplified planar hopping robot. In International Conference on Robotics and Automation, volume 1, pages 691–6, 1995. [9] J. Andrew Bagnell and Jeff Schneider. Autonomous helicopter control using reinforcement learning policy search methods. In International Conference on Robotics and Automation, 2001. [10] M. Garcia, A. Chatterjee, and A. Ruina. Efficiency, speed, and scaling of two-dimensional passive-dynamic walking. Dynamics and Stability of Systems, 15(2):75–99, 2000. [11] K. Zhou, J. C. Doyle, and K. Glover. Robust Optimal Control. Prentice Hall, New Jersey, 1996. [12] J.
Morimoto and K. Doya. Robust Reinforcement Learning. In Todd K. Leen, Thomas G. Dietterich, and Volker Tresp, editors, Advances in Neural Information Processing Systems 13, pages 1061–1067. MIT Press, Cambridge, MA, 2001. [13] R. Neuneier and O. Mihatsch. Risk Sensitive Reinforcement Learning. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11, pages 1031–1037. MIT Press, Cambridge, MA, USA, 1998. [14] S. P. Coraluppi and S. I. Marcus. Risk-Sensitive and Minimax Control of Discrete-Time Finite-State Markov Decision Processes. Automatica, 35:301–309, 1999. [15] J. Morimoto and C. Atkeson. Minimax differential dynamic programming: An application to robust biped walking. In Advances in Neural Information Processing Systems 15. MIT Press, Cambridge, MA, 2002.
Intrinsic Dimension Estimation Using Packing Numbers

Balázs Kégl Department of Computer Science and Operations Research University of Montreal CP 6128 succ. Centre-Ville, Montréal, Canada H3C 3J7 kegl@iro.umontreal.ca

Abstract

We propose a new algorithm to estimate the intrinsic dimension of data sets. The method is based on geometric properties of the data and requires neither parametric assumptions on the data generating model nor input parameters to set. The method is compared to a similar, widely-used algorithm from the same family of geometric techniques. Experiments show that our method is more robust in terms of the data generating distribution and more reliable in the presence of noise.

1 Introduction

High-dimensional data sets have several unfortunate properties that make them hard to analyze. The phenomenon that the computational and statistical efficiency of statistical techniques degrade rapidly with the dimension is often referred to as the “curse of dimensionality”. One particular characteristic of high-dimensional spaces is that as the volumes of constant diameter neighborhoods become large, exponentially many points are needed for reliable density estimation. Another important problem is that as the data dimension grows, sophisticated data structures constructed to speed up nearest neighbor searches rapidly become inefficient. Fortunately, most meaningful, real life data do not uniformly fill the spaces in which they are represented. Rather, the data distributions are observed to concentrate to nonlinear manifolds of low intrinsic dimension. Several methods have been developed to find low-dimensional representations of high-dimensional data, including Principal Component Analysis (PCA), Self-Organizing Maps (SOM) [1], Multidimensional Scaling (MDS) [2], and, more recently, Local Linear Embedding (LLE) [3] and the ISOMAP algorithm [4].
Although most of these algorithms require that the intrinsic dimension of the manifold be explicitly set, there has been little effort devoted to designing and analyzing techniques that estimate the intrinsic dimension of data in this context. There are two principal areas where a good estimate of the intrinsic dimension can be useful. First, as mentioned before, the estimate can be used to set input parameters of dimension reduction algorithms. Certain methods (e.g., LLE and the ISOMAP algorithm) also require a scale parameter that determines the size of the local neighborhoods used in the algorithms. In this case, it is useful if the dimension estimate is provided as a function of the scale (see Figure 1 for an intuitive example where the intrinsic dimension of the data depends on the resolution). Nearest neighbor searching algorithms can also profit from a good dimension estimate. The complexity of search data structures (e.g., kd-trees and R-trees) increases exponentially with the dimension, and these methods become inefficient if the dimension is more than about 20. Nevertheless, it was shown by Chávez et al. [5] that the complexity increases with the intrinsic dimension of the data rather than with the dimension of the embedding space.

Figure 1: Intrinsic dimension D at different resolutions. (a) At very small scale the data looks zero-dimensional (D ≃ 0). (b) If the scale is comparable to the noise level, the intrinsic dimension seems larger than expected (D ≃ 2). (c) The “right” scale in terms of noise and curvature (D ≃ 1). (d) At very large scale the global dimension dominates (D ≃ 2).

In this paper we present a novel method for intrinsic dimension estimation. The estimate is based on geometric properties of the data, and requires no parameters to set. Experimental results on both artificial and real data show that the algorithm is able to capture the scale dependence of the intrinsic dimension.
The main advantage of the method over existing techniques is its robustness in terms of the generating distribution. The paper is organized as follows. In Section 2 we introduce the field of intrinsic dimension estimation, and give a short overview of existing approaches. The proposed algorithm is described in Section 3. Experimental results are given in Section 4.

2 Intrinsic dimension estimation

Informally, the intrinsic dimension of a random vector X is usually defined as the number of “independent” parameters needed to represent X. Although in practice this informal notion seems to have a well-defined meaning, formally it is ambiguous due to the existence of space-filling curves. So, instead of this informal notion, we turn to the classical concept of topological dimension, and define the intrinsic dimension of X as the topological dimension of the support of the distribution of X. For the definition, we need to introduce some notions. Given a topological space X, a covering of a subset S is a collection C of open subsets in X whose union contains S. A refinement of a covering C of S is another covering C′ such that each set in C′ is contained in some set in C. The following definition is based on the observation that a d-dimensional set can be covered by open balls such that each point belongs to at most (d + 1) open balls. Definition 1 A subset S of a topological space X has topological dimension Dtop (also known as Lebesgue covering dimension) if every covering C of S has a refinement C′ in which every point of S belongs to at most (Dtop + 1) sets in C′, and Dtop is the smallest such integer. The main technical difficulty with the topological dimension is that it is computationally difficult to estimate on a finite sample. Hence, practical methods use various other definitions of the intrinsic dimension. It is common to categorize intrinsic dimension estimating methods into two classes: projection techniques and geometric approaches.
Projection techniques explicitly construct a mapping, and usually measure the dimension by using some variant of principal component analysis. Indeed, given a set Sn = {X1,...,Xn}, Xi ∈ X, i = 1,...,n of data points drawn independently from the distribution of X, probably the most obvious way to estimate the intrinsic dimension is by looking at the eigenstructure of the covariance matrix C of Sn. In this approach, bDpca is defined as the number of eigenvalues of C that are larger than a given threshold. The first disadvantage of the technique is the requirement of a threshold parameter that determines which eigenvalues are to be discarded. In addition, if the manifold is highly nonlinear, bDpca will characterize the global (intrinsic) dimension of the data rather than the local dimension of the manifold. bDpca will always overestimate Dtop; the difference depends on the level of nonlinearity of the manifold. Finally, bDpca can only be used if the covariance matrix of Sn can be calculated (e.g., when X = Rd). Although in Section 4 we will only consider Euclidean data sets, there are certain applications where only a distance metric d : X × X → R+ ∪ {0} and the matrix of pairwise distances D = [di j], di j = d(xi, xj), are given. Bruske and Sommer [6] present an approach to circumvent the second problem. Instead of doing PCA on the original data, they first cluster the data, then construct an optimally topology preserving map (OPTM) on the cluster centers, and finally, carry out PCA locally on the OPTM nodes. The advantages of the method are that it works well on non-linear data, and that it can produce dimension estimates at different resolutions. At the same time, the threshold parameter must still be set as in PCA; moreover, other parameters, such as the number of OPTM nodes, must also be decided by the user. The technique is similar in spirit to the way the dimension parameter of LLE is set in [3].
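A minimal sketch of the bDpca estimator just described. The convention of taking the threshold relative to the largest eigenvalue is our assumption; the need to choose some threshold is exactly the drawback noted above.

```python
import numpy as np

def pca_dimension(X, threshold=0.025):
    """Estimate bDpca: the number of eigenvalues of the sample covariance
    matrix exceeding `threshold` times the largest eigenvalue."""
    eig = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    return int(np.sum(eig > threshold * eig.max()))

# Points near a line segment embedded in R^3: one dominant eigenvalue,
# so the estimate should be 1.
rng = np.random.default_rng(0)
t = rng.uniform(-1.0, 1.0, size=(500, 1))
X = t @ np.array([[1.0, 2.0, -1.0]]) + 1e-3 * rng.normal(size=(500, 3))
```

Note that on a highly curved manifold the same estimator would report the global dimension, as discussed above.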
The algorithm runs in O(n^2 d) time (where n is the number of points and d is the embedding dimension), which is slightly worse than the O(nd·bDpca) complexity of the fast PCA algorithm of Roweis [7] when computing bDpca. Another general scheme in the family of projection techniques is to turn the dimensionality reduction algorithm from an embedding technique into a probabilistic, generative model [8], and optimize the dimension as any other parameter by using cross-validation in a maximum likelihood setting. The main disadvantage of this approach is that the dimension estimate depends on the generative model and the particular algorithm, so if the model does not fit the data or if the algorithm does not work well on the particular problem, the estimate can be invalid. The second basic approach to intrinsic dimension estimation is based on geometric properties of the data rather than on projection techniques. Methods from this family usually require neither any explicit assumption on the underlying data model, nor input parameters to set. Most of the geometric methods use the correlation dimension from the family of fractal dimensions, due to the computational simplicity of its estimation. The formal definition is based on the observation that in a D-dimensional set the number of pairs of points closer to each other than r is proportional to r^D.

Definition 2 Given a finite set Sn = {x1,...,xn} of a metric space X, let

Cn(r) = (2 / (n(n − 1))) ∑_{i=1}^{n} ∑_{j=i+1}^{n} I{∥xi − xj∥ < r}

where IA is the indicator function of the event A. For a countable set S = {x1, x2,...} ⊂ X, the correlation integral is defined as C(r) = lim_{n→∞} Cn(r). If the limit exists, the correlation dimension of S is defined as

Dcorr = lim_{r→0} logC(r) / log r.

For a finite sample, the zero limit cannot be achieved, so the estimation procedure usually consists of plotting logC(r) versus log r and measuring the slope ∂logC(r)/∂log r of the linear part of the curve [9, 10, 11].
To formalize this intuitive procedure, we present the following definition.

Definition 3 The scale-dependent correlation dimension of a finite set Sn = {x1,...,xn} is

bDcorr(r1, r2) = (logC(r2) − logC(r1)) / (log r2 − log r1).

It is known that Dcorr ≤ Dtop and that Dcorr approximates Dtop well if the data distribution on the manifold is nearly uniform. However, using a non-uniform distribution on the same manifold, the correlation dimension can severely underestimate the topological dimension. To overcome this problem, we turn to the capacity dimension, which is another member of the fractal dimension family. For the formal definition, we need to introduce some more concepts. Given a metric space X with distance metric d(·,·), the r-covering number N(r) of a set S ⊂ X is the minimum number of open balls B(x0, r) = {x ∈ X | d(x0, x) < r} whose union is a covering of S. The following definition is based on the observation that the covering number N(r) of a D-dimensional set is proportional to r^{−D}.

Definition 4 The capacity dimension of a subset S of a metric space X is

Dcap = −lim_{r→0} logN(r) / log r.

The principal advantage of Dcap over Dcorr is that Dcap does not depend on the data distribution on the manifold. Moreover, if both Dcap and Dtop exist (which is certainly the case in machine learning applications), it is known that the two dimensions agree. In spite of that, Dcap is usually discarded in practical approaches due to the high computational cost of its estimation. The main contribution of this paper is an efficient intrinsic dimension estimating method that is based on the capacity dimension. Experiments on both synthetic and real data confirm that our method is much more robust in terms of the data distribution than methods based on the correlation dimension.

3 Algorithm

Finding the covering number even of a finite set of data points is computationally difficult. To tackle this problem, we first redefine Dcap by using packing numbers rather than covering numbers.
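The scale-dependent correlation dimension of Definition 3 can be estimated directly from the data; a brute-force sketch (function names are ours):

```python
import numpy as np

def corr_integral(X, r):
    """Empirical correlation integral Cn(r) of Definition 2: the fraction
    of point pairs closer than r (brute force, O(n^2) memory)."""
    diff = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    n = len(X)
    iu = np.triu_indices(n, k=1)
    return 2.0 * np.sum(diff[iu] < r) / (n * (n - 1))

def corr_dimension(X, r1, r2):
    """Scale-dependent correlation dimension bDcorr(r1, r2), Definition 3."""
    return ((np.log(corr_integral(X, r2)) - np.log(corr_integral(X, r1)))
            / (np.log(r2) - np.log(r1)))

# Uniform points on a circle in R^2: intrinsic dimension 1 at moderate scales.
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, 1000)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)
```

With a non-uniform distribution on the same circle, this estimator would drift below 1, which is the weakness the capacity dimension addresses.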
Given a metric space X with distance metric d(·,·), a set V ⊂ X is said to be r-separated if d(x, y) ≥ r for all distinct x, y ∈ V. The r-packing number M(r) of a set S ⊂ X is defined as the maximum cardinality of an r-separated subset of S. The following proposition follows from the basic inequality between packing and covering numbers, N(r) ≤ M(r) ≤ N(r/2).

Proposition 1 Dcap = −lim_{r→0} logM(r) / log r.

For a finite sample, the zero limit cannot be achieved, so, similarly to the correlation dimension, we need to redefine the capacity dimension in a scale-dependent manner.

Definition 5 The scale-dependent capacity dimension of a finite set Sn = {x1,...,xn} is

bDcap(r1, r2) = −(logM(r2) − logM(r1)) / (log r2 − log r1).

Finding M(r) for a data set Sn = {x1,...,xn} is equivalent to finding the cardinality of a maximum independent vertex set MI(Gr) of the graph Gr(V, E) with vertex set V = Sn and edge set E = {(xi, xj) | d(xi, xj) < r}. This problem is known to be NP-hard. There are results that show that for a general graph, even the approximation of MI(G) within a factor of n^{1−ε}, for any ε > 0, is NP-hard [12]. On the positive side, it was shown that for such geometric graphs as Gr, MI(G) can be approximated arbitrarily well by polynomial time algorithms [13]. However, approximating algorithms of this kind scale exponentially with the data dimension, both in terms of the quality of the approximation and the running time,¹ so they are of little practical use for d > 2. Hence, instead of using one of these algorithms, we apply the following greedy approximation technique. Given a data set Sn, we start with an empty set of centers C, and in an iteration over Sn we add to C data points that are at a distance of at least r from all the centers in C (lines 4 to 10 in Figure 2). The estimate bM(r) is the cardinality of C after every point in Sn has been visited.
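The greedy packing estimate just described can be sketched as follows (names are ours; the full algorithm, with its accuracy-based stopping rule, is given in Figure 2, and here a fixed repeat count stands in for that rule):

```python
import numpy as np

def greedy_packing_number(X, r, rng=None):
    """Greedy bM(r): scan the (optionally shuffled) points, keeping as a
    center every point at distance >= r from all centers kept so far."""
    order = np.arange(len(X))
    if rng is not None:
        rng.shuffle(order)
    centers = []
    for i in order:
        if all(np.linalg.norm(X[i] - X[c]) >= r for c in centers):
            centers.append(i)
    return len(centers)

def packing_dimension(X, r1, r2, repeats=10, seed=0):
    """bDcap estimate of Definition 5, averaging log packing numbers over
    random visiting orders to reduce the order-dependence of the greedy
    estimate."""
    rng = np.random.default_rng(seed)
    log_m = [np.mean([np.log(greedy_packing_number(X, r, rng))
                      for _ in range(repeats)]) for r in (r1, r2)]
    return -(log_m[1] - log_m[0]) / (np.log(r2) - np.log(r1))
```

As argued below, a multiplicative bias in bM(r) cancels in the dimension estimate as long as it does not change with r.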
The procedure is designed to produce an r-packing, but it certainly underestimates the packing number of the manifold: first, because we are using a finite sample, and second, because in general \hat{M}(r) < M(r). Nevertheless, we can still obtain a good estimate of \hat{D}_{cap} by using \hat{M}(r) in place of M(r) in Definition 5. To see why, observe that it is enough to estimate M(r) with a constant multiplicative bias independent of r. Although we have no formal proof that the bias of \hat{M}(r) does not change with r, the simple greedy procedure described above seems to work well in practice. Even though the bias of \hat{M}(r) does not affect the dimension estimate as long as it is constant in r, the variance of \hat{M}(r) can distort it. The main source of this variance is the dependence of \hat{M}(r) on the order in which the data points are visited. To eliminate this variance, we repeat the procedure several times on random permutations of the data, and compute the estimate \hat{D}_{pack} using the average of the logarithms of the packing numbers. The number of repetitions depends on r_1, r_2, and a preset parameter that determines the accuracy of the final estimate (set to 99% in all experiments). The complete algorithm is given formally in Figure 2. The running time of the algorithm is O(n M(r) d), where r = min(r_1, r_2). At smaller scales, where M(r) is comparable with n, this becomes O(n² d). On the other hand, since the variance of the estimate also tends to be smaller at smaller scales, the algorithm needs fewer iterations for the same accuracy.

4 Experiments

The two main objectives of the four experiments described here are to demonstrate the ability of the method to capture the scale-dependent behavior of the intrinsic dimension, and to underline its robustness with respect to the data-generating distribution. In all experiments, the estimate \hat{D}_{pack} is compared to the correlation dimension estimate \hat{D}_{corr}.
Both dimensions are measured on consecutive pairs of a sequence r_1, ..., r_m of resolutions, and the estimate is plotted halfway between the two parameters (i.e., \hat{D}(r_i, r_{i+1}) is plotted at (r_i + r_{i+1})/2). In the first three experiments the manifold is either known or can be approximated easily. In these experiments we use a two-sided multivariate power distribution with density

  p(x) = I_{\{x \in [-1,1]^d\}} \left(\frac{p}{2}\right)^d \prod_{i=1}^{d} \left(1 - |x^{(i)}|\right)^{p-1}    (1)

with different exponents p to generate uniform (p = 1) and non-uniform data sets on the manifold.

¹Typically, the computation of an independent vertex set of G of size at least (1 - 1/k)^d MI(G) requires O(n^{kd}) time.

PACKINGDIMENSION(S_n, r_1, r_2, ε)
 1  for ℓ ← 1 to ∞ do
 2      permute S_n randomly
 3      for k ← 1 to 2 do
 4          C ← ∅
 5          for i ← 1 to n do
 6              for j ← 1 to |C| do
 7                  if d(S_n[i], C[j]) < r_k then
 8                      j ← n + 1
 9              if j < n + 1 then
10                  C ← C ∪ {S_n[i]}
11          \hat{L}_k[ℓ] ← log |C|
12      \hat{D}_{pack} ← -(\mu(\hat{L}_2) - \mu(\hat{L}_1)) / (\log r_2 - \log r_1)
13      if ℓ > 10 and 1.65 \sqrt{\sigma^2(\hat{L}_1) + \sigma^2(\hat{L}_2)} / (\sqrt{ℓ} (\log r_2 - \log r_1)) < \hat{D}_{pack} (1 - ε)/2 then
14          return \hat{D}_{pack}

Figure 2: The algorithm returns the packing dimension estimate \hat{D}_{pack}(r_1, r_2) of a data set S_n with ε accuracy nine times out of ten.

The first synthetic data set is that of Figure 1. We generated 5000 points on a spiral-shaped manifold with a small uniform perpendicular noise. The curves in Figure 3(a) reflect the scale-dependency observed in Figure 1. As the distribution becomes uneven, \hat{D}_{corr} severely underestimates \hat{D}_{top} while \hat{D}_{pack} remains stable.
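A runnable Python sketch of the procedure in Figure 2 might look as follows. For brevity it uses a fixed number of random permutations in place of the paper's adaptive stopping rule, and toy data on a unit segment, so it illustrates the idea rather than reimplementing the algorithm faithfully:

```python
import math, random

def greedy_packing_count(points, r):
    """One greedy pass: size of a maximal r-separated subset."""
    centers = []
    for p in points:
        if all(math.dist(p, c) >= r for c in centers):
            centers.append(p)
    return len(centers)

def packing_dimension(points, r1, r2, n_perm=10, seed=0):
    """Scale-dependent packing dimension estimate D_pack(r1, r2):
    average the log packing numbers over random permutations.
    (A fixed n_perm replaces the adaptive stopping rule.)"""
    rng = random.Random(seed)
    pts = list(points)
    logs1, logs2 = [], []
    for _ in range(n_perm):
        rng.shuffle(pts)
        logs1.append(math.log(greedy_packing_count(pts, r1)))
        logs2.append(math.log(greedy_packing_count(pts, r2)))
    mean = lambda v: sum(v) / len(v)
    return -(mean(logs2) - mean(logs1)) / (math.log(r2) - math.log(r1))

# 500 uniform points on a unit segment embedded in the plane:
# the estimate should come out near the intrinsic dimension 1.
rng = random.Random(1)
segment = [(rng.random(), 0.0) for _ in range(500)]
print(round(packing_dimension(segment, 0.02, 0.08), 2))
```

Averaging the logarithms over permutations is what tames the order-dependence of the greedy pass; a single run can easily be off by several centers at either scale.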
Figure 3: Intrinsic dimension of (a) a spiral-shaped manifold and (b) hypercubes of dimensions d = 2, ..., 6. The curves reflect the scale-dependency observed in Figure 1. The more uneven the distribution, the more \hat{D}_{corr} underestimates \hat{D}_{top}, while \hat{D}_{pack} remains relatively stable.

The second set of experiments was designed to test how well the methods estimate the dimension of 5000 data points generated in hypercubes of dimensions two to six (Figure 3(b)). In general, both \hat{D}_{corr} and \hat{D}_{pack} underestimate \hat{D}_{top}. The negative bias grows with the dimension, probably because data sets of equal cardinality become sparser in a higher-dimensional space. To compensate for this bias on a general data set, Camastra and Vinciarelli [10] propose to correct the estimate by the bias observed on a uniformly generated data set of the same cardinality. Our experiment shows that, in the case of \hat{D}_{corr}, this calibrating procedure can fail if the distribution is highly non-uniform. The technique seems more reliable for \hat{D}_{pack} due to its relative stability. We also tested the methods on two sets of image data. Both sets contained 64×64 images with 256 gray levels. The images were normalized so that the distance between a black and a white image is 1. The first set is a sequence of 481 snapshots of a hand turning a cup, from the CMU database² (Figure 4(a)).
The sequence of images sweeps out a curve in a 4096-dimensional space, so its informal intrinsic dimension is one. Figure 5(a) shows that at a small scale, both methods find a local dimension between 1 and 2. At a slightly larger scale the intrinsic dimension increases, indicating a relatively high curvature of the image-sequence curve. To test the distribution dependence of the estimates, we constructed a polygonal curve by connecting consecutive points of the sequence, and resampled 481 points using the power distribution (1) with p = 2, 3. We also constructed a highly uniform, lattice-like data set by drawing approximately equidistant consecutive points from the polygonal curve. Our results in Figure 5(a) confirm again that \hat{D}_{corr} varies extensively with the generating distribution on the manifold while \hat{D}_{pack} remains remarkably stable.

Figure 4: The real data sets. (a) Sequence of snapshots of a hand turning a cup. (b) Faces database from ISOMAP [4].

The final experiment was conducted on the "faces" database from the ISOMAP paper [4] (Figure 4(b)). The data set contains 698 images of faces generated using three free parameters: vertical and horizontal orientation, and light direction. Figure 5(b) indicates that both estimates are reasonably close to the informal intrinsic dimension.

Figure 5: The intrinsic dimension of the image data sets: (a) turning cup; (b) ISOMAP faces.

We found in all experiments that at a very small scale \hat{D}_{corr} tends to be higher than \hat{D}_{pack}, while \hat{D}_{pack} tends to be more stable as the scale grows.

²http://vasc.ri.cmu.edu/idb/html/motion/hand/index.html
Hence, if the data contains very little noise and is generated uniformly on the manifold, \hat{D}_{corr} seems to be closer to the "real" intrinsic dimension. On the other hand, if the data contains noise (in which case, at a very small scale, we are estimating the dimension of the noise rather than that of the manifold), or the distribution on the manifold is non-uniform, \hat{D}_{pack} seems more reliable than \hat{D}_{corr}.

5 Conclusion

We have presented a new algorithm to estimate the intrinsic dimension of data sets. The method estimates the packing dimension of the data and requires neither parametric assumptions on the data-generating model nor input parameters to set. The method is compared to a widely used technique based on the correlation dimension. Experiments show that our method is more robust with respect to the data-generating distribution and more reliable in the presence of noise.

References

[1] T. Kohonen, The Self-Organizing Map, Springer-Verlag, 2nd edition, 1997.
[2] T. F. Cox and M. A. Cox, Multidimensional Scaling, Chapman & Hall, 1994.
[3] S. Roweis and L. K. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, pp. 2323–2326, 2000.
[4] J. B. Tenenbaum, V. de Silva, and J. C. Langford, "A global geometric framework for nonlinear dimensionality reduction," Science, vol. 290, pp. 2319–2323, 2000.
[5] E. Chávez, G. Navarro, R. Baeza-Yates, and J. Marroquín, "Searching in metric spaces," ACM Computing Surveys, to appear, 2001.
[6] J. Bruske and G. Sommer, "Intrinsic dimensionality estimation with optimally topology preserving maps," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 5, pp. 572–575, 1998.
[7] S. Roweis, "EM algorithms for PCA and SPCA," in Advances in Neural Information Processing Systems, vol. 10, pp. 626–632, The MIT Press, 1998.
[8] C. M. Bishop, M. Svensén, and C. K. I. Williams, "GTM: The generative topographic mapping," Neural Computation, vol. 10, no. 1, pp.
215–235, 1998.
[9] P. Grassberger and I. Procaccia, "Measuring the strangeness of strange attractors," Physica D, vol. 9, pp. 189–208, 1983.
[10] F. Camastra and A. Vinciarelli, "Estimating the intrinsic dimension of data with a fractal-based approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002, to appear.
[11] A. Belussi and C. Faloutsos, "Spatial join selectivity estimation using fractal concepts," ACM Transactions on Information Systems, vol. 16, no. 2, pp. 161–201, 1998.
[12] J. Håstad, "Clique is hard to approximate within n^{1-ε}," in Proceedings of the 37th Annual Symposium on Foundations of Computer Science (FOCS'96), pp. 627–636, 1996.
[13] T. Erlebach, K. Jansen, and E. Seidel, "Polynomial-time approximation schemes for geometric graphs," in Proceedings of the 12th ACM-SIAM Symposium on Discrete Algorithms (SODA'01), pp. 671–679, 2001.
Discriminative Learning for Label Sequences via Boosting Yasemin Altun, Thomas Hofmann and Mark Johnson* Department of Computer Science *Department of Cognitive and Linguistic Sciences Brown University, Providence, RI 02912 {altun,th}@cs.brown.edu, Mark_Johnson@brown.edu

Abstract

This paper investigates a boosting approach to discriminative learning of label sequences based on a sequence rank loss function. The proposed method combines many of the advantages of boosting schemes with the efficiency of dynamic programming methods and is attractive both conceptually and computationally. In addition, we also discuss alternative approaches based on the Hamming loss for label sequences. The sequence boosting algorithm offers an interesting alternative to methods based on HMMs and the more recently proposed Conditional Random Fields. Application areas for the presented technique range from natural language processing and information extraction to computational biology. We include experiments on named entity recognition and part-of-speech tagging which demonstrate the validity and competitiveness of our approach.

1 Introduction

The problem of annotating or segmenting observation sequences arises in many applications across a variety of scientific disciplines, most prominently in natural language processing, speech recognition, and computational biology. Well-known applications include part-of-speech (POS) tagging, named entity classification, information extraction, text segmentation and phoneme classification in text and speech processing [7], as well as problems like protein homology detection, secondary structure prediction or gene classification in computational biology [3]. Up to now, the predominant formalism for modeling and predicting label sequences has been based on Hidden Markov Models (HMMs) and variations thereof.
Yet, despite their success, generative probabilistic models (of which HMMs are a special case) have two major shortcomings, which this paper is not the first to point out. First, generative probabilistic models are typically trained using maximum likelihood estimation (MLE) for a joint sampling model of observation and label sequences. As has been emphasized frequently, MLE based on the joint probability model is inherently non-discriminative and thus may lead to suboptimal prediction accuracy. Secondly, efficient inference and learning in this setting often requires making questionable conditional independence assumptions. More precisely, in the case of HMMs, it is assumed that the Markov blanket of the hidden label variable at time step t consists of the previous and next labels as well as the t-th observation. This implies that all dependencies on past and future observations are mediated through neighboring labels. In this paper, we investigate the use of discriminative learning methods for learning label sequences. This line of research continues previous approaches for learning conditional models, namely Conditional Random Fields (CRFs) [6], and discriminative re-ranking [1, 2]. CRFs have two main advantages compared to HMMs: they are trained discriminatively by maximizing a conditional (or pseudo-) likelihood criterion, and they are more flexible in modeling additional dependencies such as direct dependencies of the t-th label on past or future observations. However, we strongly believe there are two further lines of research that are worth pursuing and may offer additional benefits or improvements. First of all, and this is the main emphasis of this paper, an exponential loss function such as the one used in boosting algorithms [9, 4] may be preferable to the logarithmic loss function used in CRFs.
In particular we will present a boosting algorithm that has the additional advantage of performing implicit feature selection, typically resulting in very sparse models. This is important for model regularization as well as for reasons of efficiency in high-dimensional feature spaces. Secondly, we will also discuss the use of loss functions that explicitly minimize the zero-one loss on individual labels, i.e. the Hamming loss, as an alternative to loss functions based on ranking or predicting entire label sequences.

2 Additive Models and Exponential Families

Formally, learning label sequences is a generalization of the standard supervised classification problem. The goal is to learn a discriminant function for sequences, i.e. a mapping from observation sequences X = (x_1, x_2, ..., x_t, ...) to label sequences Y = (y_1, y_2, ..., y_t, ...). The availability of a training set of labeled sequences \mathcal{X} = {(X^i, Y^i) : i = 1, ..., n} to learn this mapping from data is assumed. In this paper, we focus on discriminant functions that can be written as additive models. The models under consideration take the following general form:

  F_\theta(X, Y) = \sum_t F_\theta(X, Y; t),  with  F_\theta(X, Y; t) = \sum_k \theta_k f_k(X, Y; t).    (1)

Here f_k denotes a (discrete) feature in the language of maximum entropy modeling, or a weak learner in the language of boosting. In the context of label sequences, f_k will typically be either of the form f_k^{(1)}(x_{t+s}, y_t) (with s ∈ {-1, 0, 1}) or f_k^{(2)}(y_{t-1}, y_t). The first type of feature models dependencies between the observation sequence X and the t-th label in the sequence, while the second type models inter-label dependencies between neighboring label variables. For ease of presentation, we will assume that all features are binary, i.e. each weak learner corresponds to an indicator function. A typical way of defining a set of weak learners is as follows:

  f_k^{(1)}(x_{t+s}, y_t) = \delta(y_t, y(k)) \chi_k(x_{t+s}),    (2)
  f_k^{(2)}(y_{t-1}, y_t) = \delta(y_t, y(k)) \delta(y_{t-1}, \bar{y}(k)),    (3)
where \delta denotes the Kronecker delta and \chi_k is a binary feature function that extracts a feature from an observation pattern; y(k) and \bar{y}(k) refer to the label values for which the weak learner becomes "active". There is a natural way to associate a conditional probability distribution over label sequences Y with an additive model F_\theta, by defining an exponential family for every fixed observation sequence X:

  P_\theta(Y | X) = \frac{\exp[F_\theta(X, Y)]}{Z_\theta(X)},  Z_\theta(X) = \sum_Y \exp[F_\theta(X, Y)].    (4)

This distribution is in exponential normal form, and the parameters \theta are also called natural or canonical parameters. By performing the sum over the sequence index t, we can see that the corresponding sufficient statistics are given by S_k(X, Y) = \sum_t f_k(X, Y; t). These sufficient statistics simply count the number of times the feature f_k has been "active" along the labeled sequence (X, Y).

3 Logarithmic Loss and Conditional Random Fields

In CRFs, the log-loss of the model with parameters \theta w.r.t. a set of sequences \mathcal{X} is defined as the negative sum of the log-conditional probabilities of the training label sequences given the observation sequences,

  \mathcal{H}_{log}(\theta; \mathcal{X}) = -\sum_i \log P_\theta(Y^i | X^i).    (5)

Although [6] has proposed a modification of improved iterative scaling for parameter estimation in CRFs, gradient-based methods such as conjugate gradient descent have often been found to be more efficient for minimizing the convex loss function in Eq. (5) (cf. [8]). The gradient can be readily computed as

  \nabla_\theta \mathcal{H}_{log} = \sum_i \left( E[S(X, Y) | X = X^i] - S(X^i, Y^i) \right),    (6)

where expectations are taken w.r.t. P_\theta(Y | X). The stationarity equations then simply state that, uniformly averaged over the training data, the observed sufficient statistics should match their conditional expectations. Computationally, the evaluation of S(X^i, Y^i) is straightforward counting, while the sum over all sequences Y needed to compute E[S(X, Y) | X = X^i] can be performed using dynamic programming, since the dependency structure between labels is a simple chain.
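The chain structure is what makes the sum over all Y tractable. The following toy Python sketch compares a brute-force computation of Z_\theta(X) with a forward (dynamic programming) recursion on a hypothetical chain model with node scores g and edge scores h (the score tables are illustrative, not from the paper); the two agree up to floating-point error:

```python
import itertools, math

LABELS = [0, 1, 2]
# Hypothetical node scores g(x, y) and edge scores h(y_prev, y);
# any small tables serve to illustrate the point.
g = {(x, y): 0.3 * x * y for x in (0, 1) for y in LABELS}
h = {(a, b): (0.5 if a == b else -0.2) for a in LABELS for b in LABELS}

def score(xs, ys):
    """Additive chain score: sum_t g(x_t, y_t) + sum_t h(y_{t-1}, y_t)."""
    return (sum(g[(x, y)] for x, y in zip(xs, ys))
            + sum(h[(a, b)] for a, b in zip(ys, ys[1:])))

def z_brute(xs):
    """Partition function by summing over all |LABELS|^T sequences."""
    return sum(math.exp(score(xs, ys))
               for ys in itertools.product(LABELS, repeat=len(xs)))

def z_forward(xs):
    """Same quantity via the forward recursion in O(T * |LABELS|^2)."""
    alpha = {y: math.exp(g[(xs[0], y)]) for y in LABELS}
    for x in xs[1:]:
        alpha = {y: sum(alpha[p] * math.exp(h[(p, y)] + g[(x, y)])
                        for p in LABELS)
                 for y in LABELS}
    return sum(alpha.values())

xs = (0, 1, 1, 0, 1)
print(math.isclose(z_brute(xs), z_forward(xs), rel_tol=1e-9))  # True
```

The brute-force sum grows as 3^T while the forward pass is linear in T, which is why the expectations in Eq. (6) remain cheap even for long sequences.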
4 Ranking Loss Functions for Label Sequences

As an alternative to logarithmic loss functions, we propose to minimize an upper bound on the ranking loss [9], adapted to label sequences. The ranking loss of a discriminant function F_\theta w.r.t. a set of training sequences is defined as

  \mathcal{H}_{rnk}(\theta; \mathcal{X}) = \sum_i \sum_{Y \neq Y^i} \Theta\left( F_\theta(X^i, Y) - F_\theta(X^i, Y^i) \right),  \Theta(x) = 1 if x \geq 0, 0 otherwise,    (7)

which is simply the number of label sequences that are ranked higher than or equal to the true label sequence, summed over all training sequences. It is straightforward to see (by a term-by-term comparison) that an upper bound on the rank loss is given by the following exponential loss function:

  \mathcal{H}_{exp}(\theta; \mathcal{X}) = \sum_i \sum_{Y \neq Y^i} \exp\left[ F_\theta(X^i, Y) - F_\theta(X^i, Y^i) \right] = \sum_i \left[ \frac{1}{P_\theta(Y^i | X^i)} - 1 \right].    (8)

Interestingly, this simply leads to a loss function that uses the inverse conditional probability of the true label sequence, if we define this probability via the exponential form in Eq. (4). Notice that, compared to [1], we include all sequences and not just a top-N list generated by some external mechanism. As we will show shortly, an explicit summation is possible because dynamic programming allows sums over all sequences to be computed efficiently. In order to derive gradient equations for the exponential loss, we can simply make use of the elementary facts

  \nabla_\theta(-\log P(\theta)) = -\frac{\nabla_\theta P(\theta)}{P(\theta)},  and  \nabla_\theta P(\theta)^{-1} = -\frac{\nabla_\theta P(\theta)}{P(\theta)^2} = \frac{1}{P(\theta)} \nabla_\theta(-\log P(\theta)).    (9)

Then it is easy to see that

  \nabla_\theta \mathcal{H}_{exp} = \sum_i \frac{1}{P_\theta(Y^i | X^i)} \left( E[S(X, Y) | X = X^i] - S(X^i, Y^i) \right).    (10)

The only difference between Eq. (6) and Eq. (10) is the non-uniform weighting of different sequences by their inverse probability, hence putting more emphasis on training label sequences that receive a small overall (conditional) probability.

5 Boosting Algorithm for Label Sequences

As an alternative to a simple gradient method, we now turn to the derivation of a boosting algorithm, following the boosting formulation presented in [9]. Let us introduce a relative weight (or distribution) D(i, Y) for each label sequence Y w.r.t.
a training instance (X^i, Y^i), normalized so that \sum_i \sum_Y D(i, Y) = 1:

  D(i, Y) = \frac{\exp[ F_\theta(X^i, Y) - F_\theta(X^i, Y^i) ]}{\sum_j \sum_{Y' \neq Y^j} \exp[ F_\theta(X^j, Y') - F_\theta(X^j, Y^j) ]}  for Y \neq Y^i,    (11)

  D(i, Y) = D(i) \cdot \frac{P_\theta(Y | X^i)}{1 - P_\theta(Y^i | X^i)},  D(i) = \frac{P_\theta(Y^i | X^i)^{-1} - 1}{\sum_j \left[ P_\theta(Y^j | X^j)^{-1} - 1 \right]}.    (12)

In addition, we define D(i, Y^i) = 0. Eq. (12) shows how we can split D(i, Y) into a relative weight for each training instance, given by D(i), and a relative weight for each sequence, given by the re-normalized conditional probability P_\theta(Y | X^i). Notice that D(i) → 0 as we approach the perfect-prediction case of P_\theta(Y^i | X^i) → 1. We define a boosting algorithm which in each round aims at minimizing the partition function, or weight normalization constant, Z_k w.r.t. a weak learner f_k and a corresponding optimal parameter increment \Delta\theta_k:

  Z_k(\Delta\theta_k) = \sum_i D(i) \sum_{Y \neq Y^i} \frac{P_\theta(Y | X^i)}{1 - P_\theta(Y^i | X^i)} \exp\left[ \Delta\theta_k \left( S_k(X^i, Y) - S_k(X^i, Y^i) \right) \right]    (13)

  = \sum_b \left( \sum_i D(i) P_\theta(b | X^i; k) \right) \exp[ b \Delta\theta_k ],    (14)

where P_\theta(b | X^i; k) = \sum_{Y \in \mathcal{Y}(b; X^i)} P_\theta(Y | X^i) / (1 - P_\theta(Y^i | X^i)) and \mathcal{Y}(b; X^i) = { Y : Y \neq Y^i \wedge S_k(X^i, Y) - S_k(X^i, Y^i) = b }. This minimization problem is only tractable if the number of features is small, since a dynamic programming run with accumulators [6] for every feature seems to be required in order to compute the probabilities P_\theta(b | X^i; k), i.e. the probability that the k-th feature is active exactly b times, conditioned on the observation sequence X^i. In cases where this is intractable (and we assume this will be the case in most applications), one can instead minimize an upper bound on every Z_k. The general idea is to exploit the convexity of the exponential function and to use the bound

  \exp[\Delta\theta x] \leq \frac{x_{max} - x}{x_{max} - x_{min}} \exp[\Delta\theta x_{min}] + \frac{x - x_{min}}{x_{max} - x_{min}} \exp[\Delta\theta x_{max}],    (15)

which is valid for every x ∈ [x_{min}, x_{max}].
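The inverse-probability identity that drives both Eq. (8) and the instance weights D(i) in Eq. (12) can be verified numerically by brute force on a toy additive model (the score function and weights below are hypothetical, chosen only for illustration):

```python
import itertools, math

LABELS = (0, 1)

def F(xs, ys):
    """Toy additive sequence score with hypothetical feature weights."""
    node = sum(0.7 * y * (x + 1) for x, y in zip(xs, ys))
    edge = sum(0.3 if a == b else -0.1 for a, b in zip(ys, ys[1:]))
    return node + edge

xs, y_true = (0, 1, 1, 0), (0, 1, 1, 0)
all_y = list(itertools.product(LABELS, repeat=len(xs)))
Z = sum(math.exp(F(xs, y)) for y in all_y)
p_true = math.exp(F(xs, y_true)) / Z

# Sum over competing sequences, as in Eq. (8), versus 1/P - 1.
lhs = sum(math.exp(F(xs, y) - F(xs, y_true)) for y in all_y if y != y_true)
rhs = 1.0 / p_true - 1.0
print(math.isclose(lhs, rhs))  # True
```

The check is just the algebra sum_{Y != Y*} e^{F(Y) - F(Y*)} = (Z - e^{F(Y*)}) / e^{F(Y*)} = 1/P - 1, made concrete; in practice this sum is of course computed by dynamic programming rather than enumeration.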
We introduce the shorthand notation u_{ik}(Y) = S_k(X^i, Y) - S_k(X^i, Y^i), u_{ik}^{max} = \max_{Y \neq Y^i} u_{ik}(Y), u_k^{max} = \max_i u_{ik}^{max}, u_{ik}^{min} = \min_{Y \neq Y^i} u_{ik}(Y), u_k^{min} = \min_i u_{ik}^{min}, and \pi_i(Y) = P_\theta(Y | X^i) / (1 - P_\theta(Y^i | X^i)), which allows us to rewrite

  Z_k(\Delta\theta_k) = \sum_i D(i) \sum_{Y \neq Y^i} \pi_i(Y) \exp[ \Delta\theta_k u_{ik}(Y) ]    (16)

  \leq \sum_i D(i) \sum_{Y \neq Y^i} \pi_i(Y) \left[ \frac{u_{ik}^{max} - u_{ik}(Y)}{u_{ik}^{max} - u_{ik}^{min}} e^{\Delta\theta_k u_{ik}^{min}} + \frac{u_{ik}(Y) - u_{ik}^{min}}{u_{ik}^{max} - u_{ik}^{min}} e^{\Delta\theta_k u_{ik}^{max}} \right]

  = \sum_i D(i) \left( r_{ik} e^{\Delta\theta_k u_{ik}^{min}} + (1 - r_{ik}) e^{\Delta\theta_k u_{ik}^{max}} \right),    (17)

where

  r_{ik} = \sum_{Y \neq Y^i} \pi_i(Y) \frac{u_{ik}^{max} - u_{ik}(Y)}{u_{ik}^{max} - u_{ik}^{min}}.    (18)

By taking the second derivative w.r.t. \Delta\theta_k, it is easy to verify that this is a convex function of \Delta\theta_k, which can be minimized with a simple line search. If one is willing to accept a looser bound, one can instead work with the interval [u_k^{min}, u_k^{max}], which covers the intervals [u_{ik}^{min}, u_{ik}^{max}] of all training sequences i, and obtain the upper bound

  Z_k(\Delta\theta_k) \leq r_k e^{\Delta\theta_k u_k^{min}} + (1 - r_k) e^{\Delta\theta_k u_k^{max}},    (19)

  r_k = \sum_i D(i) \sum_{Y \neq Y^i} \pi_i(Y) \frac{u_k^{max} - u_{ik}(Y)}{u_k^{max} - u_k^{min}},    (20)

which can be minimized analytically,

  \Delta\theta_k = \frac{1}{u_k^{max} - u_k^{min}} \log\left( \frac{-r_k u_k^{min}}{(1 - r_k) u_k^{max}} \right),    (21)

but will in general lead to more conservative step sizes.

The final boosting procedure picks in every round the feature for which the upper bound on Z_k is minimal, and then performs the update \theta_k ← \theta_k + \Delta\theta_k. Of course, one might also use more elaborate techniques to find the optimal \Delta\theta_k once f_k has been selected, since the upper-bound approximation may underestimate the optimal step size. It is important to see that the quantities involved (r_{ik} and r_k, respectively) are simple expectations of sufficient statistics that can be computed for all features simultaneously with a single dynamic programming run per sequence.

6 Hamming Loss for Label Sequences

In many applications one is primarily interested in the label-by-label loss, or Hamming loss [9]. Here we investigate how to train models by minimizing an upper bound on the Hamming loss.
The following logarithmic loss aims at maximizing the log-probability of each individual label and is given by

  \mathcal{F}_{log}(\theta; \mathcal{X}) = -\sum_i \sum_t \log P_\theta(y_t^i | X^i) = -\sum_i \sum_t \log \sum_{Y : y_t = y_t^i} P_\theta(Y | X^i).    (22)

Again focusing on gradient descent methods, the gradient is given by

  \nabla_\theta \mathcal{F}_{log} = \sum_i \sum_t \left( E[S(X, Y) | X = X^i] - E[S(X, Y) | X = X^i, y_t = y_t^i] \right).    (23)

As can be seen, the expected sufficient statistics are now compared not to their empirical values, but to their expected values conditioned on a given label value y_t^i (and not on the entire sequence Y^i). In order to evaluate these expectations, one can perform dynamic programming using the algorithm described in [5], which has (independently of our work) focused on the use of Hamming loss functions in the context of CRFs. This algorithm has the complexity of the forward-backward algorithm scaled by a constant. Similar to the log-loss case, one can define an exponential loss function that corresponds to a margin-like quantity at every single label. We propose minimizing the following loss function:

  \mathcal{F}_{exp}(\theta; \mathcal{X}) = \sum_i \sum_t \exp\left[ -F_\theta(X^i, y_t^i; t) + \log \sum_{Y' : y'_t \neq y_t^i} \exp[ F_\theta(X^i, Y') ] \right]    (24)

  = \sum_i \sum_t \frac{\sum_{Y : y_t \neq y_t^i} \exp[ F_\theta(X^i, Y) ]}{\sum_{Y : y_t = y_t^i} \exp[ F_\theta(X^i, Y) ]} = \sum_i \sum_t \left[ \frac{1}{P_\theta(y_t^i | X^i)} - 1 \right].    (25)

As a motivation, we point out that for sequences of length 1 this reduces to the standard multi-class exponential loss. Effectively, in this model the prediction of a label y_t mimics probabilistic marginalization, i.e. y_t^* = \arg\max_y F_\theta(X^i, y; t), with F_\theta(X^i, y; t) = \log \sum_{Y : y_t = y} \exp[ F_\theta(X^i, Y) ]. Similar to the log-loss case, the gradient is given by

  \nabla_\theta \mathcal{F}_{exp} = \sum_i \sum_t \frac{1}{P_\theta(y_t^i | X^i)} \left( E[S(X, Y) | X = X^i] - E[S(X, Y) | X = X^i, y_t = y_t^i] \right).    (26)

Again, we see the same difference between the log-loss and the exponential loss, but this time for individual labels: labels for which the marginal probability P_\theta(y_t^i | X^i) is small are accentuated in the exponential loss. The computational complexity of computing \nabla_\theta \mathcal{F}_{exp} and \nabla_\theta \mathcal{F}_{log} is practically the same.
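On a toy chain model, the per-label marginals P_\theta(y_t | X) in Eq. (25) can be computed by brute-force marginalization, which also makes the nonnegativity of the per-label exponential loss terms 1/P_\theta(y_t^i | X^i) - 1 concrete (the scores below are hypothetical; a real implementation would use forward-backward instead of enumeration):

```python
import itertools, math

LABELS = (0, 1)

def F(xs, ys):
    """Toy additive sequence score (hypothetical weights)."""
    return (sum(0.5 * y * (x + 1) for x, y in zip(xs, ys))
            + sum(0.4 for a, b in zip(ys, ys[1:]) if a == b))

xs, y_true = (1, 0, 1), (1, 0, 1)
all_y = list(itertools.product(LABELS, repeat=len(xs)))
Z = sum(math.exp(F(xs, y)) for y in all_y)

def marginal(t, label):
    """P(y_t = label | X), here by brute-force marginalization."""
    return sum(math.exp(F(xs, y)) for y in all_y if y[t] == label) / Z

# The marginals at each position sum to one, and each term of the
# per-label exponential loss of Eq. (25) is strictly positive.
print(math.isclose(marginal(0, 0) + marginal(0, 1), 1.0))  # True
loss = sum(1.0 / marginal(t, y_true[t]) - 1.0 for t in range(len(xs)))
print(loss > 0)  # True
```

Since every label sequence has positive probability under the exponential family, each marginal is strictly below one, so each loss term 1/P - 1 is strictly positive.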
We have not been able to derive a boosting formulation for this loss function, mainly because it cannot be written as a sum of exponential terms. We have thus resorted to conjugate gradient descent methods for minimizing \mathcal{F}_{exp} in our experiments.

7 Experimental Results

7.1 Named Entity Recognition

Named Entity Recognition (NER), a subtask of Information Extraction, is the task of finding the phrases that contain person, location and organization names, times and quantities. Each word is tagged with the type of the name as well as its position in the name phrase (i.e. whether it is the first item of the phrase or not) in order to represent the boundary information. We used a Spanish corpus which was provided for the Special Session of CoNLL-2002 on NER. The data is a collection of newswire articles and is tagged for person names, organizations, locations and miscellaneous names. We used simple binary features to ask questions about the word being tagged, as well as the previous tag (i.e. HMM features). An example feature would be: Is the current word 'Clinton' and the tag 'Person-Beginning'? We also used features asking detailed questions (i.e. spelling features) about the current word (e.g.: Is the current word capitalized and the tag 'Location-Intermediate'?) and about the neighboring words. These questions cannot be asked (in a principled way) in a generative HMM model. We ran experiments comparing the different loss functions, optimized with the conjugate gradient method and with the boosting algorithm. We designed three sets of features: HMM features (= S1), S1 plus detailed features of the current word (= S2), and S2 plus detailed features of the neighboring words (= S3). The results summarized in Table 1 demonstrate the competitiveness of the proposed loss functions with respect to \mathcal{H}_{log}. We observe that with different sets of features, the ordering of the performance of the loss functions changes.
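Features of this kind are just binary indicator functions on (word, tag) configurations. A hypothetical Python sketch (the tag names and predicates are illustrative, not the paper's actual feature templates):

```python
def word_tag_feature(word, tag):
    """HMM-style binary feature: fires when both the current word and
    the current tag match fixed values."""
    return lambda w, t: 1 if (w == word and t == tag) else 0

def capitalized_tag_feature(tag):
    """Spelling feature: fires when the current word is capitalized
    and the tag matches a fixed value."""
    return lambda w, t: 1 if (w[:1].isupper() and t == tag) else 0

f1 = word_tag_feature("Clinton", "Person-Beginning")
f2 = capitalized_tag_feature("Location-Intermediate")
print(f1("Clinton", "Person-Beginning"), f2("Madrid", "Location-Intermediate"))  # 1 1
print(f1("clinton", "Person-Beginning"), f2("madrid", "Location-Intermediate"))  # 0 0
```

Spelling features like the second one share evidence across words never seen in training, which is exactly the kind of dependency a generative HMM cannot condition on cleanly.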
Boosting performs worse than the conjugate gradient when only HMM features are used, since there is not much information in the features other than the identity of the word to be labeled. Consequently, the boosting algorithm needs to include almost all weak learners in the ensemble and cannot exploit feature sparseness.

Table 1: Test error on the Spanish corpus for named entity recognition. Objectives: H = sequence-level loss (\mathcal{H}), F = label-by-label loss (\mathcal{F}).

  Feature set   Objective   log    exp    boost
  S1            H           6.60   6.95   8.05
                F           6.73   7.33   -
  S2            H           6.72   7.03   6.93
                F           6.67   7.49   -
  S3            H           6.15   5.84   6.77
                F           5.90   5.10   -

When there are more detailed features, the boosting algorithm is competitive with the conjugate gradient method, but has the advantage of generating sparser models. The conjugate gradient method uses all of the available features, whereas boosting uses only about 10% of them.

7.2 Part-of-Speech Tagging

We used the Penn TreeBank corpus for the part-of-speech tagging experiments. The features were similar to the feature sets S1 and S2 described above in the context of NER. Table 2 summarizes the experimental results obtained on this task. It can be seen that the test errors obtained with the different loss functions lie within a relatively small range. Qualitatively, the behavior of the different optimization methods is comparable to the NER experiments.

Table 2: Test error on the Penn TreeBank corpus for POS tagging.

  Feature set   Objective   log    exp    boost
  S1            H           4.69   5.04   10.58
                F           4.88   4.96   -
  S2            H           4.37   4.74   5.09
                F           4.71   4.90   -

7.3 General Comments

Even with the tighter bound in the boosting formulation, the same features are selected many times, because of the conservative estimate of the step size for parameter updates. We expect to speed up the convergence of the boosting algorithm by using a more sophisticated line search mechanism to compute the optimal step length, a conjecture that will be addressed in future work.
Although we did not use real-valued features in our experiments, we observed that including real-valued features in a conjugate gradient formulation is a challenge, whereas it is very natural to have such features in a boosting algorithm. We noticed in our experiments that defining a distribution over the training instances using the inverse conditional probability creates problems in the boosting formulation for data sets that are highly unbalanced in terms of the length of the training sequences. To overcome this problem, we divided the sentences into pieces such that the variation in the length of the sentences is small. The conjugate gradient optimization, on the other hand, did not appear to suffer from this problem.

8 Conclusion and Future Work

This paper makes two contributions to the problem of learning label sequences. First, we have presented an efficient algorithm for discriminative learning of label sequences that combines boosting with dynamic programming. The algorithm compares favorably with the best previous approach, Conditional Random Fields, and offers additional benefits such as model sparseness. Secondly, we have discussed the use of methods that optimize a label-by-label loss and have shown that these methods bear promise for further improving classification accuracy. Our future work will investigate the performance (in both accuracy and computational expense) of the different loss functions in different conditions (e.g. noise level, size of the feature set).

Acknowledgments

This work was sponsored by an NSF-ITR grant, award number IIS-0085940.

References

[1] M. Collins. Discriminative reranking for natural language parsing. In Proceedings 17th International Conference on Machine Learning, pages 175-182. Morgan Kaufmann, San Francisco, CA, 2000.
[2] M. Collins. Ranking algorithms for named-entity extraction: Boosting and the voted perceptron.
In Proceedings 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 489-496, 2002.
[3] R. Durbin, S. Eddy, A. Krogh, and G. Mitchison. Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. Cambridge University Press, 1998.
[4] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Statistics, 28:337-374, 2000.
[5] S. Kakade, Y. W. Teh, and S. Roweis. An alternative objective function for Markovian fields. In Proceedings 19th International Conference on Machine Learning, 2002.
[6] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. 18th International Conf. on Machine Learning, pages 282-289. Morgan Kaufmann, San Francisco, CA, 2001.
[7] C. Manning and H. Schütze. Foundations of Statistical Natural Language Processing. MIT Press, 1999.
[8] T. Minka. Algorithms for maximum-likelihood logistic regression. Technical report, CMU, Department of Statistics, TR 758, 2001.
[9] R. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):297-336, 1999.
|
2002
|
200
|
2,215
|
Reinforcement Learning to Play an Optimal Nash Equilibrium in Team Markov Games Xiaofeng Wang ECE Department Carnegie Mellon University Pittsburgh, PA 15213 xiaofeng@andrew.cmu.edu Tuomas Sandholm CS Department Carnegie Mellon University Pittsburgh, PA 15213 sandholm@cs.cmu.edu Abstract Multiagent learning is a key problem in AI. In the presence of multiple Nash equilibria, even agents with non-conflicting interests may not be able to learn an optimal coordination policy. The problem is exacerbated if the agents do not know the game and independently receive noisy payoffs. So, multiagent reinforcement learning involves two interrelated problems: identifying the game and learning to play. In this paper, we present optimal adaptive learning, the first algorithm that converges to an optimal Nash equilibrium with probability 1 in any team Markov game. We provide a convergence proof, and show that the algorithm's parameters are easy to set to meet the convergence conditions. 1 Introduction Multiagent learning is a key problem in AI. For a decade, computer scientists have worked on extending reinforcement learning (RL) to multiagent settings [11, 15, 5, 17]. Markov games (aka. stochastic games) [16] have emerged as the prevalent model of multiagent RL. An approach called Nash-Q [9, 6, 8] has been proposed for learning the game structure and the agents' strategies (to a fixed point called Nash equilibrium, where no agent can improve its expected payoff by deviating to a different strategy). Nash-Q converges if a unique Nash equilibrium exists, but generally there are multiple Nash equilibria. Even team Markov games (where the agents have common interests) can have multiple Nash equilibria, only some of which are optimal (that is, maximize the sum of the agents' discounted payoffs). Therefore, learning in this setting is highly nontrivial. A straightforward solution to this problem is to enforce a convention (social law).
Boutilier proposed a tie-breaking scheme where agents choose individual actions in lexicographic order [1]. However, there are many settings where the designer is unable or unwilling to impose a convention. In these cases, agents need to learn to coordinate. Claus and Boutilier introduced fictitious play, an equilibrium selection technique from game theory, to RL. Their algorithm, joint action learner (JAL) [2], guarantees convergence to a Nash equilibrium in a team stage game. However, this equilibrium may not be optimal. The same problem prevails in other equilibrium-selection approaches in game theory such as adaptive play [18] and the evolutionary model proposed in [7]. In RL, the agents usually do not know the environmental model (game) up front and receive noisy payoffs. In this case, even the lexicographic approaches may not work because agents receive noisy payoffs independently and thus may never perceive a tie. Another significant problem in previous research is how a nonstationary exploration policy (required by RL) affects the convergence of equilibrium selection approaches, which have been studied under the assumption that agents either always take the best-response actions or make mistakes at a constant rate. In RL, learning to play an optimal Nash equilibrium in team Markov games has been posed as one of the important open problems [9]. While there have been heuristic approaches to this problem, no existing algorithm has been proposed that is guaranteed to converge to an optimal Nash equilibrium in this setting. In this paper, we present optimal adaptive learning (OAL), the first algorithm that converges to an optimal Nash equilibrium with probability 1 in any team Markov game (Section 3). We prove its convergence, and show that OAL's parameters are easy to set to meet the convergence conditions (Section 4). 2 The setting 2.1 MDPs and reinforcement learning (RL) In a Markov decision problem, there is one agent in the environment.
A fully observable Markov decision problem (MDP) is a tuple $\langle S, A, R, P \rangle$, where $S$ is a finite state space; $A$ is the space of actions the agent can take; $R: S \times A \to \mathbb{R}$ is a payoff function ($R(s, a)$ is the expected payoff for taking action $a$ in state $s$); and $P: S \times A \times S \to [0, 1]$ is a transition function ($P(s, a, s')$ is the probability of ending in state $s'$, given that action $a$ is taken in state $s$). An agent's deterministic policy (aka. strategy) is a mapping from states to actions. We denote by $\pi(s)$ the action that policy $\pi$ prescribes in state $s$. The objective is to find a policy $\pi$ that maximizes $E[\sum_{t=0}^{\infty} \gamma^t r_t \mid \pi]$, where $r_t$ is the payoff at time $t$, and $\gamma \in [0, 1)$ is a discount factor. There exists a deterministic optimal policy $\pi^*$ [12]. The Q-function for this policy, $Q^*$, is defined by the set of equations $Q^*(s, a) = R(s, a) + \gamma \sum_{s'} P(s, a, s') \max_{a'} Q^*(s', a')$. At any state $s$, the optimal policy chooses $\arg\max_a Q^*(s, a)$ [10]. Reinforcement learning can be viewed as a sampling method for estimating $Q^*$ when the payoff function and/or transition function are unknown. $Q^*(s, a)$ can be approximated by a function $Q_t(s, a)$ calculated from the agent's experience up to time $t$. The model-based approach uses samples to generate models of $R$ and $P$, and then iteratively computes $Q_t(s, a) = R_t(s, a) + \gamma \sum_{s'} P_t(s, a, s') \max_{a'} Q_{t-1}(s', a')$. Based on $Q_t$, a learning policy assigns probabilities to actions at each state. If the learning policy has the "Greedy in the Limit with Infinite Exploration" (GLIE) property, then $Q_t$ will converge to $Q^*$ (with either a model-based or model-free approach) and the agent will converge in behavior to an optimal policy [14]. Under GLIE, every state-action pair is visited infinitely often, and in the limit the action selection is greedy with respect to the Q-function w.p.1. One common GLIE policy is Boltzmann exploration [14].

2.2 Multiagent RL in team Markov games when the game is unknown

A natural extension of an MDP to multiagent environments is a Markov game (aka. stochastic game) [16].
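Before moving to the multiagent case, the single-agent, model-based computation of Section 2.1 can be sketched numerically. The MDP below (payoffs, transitions, discount) is made up purely for illustration.

```python
import numpy as np

gamma = 0.9
R = np.array([[0.0, 1.0],          # R[s, a]: expected payoff
              [2.0, 0.0]])
P = np.zeros((2, 2, 2))            # P[s, a, s']: transition probabilities
P[0, 0] = [1.0, 0.0]
P[0, 1] = [0.2, 0.8]
P[1, 0] = [0.9, 0.1]
P[1, 1] = [0.0, 1.0]

# Iterate Q(s,a) <- R(s,a) + gamma * sum_s' P(s,a,s') * max_a' Q(s',a')
Q = np.zeros((2, 2))
for _ in range(500):
    Q = R + gamma * np.einsum('sat,t->sa', P, Q.max(axis=1))

policy = Q.argmax(axis=1)          # greedy policy: argmax_a Q*(s, a)
print(Q)
print(policy)
```

Because the update is a gamma-contraction, Q converges to the unique fixed point Q* regardless of initialization; the greedy policy read off from Q* is the deterministic optimal policy.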
In this paper we focus on team Markov games, which are Markov games where each agent receives the same expected payoff (in the presence of noise, different agents may still receive different payoffs at a particular moment). In other words, there are no conflicts between the agents, but learning the game structure and learning to coordinate are nevertheless highly nontrivial.

Definition 1 A team Markov game (aka. identical-interest stochastic game) $G$ is a tuple $\langle N, S, A, R, P \rangle$, where $N$ is a set of $n$ agents; $S$ is a finite state space; $A = \times_{i \in N} A_i$ is the joint action space of the $n$ agents; $R: S \times A \to \mathbb{R}$ is the common expected payoff function; and $P: S \times A \times S \to [0, 1]$ is a transition function.

The objective of the $n$ agents is to find a deterministic joint policy (aka. joint strategy, aka. strategy profile) $\pi = \langle \pi_i \rangle_{i \in N}$ (where $\pi: S \to A$ and $\pi_i: S \to A_i$) so as to maximize the expected sum of their discounted payoffs. The Q-function, $Q^\pi(s, a)$, is the expected sum of discounted payoffs given that the agents play joint action $a$ in state $s$ and follow joint policy $\pi$ thereafter. The optimal Q-function $Q^*(s, a)$ is the Q-function for (each) optimal joint policy $\pi^*$. So, $Q^*$ captures the game structure. The agents generally do not know $Q^*$ in advance. Sometimes, they know neither the payoff structure nor the transition probabilities. A joint policy $\langle \pi_i \rangle_{i \in N}$ is a Nash equilibrium if each individual policy is a best response to the others. That is, for all $i \in N$ and any individual policy $\pi'_i$, $Q^*(s, \langle \pi_i(s), \pi_{-i}(s) \rangle) \geq Q^*(s, \langle \pi'_i(s), \pi_{-i}(s) \rangle)$, where $\pi_{-i}$ is the joint policy of all agents except agent $i$. (Likewise, throughout the paper, we use $-i$ to denote all agents but $i$; e.g., $a_{-i}$ is their joint action and $A_{-i}$ their joint action set.) A Nash equilibrium is strict if the inequality above is strict. An optimal Nash equilibrium $\pi^*$ is a Nash equilibrium that gives the agents the maximal expected sum of discounted payoffs. In team games, each optimal Nash equilibrium is an optimal joint policy (and there are no other optimal joint policies). A joint action $a$ is optimal in state $s$ if $Q^*(s, a) \geq Q^*(s, a')$ for all $a' \in A$. If we treat $Q^*(s, a)$ as the payoff of joint action $a$ in state $s$, we obtain a team game in matrix form. We call such a game a state game for $s$. An optimal joint action in $s$ is an optimal Nash equilibrium of that state game. Thus, the task of optimal coordination in a team Markov game boils down to having all the agents play an optimal Nash equilibrium in state games.
However, a coordination problem arises if there are multiple Nash equilibria.

Table 1: A three-player coordination game. Agent 3 chooses the matrix; agents 1 and 2 choose the row and column, respectively.

a3 = 1:            a3 = 2:            a3 = 3:
 10  -20  -20      -20  -20    5      -20    5  -20
-20  -20    5      -20   10  -20        5  -20  -20
-20    5  -20        5  -20  -20      -20  -20   10

The 3-player game in Table 1 has three optimal Nash equilibria and six sub-optimal Nash equilibria. In this game, no existing equilibrium selection algorithm (e.g., fictitious play [3]) is guaranteed to learn to play an optimal Nash equilibrium. Furthermore, if the payoffs are only expectations over each agent's noisy payoffs and unknown to the agents before playing, even identification of these sub-optimal Nash equilibria during learning is nontrivial.

3 Optimal adaptive learning (OAL) algorithm

We first consider the case where agents know the game before playing. This enables the learning agents to construct a virtual game (VG) for each state of the team Markov game to eliminate all the strict suboptimal Nash equilibria in that state. Let $\tilde{R}^*(s, a)$ be the payoff that the agents receive from the VG in state $s$ for a joint action $a$. We let $\tilde{R}^*(s, a) = 1$ if $a \in \arg\max_{a'} Q^*(s, a')$ and $\tilde{R}^*(s, a) = 0$ otherwise. For example, the VG for the game in Table 1 gives payoff 1 for each optimal Nash equilibrium ($\langle 1,1,1 \rangle$, $\langle 2,2,2 \rangle$, and $\langle 3,3,3 \rangle$), and payoff 0 to every other joint action. The VG in this example is weakly acyclic.

Definition 2 (Weakly acyclic game [18]) Let $G$ be an n-player game in matrix form. The best-response graph of $G$ takes each joint action $a \in A$ as a vertex and connects two vertices $a$ and $a'$ with a directed edge $a \to a'$ if and only if 1) $a \neq a'$; 2) there exists exactly one agent $i$ such that $a'_i$ is a best response to $a_{-i}$ and $a'_{-i} = a_{-i}$. We say the game is weakly acyclic if in its best-response graph, from any initial vertex $a$, there exists a directed path to some vertex $a^*$ from which there is no outgoing edge.

To tackle the equilibrium selection problem for weakly acyclic games, Young [18] proposed a learning algorithm called adaptive play (AP), which works as follows. Let $a_t \in A$ be the joint action played at time $t$ in an n-player game in matrix form.
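The virtual-game construction can be checked numerically on the payoffs of Table 1 (here assuming agent 3 indexes the matrix and agents 1 and 2 the row and column; zero-based indices stand in for the three actions):

```python
import numpy as np

Q = np.empty((3, 3, 3))   # Q[a1, a2, a3]: payoffs of Table 1
Q[:, :, 0] = [[10, -20, -20], [-20, -20, 5], [-20, 5, -20]]
Q[:, :, 1] = [[-20, -20, 5], [-20, 10, -20], [5, -20, -20]]
Q[:, :, 2] = [[-20, 5, -20], [5, -20, -20], [-20, -20, 10]]

# Virtual game: payoff 1 for joint actions attaining max_a Q(a), else 0.
VG = (Q == Q.max()).astype(float)
optimal = [tuple(int(i) for i in idx) for idx in zip(*np.nonzero(VG))]
print(optimal)   # -> [(0, 0, 0), (1, 1, 1), (2, 2, 2)]
```

The three joint actions with VG payoff 1 are exactly the optimal Nash equilibria; the six payoff-5 joint actions, being sub-optimal equilibria of the original game, receive payoff 0 and so are eliminated as strict equilibria of the VG.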
Fix integers $k$ and $m$ such that $1 \leq k \leq m$. (Throughout the paper, every Nash equilibrium that we discuss is also a subgame perfect Nash equilibrium. This refinement of Nash equilibrium was first introduced in [13] for different games.) For $t \leq m$, each agent randomly chooses its actions. Starting from $t = m + 1$, each agent looks back at the $m$ most recent plays $h_t = (a_{t-m}, \ldots, a_{t-1})$ and randomly (without replacement) selects $k$ samples from $h_t$. Let $C_t(a_{-i})$ be the number of times that a reduced joint action $a_{-i} \in A_{-i}$ (a joint action without agent $i$'s individual action) appears in agent $i$'s samples at time $t$. Let $R_i(a)$ be agent $i$'s payoff given that joint action $a$ has been played. Agent $i$ calculates its expected payoff w.r.t. its individual action $a_i$ as $EP_t(a_i) = \sum_{a_{-i} \in A_{-i}} R_i(\langle a_i, a_{-i} \rangle) \, C_t(a_{-i}) / k$, and then randomly chooses an action from the set of best responses $BR^i_t = \{ a_i \mid a_i \in \arg\max_{a'_i} EP_t(a'_i) \}$. Young showed that AP in a weakly acyclic game converges to a strict Nash equilibrium w.p.1. Thus, AP on the VG for the game in Table 1 leads to an equilibrium with payoff 1, which is actually an optimal Nash equilibrium for the original game. Unfortunately, this does not extend to all VGs because not all VGs are weakly acyclic: in a VG without any strict Nash equilibrium, AP may not converge to a strategy profile with payoff 1. In order to address more general settings, we now modify the notion of weakly acyclic game and adaptive play to accommodate weak optimal Nash equilibria.

Definition 3 (Weakly acyclic game w.r.t. a biased set (WAGB)) Let $D$ be a set containing some of the Nash equilibria of a game $G$ (and no other joint policies). Game $G$ is a WAGB if, from any initial vertex $a$, there exists a directed path to either a Nash equilibrium inside $D$ or a strict Nash equilibrium.

We can convert any VG to a WAGB by setting the biased set $D$ to include all joint policies that give payoff 1 (and no other joint policies). To solve such a game, we introduce a new learning algorithm for equilibrium selection. It enables each agent to deterministically select a best-response action once any Nash equilibrium in the biased set is attained (even if there exist several best responses when the Nash equilibrium is not strict). This is different from AP, where players randomize their action selection when there are multiple best-response actions. We call our approach biased adaptive play (BAP). BAP works as follows. Let $D$ be the biased set composed of some Nash equilibria of a game in matrix form. Let $K_t$ be the set of $k$ samples drawn at time $t$, without replacement, from among the $m$ most recent joint actions. If (1) there exists a joint action $a' \in D$ that is consistent with the reduced joint action $a_{-i}$ of every $a \in K_t$, and (2) there exists at least one joint action $a$ such that $a \in K_t$ and $a \in D$, then agent $i$ chooses the best-response action $a_i$ contained in the most recent play of a Nash equilibrium inside $D$. On the other hand, if the two conditions above are not met, then agent $i$ chooses its best-response action in the same way as in AP. As we will show, BAP (even with GLIE exploration) on a WAGB converges w.p.1 to either a Nash equilibrium in $D$ or a strict Nash equilibrium.

So far we tackled learning of coordination in team Markov games where the game structure is known. Our real interest is in learning when the game is unknown. In multiagent reinforcement learning, $Q^*$ is asymptotically approximated with $Q_t$. Let $VG_t$ be the virtual game w.r.t. $Q_t(s, \cdot)$. Our question is how to construct $VG_t$ so as to assure $VG_t \to VG^*$ w.p.1. Our method of achieving this makes use of the notion of $\epsilon$-optimality.

Definition 4 Let $\epsilon$ be a positive constant. A joint action $a$ is $\epsilon$-optimal at state $s$ and time $t$ if $Q_t(s, a) + \epsilon \geq \max_{a'} Q_t(s, a')$. We denote the set of $\epsilon$-optimal joint actions at state $s$ and time $t$ by $A^{\epsilon}_t(s)$.

The idea is to use a decreasing bound $\epsilon_t$ to estimate the set of optimal joint actions at state $s$ and time $t$. All the joint actions belonging to this set are treated as optimal Nash equilibria in the virtual game $VG_t$, which gives the agents payoff 1 on them. If $\epsilon_t$ converges to zero at a rate slower than $Q_t$ converges to $Q^*$, then $VG_t \to VG^*$ w.p.1. We make $\epsilon_t$ proportional to a function $w_{\tilde{t}} \in [0, 1]$ which decreases slowly and monotonically to zero with $\tilde{t}$, where $\tilde{t}$ is the smallest number of times that any state-action pair has been sampled so far.
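The sampled best response at the heart of AP (and of BAP whenever its two bias conditions fail) can be sketched for one agent in a 2-player game. This is an illustrative sketch only: the game, memory length m, and sample size k are invented, and BAP's deterministic bias toward the set D is described in the comment rather than implemented.

```python
import random
from collections import Counter

def ap_best_response(history, payoff, my_actions, m=8, k=4, rng=random):
    """Adaptive play's sampled best response for agent 0 in a 2-player
    game: sample k of the last m joint plays (without replacement),
    estimate the expected payoff of each own action against the empirical
    opponent behavior, and pick a best response uniformly at random.
    (BAP would instead deterministically repeat the most recent play of
    an equilibrium in the biased set D when its two conditions hold.)"""
    sample = rng.sample(history[-m:], k)           # k plays, no replacement
    counts = Counter(opp for (_, opp) in sample)   # opponent action counts

    def expected(a):                               # empirical expected payoff
        return sum(payoff[(a, opp)] * c for opp, c in counts.items()) / k

    best = max(expected(a) for a in my_actions)
    return rng.choice([a for a in my_actions if expected(a) == best])

# 2x2 pure coordination game: payoff 1 for matching actions, else 0.
payoff = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}
history = [(0, 0)] * 8                             # opponent has always played 0
print(ap_best_response(history, payoff, [0, 1], rng=random.Random(0)))  # -> 0
```

With a history concentrated on one equilibrium, every sample produces the same empirical counts and the best response is forced; the randomization only matters when the sampled history is mixed.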
Now we are ready to present the entire optimal adaptive learning (OAL) algorithm. As we explain thereafter, we craft $w_{\tilde{t}}$ carefully using an understanding of the convergence rate of the model-based RL algorithm that is used to learn the game structure.

Optimal adaptive learning algorithm (for agent $i$)

1. Initialization. Set $t = 0$. For all $s \in S$ and $a \in A$, initialize the visit counts $N(s, a)$, the payoff model $R(s, a)$, the transition model $P(s, a, \cdot)$, the Q-function $Q(s, a)$, and the virtual game $VG(s)$.

2. Learning of the coordination policy. If $t \leq m$, randomly select an action; otherwise:
(a) Update the virtual game at the current state $s$: set $VG_t(s, a) = 1$ if $a \in A^{\epsilon_t}_t(s)$ and $VG_t(s, a) = 0$ otherwise, with $\epsilon_t$ proportional to $w_{\tilde{t}}$.
(b) According to GLIE exploration, with the exploitation probability do:
i. Randomly select (without replacement) $k$ records from the $m$ most recent observations of the others' joint actions played at state $s$.
ii. Calculate the expected payoff of each individual action over the virtual game $VG_t(s, \cdot)$ as in AP, and construct the best-response set $BR_t(s)$.
iii. If conditions (1) and (2) of BAP are met, choose a best-response action with respect to the biased set. Otherwise, randomly select a best-response action from $BR_t(s)$.
Otherwise, randomly select an action to explore.

3. Off-policy learning of the game structure.
(a) Observe the state transition $s \to s'$ and the payoff $r$ under the joint action $a$, and update the count $N(s, a)$, the payoff model $R(s, a)$, and the transition model $P(s, a, \cdot)$ accordingly.
(b) Update $Q(s, a) \leftarrow R(s, a) + \gamma \sum_{s'} P(s, a, s') \max_{a'} Q(s', a')$.
(c) Set $t \leftarrow t + 1$ and $\tilde{t} \leftarrow \min_{s, a} N(s, a)$.
(d) If $\tilde{t}$ has increased, update $w_{\tilde{t}}$ (see Section 4.2 for the construction of $w_{\tilde{t}}$) and recompute $Q(s, a)$ for all $(s, a)$ using (b).

Here, $N_t(s, a)$ is the number of times the joint action $a$ has been played in state $s$ by time $t$; the constant of proportionality in $\epsilon_t$ is a positive constant (any value works); and $C_t(a_{-i})$ is the number of times that a joint action $a_{-i} \in A_{-i}$ appears in agent $i$'s $k$ samples (at time $t$) from the $m$ most recent joint actions taken in state $s$.

4 Proof of convergence of OAL

In this section, we prove that OAL converges to an optimal Nash equilibrium. Throughout, we make the common RL assumptions: payoffs are bounded, and the number of states and actions is finite. The proof is organized as follows. In Section 4.1 we show that OAL agents learn optimal coordination if the game is known. Specifically, we show that BAP on a WAGB with known game structure converges to a Nash equilibrium under GLIE exploration. Then in Section 4.2 we show that OAL agents will learn the game structure. Specifically, any virtual game can be converted to a WAGB, which will be learned surely. Finally, these two tracks merge in Section 4.3, which shows that OAL agents will learn the game structure and optimal coordination. Due to limited space, we omit most proofs. They can be found at: www.cs.cmu.edu/~sandholm/oal.ps.

4.1 Learning to coordinate in a known game

In this section, we first model our biased adaptive play (BAP) algorithm with best-response action selection as a stationary Markov chain. In the second half of this section we then model BAP with GLIE exploration as a nonstationary Markov chain.

4.1.1 BAP as a stationary Markov chain

Consider BAP with randomly selected initial plays. We take the initial history
$h = (a^1, \ldots, a^m)$ as the initial state of the Markov chain. The definition of the other states is inductive: a successor of state $h$ is any state $h'$ obtained by deleting the left-most element of $h$ and appending a new right-most element. The only exception is that all the states $h = (\ldots, a)$ with $a$ being either a member of the biased set $D$ or a strict Nash equilibrium are grouped into a unique terminal state $\tilde{h}$. Any state directing to a state in $\tilde{h}$ is treated as directly connected to $\tilde{h}$. Let $P$ be the state transition matrix of the above Markov chain. Let $h'$ be a successor of $h$, and let $b = \langle b_1, \ldots, b_n \rangle$ ($n$ players) be the new element that was appended to the right of $h$ to get $h'$. Let $P_{h h'}$ be the transition probability from $h$ to $h'$. Now, $P_{h h'} > 0$ if and only if for each agent $i$, there exists a sample of size $k$ in $h$ to which $b_i$ is $i$'s best response according to the action-selection rule of BAP. Because agent $i$ chooses such a sample with a probability independent of time $t$, the Markov chain is stationary. Finally, due to our clustering of multiple states into the terminal state $\tilde{h}$, for any state $h$ connected to $\tilde{h}$, we have $P_{h \tilde{h}} = \sum_{h' \in \tilde{h}} P_{h h'}$.

In the above model, once the system reaches the terminal state, each agent's best response is to repeat its most recent action. This is straightforward if, in the actual terminal state $h = (\ldots, a)$ (one of the states that were clustered to form the terminal state), $a$ is a strict Nash equilibrium. If $a$ is only a weak Nash equilibrium (in this case, $a \in D$), BAP biases each agent to choose its most recent action because conditions (1) and (2) of BAP are satisfied. Therefore, the terminal state $\tilde{h}$ is an absorbing state of the finite Markov chain.
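The absorption argument can be checked numerically on a small stationary chain with a single absorbing (terminal) state; the transition matrix below is made up for illustration.

```python
import numpy as np

# States 0 and 1 are transient; state 2 plays the role of the clustered
# terminal state. Repeatedly applying the transition matrix drives all
# probability mass into the absorbing state.
P = np.array([[0.5, 0.3, 0.2],
              [0.4, 0.3, 0.3],
              [0.0, 0.0, 1.0]])   # row 2: once in state 2, stay there

v = np.array([1.0, 0.0, 0.0])     # start in state 0
for _ in range(200):
    v = v @ P
print(v)                           # essentially all mass on state 2
```

This is exactly the situation of the clustered terminal state: from every state there is positive probability of reaching the absorbing state, so the stationary distribution puts all mass on it.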
On the other hand, the above analysis shows that ( essentially is composed of multiple absorbing states. Therefore, if agents come into ( , they will be stuck in a particular state in ( forever instead of cycling around multiple states in ( . Theorem 1 Let G be a weakly acyclic game w.r.t. a biased set D. Let L(a) be the length of the shortest directed path in the best-response graph of G from a joint action a to either an absorbing vertex or a vertex in D, and let M UXW4Y [ C . If O , then, w.p.1, biased adaptive play in G converges to either a strict Nash equilibrium or a Nash equilibrium in D. Theorem 1 says that the stationary Markov chain for BAP in a WAGB (given O ) has a unique stationary distribution in which only the terminal state appears. 4.1.2 BAP with GLIE exploration as a nonstationary Markov chain Without knowing game structure, the learners need to use exploration to estimate their payoffs. In this section we show that such exploration does not hurt the convergence of BAP. We show this by first modeling BAP with GLIE exploration as a non-stationary Markov chain. With GLIE exploration, at every time step F , each joint action occurs with positive probability. This means that the system transitions from the state it is in to any of the successor states with positive probability. On the other hand, the agents’ action-selection becomes increasingly greedy over time. In the limit, with probability one, the transition probabilities converge to those of BAP with no exploration. Therefore, we can model the learning process with a sequence of transition matrices ~ : 8 :<;Jf such that
U : : 8 : M , where is the transition matrix of the stationary Markov chain describing BAP without exploration. Akin to how we modeled BAP as a stationary Markov chain above, Young modeled adaptive play (AP) as a stationary Markov chain [18]. There are two differences. First, unlike AP’s, BAP’s action selection is biased. Second, in Young’s model, it is possible to have several absorbing states while in our model, at most one absorbing state exists (for any team game, our model has exactly one absorbing state). This is because we cluster all the absorbing states into one. This allows us to prove our main convergence theorem. Our objective here is to show that on a WAGB, BAP with GLIE exploration will converge to the (“clustered”) terminal state. For that, we use the following lemma (which is a combination of Theorems V4.4 and V4.5 from [4]). Lemma 2 Let be the finite transition matrix of a stationary Markov chain with a unique stationary distribution . Let ~ : 8 :<;mf be a sequence of finite transition matrices. Let be a probability vector and denote M . If
U : : 8 : M , then
U : 8 M for all ( . Using this lemma and Theorem 1, we can prove the following theorem. Theorem 3 (BAP with GLIE) On a WAGB G, w.p.1, BAP with GLIE exploration (and O / ) converges to either a strict Nash equilibrium or a Nash equilibrium in D. 4.2 Learning the virtual game So far, we have shown that if the game structure is known in a WAGB, then BAP will converge to the terminal state. To prove optimal convergence of the OAL algorithm, we need to further demonstrate that 1) every virtual game is a WAGB, and 2) in OAL, the “temporary” virtual game : will converge to the “correct” virtual game I w.p.1. The first of these two issues is handled by the following lemma: Lemma 4 The virtual game VG of any n-player team state game is a weakly acyclic game w.r.t a biased set that contains all the optimal Nash equilibria, and no other joint actions. (By the definition of a virtual game, there are no strict Nash equilibria other than optimal ones.) The length of the shortest best-response path } . Lemma 4 implies that BAP in a known virtual game with GLIE exploration will converge to an optimal Nash equilibrium. This is because (by Theorem 3) BAP in a WAGB will converge to either a Nash equilibrium in a biased set $ or a strict Nash equilibrium, and (by Lemma 4) any virtual game is a WAGB with all such Nash equilibria being optimal. The following two lemmas are the last link of our proof chain. They show that OAL will cause agents to obtain the correct virtual game almost surely. Lemma 5 In any team Markov game, (part 3 of) OAL assures that as F , UXW4Y Q S/T [ S\ D K : KLI / V D ' M
1
1 F F for some constant M ( w.p.1. Using Lemma 5, the following lemma is easy to prove. Lemma 6 Consider any team Markov game. Let : be the event that for all F 1 F , : R M LI in the OAL algorithm in a given state. If 1) " : decreases monotonically to zero (
U : : 8 " : `M9( ), and 2)
U : : 8
1
1 F F , F M9( , then
U : : 8 B~ : M#* . Lemma 6 states that if the criterion for including a joint action among the + -optimal joint actions in OAL is not made strict too quickly (quicker than the iterated logarithm), then agents will identify all optimal joint actions with probability one. In this case, they set up the correct virtual game. It is easy to make OAL satisfy this condition. E.g., any function " : NM d : ( f
, will do. 4.3 Main convergence theorem Now we are ready to prove that OAL converges to an optimal Nash equilibrium in any team Markov game, even when the game structure is unknown. The idea is to show that the OAL agents learn the game structure (VGs) and the optimal coordination policy (over these VGs). OAL tackles these two learning problems simultaneously—specifically, it interleaves BAP (with GLIE exploration) with learning of game structure. However, the convergence proof does not make use of this fact. Instead, the proof proceeds by showing that the VGs are learned first, and coordination second (the learning algorithm does not even itself know when the switch occurs, but it does occur w.p.1). Theorem 7 (Optimal convergence) In any team Markov game among } agents, if (1) } O , and (2) " : satisfies Lemma 6, then the OAL algorithm converges to an optimal Nash equilibrium w.p.1. Proof. According to [1], a team Markov game can be decomposed into a sequence of state games. The optimal equilibria of these state games form the optimal policy 5I for the game. By the definition of GLIE exploration, each state in the finite state space will be visited infinitely often w.p.1. Thus, it is sufficient to only prove that the OAL algorithm will converge to the optimal policy over individual state games w.p.1. Let : be the event that : R M LI at that state for all F 1 F . Let + f be any positive constant. If Condition (2) of the theorem is satisfied, by Lemma 6 there exists a time L + f such that B~ :b * + f if F . + f . If : occurs and Condition (1) of the theorem is satisfied, by Theorem 3, OAL will converge to either a strict Nash equilibrium or a Nash equilibrium in the biased set w.p.1. Furthermore, by Lemma 4, we know that the biased set contains all of the optimal Nash equilibria (and nothing else), and there are no strict Nash equilibria outside the biased set. Therefore, if : occurs, then OAL converges to an optimal Nash equilibrium w.p.1. Let +
be any positive constant, and let be the event that the agents play an optimal joint action at a given state for all F 1 . With this notation, we can reword the previous sentence: there exists a time +
F such that if +
F , then B~ D : * +
. Put together, there exists a time + f +
such that if + f +
, then B~ B~ D : B~ : * + f - * +
* + f +
. Because + f and +
are only used in the proof (they are not parameters of the OAL algorithm), we can choose them to be arbitrarily small. Therefore, OAL converges to an optimal Nash equilibrium w.p.1.

5 Conclusions and future research

With multiple Nash equilibria, multiagent RL becomes difficult even when agents do not have conflicting interests. In this paper, we present OAL, the first algorithm that converges to an optimal Nash equilibrium with probability 1 in any team Markov game. In future work, we plan to extend the algorithm to some general-sum Markov games.

Acknowledgments

Wang is supported by NSF grant IIS-0118767, the DARPA OASIS program, and the PASIS project at CMU. Sandholm is supported by NSF CAREER Award IRI-9703122 and NSF grants IIS-9800994, ITR IIS-0081246, and ITR IIS-0121678.

Theorem 3 requires a bound on L
. If Condition (1) of our main theorem is satisfied, then by Lemma 4, this bound on L holds.
|
2002
|
201
|
2,216
|
Using Manifold Structure for Partially Labelled Classification Mikhail Belkin University of Chicago Department of Mathematics misha@math.uchicago.edu Partha Niyogi University of Chicago Depts of Computer Science and Statistics niyogi@cs.uchicago.edu Abstract We consider the general problem of utilizing both labeled and unlabeled data to improve classification accuracy. Under the assumption that the data lie on a submanifold in a high dimensional space, we develop an algorithmic framework to classify a partially labeled data set in a principled manner. The central idea of our approach is that classification functions are naturally defined only on the submanifold in question rather than the total ambient space. Using the Laplace Beltrami operator one produces a basis for a Hilbert space of square integrable functions on the submanifold. To recover such a basis, only unlabeled examples are required. Once a basis is obtained, training can be performed using the labeled data set. Our algorithm models the manifold using the adjacency graph for the data and approximates the Laplace Beltrami operator by the graph Laplacian. Practical applications to image and text classification are considered. 1 Introduction In many practical applications of data classification and data mining, one finds a wealth of easily available unlabeled examples, while collecting labeled examples can be costly and time-consuming. Standard examples include object recognition in images, speech recognition, classifying news articles by topic. In recent times, genetics has also provided enormous amounts of readily accessible data. However, classification of this data involves experimentation and can be very resource intensive. Consequently it is of interest to develop algorithms that are able to utilize both labeled and unlabeled data for classification and other purposes. 
Although the area of partially labeled classification is fairly new, a considerable amount of work has been done in that field since the early 1990s; see [2, 4, 7]. In this paper we address the problem of classifying a partially labeled set by developing the ideas proposed in [1] for data representation. In particular, we exploit the intrinsic structure of the data to improve classification with unlabeled examples under the assumption that the data resides on a low-dimensional manifold within a high-dimensional representation space. In some cases it seems to be a reasonable assumption that the data lies on or close to a manifold. For example, a handwritten digit 0 can be fairly accurately represented as an ellipse, which is completely determined by the coordinates of its foci and the sum of the distances from the foci to any point. Thus the space of ellipses is a five-dimensional manifold. An actual handwritten 0 would require more parameters, but perhaps not more than 15 or 20. On the other hand, the dimensionality of the ambient representation space is the number of pixels, which is typically far higher. For other types of data the question of the manifold structure seems significantly more involved. While there has been recent work on using manifold structure for data representation ([6, 8]), the only other application to classification problems that we are aware of was in [7], where the authors use a random walk on the data adjacency graph for partially labeled classification.

2 Why Manifold Structure Is Useful for Partially Supervised Learning

To provide a motivation for using a manifold structure, consider a simple synthetic example shown in Figure 1. The two classes consist of two parts of the curve shown in the first panel (row 1). We are given a few labeled points and 500 unlabeled points, shown in panels 2 and 3 respectively. The goal is to establish the identity of the point labeled with a question mark.
By observing the picture in panel 2 (row 1) we see that we cannot confidently classify "?" by using the labeled examples alone. On the other hand, the problem seems much more feasible given the unlabeled data shown in panel 3. Since there is an underlying manifold, it seems clear at the outset that the (geodesic) distances along the curve are more meaningful than Euclidean distances in the plane. Therefore, rather than building classifiers defined on the plane (R^2), it seems preferable to have classifiers defined on the curve itself. Even though the data has an underlying manifold, the problem is still not quite trivial, since the two different parts of the curve come confusingly close to each other. There are many potential representations of the manifold, and the one provided by the curve itself is unsatisfactory. Ideally, we would like to have a representation of the data which captures the fact that it is a closed curve. More specifically, we would like an embedding of the curve where the coordinates vary as slowly as possible when one traverses the curve. Such an ideal representation is shown in panel 4 (first panel of the second row). Note that both represent the same underlying manifold structure, but with different coordinate functions. It turns out (panel 6) that by taking a two-dimensional representation of the data with Laplacian Eigenmaps [1], we get very close to the desired embedding. Panel 5 shows the locations of labeled points in the new representation space. We see that "?" now falls squarely in the middle of "+" signs and can easily be identified as a "+". This artificial example illustrates that recovering the manifold and developing classifiers on the manifold itself might give us an advantage in classification problems. To recover the manifold, all we need is unlabeled data. The labeled data is then used to develop a classifier defined on this manifold. However, we need a model for the manifold to utilize this structure.
The model used here is that of a weighted graph whose vertices are data points. Two data points are connected with an edge if and only if the points are sufficiently close.

Figure 1: Top row: Panel 1. Two classes on a plane curve. Panel 2. Labeled examples. "?" is a point to be classified. Panel 3. 500 random unlabeled examples. Bottom row: Panel 4. Ideal representation of the curve. Panel 5. Positions of labeled points and "?" after applying eigenfunctions of the Laplacian. Panel 6. Positions of all examples.

To each edge we can associate a distance between the corresponding points. The "geodesic distance" between two vertices is the length of the shortest path between them on the adjacency graph. Once we set up an approximation to the manifold, we need a method to exploit the structure of the model to build a classifier. One possible simple approach would be to use the "geodesic nearest neighbors". However, while simple and well-motivated, this method is potentially unstable. A related, more sophisticated method based on a random walk on the adjacency graph is proposed in [7]. We also note the approach taken in [2], which uses mincuts of certain graphs for partially labeled classification. Our approach is based on the Laplace-Beltrami operator defined on Riemannian manifolds (see [5]). The eigenfunctions of the Laplace-Beltrami operator provide a natural basis for functions on the manifold, and the desired classification function can be expressed in such a basis. The Laplace-Beltrami operator can be estimated using unlabeled examples alone, and the classification function is then approximated using the labeled data. In the next two sections we describe our algorithm and the theoretical underpinnings in some detail.

3 Description of the Algorithm

Given k points x_1, ..., x_k ∈ R^l, we assume that the first s < k points have labels c_i, where c_i ∈ {-1, 1}, and the rest are unlabeled. The goal is to label the unlabeled points. We also introduce a straightforward extension of the algorithm for the case of more than two classes.

Step 1 [Constructing the adjacency graph with n nearest neighbors]. Nodes i and j, corresponding to the points x_i and x_j, are connected by an edge if i is among the n nearest neighbors of j or j is among the n nearest neighbors of i. The distance can be the standard Euclidean distance in R^l or some other appropriately defined distance. We take W_ij = 1 if points x_i and x_j are connected and W_ij = 0 otherwise. For a discussion about the appropriate choice of weights, and connections to the heat kernel, see [1].

Step 2 [Eigenfunctions]. Compute the p eigenvectors e_1, ..., e_p corresponding to the p smallest eigenvalues of the eigenvector problem Le = λe, where L = D - W is the graph Laplacian for the adjacency graph. Here W is the adjacency matrix defined above and D is a diagonal matrix of the same size as W satisfying D_ii = Σ_j W_ij. The Laplacian is a symmetric, positive semidefinite matrix which can be thought of as an operator on functions defined on the vertices of the graph.

Step 3 [Building the classifier]. To approximate the class, we minimize the error function Err(a) = Σ_{i=1}^{s} (c_i - Σ_{j=1}^{p} a_j e_j(i))^2, where p is the number of eigenfunctions we wish to employ, the sum is taken over all labeled points, and the minimization is considered over the space of coefficients a = (a_1, ..., a_p)^T. The solution is given by a = (E_lab^T E_lab)^{-1} E_lab^T c, where c = (c_1, ..., c_s)^T and E_lab is the s x p matrix whose (i, j) entry is e_j(i). For the case of several classes, we build a one-against-all classifier for each individual class.

Step 4 [Classifying unlabeled points]. If x_i, i > s, is an unlabeled point, we put c_i = 1 if Σ_{j=1}^{p} a_j e_j(i) ≥ 0, and c_i = -1 otherwise. This, of course, is just applying a linear classifier constructed in Step 3.
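As a concrete illustration, the four steps can be sketched in a few lines of NumPy. This is a minimal sketch of my own, not the authors' code: it uses a dense eigensolver (so it only suits small k), unweighted edges, and the function and variable names are invented for this example.

```python
import numpy as np

def laplacian_classify(X, labels, n_neighbors=8, p=2):
    """Steps 1-4: classify a partially labelled data set.

    X: (k, d) array of points; labels: length-k array with +1/-1 for
    labelled points and 0 for unlabelled ones. Returns +1/-1 for all k.
    """
    k = len(X)
    # Step 1: symmetric n-nearest-neighbour adjacency graph, W_ij in {0, 1}.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nn = np.argsort(d2, axis=1)[:, :n_neighbors]
    W = np.zeros((k, k))
    W[np.repeat(np.arange(k), n_neighbors), nn.ravel()] = 1.0
    W = np.maximum(W, W.T)          # edge if i is a neighbour of j or vice versa
    # Step 2: the p eigenvectors of L = D - W with the smallest eigenvalues.
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)     # eigh returns eigenvalues in ascending order
    E = vecs[:, :p]
    # Step 3: least-squares fit of the labels in the truncated eigenbasis.
    lab = labels != 0
    a, *_ = np.linalg.lstsq(E[lab], labels[lab], rcond=None)
    # Step 4: classify every point by the sign of the fitted function.
    return np.where(E @ a >= 0, 1, -1)
```

On two well-separated clusters with a handful of labels, the two smallest eigenvectors are (nearly) indicators of the connected components of the graph, so the fitted sign function labels both clusters correctly.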
If there are several classes, the one-against-all classifiers compete using Σ_{j=1}^{p} a_j e_j(i) as a confidence measure.

4 Theoretical Interpretation

Let M ⊂ R^k be an n-dimensional compact Riemannian manifold isometrically embedded in R^k for some k. Intuitively, M can be thought of as an n-dimensional "surface" in R^k. The Riemannian structure on M induces a volume form that allows us to integrate functions defined on M. The square integrable functions form a Hilbert space L^2(M). The Laplace-Beltrami operator Δ_M (or just Δ) acts on twice differentiable functions on M. There are three important points that are relevant to our discussion here.

The Laplacian provides a basis on L^2(M): It can be shown (e.g., [5]) that Δ is a self-adjoint positive semidefinite operator and that its eigenfunctions form a basis for the Hilbert space L^2(M). The spectrum of Δ is discrete (provided M is compact), with the smallest eigenvalue 0 corresponding to the constant eigenfunction. Therefore any f ∈ L^2(M) can be written as f(x) = Σ_{i=0}^{∞} a_i e_i(x), where the e_i are eigenfunctions, Δe_i = λ_i e_i. The simplest nontrivial example is the circle S^1, where Δ_{S^1} f(φ) = -d^2 f(φ)/dφ^2. The eigenfunctions are therefore given by -d^2 e(φ)/dφ^2 = λ e(φ), where e(φ) is a 2π-periodic function. It is easy to see that all eigenfunctions of Δ are of the form e(φ) = sin(nφ) or e(φ) = cos(nφ), with eigenvalues {1^2, 2^2, ...}. Therefore, we see that any 2π-periodic L^2 function f has a convergent Fourier series expansion given by f(φ) = Σ_{n=0}^{∞} a_n sin(nφ) + b_n cos(nφ). In general, for any manifold M, the eigenfunctions of the Laplace-Beltrami operator provide a natural basis for L^2(M). However, Δ provides more than just a basis; it also yields a measure of smoothness for functions on the manifold.

The Laplacian as a smoothness functional: A simple measure of the degree of smoothness for a function f on the unit circle S^1 is the "smoothness functional" S(f) = ∫ |f'(φ)|^2 dφ.
If S(f) is close to zero, we think of f as being "smooth". Naturally, constant functions are the most "smooth". Integration by parts yields S(f) = ∫_{S^1} |f'(φ)|^2 dφ = ∫_{S^1} f Δf dφ = ⟨Δf, f⟩_{L^2(S^1)}. In general, if f: M → R, then S(f) := ∫_M |∇f|^2 dμ = ∫_M f Δf dμ = ⟨Δf, f⟩_{L^2(M)}, where ∇f is the gradient vector field of f. If the manifold is R^n then ∇f = Σ_{i=1}^{n} (∂f/∂x_i) ∂/∂x_i; in general, for an n-manifold, the expression in a local coordinate chart involves the coefficients of the metric tensor. Therefore the smoothness of a unit norm eigenfunction e_i of Δ is controlled by the corresponding eigenvalue λ_i, since S(e_i) = ⟨Δe_i, e_i⟩_{L^2(M)} = λ_i. For an arbitrary f = Σ_i α_i e_i, we can write S(f) = Σ_i α_i^2 λ_i. A Reproducing Kernel Hilbert Space can be constructed from S. λ_1 = 0 is the smallest eigenvalue, and the corresponding eigenfunction is the constant function e_1. It can also be shown that if M is compact and connected there are no other eigenfunctions with eigenvalue 0. Therefore approximating a function f(x) ≈ Σ_{i=1}^{p} a_i e_i(x) in terms of the first p eigenfunctions of Δ is a way of controlling the smoothness of the approximation. The optimal approximation is obtained by minimizing the L^2 norm of the error: ā = argmin_{a=(a_1,...,a_p)} ∫_M (f(x) - Σ_{i=1}^{p} a_i e_i(x))^2 dμ. This approximation is given by a projection in L^2 onto the span of the first p eigenfunctions, a_i = ∫_M e_i(x) f(x) dμ = ⟨e_i, f⟩_{L^2(M)}. In practice we only know the values of f at a finite number of points x_1, ..., x_n and therefore have to solve a discrete version of this problem: ā = argmin_{a=(a_1,...,a_p)} Σ_{i=1}^{n} (f(x_i) - Σ_{j=1}^{p} a_j e_j(x_i))^2. The solution to this standard least squares problem is given by a = (E^T E)^{-1} E^T y, where E_ij = e_j(x_i) and y = (f(x_1), ..., f(x_n))^T.

Connection with the Graph Laplacian: As we are approximating a manifold with a graph, we need a suitable measure of smoothness for functions defined on the graph.
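Before turning to the graph case, the circle example can be checked numerically: on a ring graph with N vertices (the natural graph model of S^1), the sampled Fourier modes cos(2πn·/N) are exact eigenvectors of the graph Laplacian, with eigenvalue 2 - 2cos(2πn/N) ≈ (2πn/N)^2, echoing the n^2 eigenvalue growth on S^1. A small NumPy check (my own illustration, not from the paper):

```python
import numpy as np

N, n = 200, 3                      # ring size and Fourier mode number
idx = np.arange(N)
W = np.zeros((N, N))
W[idx, (idx + 1) % N] = 1.0        # each vertex is joined to its two neighbours
W[idx, (idx - 1) % N] = 1.0
L = np.diag(W.sum(axis=1)) - W     # graph Laplacian L = D - W

v = np.cos(2 * np.pi * n * idx / N)        # discrete analogue of cos(n * phi)
lam = 2 - 2 * np.cos(2 * np.pi * n / N)    # its exact eigenvalue
assert np.allclose(L @ v, lam * v)
```

For small n/N the eigenvalue is close to (2πn/N)^2, i.e., the discrete modes inherit the quadratic growth of the continuous spectrum.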
It turns out that many of the concepts in the previous section have parallels in graph theory (e.g., see [3]). Let G = (V, E) be a weighted graph on n vertices. We assume that the vertices are numbered, and use the notation i ~ j for adjacent vertices i and j. The graph Laplacian of G is defined as L = D - W, where W is the weight matrix and D is a diagonal matrix, D_ii = Σ_j W_ji. L can be thought of as an operator on functions defined on the vertices of the graph. It is not hard to see that L is a self-adjoint positive semidefinite operator. By the (finite dimensional) spectral theorem, any function on G can be decomposed as a sum of eigenfunctions of L. If we think of G as a model for the manifold M, it is reasonable to assume that a function on G is smooth if it does not change too much between nearby points. If f = (f_1, ..., f_n) is a function on G, then we can formalize that intuition by defining the smoothness functional S_G(f) = Σ_{i~j} W_ij (f_i - f_j)^2. It is not hard to show that S_G(f) = f L f^T = ⟨f, Lf⟩_G = Σ_{i=1}^{n} λ_i ⟨f, e_i⟩_G^2, which is the discrete analogue of the integration by parts from the previous section. The inner product here is the usual Euclidean inner product on the vector space with coordinates indexed by the vertices of G, and the e_i are normalized eigenvectors of L: L e_i = λ_i e_i, ||e_i|| = 1. All eigenvalues are non-negative, and the eigenfunctions corresponding to the smaller eigenvalues can be thought of as "more smooth". The smallest eigenvalue λ_1 = 0 corresponds to the constant eigenvector e_1.

5 Experimental Results

5.1 Handwritten Digit Recognition

We apply our techniques to the problem of optical character recognition. We use the popular MNIST dataset, which contains 28x28 grayscale images of handwritten digits.¹ We use the 60000 image training set for our experiments. For all experiments we use 8 nearest neighbours to compute the adjacency matrix.
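The identity S_G(f) = ⟨f, Lf⟩ used above is easy to confirm numerically on a random weighted graph (again a sketch of my own, with the smoothness sum taken over unordered pairs i < j):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
# Sparse random symmetric weight matrix with zero diagonal.
W = rng.random((n, n)) * (rng.random((n, n)) < 0.3)
W = np.triu(W, 1) + np.triu(W, 1).T
L = np.diag(W.sum(axis=1)) - W      # graph Laplacian

f = rng.normal(size=n)              # an arbitrary function on the vertices
smoothness = sum(W[i, j] * (f[i] - f[j]) ** 2
                 for i in range(n) for j in range(i + 1, n))
assert np.isclose(f @ L @ f, smoothness)
```

The check also makes the positive semidefiniteness of L concrete: the quadratic form is a sum of non-negative terms.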
The adjacency matrices are very sparse, which makes solving eigenvector problems for matrices as big as 60000 by 60000 possible. For a particular trial, we fix the number of labeled examples we wish to use. A random subset of the 60000 images is used with labels to form the labeled set L. The rest of the images are used without labels to form the unlabeled data U. The classification results (for U) are averaged over 20 different random draws for L. Shown in fig. 2 is a summary plot of classification accuracy on the unlabeled set, comparing the nearest neighbors baseline with our algorithm, which takes the number of eigenvectors to be 20% of the number of labeled points. The improvements over the baseline are significant, sometimes exceeding 70%, depending on the number of labeled and unlabeled examples. With only 100 labeled examples (and 59900 unlabeled examples), the Laplacian classifier does nearly as well as the nearest neighbor classifier with 5000 labeled examples. Similarly, with 500/59500 labeled/unlabeled examples, it does slightly better than the nearest neighbor baseline using 20000 labeled examples. By comparing the results for the total 60000 point data set, and the 10000 and 1000 subsets, we see that adding unlabeled data consistently improves classification accuracy. When almost all of the data is labeled, the performance of our classifier is close to that of k-NN. This is not particularly surprising, as our method uses the nearest neighbor information.

¹ We use the first 100 principal components of the set of all images to represent each image as a 100 dimensional vector.
Figure 2: MNIST data set. Percentage error rates for different numbers of labeled and unlabeled points compared to the best k-NN baseline (k = 1, 3, 5).

5.2 Text Classification

The second application is text classification using the popular 20 Newsgroups data set. This data set contains approximately 1000 postings from each of 20 different newsgroups. Given an article, the problem is to determine to which newsgroup it was posted. We tokenize the articles using the software package Rainbow written by Andrew McCallum. We use a "stop-list" of the 500 most common words to be excluded and also exclude headers, which among other things contain the correct identification of the newsgroup. Each document is then represented by the counts of the most frequent 6000 words, normalized to sum to 1. Documents with 0 total count are removed, thus leaving us with 19935 vectors in a 6000-dimensional space. We follow the same procedure as with the MNIST digit data above. A random subset of a fixed size is taken with labels to form L. The rest of the dataset is considered to be U. We average the results over 20 random splits.² As with the digits, we take the number of nearest neighbors for the algorithm to be 8. In fig. 3 we summarize the results by taking 19935, 2000 and 600 total points respectively and calculating the error rate for different numbers of labeled points. The number of eigenvectors used is always 20% of the number of labeled points. We see that having more unlabeled points improves the classification error in most cases, although when there are very few labeled points, the differences are small.

References

[1] M. Belkin, P.
Niyogi, Laplacian Eigenmaps for Dimensionality Reduction and Data Representation, Technical Report TR-2002-01, Department of Computer Science, The University of Chicago, 2002.

² In the case of 2000 eigenvectors we take just 10 random splits since the computations are rather time-consuming.

Figure 3: 20 Newsgroups data set. Error rates for different numbers of labeled and unlabeled points compared to the best k-NN baseline (k = 1, 3, 5).

[2] A. Blum, S. Chawla, Learning from Labeled and Unlabeled Data using Graph Mincuts, ICML, 2001.
[3] Fan R. K. Chung, Spectral Graph Theory, Regional Conference Series in Mathematics, number 92, 1997.
[4] K. Nigam, A. K. McCallum, S. Thrun, T. Mitchell, Text Classification from Labeled and Unlabeled Data, Machine Learning 39(2/3), 2000.
[5] S. Rosenberg, The Laplacian on a Riemannian Manifold, Cambridge University Press, 1997.
[6] Sam T. Roweis, Lawrence K. Saul, Nonlinear Dimensionality Reduction by Locally Linear Embedding, Science, vol. 290, 22 December 2000.
[7] Martin Szummer, Tommi Jaakkola, Partially labeled classification with Markov random walks, Neural Information Processing Systems (NIPS) 2001, vol. 14.
[8] Joshua B. Tenenbaum, Vin de Silva, John C. Langford, A Global Geometric Framework for Nonlinear Dimensionality Reduction, Science, vol. 290, 22 December 2000.
|
2002
|
202
|
2,217
|
Recovering Intrinsic Images from a Single Image Marshall F Tappen William T Freeman Edward H Adelson MIT Artificial Intelligence Laboratory Cambridge, MA 02139 mtappen@ai.mit.edu, wtf@ai.mit.edu, adelson@ai.mit.edu Abstract We present an algorithm that uses multiple cues to recover shading and reflectance intrinsic images from a single image. Using both color information and a classifier trained to recognize gray-scale patterns, each image derivative is classified as being caused by shading or a change in the surface’s reflectance. Generalized Belief Propagation is then used to propagate information from areas where the correct classification is clear to areas where it is ambiguous. We also show results on real images. 1 Introduction Every image is the product of the characteristics of a scene. Two of the most important characteristics of the scene are its shading and reflectance. The shading of a scene is the interaction of the surfaces in the scene and the illumination. The reflectance of the scene describes how each point reflects light. The ability to find the reflectance of each point in the scene and how it is shaded is important because interpreting an image requires the ability to decide how these two factors affect the image. For example, the geometry of an object in the scene cannot be recovered without being able to isolate the shading of every point. Likewise, segmentation would be simpler given the reflectance of each point in the scene. In this work, we present a system which finds the shading and reflectance of each point in a scene by decomposing an input image into two images, one containing the shading of each point in the scene and another image containing the reflectance of each point. These two images are types of a representation known as intrinsic images [1] because each image contains one intrinsic characteristic of the scene. 
Most prior algorithms for finding shading and reflectance images can be broadly classified as generative or discriminative approaches. The generative approaches create possible surfaces and reflectance patterns that explain the image, then use a model to choose the most likely surface. Previous generative approaches include modeling worlds of painted polyhedra [11] or constructing surfaces from patches taken out of a training set [3]. In contrast, discriminative approaches attempt to differentiate between changes in the image caused by shading and those caused by a reflectance change. Early algorithms, such as Retinex [8], were based on simple assumptions, such as the assumption that the gradients along reflectance changes have much larger magnitudes than those caused by shading. That assumption does not hold for many real images, so recent algorithms have used more complex statistics to separate shading and reflectance. Bell and Freeman [2] trained a classifier to use local image information to classify steerable pyramid coefficients as being due to shading or reflectance. Using steerable pyramid coefficients allowed the algorithm to classify edges at multiple orientations and scales. However, the steerable pyramid decomposition has a low-frequency residual component that cannot be classified. Without classifying the low-frequency residual, only band-pass filtered copies of the shading and reflectance images can be recovered. In addition, low-frequency coefficients may not have a natural classification. In a different direction, Weiss [13] proposed using multiple images where the reflectance is constant, but the illumination changes. This approach was able to create full frequency images, but required multiple input images of a fixed scene. In this work, we present a system which uses multiple cues to recover full-frequency shading and reflectance intrinsic images from a single image. 
Our approach is discriminative, using both a classifier based on color information in the image and a classifier trained to recognize local image patterns to distinguish derivatives caused by reflectance changes from derivatives caused by shading. We also address the problem of ambiguous local evidence by using a Markov Random Field to propagate the classifications of those areas where the evidence is clear into ambiguous areas of the image. 2 Separating Shading and Reflectance Our algorithm decomposes an image into shading and reflectance images by classifying each image derivative as being caused by shading or a reflectance change. We assume that the input image, I(x, y), can be expressed as the product of the shading image, S(x, y), and the reflectance image, R(x, y). Considering the images in the log domain, the derivatives of the input image are the sum of the derivatives of the shading and reflectance images. It is unlikely that significant shading boundaries and reflectance edges occur at the same point, thus we make the simplifying assumption that every image derivative is either caused by shading or reflectance. This reduces the problem of specifying the shading and reflectance derivatives to that of binary classification of the image’s x and y derivatives. Labelling each x and y derivative produces estimates of the derivatives of the shading and reflectance images. Each derivative represents a set of linear constraints on the image and using both derivative images results in an over-constrained system. We recover each intrinsic image from its derivatives by using the method introduced by Weiss in [13] to find the pseudo-inverse of the over-constrained system of derivatives. 
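As a sketch of how such a pseudo-inverse recovery can be computed with the FFT (my own minimal version: it assumes circular boundary conditions and simple forward-difference derivative filters, which are not necessarily the paper's choices, and the unrecoverable image mean is set to zero):

```python
import numpy as np

def recover_from_derivatives(Fx, Fy):
    """Least-squares recovery of an image from its x- and y-derivative
    images, done pointwise in the Fourier domain. The derivative filters
    are circular forward differences, f(x+1) - f(x); the image mean,
    which the derivatives cannot determine, is set to zero."""
    H, W = Fx.shape
    # Transfer functions of the forward-difference filters.
    DX = (np.exp(2j * np.pi * np.arange(W) / W) - 1)[None, :]
    DY = (np.exp(2j * np.pi * np.arange(H) / H) - 1)[:, None]
    num = np.conj(DX) * np.fft.fft2(Fx) + np.conj(DY) * np.fft.fft2(Fy)
    denom = np.abs(DX) ** 2 + np.abs(DY) ** 2
    denom[0, 0] = 1.0          # avoid 0/0 at the DC term
    S_hat = num / denom
    S_hat[0, 0] = 0.0          # zero-mean convention
    return np.real(np.fft.ifft2(S_hat))
```

Feeding in the exact derivatives of an image returns the image up to its mean, confirming that the Fourier-domain division implements the pseudo-inverse of the derivative system.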
If fx and fy are the filters used to compute the x and y derivatives, and Fx and Fy are the estimated derivatives of the shading image, then the shading image S(x, y) is: S(x, y) = g ⋆ [(fx(−x, −y) ⋆ Fx) + (fy(−x, −y) ⋆ Fy)] (1) where ⋆ is convolution, f(−x, −y) is a reversed copy of f(x, y), and g is the solution of g ⋆ [(fx(−x, −y) ⋆ fx(x, y)) + (fy(−x, −y) ⋆ fy(x, y))] = δ (2) The reflectance image is found in the same fashion. One nice property of this technique is that the computation can be done using the FFT, making it more computationally efficient.

3 Classifying Derivatives

With an architecture for recovering intrinsic images, the next step is to create the classifiers to separate the underlying processes in the image. Our system uses two classifiers: one which uses color information to separate shading and reflectance derivatives, and a second classifier that uses local image patterns to classify each derivative.

Figure 1: Example computed using only color information to classify derivatives. To facilitate printing, the intrinsic images have been computed from a gray-scale version of the image. The color information is used solely for classifying derivatives in the gray-scale copy of the image.

3.1 Using Color Information

Our system takes advantage of the property that changes in color between pixels indicate a reflectance change [10]. When surfaces are diffuse, any changes in a color image due to shading should affect all three color channels proportionally. Assume two adjacent pixels in the image have values c1 and c2, where c1 and c2 are RGB triplets. If the change between the two pixels is caused by shading, then only the intensity of the color changes and c2 = αc1 for some scalar α. If c2 ≠ αc1, the chromaticity of the colors has changed and the color change must have been caused by a reflectance change. A chromaticity change in the image indicates that the reflectance must have changed at that point.
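A minimal sketch of this color test (my own code; the function name and the threshold value are invented for illustration) normalizes the two RGB triplets and thresholds the cosine of the angle between them:

```python
import numpy as np

def is_reflectance_change(c1, c2, cos_threshold=0.999):
    """True if the change between two RGB pixels looks like a reflectance
    change (a chromaticity shift), False if it is consistent with pure
    shading, i.e., c2 = alpha * c1 for some scalar alpha."""
    c1_hat = np.asarray(c1, dtype=float)
    c2_hat = np.asarray(c2, dtype=float)
    c1_hat /= np.linalg.norm(c1_hat)
    c2_hat /= np.linalg.norm(c2_hat)
    # For pure shading the normalized colors coincide and the dot product is 1.
    return float(c1_hat @ c2_hat) < cos_threshold
```

Halving a pixel's brightness leaves its chromaticity unchanged and is accepted as shading; swapping the red and blue channels changes the chromaticity and is flagged as a reflectance change.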
To find chromaticity changes, we treat each RGB triplet as a vector and normalize them to create ˆc1 and ˆc2. We then use the angle between ˆc1 and ˆc2 to find reflectance changes. When the change is caused by shading, (ˆc1 · ˆc2) equals 1. If (ˆc1 · ˆc2) is below a threshold, then the derivative associated with the two colors is classified as a reflectance derivative. Using only the color information, this approach is similar to that used in [6]. The primary difference is that our system classifies the vertical and horizontal derivatives independently. Figure 1 shows an example of the results produced by the algorithm. The classifier marked all of the reflectance areas correctly and the text is cleanly removed from the bottle. This example also demonstrates the high quality reconstructions that can be obtained by classifying derivatives.

3.2 Using Gray-Scale Information

While color information is useful, it is not sufficient to properly decompose images. A change in color intensity could be caused by either shading or a reflectance change. Using only local color information, color intensity changes cannot be classified properly. Fortunately, shading patterns have a unique appearance which can be discriminated from most common reflectance patterns. This allows us to use the local gray-scale image pattern surrounding a derivative to classify it. The basic feature of the gray-scale classifier is the absolute value of the response of a linear filter. We refer to a feature computed in this manner as a non-linear filter. The output of a non-linear filter, F, given an input patch Ip, is F = |Ip ⋆ w| (3) where ⋆ is convolution and w is a linear filter. The filter w is the same size as the image patch Ip, and we only consider the response at the center of Ip. This makes the feature a function from a patch of image data to a scalar response. This feature could also be viewed as the absolute value of the dot product of Ip and w.
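Viewed as a dot product, the feature and the threshold test built on top of it are each one line of code. A sketch (the 3x3 horizontal-derivative filter below is a made-up stand-in for the paper's oriented Gaussian-derivative filters):

```python
import numpy as np

def nonlinear_filter_response(patch, w):
    """F = |patch . w|: absolute response of a linear filter w, the same
    size as the patch, evaluated at the patch centre."""
    return abs(float(np.sum(patch * w)))

def weak_classify(patch, w, threshold):
    """AdaBoost-style weak classifier: a threshold test on one feature."""
    return nonlinear_filter_response(patch, w) > threshold

# A hypothetical 3x3 horizontal-derivative filter responds to vertical edges.
w = np.array([[-1.0, 0.0, 1.0]] * 3)
edge = np.array([[0.0, 0.0, 1.0]] * 3)   # step edge inside the patch
flat = np.zeros((3, 3))
```

The step-edge patch produces a large absolute response and passes the threshold, while the flat patch produces zero and fails it; the absolute value makes the feature indifferent to the edge's polarity.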
Figure 2: Example images from the training set. The first two are examples of reflectance changes and the last three are examples of shading.

Figure 3: Results obtained using the gray-scale classifier: (a) Original Image, (b) Shading Image, (c) Reflectance Image.

We use the responses of linear filters as the basis for our feature, in part, because they have been used successfully for characterizing [9] and synthesizing [7] images of textured surfaces. The non-linear filters are used to classify derivatives with a classifier similar to that used by Tieu and Viola in [12]. This classifier uses the AdaBoost [4] algorithm to combine a set of weak classifiers into a single strong classifier. Each weak classifier is a threshold test on the output of one non-linear filter. At each iteration of the AdaBoost algorithm, a new weak classifier is chosen by choosing a non-linear filter and a threshold. The filter and threshold are chosen greedily, by finding the combination that performs best on the re-weighted training set. The linear filter in each non-linear filter is chosen from a set of oriented first and second derivative of Gaussian filters. The training set consists of a mix of images of rendered fractal surfaces and images of shaded ellipses placed randomly in the image. Examples of reflectance changes were created using images of random lines and images of random ellipses painted onto the image. Samples from the training set are shown in Figure 2. In the training set, the illumination is always coming from the right side of the image. When evaluating test images, the classifier will assume that the test image is also lit from the right. Figure 3 shows the results of our system using only the gray-scale classifier. The results can be evaluated by thinking of the shading image as how the scene should appear if it were made entirely of gray plastic. The reflectance image should appear very flat, with the three-dimensional depth cues placed in the shading image.
Our system performs well on the image shown in Figure 3. The shading image has a very uniform appearance, with almost all of the effects of the reflectance changes placed in the reflectance image. The examples shown are computed without taking the log of the input image before processing it. The input images are uncalibrated and ordinary photographic tonescale is very similar to a log transformation. Errors from not taking the log of the input image first would cause one intrinsic image to modulate the local brightness of the other. However, this does not occur in the results.

(a) (b) (c) (d)
Figure 4: An example where propagation is needed. The smile from the pillow image in (a) has been enlarged in (b). Figures (c) and (d) contain an example of shading and a reflectance change, respectively. Locally, the center of the mouth in (b) is as similar to the shading example in (c) as it is to the example reflectance change in (d).

(a) Original Image (b) Shading Image (c) Reflectance Image
Figure 5: The pillow from Figure 4. This is found by combining the local evidence from the color and gray-scale classifiers, then using Generalized Belief Propagation to propagate local evidence.

4 Propagating Evidence While the classifier works well, there are still areas in the image where the local information is ambiguous. An example of this is shown in Figure 4. When compared to the example shading and reflectance change in Figure 4(c) and 4(d), the center of the mouth in Figure 4(b) is equally well classified with either label. However, the corners of the mouth can be classified as being caused by a reflectance change with little ambiguity. Since the derivatives in the corner of the mouth and the center all lie on the same image contour, they should have the same classification. A mechanism is needed to propagate information from the corners of the mouth, where the classification is clear, into areas where the local evidence is ambiguous.
This will allow areas where the classification is clear to disambiguate those areas where it is not. In order to propagate evidence, we treat each derivative as a node in a Markov Random Field with two possible states, indicating whether the derivative is caused by shading or caused by a reflectance change. Setting the compatibility functions between nodes correctly will force nodes along the same contour to have the same classification. 4.1 Model for the Potential Functions Each node in the MRF corresponds to the classification of a derivative. We constrain the compatibility functions for two neighboring nodes, xi and xj, to be of the form

ψ(xi, xj) = [ β      1 − β
              1 − β      β ]   (4)

with 0 ≤ β ≤ 1, where the rows and columns are indexed by the two states of xi and xj. The term β controls how much the two nodes should influence each other. Since derivatives along an image contour should have the same classification, β should be close to 1 when two neighboring derivatives are along a contour and should be 0.5 when no contour is present. Since β depends on the image at each point, we express it as β(Ixy), where Ixy is the image information at some point. To ensure that β(Ixy) lies between 0 and 1, it is modelled as β(Ixy) = g(z(Ixy)), where g(·) is the logistic function and z(Ixy) has a large response along image contours. 4.2 Learning the Potential Functions The function z(Ixy) is based on two local image features: the magnitude of the image gradient and the difference in orientation between the gradient and the orientation of the graph edge. These features reflect our heuristic that derivatives along an image contour should have the same classification. The difference in orientation between a horizontal graph edge and the image contour, ˆφ, is found from the orientation of the image gradient, φ. Assuming that −π/2 ≤ φ ≤ π/2, the angle between a horizontal edge and the image gradient, ˆφ, is ˆφ = |φ|. For vertical edges, ˆφ = |φ| − π/2. To find the values of z(·) we maximize the probability of a set of the training examples over the parameters of z(·).
The examples are taken from the same set used to train the gray-scale classifiers. The probability of the training samples is

P = (1/Z) ∏_{(i,j)} ψ(xi, xj)   (5)

where the product ranges over all pairs (i, j) of neighboring nodes in the MRF and Z is a normalization constant. Note that each ψ(·) is a function of z(Ixy). The function relating the image features to ψ(·), z(·), is chosen to be a linear function and is found by maximizing Equation 5 over a set of training images similar to those used to train the local classifier. In order to simplify the training process, we approximate the true probability in Equation 5 by assuming that Z is constant. Doing so leads to the following value of z(·):

z(ˆφ, |∇I|) = −1.2 × ˆφ + 1.62 × |∇I| + 2.3   (6)

where |∇I| is the magnitude of the image gradient and both ˆφ and |∇I| have been normalized to be between 0 and 1. These measures break down in areas with a weak gradient, so we set β(Ixy) to 0.5 for regions of the image with a gradient magnitude less than 0.05. Combined with the values learned for z(·), this effectively limits β to the range 0.5 ≤ β ≤ 1. Larger values of z(·) correspond to a belief that the derivatives connected by the edge should have the same value, while negative values signify that the derivatives should have a different value.

(a) Original Image (b) Shading Image (c) Reflectance Image
Figure 6: Example generated by combining color and gray-scale information, along with using propagation.

The values in Equation 6 correspond with our expected results; two derivatives are constrained to have the same value when they are along an edge in the image that has a similar orientation to the edge in the MRF connecting the two nodes. 4.3 Inferring the Correct Labelling Once the compatibility functions have been learned, the label of each derivative can be inferred.
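The learned coupling of Equations 4 and 6 can be computed directly; the following sketch implements β(Ixy) = g(z(ˆφ, |∇I|)) with the published coefficients and the 0.05 weak-gradient cutoff, and builds the corresponding compatibility matrix.

```python
import math

def beta(phi_hat, grad_mag):
    """Compatibility strength between two neighboring derivative nodes,
    following Equation 6.  phi_hat is the normalized (in [0, 1]) angle
    between the MRF edge and the image gradient, grad_mag the normalized
    gradient magnitude.  beta = g(z) with g the logistic function; for
    gradient magnitudes below 0.05 the nodes are left uncoupled
    (beta = 0.5).
    """
    if grad_mag < 0.05:
        return 0.5
    z = -1.2 * phi_hat + 1.62 * grad_mag + 2.3
    return 1.0 / (1.0 + math.exp(-z))

def psi(b):
    """Pairwise potential of Equation 4 for a given beta."""
    return [[b, 1.0 - b], [1.0 - b, b]]

# A strong contour aligned with the edge couples the nodes tightly ...
print(beta(0.0, 1.0))   # close to 1
# ... while a flat region leaves them independent.
print(beta(0.0, 0.01))  # 0.5
```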
The local evidence for each node in the MRF is obtained from the results of the color classifier and from the gray-scale classifier by assuming that the two are statistically independent. It is necessary to use the color information because propagation cannot help in areas where the gray-scale classifier misses an edge altogether. In Figure 5, the cheek patches on the pillow, which are pink in the color image, are missed by the gray-scale classifier, but caught by the color classifier. For the results shown, we used the results of the AdaBoost classifier to classify the gray-scale images and used the method suggested by Friedman et al. to obtain the probability of the labels [5]. We used the Generalized Belief Propagation algorithm [14] to infer the best label of each node in the MRF because ordinary Belief Propagation performed poorly in areas with both weak local evidence and strong compatibility constraints. The results of using color, grayscale information, and propagation can be seen in Figure 5. The ripples on the pillow are correctly identified as being caused by shading, while the face is correctly identified as having been painted on. In a second example, shown in Figure 6, the algorithm correctly identifies the change in reflectance between the sweatshirt and the jersey and correctly identifies the folds in the clothing as being caused by shading. There are some small shading artifacts in the reflectance image, especially around the sleeves of the sweatshirt, presumably caused by particular shapes not present in the training set. All of the examples were computed using ten non-linear filters as input for the AdaBoost gray-scale classifier. 5 Discussion We have presented a system that is able to use multiple cues to produce shading and reflectance intrinsic images from a single image. This method is also able to produce satisfying results for real images. 
The most computationally intensive steps for recovering the shading and reflectance images are computing the local evidence, which takes about six minutes on a 700MHz Pentium for a 256 × 256 image, and running the Generalized Belief Propagation algorithm. Belief propagation was used on both the x and y derivative images and took around six minutes to run 200 iterations on each image. The pseudo-inverse process took under 5 seconds. The primary limitation of this method lies in the classifiers. For each type of surface, the classifiers must incorporate knowledge about the structure of the surface and how it appears when illuminated. The present classifiers operate at a single spatial scale; however, the MRF framework allows the integration of information from multiple scales. Acknowledgments Portions of this work were completed while W.T.F. was a Senior Research Scientist and M.F.T. was a summer intern at Mitsubishi Electric Research Labs. This work was supported by an NDSEG fellowship to M.F.T., by NIH Grant EY11005-04 to E.H.A., by a grant from NTT to E.H.A., and by a contract with Unilever Research. References [1] H. G. Barrow and J. M. Tenenbaum. Recovering intrinsic scene characteristics from images. In Computer Vision Systems, pages 3–26. Academic Press, 1978. [2] M. Bell and W. T. Freeman. Learning local evidence for shading and reflection. In Proceedings International Conference on Computer Vision, 2001. [3] W. T. Freeman, E. C. Pasztor, and O. T. Carmichael. Learning low-level vision. International Journal of Computer Vision, 40(1):25–47, 2000. [4] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997. [5] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: A statistical view of boosting. The Annals of Statistics, 28(2):337–407, 2000. [6] B. V. Funt, M. S. Drew, and M. Brockington. Recovering shading from color images.
In G. Sandini, editor, ECCV-92: Second European Conference on Computer Vision, pages 124–132. Springer-Verlag, May 1992. [7] D. Heeger and J. Bergen. Pyramid-based texture analysis/synthesis. In Computer Graphics Proceedings, SIGGRAPH 95, pages 229–238, August 1995. [8] E. H. Land and J. J. McCann. Lightness and retinex theory. Journal of the Optical Society of America, 61:1–11, 1971. [9] T. Leung and J. Malik. Recognizing surfaces using three-dimensional textons. In IEEE International Conference on Computer Vision, 1999. [10] J. M. Rubin and W. A. Richards. Color vision and image intensities: When are changes material? Biological Cybernetics, 45:215–226, 1982. [11] P. Sinha and E. H. Adelson. Recovering reflectance in a world of painted polyhedra. In Fourth International Conference on Computer Vision, pages 156–163. IEEE, 1993. [12] K. Tieu and P. Viola. Boosting image retrieval. In Proceedings IEEE Computer Vision and Pattern Recognition, volume 1, pages 228–235, 2000. [13] Y. Weiss. Deriving intrinsic images from image sequences. In Proceedings International Conference on Computer Vision, Vancouver, Canada, 2001. IEEE. [14] J. Yedidia, W. T. Freeman, and Y. Weiss. Generalized belief propagation. In Advances in Neural Information Processing Systems 13, pages 689–695, 2001.
2002
A Note on the Representational Incompatibility of Function Approximation and Factored Dynamics Eric Allender Computer Science Department Rutgers University allender@cs.rutgers.edu Sanjeev Arora Computer Science Department Princeton University arora@cs.princeton.edu Michael Kearns Department of Computer and Information Science University of Pennsylvania mkearns@cis.upenn.edu Cristopher Moore Department of Computer Science University of New Mexico moore@santafe.edu Alexander Russell Department of Computer Science and Engineering University of Connecticut acr@cse.uconn.edu Abstract We establish a new hardness result that shows that the difficulty of planning in factored Markov decision processes is representational rather than just computational. More precisely, we give a fixed family of factored MDPs with linear rewards whose optimal policies and value functions simply cannot be represented succinctly in any standard parametric form. Previous hardness results indicated that computing good policies from the MDP parameters was difficult, but left open the possibility of succinct function approximation for any fixed factored MDP. Our result applies even to policies which yield a polynomially poor approximation to the optimal value, and highlights interesting connections with the complexity class of Arthur-Merlin games. 1 Introduction While a number of different representational approaches to large Markov decision processes (MDPs) have been proposed and studied over recent years, relatively little is known about the relationships between them. For example, in function approximation, a parametric form is proposed for the value functions of policies. Presumably, for any assumed parametric form (for instance, linear value functions), rather strong constraints on the underlying stochastic dynamics and rewards may be required to meet the assumption. However, a precise characterization of such constraints seems elusive. 
Similarly, there has been recent interest in making parametric assumptions on the dynamics and rewards directly, as in the recent work on factored MDPs. Here it is known that the problem of computing an optimal policy from the MDP parameters is intractable (see [7] and the references therein), but exactly what the representational constraints on such policies are has remained largely unexplored. In this note, we give a new intractability result for planning in factored MDPs that exposes a noteworthy conceptual point missing from previous hardness results. Prior intractability results for planning in factored MDPs established that the problem of computing optimal policies from MDP parameters is hard, but left open the possibility that for any fixed factored MDP, there might exist a compact, parametric representation of its optimal policy. This would be roughly analogous to standard NP-complete problems such as graph coloring — any 3-colorable graph has a “compact” description of its 3-coloring, but it is hard to compute it from the graph. Here we dismiss even this possibility. Under a standard and widely believed complexity-theoretic assumption (that is even weaker than the assumption that NP does not have polynomial size Boolean circuits), we prove that a specific family of factored MDPs does not even possess “succinct” policies. By this we mean something extremely general — namely, that for each MDP in the family, it cannot have an optimal policy represented by an arbitrary Boolean circuit whose size is bounded by a polynomial in the size of the MDP description. Since such circuits can represent essentially any standard parametric functional form, we are showing that there exists no “reasonable” representation of good policies in factored MDPs, even if we ignore the problem of how to compute them from the MDP description. This result holds even if we ask only for policies whose expected return approximates the optimal within a polynomial factor.
(With a slightly stronger complexity-theoretic assumption, it follows that obtaining an approximation even within an exponential factor is impossible.) Thus, while previous results established that there was at least a computational barrier to going from factored MDP parameters to good policies, here we show that the barrier is actually representational, a considerably worse situation. The result highlights the fact that even when making strong and reasonable assumptions about one representational aspect of MDPs (such as value functions or dynamics), there is no reason in general for this to lead to any nontrivial restrictions on the others. The construction in our result is ultimately rather simple, and relies on powerful results developed in complexity theory over the last decade. In particular, we exploit striking results on the complexity class associated with computational protocols known as Arthur-Merlin games. We note that recent and independent work by Liberatore [5] establishes results similar to ours. The primary difference between our work and Liberatore’s is that our results prove intractability of approximation and rely on different proof techniques. 2 DBN-Markov Decision Processes A Markov decision process is a tuple (S, A, P, R), where S is a set of states, A is a set of actions, P is a family of probability distributions on S, one for each state-action pair, and R is a reward function. We will denote by P(s' | s, a) the probability that action a in state s results in state s'. When started in a state s0, and provided with a sequence of actions a0, a1, a2, ..., the MDP traverses a sequence of states s0, s1, s2, ..., where each s_{t+1} is a random sample from the distribution P(· | s_t, a_t). Such a state sequence is called a path. The γ-discounted return associated with such a path is Σ_{t≥0} γ^t R(s_t). A policy π is a mapping from states to actions. When the action sequence is generated according to this policy, we denote by s0, s1, s2, ... the state sequence produced as above.
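The discounted return of a path is a straightforward computation; a minimal sketch:

```python
def discounted_return(rewards, gamma):
    """gamma-discounted return of a path: sum over t of gamma**t * R(s_t).

    `rewards` is the sequence R(s_0), R(s_1), ... along the path.
    """
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# A path that first reaches a reward-1 state at step 3, with gamma = 0.9:
print(discounted_return([0, 0, 0, 1], 0.9))  # 0.9**3 = 0.729
```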
A policy π* is optimal if for all policies π and all states s, we have V^π*(s) ≥ V^π(s), where V^π(s) denotes the expected discounted return of following π from s. We consider MDPs where the transition law P is represented as a dynamic Bayes net, or DBN-MDPs. Namely, if the state space has size 2^n, then P is represented by a two-layer Bayes net. There are n + 1 variables in the first layer, representing the n state variables at any given time t, along with the action chosen at time t. There are n variables in the second layer, representing the state variables at time t + 1. All directed edges in the Bayes net go from variables in the first layer to variables in the second layer; for our result, it suffices to consider Bayes nets in which the indegree of every second-layer node is bounded by some constant. Each second layer node has a conditional probability table (CPT) describing its conditional distribution for every possible setting of its parents in the Bayes net. Thus the stochastic dynamics of the DBN-MDP are entirely described by the Bayes net in the standard way; the next-state distribution for any state is given by simply fixing the first layer nodes to the settings given by the state. Any given action choice then yields the next-state distribution according to standard Bayes net semantics. We shall assume throughout that the rewards are a linear function of state. 3 Arthur-Merlin Games The complexity class AM is a probabilistic extension of the familiar class NP, and is typically described in terms of Arthur-Merlin games (see [2]). An Arthur-Merlin game for a language L is played by two players (Turing machines): V (the Verifier, often referred to as Arthur in the literature), who is equipped with a random coin and only modest (polynomial-time bounded) computing power; and P (the Prover, often referred to as Merlin), who is computationally unbounded. Both are supplied with the same input x of length n bits. For instance, x might be some standard encoding of an undirected graph G, and P might be interested in proving to V that G is 3-colorable. Thus, P seeks to prove that x ∈ L; V is skeptical but willing to listen.
At each step of the conversation, V flips a fair coin, perhaps several times, and reports the resulting bits to P; this is interpreted as a “question” or “challenge” to P. In the graph coloring example, it might be reasonable to interpret the random bits generated by V as identifying a random edge in G, with the challenge to P being to identify the colors of the nodes on each end of this edge (which had better be different, and consistent with any previous responses of P, if V is to be convinced). Thus P responds with some number of bits, and the protocol proceeds to the next round. After poly(n) steps, V decides, based upon the conversation, whether to accept that x ∈ L or reject. We say that the language L is in the class AM[poly] if there is a (polynomial-time) algorithm V such that: When x ∈ L, there is always a strategy for P to generate the responses to the random challenges that causes V to accept. When x ∉ L, regardless of how P responds to the random challenges, with probability at least 2/3, V rejects. Here the probability is taken over the random challenges. In other words, we ask that there be a polynomial time algorithm V such that if x ∈ L, there is always some response to the random challenge sequence that will convince V of this fact; but if x ∉ L, then every way of responding to the random challenge sequence has an overwhelming probability of being “caught” by V. What is the power of the class AM[poly]? From the definition, it should be clear that every language in NP has an (easy) AM[poly] protocol in which P, the prover, ignores the random challenges, and simply presents V with the standard NP witness to x ∈ L (e.g., a specific 3-coloring of the graph G). More surprisingly, every language in the class PSPACE (the class of all languages that can be recognized in deterministic polynomial space, conjectured to be much larger than NP) also has an AM[poly] protocol, a beautiful and important result due to [6, 9]. (For definitions of classes such as P, NP, and PSPACE, see [8, 4].)
If a language L has an Arthur-Merlin game where Arthur asks only a constant number of questions, we say that L ∈ AM. NP corresponds to Arthur-Merlin games where Arthur says nothing, and thus clearly NP ⊆ AM. Restricting the number of questions seems to put severe limitations on the power of Arthur-Merlin games. Though AM[poly] = PSPACE, it is generally believed that NP ⊆ AM ⊊ PSPACE. 4 DBN-MDPs Requiring Large Policies In this section, we outline our construction proving that factored MDPs may not have any succinct representation for (even approximately) optimal policies, and conclude this note with a formal statement of the result. Let us begin by drawing a high-level analogy with the MDP setting. Let L be a language in PSPACE, and let V and P be the Turing machines for the AM[poly] protocol for L. Since V is simply a Turing machine, it has some internal configuration (sufficient to completely describe the tape contents, read/write head position, abstract computational state, and so on) at any given moment in the protocol with P. Since we assume P is all-powerful (computationally unbounded), we can assume that P has complete knowledge of this internal state of V at all times. The protocol at round t can thus be viewed as follows: V is in some state/configuration c_t; a random bit sequence r_t (the challenge) is generated; based on c_t and r_t, P computes some response or action a_t; and based on r_t and a_t, V enters its next configuration c_{t+1}. From this description, several observations can be made: V’s internal configuration constitutes state in the Markovian sense — combined with the action a_t, it entirely determines the next-state distribution. The dynamics are probabilistic due to the influence of the random bit sequence r_t. We can thus view P as implementing a policy in the MDP determined by (the internal configuration of) V — P’s actions, together with the stochastic r_t, determine the evolution of the c_t. Informally, we might imagine defining the total return to P to be 1 if P causes V to accept, and 0 if V rejects.
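The protocol-as-MDP correspondence can be sketched as follows. Everything named here (`verifier_step`, `policy`, the toy one-round protocol) is an illustrative stand-in, not part of the paper's construction: the verifier's configuration plays the role of the state, the prover's responses are the actions, and the verifier's coin flips supply the stochasticity.

```python
import random

def episode_return(policy, verifier_step, c0, rng=random):
    """Run one episode of the protocol viewed as an MDP.

    verifier_step(c, a, r) -> (next_config, done, accepted) advances the
    verifier one round given its configuration c, the prover's action a,
    and the random challenge bit r.  policy(c, r) -> a is the prover's
    (possibly challenge-dependent) response.  The return is 1 if the
    verifier accepts, 0 if it rejects.
    """
    c = c0
    while True:
        r = rng.getrandbits(1)   # the verifier's random challenge
        a = policy(c, r)         # the prover's response
        c, done, accepted = verifier_step(c, a, r)
        if done:
            return 1 if accepted else 0

# Toy one-round protocol: V accepts iff the prover echoes the coin flip.
echo = lambda c, r: r
one_round = lambda c, a, r: (None, True, a == r)
print(episode_return(echo, one_round, 0))  # 1
```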
The MDP so defined in this manner is not arbitrarily complex — in particular, the transition dynamics are defined by the polynomial-time Turing machine V. At a high level, then, if every MDP so defined by a language in AM[poly] had an “efficient” policy, then something remarkable would occur: the arbitrary power allowed to P in the definition of the class would have been unnecessary. We shall see that this would have extraordinary and rather implausible complexity-theoretic implications. For the moment, let us simply sketch the refinements to this line of thought that will allow us to make the connection to factored MDPs: we will show that the MDPs defined above can actually be represented by DBN-MDPs with only constant indegree and a linear reward function. As suggested, this will allow us to assert rather strong negative results about even the existence of efficient policies, even when we ask for rather weak approximation to the optimal return. We now turn to the problem of planning in a DBN-MDP. Typically, one might like to have a “general-purpose” planning procedure — a procedure that takes as input a description of a DBN-MDP M, and returns a description of the optimal policy for M. This is what is typically meant by the term planning, and we note that it demands a certain kind of uniformity — a single planning algorithm that can efficiently compute a succinct representation of the optimal policy for any DBN-MDP. Note that the existence of such a planning algorithm would certainly imply that every DBN-MDP has a succinct representation of its optimal policy — but the converse does not hold. It could be that the difficulty of planning in DBN-MDPs arises from the demand of uniformity — that is, that every DBN-MDP possesses a succinct optimal policy, but the problem of computing it from the MDP parameters is intractable.
This would be analogous to problems in NP — for example, every 3-colorable graph obviously has a succinct description of a 3-coloring, but it is difficult to compute it from the graph. As mentioned in the introduction, it has been known for some time that planning in this uniform sense is computationally intractable. Here we establish the stronger and conceptually important result that it is not the uniformity giving rise to the difficulty, but rather that there simply exist DBN-MDPs in which the optimal policy does not possess a succinct representation in any natural parameterization. We will present a specific family of DBN-MDPs {M_n} (where M_n has 2^n states, each described by n components), and show that, under a standard complexity-theoretic assumption, the corresponding family of optimal policies cannot be represented by arbitrary Boolean circuits of size polynomial in n. We note that such circuits constitute a universal representation of efficiently computable functions, and all of the standard parametric forms in wide use in AI and statistics can be computed by such circuits. We now provide the details of the construction. Let L be any language in PSPACE, and let V be a polynomial-time Turing machine running in time T(n) on inputs of length n, implementing the algorithm of “Arthur” in the AM[poly] protocol for L. Let m(n) be the maximum number of bits needed to write down a complete configuration of V that may arise during computation on an input of length n (so m(n) = O(T(n)), since no computation taking time T(n) can consume more than T(n) space). Each state of our DBN-MDP will have m(n) components, each corresponding to one bit of the encoding of a configuration. No states will have rewards, except for the accepting states, which have reward 1. (Without loss of generality, we may assume that V never enters an accepting state other than at time T(n).) Note that we can encode configurations so that there is one bit position (say, the first bit of the state vector) that records if the current state of V is accepting or not.
Thus the reward function is obviously linear (it is simply 1 times the first component). There are two actions: {0, 1}. Each action advances the simulation of the AM game by one time step. There are three types of steps: 1. Steps where P is choosing a bit to send to V; action b corresponds to choosing to send a “b” to V. 2. Steps where V is flipping a coin; each action yields probability 1/2 of having the coin come up “heads”. 3. Steps where V is doing deterministic computation; each action moves the computation ahead one step. It is straightforward to encode this as a DBN-MDP. Note that each bit of the next move relation of a Turing machine depends on only O(1) bits of the preceding configuration (i.e., on the bits encoding the contents of the neighboring cells, the bits encoding the presence or absence of the input head in one of those cells, and the bits encoding the finite state information of the Turing machine). Thus the DBN-MDP describing V on inputs of length n has constant indegree; each bit is connected to the bits on which it depends. Note that a path in this MDP corresponding to an accepting computation of V on an input of length n has total reward 1; a rejecting path has reward 0. A routine calculation shows that the expected reward of the optimal policy is equal to the fraction of coin flip sequences that cause V to accept when communicating with an optimal P. That is, Prob[V accepts] = Optimal expected reward. With the construction above, we can now describe our result: Theorem 1. If PSPACE is not contained in P/POLY, then there is a family of DBN-MDPs M_n, n ≥ 1, such that for any two polynomials, p and q, there exist infinitely many n such that no circuit of size q(n) can compute a policy having expected reward greater than 1/p(n) times the optimum.
Before giving the formal proof, we remark that the assumption that PSPACE is not contained in P/POLY is standard and widely believed, and informally asserts that not everything that can be computed in polynomial space can be computed by a non-uniform family of small circuits. Proof. Let L be any language in PSPACE that is not in P/POLY, and let M_n be as described above. Suppose, contrary to the statement of the Theorem, that for large enough n there is indeed a circuit C_n of size q(n) computing a policy for M_n whose return is within a factor 1/p(n) of optimal. We now consider the probabilistic circuit D_n that operates as follows. D_n takes a string x as input, and estimates the expected return of the policy given by C_n (which is the same as the probability that the prover associated with C_n is able to convince V that x ∈ L). Specifically, D_n builds the state s corresponding to the start state of the protocol on input x, and then repeats the following procedure k times: Given state s, if s is a state encoding a configuration in which it is P’s turn, use C_n to compute the message sent by P and set s to the new state of the AM protocol. Otherwise, if s is a state encoding a configuration in which it is V’s turn, flip a coin at random and set s to the new state of the AM protocol. Repeat until an accept or reject state is encountered. If any of these repetitions results in an accept, D_n accepts; otherwise D_n rejects. Note now that if x ∈ L, then the probability that D_n rejects is no more than (1 − 1/p(n))^k, since in this case we are guaranteed that each iteration will accept with probability at least 1/p(n). On the other hand, if x ∉ L, then D_n accepts with probability no more than k times the soundness error of the protocol, since each iteration accepts with probability at most that error; by standard amplification of the AM protocol the soundness error can be made exponentially small, so that for a suitable polynomial choice of k both error probabilities are small. As D_n has polynomial size and a probabilistic circuit can be simulated by a deterministic one of essentially the same size, it follows that L is in P/POLY, a contradiction.
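The amplification step in the proof — run the protocol repeatedly with fresh verifier coins and accept if any run accepts — can be sketched as below. The `run_once` routine is a stand-in for one full simulation of the AM protocol under the candidate policy (not from the paper); the toy protocol and seed are illustrative.

```python
import random

def monte_carlo_accept(run_once, k, rng=random):
    """Sketch of the probabilistic circuit in the proof: simulate the
    protocol k independent times, supplying the candidate policy's
    responses at the prover's turns and fresh coin flips at the
    verifier's turns, and accept iff any repetition reaches an
    accepting state.  run_once(rng) -> True on acceptance.

    If each run accepts with probability at least p, the overall miss
    probability falls like (1 - p)**k.
    """
    return any(run_once(rng) for _ in range(k))

# Toy protocol: a single verifier coin flip, accepted with prob 1/2.
toy = lambda rng: rng.random() < 0.5
print(monte_carlo_accept(toy, 50, random.Random(0)))  # True
```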
It is worth mentioning that, by the worst-case-to-average-case reduction of [1], if PSPACE is not in P/POLY then we can select such a language L so that the circuit will perform badly on a non-negligible fraction of the states of M_n. That is, not only is it hard to find an optimal policy, it will be the case that every policy that can be expressed as a polynomial size circuit will perform very badly on very many inputs. Finally, we remark that by coupling the above construction with the approximate lower bound protocol of [3], one can prove (under a stronger assumption) that there are no succinct policies for the DBN-MDPs which even approximate the optimum return to within an exponential factor. Theorem 2. If PSPACE is not contained in AM, then there is a family of DBN-MDPs M_n, n ≥ 1, such that for any polynomial q there exist infinitely many n such that no circuit of size q(n) can compute a policy having expected reward greater than an exponentially small factor times the optimum. References [1] L. Babai, L. Fortnow, N. Nisan, and A. Wigderson. BPP has subexponential time simulations unless EXPTIME has publishable proofs. Computational Complexity, 3:307–318, 1993. [2] L. Babai and S. Moran. Arthur-Merlin games: a randomized proof system, and a hierarchy of complexity classes. Journal of Computer and System Sciences, 36(2):254–276, 1988. [3] S. Goldwasser and M. Sipser. Private coins versus public coins in interactive proof systems. Advances in Computing Research, 5:73–90, 1989. [4] D. Johnson. A catalog of complexity classes. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, volume A. The MIT Press, 1990. [5] P. Liberatore. The size of MDP factored policies. In Proceedings of AAAI 2002. AAAI Press, 2002. [6] C. Lund, L. Fortnow, H. Karloff, and N. Nisan. Algebraic methods for interactive proof systems. Journal of the ACM, 39(4):859–868, 1992. [7] M. Mundhenk, J. Goldsmith, C. Lusena, and E. Allender. Complexity of finite-horizon Markov decision process problems.
Journal of the ACM, 47(4):681–720, 2000. [8] C. Papadimitriou. Computational Complexity. Addison-Wesley, 1994. [9] A. Shamir. IP = PSPACE. Journal of the ACM, 39(4):869–877, 1992.
| 2002 | 204 | 2,219 |
A Prototype for Automatic Recognition of Spontaneous Facial Actions M.S. Bartlett, G. Littlewort, B. Braathen, T.J. Sejnowski, and J.R. Movellan Institute for Neural Computation and Department of Biology University of California, San Diego and Howard Hughes Medical Institute at the Salk Institute Email: {marni, gwen, bjorn, terry, javier}@inc.ucsd.edu Abstract We present ongoing work on a project for automatic recognition of spontaneous facial actions. Spontaneous facial expressions differ substantially from posed expressions, similar to how continuous, spontaneous speech differs from isolated words produced on command. Previous methods for automatic facial expression recognition assumed images were collected in controlled environments in which the subjects deliberately faced the camera. Since people often nod or turn their heads, automatic recognition of spontaneous facial behavior requires methods for handling out-of-image-plane head rotations. Here we explore an approach based on 3-D warping of images into canonical views. We evaluated the performance of the approach as a front-end for a spontaneous expression recognition system using support vector machines and hidden Markov models. This system employed general purpose learning mechanisms that can be applied to recognition of any facial movement. The system was tested for recognition of a set of facial actions defined by the Facial Action Coding System (FACS). We showed that 3D tracking and warping, followed by machine learning techniques applied directly to the warped images, is a viable and promising technology for automatic facial expression recognition. One exciting aspect of the approach presented here is that information about movement dynamics emerged out of filters which were derived from the statistics of images. 1 Introduction Much of the early work on computer vision applied to facial expressions focused on recognizing a few prototypical expressions of emotion produced on command (e.g., "smile").
These examples were collected under controlled imaging conditions with subjects deliberately facing the camera. Extending these systems to spontaneous facial behavior is a critical step forward for applications of this technology. Spontaneous facial expressions differ substantially from posed expressions, similar to how continuous, spontaneous speech differs from isolated words produced on command. Spontaneous facial expressions are mediated by a distinct neural pathway from posed expressions. The pyramidal motor system, originating in the cortical motor strip, drives voluntary facial actions, whereas involuntary, emotional facial expressions appear to originate in a subcortical motor circuit involving the basal ganglia, limbic system, and the cingulate motor area (e.g. [15]). Psychophysical work has shown that spontaneous facial expressions differ from posed expressions in a number of ways [6]. Subjects often contract different facial muscles when asked to pose an emotion such as fear versus when they are actually experiencing fear. (See Figure 1b.) In addition, the dynamics are different. Spontaneous expressions have a fast and smooth onset, with apex coordination, in which muscle contractions in different parts of the face peak at the same time. In posed expressions, the onset tends to be slow and jerky, and the muscle contractions typically do not peak simultaneously. Spontaneous facial expressions often contain much information beyond what is conveyed by basic emotion categories, such as happy, sad, or surprised. Faces convey signs of cognitive state such as interest, boredom, and confusion, conversational signals, and blends of two or more emotions. Instead of classifying expressions into a few basic emotion categories, the work presented here attempts to measure the full range of facial behavior by recognizing facial animation units that comprise facial expressions. The system is based on the Facial Action Coding System (FACS) [7]. 
FACS [7] is the leading method for measuring facial movement in behavioral science. It is a human judgment system that is presently performed without aid from computer vision. In FACS, human coders decompose facial expressions into action units (AUs) that roughly correspond to independent muscle movements in the face (see Figure 1). Ekman and Friesen described 46 independent facial movements, or "facial actions" (Figure 1). These facial actions are analogous to phonemes for facial expression. Over 7000 distinct combinations of such movements have been observed in spontaneous behavior.

[Figure 1: The Facial Action Coding System decomposes facial expressions into component actions. The three individual brow region actions, AU1 Inner Brow Raiser (Central Frontalis), AU2 Outer Brow Raiser (Lateral Frontalis), and AU4 Brow Lowerer (Corrugator, Depressor Supercilii, Depressor Glabellae), and selected combinations (1+2, 1+2+4, 1+4) are illustrated. When subjects pose fear they often perform 1+2 (top right), whereas spontaneous fear reliably elicits 1+2+4 (bottom right) [6].]

Advantages of FACS include: (1) Objectivity. It does not apply interpretive labels to expressions but rather a description of physical changes in the face. This enables studies of new relationships between facial movement and internal state, such as the facial signals of stress or fatigue. (2) Comprehensiveness. FACS codes for all independent motions of the face observed by behavioral psychologists over 20 years of study. (3) Robust link with ground truth. There are over 20 years of behavioral data on the relationships between FACS movement parameters and underlying emotional or cognitive states. Automated facial action coding would be effective for human-computer interaction tools and low-bandwidth facial animation coding, and would have a tremendous impact on behavioral science by making objective measurement more accessible.
Several groups have emerged that analyze facial expressions into elementary movements. For example, Essa and Pentland [8] and Yacoob and Davis [16] proposed methods to analyze expressions into elementary movements using an animation-style coding system inspired by FACS. Eric Petajan's group has also worked for many years on methods for automatic coding of facial expressions in the style of MPEG4 [5], which codes movement of a set of facial feature points. While coding standards like MPEG4 are useful for animating facial avatars, they are of limited use for behavioral research since, for example, MPEG4 does not encode some behaviorally relevant facial movements such as the muscle that circles the eye (the orbicularis oculi, which differentiates spontaneous from posed smiles [6]). It also does not encode the wrinkles and bulges that are critical for distinguishing some facial muscle activations that are difficult to differentiate using motion alone yet can have different behavioral implications (e.g., see Figure 1b). One other group has focused on automatic FACS recognition as a tool for behavioral research, led by Jeff Cohn and Takeo Kanade. They present an alternative approach based on traditional computer vision techniques, including edge detection and optic flow. A comparative analysis of our approaches is available in [1, 4, 10]. 2 Factorizing rigid head motion from nonrigid facial deformations The most difficult technical challenge that came with spontaneous behavior was the presence of out-of-plane rotations due to the fact that people often nod or turn their head as they communicate with others. Our approach to expression recognition is based on statistical methods applied directly to filter bank image representations. While in principle such methods may be able to learn the invariances underlying out-of-plane rotations, the amount of data needed to learn such invariances is likely to be impractical.
Instead, we addressed this issue by means of deformable 3D face models. We fit 3D face models to the image plane, texture those models using the original image frame, then rotate the model to frontal views, warp it to a canonical face geometry, and then render the model back into the image plane. (See Figures 2, 3, and 4.) This allowed us to factor out image variation due to rigid head rotations from variations due to nonrigid face deformations. The rigid transformations were encoded by the rotation and translation parameters of the 3D model. These parameters are retained for analysis of the relation of rigid head dynamics to emotional and cognitive state. Since our goal was to explore the use of 3D models to handle out-of-plane rotations for expression recognition, we first tested the system using hand-labeling to give the position of 8 facial landmarks. However, the approach can be generalized in a straightforward and principled manner to work with automatic 3D trackers, which we are presently developing [9]. Although human labeling can be highly precise, the labels employed here had substantial error due to inattention when the face moved. Mean deviation between two labelers was 4 pixels 8.7. Hence it may be realistic to suppose that a fully automatic head pose tracker may achieve at least this level of accuracy.

[Figure 2: Head pose estimation. (a) First, camera parameters and face geometry are jointly estimated using an iterative least squares technique. (b) Next, head pose is estimated in each frame using stochastic particle filtering; each particle is a head model at a particular orientation and scale.]

When landmark positions in the image plane are known, the problem of 3D pose estimation is relatively easy to solve. We begin with a canonical wire-mesh face model and adapt it to the face of a particular individual by using 30 image frames in which 8 facial features have been labeled by hand.
Using an iterative least squares triangulation technique, we jointly estimate camera parameters and the 3D coordinates of these 8 features. A scattered data interpolation technique is then used to modify the canonical 3D face model so that it fits the 8 feature positions [14]. Once camera parameters and 3D face geometry are known, we use a stochastic particle filtering approach [11] to estimate the most likely rotation and translation parameters of the 3D face model in each video frame. (See [2].) 3 Action unit recognition Database of spontaneous facial expressions. We employed a dataset of spontaneous facial expressions from freely behaving individuals. The dataset consisted of 300 Gigabytes of 640 x 480 color images, 8 bits per pixel, 60 fields per second, 2:1 interlaced. The video sequences contained out-of-plane head rotation up to 75 degrees. There were 17 subjects: 3 Asian, 3 African American, and 11 Caucasian. Three subjects wore glasses. The facial behaviors in one minute of video per subject were scored frame by frame by two teams of experts on the FACS system, one led by Mark Frank at Rutgers, and another led by Jeffrey Cohn at U. Pittsburgh. While the database we used was rather large for current digital video storage standards, in practice the number of spontaneous examples of each action unit in the database was relatively small. Hence, we prototyped the system on the three actions which had the most examples: Blinks (AU 45 in the FACS system), for which we used 168 examples provided by 10 subjects; Brow raises (AU 1+2), for which we had 48 total examples provided by 12 subjects; and Brow lower (AU 4), for which we had 14 total examples provided by 12 subjects. Negative examples for each category consisted of randomly selected sequences matched by subject and sequence length. These three facial actions have relevance to applications such as monitoring of alertness, anxiety, and confusion.
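The stochastic particle-filtering pose tracker described in Section 2 can be sketched in a deliberately simplified form: a single 1-D rotation angle instead of full rotation and translation parameters, with Gaussian random-walk dynamics and a Gaussian observation model. All models, constants, and names here are illustrative assumptions, not the paper's implementation; only the 100-particle count is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def observe(angle):
    """Hypothetical noisy measurement of the true head rotation angle."""
    return angle + rng.normal(scale=0.1)

n_particles = 100                                     # as in the pipeline
particles = rng.normal(scale=1.0, size=n_particles)   # initial pose hypotheses
true_angle = 0.8

for _ in range(20):                                   # one step per video frame
    # Propagate each particle through a random-walk dynamics model.
    particles = particles + rng.normal(scale=0.05, size=n_particles)
    # Reweight particles by the likelihood of the current observation.
    z = observe(true_angle)
    w = np.exp(-0.5 * ((z - particles) / 0.1) ** 2)
    w /= w.sum()
    # Resample: pose hypotheses with high likelihood survive.
    particles = rng.choice(particles, size=n_particles, p=w)

estimate = particles.mean()
```

The full tracker operates in the same propagate / reweight / resample loop, with each particle being an entire 3D head model at a particular orientation and scale.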
The system presented here employs general purpose learning mechanisms that can be applied to recognition of any facial action once sufficient training data is available. There is no need to develop special purpose feature measures to recognize additional facial actions.

[Figure 3: Flow diagram of the recognition system. First, head pose is estimated, and images are warped to frontal views and canonical face geometry. The warped images are then passed through a bank of Gabor filters. SVM's are then trained to classify facial actions from the Gabor representation in individual video frames. The output trajectories of the SVM's for full video sequences are then channeled to hidden Markov models.]

Recognition system. An overview of the recognition system is illustrated in Figure 3. Head pose was estimated in the video sequences using a particle filter with 100 particles. Face images were then warped onto a face model with canonical face geometry, rotated to frontal, and then projected back into the image plane. This alignment was used to define and crop a subregion of the face image containing the eyes and brows. The vertical position of the eyes was 0.67 of the window height. There were 105 pixels between the eyes and 120 pixels from eyes to mouth. Pixel brightnesses were linearly rescaled to [0,255]. Soft histogram equalization was then performed on the image gray-levels by applying a logistic filter with parameters chosen to match the mean and variance of the gray-levels in the neutral frame [13]. The resulting images were then convolved with a bank of Gabor kernels at 5 spatial frequencies and 8 orientations. Output magnitudes were normalized to unit length and then downsampled by a factor of 4. The Gabor representations were then channeled to a bank of support vector machines (SVM's). Nonlinear SVM's were trained to recognize facial actions in individual video frames.
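The Gabor stage just described (a bank at 5 spatial frequencies and 8 orientations, with output magnitudes normalized to unit length) can be sketched as follows. The kernel size, bandwidth, and the particular frequencies are illustrative assumptions, not values from the paper, and the "filtering" here is the response at a single location rather than a full convolution.

```python
import numpy as np

def gabor_kernel(freq, theta, size=21, sigma=4.0):
    """Complex Gabor kernel: an oriented complex exponential under a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * xr)

freqs = [1/4, 1/6, 1/8, 1/12, 1/16]                  # 5 spatial frequencies (assumed)
thetas = [i * np.pi / 8 for i in range(8)]           # 8 orientations
bank = [gabor_kernel(f, t) for f in freqs for t in thetas]

# Response of the whole bank at one image location, magnitudes taken and
# normalized to unit length, as in the pipeline description.
rng = np.random.default_rng(0)
patch = rng.random((21, 21))
mags = np.array([abs(np.vdot(k, patch)) for k in bank])
mags /= np.linalg.norm(mags)
```

In the full system this 40-dimensional magnitude response is computed at every (downsampled) pixel of the warped face image before being passed to the SVM bank.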
The training samples for the SVM's were the action peaks as identified by the FACS experts, and negative examples were randomly selected frames matched by subject. Generalization to novel subjects was tested using leave-one-out cross-validation. The SVM output was the margin (distance along the normal to the class partition). Trajectories of SVM outputs for the full video sequence of test subjects were then channeled to hidden Markov models (HMM's). The HMM's were trained to classify facial actions without using information about which frame contained the action peak. Generalization to novel subjects was again tested using leave-one-out cross-validation.

[Figure 4: User interface for the FACS recognition system. The face on the bottom right is an original frame from the dataset. Top right: estimate of head pose. Center image: warped to frontal view and canonical geometry. The curve shows the output of the blink detector for the video sequence. This frame is in the relaxation phase of a blink.]

4 Results Classifying individual frames with SVM's. SVM's were first trained to discriminate images containing the peak of blink sequences from randomly selected images containing no blinks. A nonlinear SVM applied to the Gabor representations obtained 95.9% correct for discriminating blinks from non-blinks for the peak frames. The nonlinear kernel was a function of the Euclidean distance between feature vectors, scaled by a constant. Recovering FACS dynamics. Figure 5a shows the time course of SVM outputs for complete sequences of blinks. Although the SVM was not trained to measure the amount of eye opening, it is an emergent property. In all time courses shown, the SVM outputs are test outputs (the SVM was not trained on the subject shown). Figure 5b shows the SVM trajectory when tested on a sequence with multiple peaks. The SVM outputs provide information about FACS dynamics that was previously unavailable by human coding due to time constraints.
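The use of the SVM margin (the signed distance to the class boundary) as a graded output can be sketched with scikit-learn on synthetic data. The features, class structure, and parameters below are hypothetical stand-ins for the Gabor feature vectors of peak versus non-event frames; this is not the paper's implementation.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-ins for Gabor feature vectors of "action peak" vs. "no action" frames.
pos = rng.normal(loc=+1.0, size=(50, 10))
neg = rng.normal(loc=-1.0, size=(50, 10))
X = np.vstack([pos, neg])
y = np.array([1] * 50 + [0] * 50)

clf = SVC(kernel="rbf").fit(X, y)

# decision_function returns the signed distance to the separating surface:
# the "margin" used in the text as a graded measure of action intensity.
margins = clf.decision_function(X)
```

Tracking this margin frame by frame over a video sequence yields exactly the kind of continuous trajectory that the text reports as an emergent measure of eye opening or brow-raise magnitude.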
Current coding methods provide only the beginning and end of the action, along with the location and magnitude of the action unit peak. This information about dynamics may be useful for future behavioral studies.

[Figure 5: (a) Blink trajectories of SVM outputs for four different subjects; stars indicate the location of the AU peak as coded by the human FACS expert. (b) SVM output trajectory for a blink with multiple peaks (flutter). (c) Brow raise trajectories of SVM outputs for one subject; letters A-D indicate the intensity of the AU as coded by the human FACS expert, and are placed at the peak frame.]

HMM's were trained to classify action units from the trajectories of SVM outputs. HMM's addressed the case in which the frame containing the action unit peak is unknown. Two hidden Markov models, one for Blinks and one for random sequences matched by subject and length, were trained and tested using leave-one-out cross-validation. A mixture of Gaussians model was employed. Test sequences were assigned to the category for which the probability of the sequence given the model was greatest. The number of states was varied from 1-10, and the number of Gaussian mixtures was varied from 1-7. Best performance of 98.2% correct was obtained using 6 states and 7 Gaussians. Brow movement discrimination. The goal was to discriminate three action units localized around the eyebrows. Since this is a 3-category task and SVMs are originally designed for binary classification tasks, we trained a different SVM on each possible binary decision task: Brow Raise (AU 1+2) versus matched random sequences, Brow Lower (AU 4) versus another set of matched random sequences, and Brow Raise versus Brow Lower. The output of these three SVM's was then fed to an HMM for classification. The input to the HMM consisted of three values which were the outputs of each of the three 2-category SVM's.
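Assigning a trajectory to whichever HMM gives it the highest likelihood can be sketched with a minimal forward algorithm. This sketch uses single scalar Gaussian emissions and two states, whereas the paper's models used mixtures of Gaussians and more states; all parameter values are illustrative assumptions.

```python
import numpy as np

def log_likelihood(obs, pi, A, means, var=1.0):
    """Forward algorithm for an HMM with scalar Gaussian emissions."""
    def emit(x):  # per-state Gaussian log-density of observation x
        return -0.5 * ((x - means) ** 2 / var + np.log(2 * np.pi * var))

    log_alpha = np.log(pi) + emit(obs[0])
    for x in obs[1:]:
        # log-sum-exp over previous states, then add the emission term
        m = log_alpha.max()
        log_alpha = m + np.log(np.exp(log_alpha - m) @ A) + emit(x)
    m = log_alpha.max()
    return m + np.log(np.exp(log_alpha - m).sum())

pi = np.array([0.9, 0.1])
A = np.array([[0.8, 0.2], [0.2, 0.8]])
blink_means = np.array([0.0, 3.0])   # baseline state, then a high "peak" state
null_means = np.array([0.0, 0.0])    # model for random matched sequences

blink_seq = np.array([0.1, 1.5, 3.0, 2.8, 0.5])   # SVM margin rises to a peak
flat_seq = np.array([0.1, -0.2, 0.0, 0.2, -0.1])  # no event
```

A test sequence is assigned to the category whose model gives the larger `log_likelihood`, mirroring the maximum-likelihood decision rule described in the text.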
As for the blinks, the HMM's were trained on the "test" outputs of the SVM's. The HMM's achieved 78.2% accuracy using 10 states, 7 Gaussians, and including the first derivatives of the observation sequence in the input. Separate HMM's were also trained to perform each of the 2-category brow movement discriminations in image sequences. These results are summarized in Table 1. Figure 5c shows example output trajectories for the SVM trained to discriminate Brow Raise from Random matched sequences. As with the blinks, we see that despite not being trained to indicate AU intensity, an emergent property of the SVM output was the magnitude of the brow raise. Maximum SVM output for each sequence was positively correlated with action unit intensity, as scored by the human FACS expert.

Table 1: Summary of results. All performances are for generalization to novel subjects. Random: random sequences matched by subject and length. N: total number of positive (and also negative) examples.

Action                             % Correct (HMM)   N
Blink vs. Non-blink                98.2              168
Brow Raise vs. Random              90.6              48
Brow Lower vs. Random              75.0              14
Brow Raise vs. Brow Lower          93.5              31
Brow Raise vs. Lower vs. Random    78.2              62

The contribution of Gabors was examined by comparing linear and nonlinear SVM's applied directly to the difference images versus to Gabor outputs. Consistent with our previous findings [12], Gabor filters made the space more linearly separable than the raw difference images. For blink detection, a linear SVM on the Gabors performed significantly better (93.5%) than a linear SVM applied directly to difference images (78.3%). Using a nonlinear SVM with difference images improved performance substantially to 95.9%, whereas the nonlinear SVM on Gabors gave only a small increment in performance, also to 95.9%. A similar pattern was obtained for the brow movements, except that nonlinear SVM's applied directly to difference images did not perform as well as nonlinear SVM's applied to Gabors. The details of this analysis, and also an analysis of the contribution of SVM's to system performance, are available in [1].

5 Conclusions

We explored an approach for handling out-of-plane head rotations in automatic recognition of spontaneous facial expressions from freely behaving individuals. The approach fits a 3D model of the face and rotates it back to a canonical pose (e.g., frontal view). We found that machine learning techniques applied directly to the warped images are a promising approach for automatic coding of spontaneous facial expressions. This approach employed general purpose learning mechanisms that can be applied to the recognition of any facial action.
The approach is parsimonious and does not require defining a different set of feature parameters or image operations for each facial action. While the database we used was rather large for current digital video storage standards, in practice the number of spontaneous examples of each action unit in the database was relatively small. We therefore prototyped the system on the three actions which had the most examples. Inspection of the performance of our system shows that 14 examples were sufficient to successfully learn an action, on the order of 50 examples was sufficient to achieve performance over 90%, and on the order of 150 examples was sufficient to achieve over 98% accuracy and learn smooth trajectories. Based on these results, we estimate that a database of 250 minutes of coded, spontaneous behavior would be sufficient to train the system on the vast majority of facial actions. One exciting finding is the observation that important measurements emerged out of filters derived from the statistics of the images. For example, the output of the SVM filter matched to the blink detector could potentially be used to measure the dynamics of eyelid closure, even though the system was not designed to explicitly detect the contours of the eyelid and measure the closure. (See Figure 5.) The results presented here employed hand-labeled feature points for the head pose tracking step. We are presently developing a fully automated head pose tracker that integrates particle filtering with a system developed by Matthew Brand for automatic real-time 3D tracking based on optic flow [3]. All of the pieces of the puzzle are ready for the development of automated systems that recognize spontaneous facial actions at the level of detail required by FACS. Collection of a much larger, realistic database to be shared by the research community is a critical next step.
Acknowledgments. Support for this project was provided by ONR N00014-02-1-0616, NSF-ITR IIS-0220141 and IIS-0086107, DCI contract No. 2000-I-058500-000, and California Digital Media Innovation Program DiMI 01-10130.

References
[1] M.S. Bartlett, B. Braathen, G. Littlewort-Ford, J. Hershey, I. Fasel, T. Marks, E. Smith, T.J. Sejnowski, and J.R. Movellan. Automatic analysis of spontaneous facial behavior: A final project report. Technical Report UCSD MPLab TR 2001.08, University of California, San Diego, 2001.
[2] B. Braathen, M.S. Bartlett, G. Littlewort-Ford, and J.R. Movellan. 3-D head pose estimation from video by nonlinear stochastic particle filtering. In Proceedings of the 8th Joint Symposium on Neural Computation, 2001.
[3] M. Brand. Flexible flow for 3D nonrigid tracking and shape recovery. In CVPR, 2001.
[4] J.F. Cohn, T. Kanade, T. Moriyama, Z. Ambadar, J. Xiao, J. Gao, and H. Imamura. A comparative study of alternative FACS coding algorithms. Technical Report CMU-RI-TR-02-06, Robotics Institute, Carnegie Mellon University, 2001.
[5] P. Doenges, F. Lavagetto, J. Ostermann, I.S. Pandzic, and E. Petajan. MPEG-4: Audio/video and synthetic graphics/audio for real-time, interactive media delivery. Image Communications Journal, 5(4), 1997.
[6] P. Ekman. Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage. W.W. Norton, New York, 3rd edition, 2001.
[7] P. Ekman and W. Friesen. Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, Palo Alto, CA, 1978.
[8] I. Essa and A. Pentland. Coding, analysis, interpretation, and recognition of facial expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):757–763, 1997.
[9] I.R. Fasel, M.S. Bartlett, and J.R. Movellan. A comparison of Gabor filter methods for automatic detection of facial landmarks. In Proceedings of the 5th International Conference on Face and Gesture Recognition, 2002.
[10] M.G. Frank, P. Perona, and Y. Yacoob. Automatic extraction of facial action codes: Final report and panel recommendations for automatic facial action coding. Unpublished manuscript, Rutgers University, 2001.
[11] G. Kitagawa. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics, 5(1):1–25, 1996.
[12] G. Littlewort-Ford, M.S. Bartlett, and J.R. Movellan. Are your eyes smiling? Detecting genuine smiles with support vector machines and Gabor wavelets. In Proceedings of the 8th Joint Symposium on Neural Computation, 2001.
[13] J.R. Movellan. Visual speech recognition with stochastic networks. In G. Tesauro, D.S. Touretzky, and T. Leen, editors, Advances in Neural Information Processing Systems, volume 7, pages 851–858. MIT Press, Cambridge, MA, 1995.
[14] F. Pighin, J. Hecker, D. Lischinski, R. Szeliski, and D.H. Salesin. Synthesizing realistic facial expressions from photographs. Computer Graphics, 32 (Annual Conference Series):75–84, 1998.
[15] W.E. Rinn. The neuropsychology of facial expression: A review of the neurological and psychological mechanisms for producing facial expressions. Psychological Bulletin, 95(1):52–77, 1984.
[16] Y. Yacoob and L. Davis. Recognizing human facial expressions from long image sequences using optical flow. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(6):636–642, 1996.
| 2002 | 205 | 2,220 |
Exact MAP Estimates by (Hyper)tree Agreement Martin J. Wainwright, Department of EECS, UC Berkeley, Berkeley, CA 94720 martinw@eecs.berkeley.edu Tommi S. Jaakkola and Alan S. Willsky, Department of EECS, Massachusetts Institute of Technology, Cambridge, MA 02139 {tommi, willsky}@mit.edu Abstract We describe a method for computing provably exact maximum a posteriori (MAP) estimates for a subclass of problems on graphs with cycles. The basic idea is to represent the original problem on the graph with cycles as a convex combination of tree-structured problems. A convexity argument then guarantees that the optimal value of the original problem (i.e., the log probability of the MAP assignment) is upper bounded by the combined optimal values of the tree problems. We prove that this upper bound is met with equality if and only if the tree problems share an optimal configuration in common. An important implication is that any such shared configuration must also be the MAP configuration for the original problem. Next we develop a tree-reweighted max-product algorithm for attempting to find convex combinations of tree-structured problems that share a common optimum. We give necessary and sufficient conditions for a fixed point to yield the exact MAP estimate. An attractive feature of our analysis is that it generalizes naturally to convex combinations of hypertree-structured distributions. 1 Introduction Integer programming problems arise in various fields, including machine learning, statistical physics, communication theory, and error-correcting coding. In many cases, such problems can be formulated in terms of undirected graphical models [e.g., 1], in which the cost function corresponds to a graph-structured probability distribution, and the problem of interest is to find the maximum a posteriori (MAP) configuration. In previous work [2], we have shown how to use convex combinations of tree-structured distributions in order to upper bound the log partition function.
In this paper, we apply similar ideas to upper bound the log probability of the MAP configuration. As we show, this upper bound is met with equality whenever there is a configuration that is optimal for all trees, in which case it must also be a MAP configuration for the original problem. The work described here also makes connections with the max-product algorithm [e.g., 3, 4, 5], a well-known method for attempting to compute the MAP configuration, one which is exact for trees but approximate for graphs with cycles. In the context of coding problems, Frey and Koetter [4] developed an attenuated version of max-product, which is guaranteed to find the MAP codeword if it converges. One contribution of this paper is to develop a tree-reweighted max-product algorithm that attempts to find a collection of tree-structured problems that share a common optimum. This algorithm, though similar to both the standard and attenuated max-product updates [4], differs in key ways. The remainder of this paper is organized as follows. The next two subsections provide background on exponential families and convex combinations. In Section 2, we introduce the basic form of the upper bounds on the log probability of the MAP assignment, and then develop necessary and sufficient conditions for it to be tight (i.e., met with equality). In Section 3, we develop tree-reweighted max-product algorithms for attempting to find a convex combination of trees that yields a tight bound. We prove that for positive compatibility functions, the algorithm always has at least one fixed point; moreover, if a key uniqueness condition is satisfied, the configuration specified by a fixed point must be MAP optimal. We also illustrate how the algorithm, like the standard max-product algorithm [5], can fail if the uniqueness condition is not satisfied. We conclude in Section 4 with pointers to related work, and extensions of the current work. 1.1 Notation and set-up Consider an undirected (simple) graph $G = (V, E)$.
For each vertex
, let be a random variable taking values in the discrete space !#"$ . We use the letters %'& to denote particular elements of the sample space . The overall random vector ( *)
+! takes values in the Cartesian product space -,./1032544462 , , where 7 ) ) . We make use of the following exponential representation of a graph-structured distribution 89 ( . For some index set : , we let ;. =<> )@? !: denote a collection of potential functions defined on the cliques of , and let A! A > )B? C: be a vector of real-valued weights on these potential functions. The exponential family determined by ; is the collection of distributions 8D (FE AGIHKJ@LM#N >PO=Q A >R<S> ( T . In a minimal exponential representation, the functions =<> are affinely independent. For example, one minimal representation of a binary process (i.e., U1 for all
) using pairwise potential functions is the usual Ising model, in which the collection of potentials ;V W )
XY [Z WR\ ) ]
^ _ `a . In this case, the index set is given by :bcYZd . In most of our analysis, we use an overcomplete representation, in which there are linear dependencies among the potentials =<> . In particular, we use indicator functions as potentials: < e f g h i e f g h
j` E %k! (1a) < l\e f'm n \ h i e f g oi \e m g \ K
_ pq E r%'&P1` 2` \ (1b) where the indicator function i e f g is equal to one if $% , and zero otherwise. In this case, the index set : consists of the union of :
E %s )
tu E %`v with the edge indices : 5
_ E %s&P ) ]
_ #! E r%6&Pp!pw2`x\ . Of interest to us is the maximum a posteriori configuration y (zp{S| I}^~ t}J Ox
8D (FE A . Equivalently, we can express this MAP configuration as the solution of the integer program gAIk}J O
- (FE A , where (FE Ah gARB; ( O f A e f < e fgW ' \g O^ f m Al\e f'm < l\e f'mRnW= R\ (2) Note that the function nA is the maximum of a collection of linear functions, and hence is convex [6] as a function of A , which is a key property for our subsequent development. 1.2 Convex combinations of trees Let y A be a particular parameter vector for which we are interested in computing y A . In this section, we show how to derive upper bounds via the convexity of . Let denote a particular spanning tree of , and let Un j denote the set of all spanning trees. For each spanning tree , let AR be an exponential parameter vector of the same dimension as A that respects the structure of . To be explicit, if is defined by an edge set Y , then ARn must have zeros in all elements corresponding to edges not in . However, given an edge belonging to two trees 0 and , the quantity A n 0 can be different than A n . For compactness, let
denote the full collection, where the notation θ(T) specifies those subelements of θ corresponding to spanning tree T. In order to define a convex combination, we require a probability distribution μ over the set of spanning trees — that is, a vector μ ≡ {μ(T), T ∈ 𝒯} such that Σ_{T ∈ 𝒯} μ(T) = 1, μ(T) ≥ 0. For any distribution μ, we define its support, denoted by supp(μ), to be the set of trees to which it assigns strictly positive probability. In the sequel, we will also be interested in the probability μ_e = Pr_μ{e ∈ T} that a given edge e ∈ E appears in a spanning tree T chosen randomly under μ. We let μ_e ≡ {μ_e | e ∈ E} represent a vector of edge appearance probabilities, which must belong to the spanning tree polytope [see 2]. We say that a distribution μ (or the vector μ_e) is valid if μ_e > 0 for every edge e ∈ E. A convex combination of exponential parameter vectors is defined via the weighted sum Σ_{T ∈ 𝒯} μ(T) θ(T), which we denote compactly as E_μ[θ(T)]. Of particular importance are collections of exponential parameters θ for which there exists a convex combination that is equal to θ̄. Accordingly, we define the set A(θ̄) ≡ {(θ; μ) | E_μ[θ(T)] = θ̄}. For any valid distribution μ, it can be seen that there exist pairs (θ; μ) ∈ A(θ̄).

Example 1 (Single cycle). To illustrate these definitions, consider a binary distribution (X_s = {0, 1} for all nodes s ∈ V) defined by a single cycle on 4 nodes. Consider a target distribution in the minimal Ising form p(x; θ̄) ∝ exp{x_1 x_2 + x_2 x_3 + x_3 x_4 + x_4 x_1}; otherwise stated, the target distribution is specified by the minimal parameter θ̄ = [0 0 0 0 1 1 1 1], where the zeros represent the fact that θ̄_s = 0 for all s ∈ V.

Figure 1. A convex combination of four distributions p(x; θ(T_i)), each defined by a spanning tree T_i, is used to approximate the target distribution p(x; θ̄) on the single-cycle graph.

The four possible spanning trees {T_i | i = 1, ..., 4} on a single cycle on four nodes are illustrated in Figure 1. We define a set of associated exponential parameters {θ(T_i)} as follows:

  θ(T_1) = (4/3) [0 0 0 0 1 1 1 0]
  θ(T_2) = (4/3) [0 0 0 0 1 1 0 1]
  θ(T_3) = (4/3) [0 0 0 0 1 0 1 1]
  θ(T_4) = (4/3) [0 0 0 0 0 1 1 1]

Finally, we choose μ(T_i) = 1/4 for all i = 1, ..., 4. With this uniform distribution over trees, we have μ_e = 3/4 for each edge, and E_μ[θ(T)] = θ̄, so that (θ; μ) ∈ A(θ̄).

2 Optimal upper bounds

With the set-up of the previous section, the basic form of the upper bounds follows by applying Jensen's inequality [6]. In particular, for any pair (θ; μ) ∈ A(θ̄), we have the upper bound F(θ̄) ≤ E_μ[F(θ(T))]. The goal of this section is to examine this bound, and understand when it is met with equality. In more explicit terms, the upper bound can be written as:

  F(θ̄) ≤ Σ_T μ(T) F(θ(T)) = Σ_T μ(T) max_{x ∈ X^n} Φ(x; θ(T))   (3)

Now suppose that there exists an x̂ ∈ X^n that attains the maximum defining F(θ(T)) for each tree T ∈ supp(μ). In this case, it is clear that the bound (3) is met with equality. An important implication is that the configuration x̂ also attains the maximum defining F(θ̄), so that it is an optimal solution to the original problem. In fact, as we show below, the converse to this statement also holds. More formally, for any exponential parameter vector θ(T), let OPT(θ(T)) be the collection of configurations x that attain the maximum defining F(θ(T)), defined as follows:

  OPT(θ(T)) = {x ∈ X^n | Φ(x; θ(T)) ≥ Φ(x′; θ(T)) for all x′ ∈ X^n}   (4)

With this notation, the critical property is that the intersection ∩_T OPT(θ(T)) of configurations optimal for all tree-structured problems is non-empty. We thus have the following result:

Proposition 1 (Tightness of bound). The bound of equation (3) is tight if and only if there exists a configuration x̂ ∈ X^n that for each T ∈ supp(μ) achieves the maximum defining F(θ(T)). In other words, x̂ ∈ ∩_{T ∈ supp(μ)} OPT(θ(T)).

Proof. Consider some pair (θ; μ) ∈ A(θ̄). Let x̂ be a configuration that attains the maximum defining F(θ̄). We write the difference of the RHS and the LHS of equation (3) as follows:

  Σ_T μ(T) F(θ(T)) − F(θ̄) = Σ_T μ(T) F(θ(T)) − Φ(x̂; θ̄) = Σ_T μ(T) [F(θ(T)) − Φ(x̂; θ(T))]

where the last equality uses Φ(x̂; θ̄) = Σ_T μ(T) Φ(x̂; θ(T)), which holds because E_μ[θ(T)] = θ̄. Now for each T ∈ supp(μ), the term F(θ(T)) − Φ(x̂; θ(T)) is non-negative, and equal to zero only when x̂ belongs to OPT(θ(T)). Therefore, the bound is met with equality if and only if x̂ achieves the maximum defining F(θ(T)) for all trees T ∈ supp(μ).

Proposition 1 motivates the following strategy: given a spanning tree distribution μ, find a collection of exponential parameters θ* = {θ*(T)} such that the following holds:

(a) Admissibility: The pair (θ*; μ) satisfies Σ_T μ(T) θ*(T) = θ̄.
(b) Mutual agreement: The intersection ∩_T OPT(θ*(T)) of tree-optimal configurations is non-empty.

If (for a fixed μ) we are able to find a collection θ* satisfying these two properties, then Proposition 1 guarantees that all configurations in the (non-empty) intersection ∩_T OPT(θ*(T)) achieve the maximum defining F(θ̄). As discussed above, assuming that μ assigns strictly positive probability to every edge in the graph, satisfying the admissibility condition is not difficult. It is the second condition of mutual optimality on all trees that poses the challenge.

3 Mutual agreement via equal max-marginals

We now develop an algorithm that attempts to find, for a given spanning tree distribution μ, a collection θ* = {θ*(T)} satisfying both of these properties. Interestingly, this algorithm is related to the ordinary max-product algorithm [3, 5], but differs in several key ways. While this algorithm can be formulated in terms of reparameterization [e.g., 5], here we present a set of message-passing updates.
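The bound (3) and its tightness on Example 1 can be checked numerically by brute force. The sketch below (plain Python, illustrative only) enumerates all binary configurations of the four-node cycle, builds the four scaled tree parameters, and compares F(θ̄) with the convex combination of tree maxima:

```python
from itertools import product

# 4-cycle: edges and target parameter theta_bar (weight 1 per edge, x binary)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
theta_bar = [1.0, 1.0, 1.0, 1.0]

def score(x, weights):
    # <theta, phi(x)> for pairwise Ising potentials x_s * x_t
    return sum(w * x[s] * x[t] for (s, t), w in zip(edges, weights))

def F(weights):
    # F(theta) = max over all configurations; convex in theta
    return max(score(x, weights) for x in product([0, 1], repeat=4))

# Spanning tree T_i drops edge i; remaining weights scaled by 4/3, so that
# the uniform combination mu(T_i) = 1/4 reproduces theta_bar exactly.
trees = [[0.0 if j == i else 4.0 / 3.0 for j in range(4)] for i in range(4)]
assert all(abs(sum(t[j] for t in trees) / 4 - theta_bar[j]) < 1e-9 for j in range(4))

lhs = F(theta_bar)                     # exact MAP value
rhs = sum(0.25 * F(t) for t in trees)  # convex upper bound (3)
print(lhs, rhs)                        # equal up to floating point: the bound is tight
```

Here x̂ = (1, 1, 1, 1) maximizes every tree problem simultaneously, which is exactly the mutual-agreement condition of Proposition 1.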
3.1 Max-marginals

The foundation of our development is the fact [1] that any tree-structured distribution p(x; θ(T)) can be factored in terms of its max-marginals. In particular, for each node s ∈ V, the corresponding single-node max-marginal is defined as follows:

  ν_s(x_s) = κ max_{{x′ | x′_s = x_s}} p(x′; θ(T))   (5)

In words, for each x_s ∈ X_s, ν_s(x_s) is the maximum probability over the subset of configurations x′ with element x′_s fixed to x_s. For each edge (s,t) ∈ E(T), the pairwise max-marginal is defined analogously as ν_st(x_s, x_t) = κ max_{{x′ | (x′_s, x′_t) = (x_s, x_t)}} p(x′; θ(T)). With these definitions, the max-marginal tree factorization [1] is given by:

  p(x; θ(T)) ∝ Π_{s ∈ V} ν_s(x_s) Π_{(s,t) ∈ E(T)} [ν_st(x_s, x_t) / (ν_s(x_s) ν_t(x_t))]   (6)

One interpretation of the ordinary max-product algorithm for trees, as shown in our related work [5], is as computing this alternative representation. Suppose moreover that for each node s ∈ V, the following uniqueness condition holds:

Uniqueness Condition: For each s ∈ V, the max-marginal ν_s has a unique optimum x̂_s.

In this case, the vector x̂ = {x̂_s | s ∈ V} is the MAP configuration for the tree-structured distribution [see 5].

3.2 Tree-reweighted max-product

The tree-reweighted max-product method is a message-passing algorithm, with fixed points that specify a collection of tree exponential parameters θ* = {θ*(T)} satisfying the admissibility condition. The defining feature of θ* is that the associated tree distributions p(x; θ*(T)) all share a common set ν* = {ν*_s, ν*_st} of max-marginals. In particular, for a given tree T with edge set E(T), the distribution p(x; θ*(T)) is specified compactly by the subcollection {ν*_s | s ∈ V} ∪ {ν*_st | (s,t) ∈ E(T)} as follows:

  p(x; θ*(T)) = κ Π_{s ∈ V} ν*_s(x_s) Π_{(s,t) ∈ E(T)} [ν*_st(x_s, x_t) / (ν*_s(x_s) ν*_t(x_t))]   (7)

where κ is a constant independent of x (we use this notation throughout the paper, where the value of κ may change from line to line). As long as ν* satisfies the Uniqueness Condition, the configuration x̂ = {x̂_s | s ∈ V} must be the MAP configuration for each tree-structured distribution p(x; θ*(T)). This mutual agreement on trees, in conjunction with the admissibility of θ*, implies that x̂ is also the MAP configuration for p(x; θ̄).

For each valid μ_e, there exists a tree-reweighted max-product algorithm designed to find the requisite set of max-marginals via a sequence of message-passing operations. For each edge (t,s) ∈ E, let M_ts(x_s) be the message passed from node t to node s. It is a vector of length m, with one element for each state j ∈ X_s. We use θ̄_s(x_s) as a shorthand for Σ_j θ̄_{s;j} δ_{s;j}(x_s), with the quantity θ̄_st(x_s, x_t) similarly defined. We use the messages M = {M_ts} to specify a set of functions ν = {ν_s, ν_st} as follows:

  ν_s(x_s) = κ exp(θ̄_s(x_s)) Π_{t ∈ Γ(s)} [M_ts(x_s)]^{μ_ts}   (8a)

  ν_st(x_s, x_t) = κ φ_st(x_s, x_t; θ̄) × [Π_{u ∈ Γ(s)\t} [M_us(x_s)]^{μ_us}] / [M_ts(x_s)]^{(1−μ_ts)} × [Π_{u ∈ Γ(t)\s} [M_ut(x_t)]^{μ_ut}] / [M_st(x_t)]^{(1−μ_st)}   (8b)

where φ_st(x_s, x_t; θ̄) ≡ exp{(1/μ_st) θ̄_st(x_s, x_t) + θ̄_s(x_s) + θ̄_t(x_t)}. For each tree T, the subcollection {ν_s} ∪ {ν_st | (s,t) ∈ E(T)} can be used to define a tree-structured distribution p(x; ν(T)), in a manner analogous to equation (7). By expanding the expectation E_μ[log p(x; ν(T))] and making use of the definitions of ν_s and ν_st, we can prove the following:

Lemma 1 (Admissibility). Given any collection {ν_s, ν_st} defined by a set of messages M as in equations (8a) and (8b), the convex combination Σ_T μ(T) log p(x; ν(T)) is equivalent to log p(x; θ̄) up to an additive constant.

We now need to ensure that {ν_s, ν_st} are a consistent set of max-marginals for each tree-distribution p(x; ν(T)). It is sufficient [1, 5] to impose, for each edge (s,t) ∈ E, the edgewise consistency condition max_{x′_t ∈ X_t} ν_st(x_s, x′_t) = κ ν_s(x_s).
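The max-marginal factorization (6), on which this construction rests, can be checked by brute force on a small tree. The sketch below (plain Python, with made-up potential values) computes the max-marginals of a three-node chain by enumeration and verifies that the factorization reproduces the distribution up to a single constant:

```python
import itertools
import math

# Chain 0-1-2 with arbitrary potentials; p(x) proportional to exp(score)
# (the numbers below are hypothetical, chosen only for illustration)
theta_node = [[0.1, -0.2], [0.0, 0.3], [0.2, 0.0]]
theta_edge = {(0, 1): [[0.5, -0.5], [-0.5, 0.5]],
              (1, 2): [[0.4, 0.0], [0.0, 0.4]]}

def weight(x):
    w = sum(theta_node[s][x[s]] for s in range(3))
    w += sum(theta_edge[(s, t)][x[s]][x[t]] for (s, t) in theta_edge)
    return math.exp(w)

configs = list(itertools.product([0, 1], repeat=3))
Z = sum(weight(x) for x in configs)
p = {x: weight(x) / Z for x in configs}

# Max-marginals by brute force, as in equation (5) and its pairwise analogue
nu_node = [[max(p[x] for x in configs if x[s] == j) for j in (0, 1)]
           for s in range(3)]
nu_edge = {e: [[max(p[x] for x in configs if (x[e[0]], x[e[1]]) == (j, k))
                for k in (0, 1)] for j in (0, 1)] for e in theta_edge}

# Check the tree factorization (6): p(x) / factorization(x) is one constant
kappas = []
for x in configs:
    f = math.prod(nu_node[s][x[s]] for s in range(3))
    for (s, t) in theta_edge:
        f *= nu_edge[(s, t)][x[s]][x[t]] / (nu_node[s][x[s]] * nu_node[t][x[t]])
    kappas.append(p[x] / f)
print(min(kappas), max(kappas))  # essentially equal: (6) holds up to one kappa
```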
In order to enforce this condition, we update the messages in the following manner:

Algorithm 1 (Tree-reweighted max-product).
1. Initialize the messages M⁰ = {M⁰_ts} with arbitrary positive real numbers.
2. For iterations n = 0, 1, 2, ..., update the messages as follows:

  M^{n+1}_ts(x_s) = κ max_{x′_t ∈ X_t} { exp[(1/μ_st) θ̄_st(x_s, x′_t) + θ̄_t(x′_t)] × [Π_{u ∈ Γ(t)\s} [M^n_ut(x′_t)]^{μ_ut}] / [M^n_st(x′_t)]^{(1−μ_st)} }   (9)

Using the definitions of ν_s and ν_st, as well as the message update equation (9), the following result can be proved:

Lemma 2 (Edgewise consistency). Let M* be a fixed point of the message update equation (9), and let ν*_s and ν*_st be defined via M* as in equations (8a) and (8b) respectively. Then the edgewise consistency condition is satisfied.

The message update equation (9) is similar to the standard max-product algorithm [3, 5]. Indeed, if G is actually a tree, then we must have μ_st = 1 for every edge (s,t) ∈ E, in which case equation (9) is precisely equivalent to the ordinary max-product update. However, if G has cycles, then it is impossible to have μ_st = 1 for every edge (s,t) ∈ E, so that the updates in equation (9) differ from ordinary max-product in some key ways. First of all, the weight θ̄_st on the potential function φ_st is scaled by the (inverse of the) edge appearance probability μ_st ∈ (0, 1]. Secondly, for each neighbor u ∈ Γ(t)\s, the incoming message M_ut is scaled by the corresponding edge appearance probability μ_ut ∈ (0, 1]. Third of all, in sharp contrast to standard [3] and attenuated [4] max-product updates, the update of message M_ts — that is, from t to s along edge (s,t) — depends on the reverse direction message M_st from s to t along the same edge. Despite these differences, the messages can be updated synchronously as in ordinary max-product. It is also possible to perform reparameterization updates over spanning trees, analogous to but distinct from those for ordinary max-product [5]. Such tree-based updates can be terminated once the trees agree on a common configuration, which may happen prior to message convergence [7].

3.3 Analysis of fixed points

In related work [5], we established the existence of fixed points for the ordinary max-product algorithm for positive compatibility functions on an arbitrary graph. The same proof can be adapted to show that the tree-reweighted max-product algorithm also has at least one fixed point M*. Any such fixed point M* defines pseudo-max-marginals ν* via equations (8a) and (8b), which (by design of the algorithm) have the following property:

Theorem 1 (Exact MAP). If ν* satisfies the Uniqueness Condition, then the configuration x̂ with elements x̂_s = arg max_{x_s} ν*_s(x_s) is a MAP configuration for p(x; θ̄).

Proof. For each spanning tree T ∈ supp(μ), the fixed point M* defines a tree-structured distribution p(x; θ*(T)) via equation (7). By Lemma 2, the elements of ν* are edgewise consistent. By the equivalence of edgewise and global consistency for trees [1], the subcollection {ν*_s | s ∈ V} ∪ {ν*_st | (s,t) ∈ E(T)} are exact max-marginals for the tree-structured distribution p(x; θ*(T)). As a consequence, the configuration x̂ must belong to OPT(θ*(T)) for each tree T, so that mutual agreement is satisfied. By Lemma 1, the convex combination E_μ[log p(x; θ*(T))] is equal to log p(x; θ̄) up to an additive constant, so that admissibility is satisfied. Proposition 1 then implies that x̂ is a MAP configuration for p(x; θ̄).

3.4 Failures of tree-reweighted max-product

In all of our experiments so far, the message updates of equation (9), if suitably relaxed, have always converged. (In a relaxed message update, we take an α-step towards the new (log) message, where α ∈ (0, 1] is the step-size parameter. To date, we have not been able to prove that relaxed updates will always converge.) Rather than convergence problems, the breakdown of the algorithm appears to stem primarily from failure of the Uniqueness Condition. If this assumption is not satisfied, we are no longer guaranteed that the mutual agreement condition is satisfied (i.e., ∩_T OPT(θ*(T)) may be empty). Indeed, a configuration x̂ belongs to ∩_T OPT(θ*(T)) if and only if the following conditions hold:

Node optimality: The element x̂_s must achieve max_{x_s} ν*_s(x_s) for every s ∈ V.
Edge optimality: The pair (x̂_s, x̂_t) must achieve max_{(x_s, x_t)} ν*_st(x_s, x_t) for all (s,t) ∈ E.

For a given fixed point ν* that fails the Uniqueness Condition, it may or may not be possible to satisfy these conditions, as the following example illustrates.

Figure 2. Cases where the Uniqueness Condition fails. (a) Specification of pseudo-max-marginals ν*. (b) For β > 1/2, both (0, 0, 0) and (1, 1, 1) are node and edgewise optimal. (c) For β < 1/2, no configurations are node and edgewise optimal on the full graph.

Example 2. Consider the single cycle on three vertices, as illustrated in Figure 2. We define a distribution p(x; θ̄) in an indirect manner, by first defining a set of pseudo-max-marginals ν* in panel (a). Here β ∈ (0, 1) is a parameter to be specified. Observe that the symmetry of this construction ensures that ν* satisfies the edgewise consistency condition (Lemma 2) for any β ∈ (0, 1). For each of the three spanning trees of this graph, the collection ν* defines a tree-structured distribution p(x; ν*(T)) as in equation (7). We define the underlying distribution via p(x; θ̄) ∝ E_μ[p(x; ν*(T))], where μ is the uniform distribution (weight 1/3 on each tree). In the case β > 1/2, illustrated in panel (b), it can be seen that two configurations — namely x = (0, 0, 0) and x = (1, 1, 1) — satisfy the node and edgewise optimality conditions. Therefore, each of these configurations are global maxima for the cost function E_μ[log p(x; ν*(T))]. On the other hand, when β < 1/2, as illustrated in panel (c), any configuration x that is edgewise optimal for all three edges must satisfy x_s ≠ x_t for all (s,t) ∈ E. This is clearly impossible on a cycle of odd length, so that the fixed point ν* cannot be used to specify a MAP assignment. Of course, it should be recognized that this example was contrived to break down the algorithm. It should also be noted that, as shown in our related work [5], the standard max-product algorithm can also break down when this Uniqueness Condition is not satisfied.

4 Discussion

This paper demonstrated the utility of convex combinations of tree-structured distributions in upper bounding the log probability of the MAP configuration. We developed a family of tree-reweighted max-product algorithms for computing optimal upper bounds. In certain cases, the optimal upper bound is met with equality, and hence yields an exact MAP configuration for the original problem on the graph with cycles. An important open question is to characterize the range of problems for which the upper bound is tight. For problems involving a binary-valued random vector, we have isolated a class of problems for which the upper bound is guaranteed to be tight. We have also investigated the Lagrangian dual associated with the upper bound (3). The dual has a natural interpretation as a tree-relaxed linear program, and has been applied to turbo decoding [7]. Finally, the analysis and upper bounds of this paper can be extended in a straightforward manner to hypertrees of higher width. In this context, hypertree-reweighted forms of generalized max-product updates [see 5] can again be used to find optimal upper bounds, which (when they are tight) again yield exact MAP configurations.

References

[1] R. G. Cowell, A. P. Dawid, S. L. Lauritzen, and D. J. Spiegelhalter. Probabilistic Networks and Expert Systems. Statistics for Engineering and Information Science. Springer-Verlag, 1999.
[2] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. A new class of upper bounds on the log partition function. In Proc. Uncertainty in Artificial Intelligence, volume 18, pages 536–543, August 2002.
[3] W. T. Freeman and Y. Weiss.
On the optimality of solutions of the max-product belief propagation algorithm in arbitrary graphs. IEEE Trans. Info. Theory, 47:736–744, 2001.
[4] B. J. Frey and R. Koetter. Exact inference using the attenuated max-product algorithm. In Advanced Mean Field Methods: Theory and Practice. MIT Press, 2000.
[5] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. Tree consistency and bounds on the max-product algorithm and its generalizations. LIDS Tech. report P-2554, MIT; available online at http://www.eecs.berkeley.edu/~martinw, July 2002.
[6] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, 1995.
[7] J. Feldman, M. J. Wainwright, and D. R. Karger. Linear programming-based decoding and its relation to iterative approaches. In Proc. Allerton Conf. Comm. Control and Computing, October 2002.
Prediction of Protein Topologies Using Generalized IOHMMs and RNNs Gianluca Pollastri and Pierre Baldi Department of Information and Computer Science University of California, Irvine Irvine, CA 92697-3425 gpollast,pfbaldi@ics.uci.edu Alessandro Vullo and Paolo Frasconi Dipartimento di Sistemi e Informatica Università di Firenze Via di Santa Marta 3, 50139 Firenze, ITALY vullo,paolo@dsi.unifi.it Abstract We develop and test new machine learning methods for the prediction of topological representations of protein structures in the form of coarse- or fine-grained contact or distance maps that are translation and rotation invariant. The methods are based on generalized input-output hidden Markov models (GIOHMMs) and generalized recursive neural networks (GRNNs). The methods are used to predict topology directly in the fine-grained case and, in the coarse-grained case, indirectly by first learning how to score candidate graphs and then using the scoring function to search the space of possible configurations. Computer simulations show that the predictors achieve state-of-the-art performance. 1 Introduction: Protein Topology Prediction Predicting the 3D structure of protein chains from the linear sequence of amino acids is a fundamental open problem in computational molecular biology [1]. Any approach to the problem must deal with the basic fact that protein structures are translation and rotation invariant. To address this invariance, we have proposed a machine learning approach to protein structure prediction [4] based on the prediction of topological representations of proteins, in the form of contact or distance maps. The contact or distance map is a 2D representation of neighborhood relationships consisting of an adjacency matrix at some distance cutoff (typically in the range of 6 to 12 Å), or a matrix of pairwise Euclidean distances. Fine-grained maps are derived at the amino acid or even atomic level.
Coarse maps are obtained by looking at secondary structure elements, such as helices, and the distance between their centers of gravity or, as in the simulations below, the minimal distances between their Cα atoms. Reasonable methods for reconstructing 3D coordinates from contact/distance maps have been developed in the NMR literature and elsewhere Figure 1: Bayesian network for bidirectional IOHMMs consisting of input units, output units, and both forward and backward Markov chains of hidden states. [14] using distance geometry and stochastic optimization techniques. Thus the main focus here is on the more difficult task of contact map prediction. Various algorithms for the prediction of contact maps have been developed, in particular using feedforward neural networks [6]. The best contact map predictor in the literature and at the last CASP prediction experiment reports an average precision [True Positives/(True Positives + False Positives)] of 21% for distant contacts, i.e. with a linear distance of 8 amino acids or more [6] for fine-grained amino acid maps. While this result is encouraging and well above chance level by a factor greater than 6, it is still far from providing sufficient accuracy for reliable 3D structure prediction. A key issue in this area is the amount of noise that can be tolerated in a contact map prediction without compromising the 3D-reconstruction step. While systematic tests in this area have not yet been published, preliminary results appear to indicate that recovery of as little as half of the distant contacts may suffice for proper reconstruction, at least for proteins up to 150 amino acids long (Rita Casadio and Piero Fariselli, private communication and oral presentation during CASP4 [10]).
It is important to realize that the input to a fine-grained contact map predictor need not be confined to the sequence of amino acids only, but may also include evolutionary information in the form of profiles derived by multiple alignment of homologue proteins, or structural feature information, such as secondary structure (alpha helices, beta strands, and coils), or solvent accessibility (surface/buried), derived by specialized predictors [12, 13]. In our approach, we use different GIOHMM and GRNN strategies to predict both structural features and contact maps. 2 GIOHMM Architectures Loosely speaking, GIOHMMs are Bayesian networks with input, hidden, and output units that can be used to process complex data structures such as sequences, images, trees, chemical compounds and so forth, built on work in, for instance, [5, 3, 7, 2, 11]. In general, the connectivity of the graphs associated with the hidden units matches the structure of the data being processed. Often multiple copies of the same hidden graph, but with different edge orientations, are used in the hidden layers to allow direct propagation of information in all relevant directions. Figure 2: 2D GIOHMM Bayesian network for processing two-dimensional objects such as contact maps, with nodes regularly arranged in one input plane, one output plane, and four hidden planes. In each hidden plane, nodes are arranged on a square lattice, and all edges are oriented towards the corresponding cardinal corner. Additional directed edges run vertically in column from the input plane to each hidden plane, and from each hidden plane to the output plane. To illustrate the general idea, a first example of GIOHMM is provided by the bidirectional IOHMMs (Figure 1) introduced in [2] to process sequences and predict protein structural features, such as secondary structure.
Unlike standard HMMs or IOHMMs used, for instance, in speech recognition, this architecture is based on two hidden Markov chains running in opposite directions to leverage the fact that biological sequences are spatial objects rather than temporal sequences. Bidirectional IOHMMs have been used to derive a suite of structural feature predictors [12, 13, 4] available through http://promoter.ics.uci.edu/BRNN-PRED/. These predictors have accuracy rates in the 75-80% range on a per amino acid basis. 2.1 Direct Prediction of Topology To predict contact maps, we use a 2D generalization of the previous 1D Bayesian network. The basic version of this architecture (Figure 2) contains 6 layers of units: input, output, and four hidden layers, one for each cardinal corner. Within each column indexed by i and j, connections run from the input to the four hidden units, and from the four hidden units to the output unit. In addition, the hidden units in each hidden layer are arranged on a square or triangular lattice, with all the edges oriented towards the corresponding cardinal corner. Thus the parameters of this two-dimensional GIOHMM, in the square lattice case, are the conditional probability distributions:

  P(O_{i,j} | I_{i,j}, H^{NE}_{i,j}, H^{NW}_{i,j}, H^{SW}_{i,j}, H^{SE}_{i,j})
  P(H^{NE}_{i,j} | I_{i,j}, H^{NE}_{i−1,j}, H^{NE}_{i,j−1})
  P(H^{NW}_{i,j} | I_{i,j}, H^{NW}_{i+1,j}, H^{NW}_{i,j−1})
  P(H^{SW}_{i,j} | I_{i,j}, H^{SW}_{i+1,j}, H^{SW}_{i,j+1})
  P(H^{SE}_{i,j} | I_{i,j}, H^{SE}_{i−1,j}, H^{SE}_{i,j+1})   (1)

In a contact map prediction at the amino acid level, for instance, the (i, j) output represents the probability of whether amino acids i and j are in contact or not. This prediction depends directly on the (i, j) input and the four hidden units in the same column, associated with omni-directional contextual propagation in the hidden planes.
In the simulations reported below, we use a more elaborate input consisting of a 20 × 20 probability matrix over amino acid pairs derived from a multiple alignment of the given protein sequence and its homologues, as well as the structural features of the corresponding amino acids, including their secondary structure classification and their relative exposure to the solvent, derived from our corresponding predictors. It should be clear how GIOHMM ideas can be generalized to other data structures and problems in many ways. In the case of 3D data, for instance, a standard GIOHMM would have an input cube, an output cube, and up to 8 cubes of hidden units, one for each corner with connections inside each hidden cube oriented towards the corresponding corner. In the case of data with an underlying tree structure, the hidden layers would correspond to copies of the same tree with different orientations and so forth. Thus a fundamental advantage of GIOHMMs is that they can process a wide range of data structures of variable sizes and dimensions. 2.2 Indirect Prediction of Topology Although GIOHMMs allow flexible integration of contextual information over ranges that often exceed what can be achieved, for instance, with fixed-input neural networks, the models described above still suffer from the fact that the connections remain local and therefore long-ranged propagation of information during learning remains difficult. Introduction of large numbers of long-ranged connections is computationally intractable but in principle not necessary since the number of contacts in proteins is known to grow linearly with the length of the protein, and hence connectivity is inherently sparse. The difficulty of course is that the location of the long-ranged contacts is not known.
To address this problem, we have also developed a complementary GIOHMM approach described in Figure 3 where a candidate graph structure is proposed in the hidden layers of the GIOHMM, with the two different orientations naturally associated with a protein sequence. Thus the hidden graphs change with each protein. In principle the output ought to be a single unit (Figure 3b) which directly computes a global score for the candidate structure presented in the hidden layer. In order to cope with long-ranged dependencies, however, it is preferable to compute a set of local scores (Figure 3c), one for each vertex, and combine the local scores into a global score by averaging. More specifically, consider a true topology represented by the undirected contact graph G* = (V, E*), and a candidate undirected prediction graph G = (V, E). A global measure of how well E approximates E* is provided by the information-retrieval F1 score defined by the normalized edge-overlap F1 = 2|E ∩ E*|/(|E| + |E*|) = 2PR/(P + R), where P = |E ∩ E*|/|E| is the precision (or specificity) and R = |E ∩ E*|/|E*| is the recall (or sensitivity) measure. Obviously, 0 ≤ F1 ≤ 1 and F1 = 1 if and only if E = E*. The scoring function F1 has the property of being monotone in the sense that if |E| = |E′| then F1(E) < F1(E′) if and only if |E ∩ E*| < |E′ ∩ E*|. Furthermore, if E′ = E ∪ {e} where e is an edge in E* but not in E, then F1(E′) > F1(E). Monotonicity is important to guide the search in the space of possible topologies. It is easy to check that a simple search algorithm based on F1 takes on the order of O(|V|³) steps to find E*, basically by trying all possible edges one after the other. The problem then is to learn F1, or rather a good approximation to F1. To approximate F1, we first consider a similar local measure Fv by considering the Figure 3: Indirect prediction of contact maps. (a) target contact graph to be predicted.
(b) GIOHMM with two hidden layers: the two hidden layers correspond to two copies of the same candidate graph oriented in opposite directions from one end of the protein to the other end. The single output O is the global score of how well the candidate graph approximates the true contact map. (c) Similar to (b) but with a local score O(v) at each vertex. The local scores can be averaged to produce a global score. In (b) and (c) I(v) represents the input for vertex v, and H^F(v) and H^B(v) are the corresponding hidden variables. set E_v of edges adjacent to vertex v and F_v = 2|E_v ∩ E*_v|/(|E_v| + |E*_v|) with the global average F̄ = Σ_v F_v/|V|. If n and n* are the average degrees of G and G*, it can be shown that:

  F1 = (1/|V|) Σ_v 2|E_v ∩ E*| / (n + n*)   and   F̄ = (1/|V|) Σ_v 2|E_v ∩ E*| / (n + ϵ_v + n* + ϵ*_v)   (2)

where n + ϵ_v (resp. n* + ϵ*_v) is the degree of v in G (resp. in G*). In particular, if G and G* are regular graphs, then F1(E) = F̄(E) so that F̄ is a good approximation to F1. In the contact map regime where the number of contacts grows linearly with the length of the sequence, we should have in general |E| ≈ |E*| ≈ (1 + α)|V| so that each node on average has n = n* = 2(1 + α) edges. The value of α depends of course on the neighborhood cutoff. As in reinforcement learning, to learn the scoring function one is faced with the problem of generating good training sets in a high dimensional space, where the states are the topologies (graphs), and the policies are algorithms for adding a single edge to a given graph. In the simulations we adopt several different strategies including static and dynamic generation. Within dynamic generation we use three exploration strategies: random exploration (successor graph chosen at random), pure exploitation (successor graph maximizes the current scoring function), and semi-uniform exploitation to find a balance between exploration and exploitation [with probability ϵ (resp. 1 − ϵ) we choose random exploration (resp. pure exploitation)].
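The F1-based edge-overlap score that drives the search over candidate topologies follows directly from its definition; the sketch below (plain Python, with a made-up toy graph) also illustrates the monotonicity property used to guide the search:

```python
def f1_score(E, E_star):
    # F1 = 2|E ∩ E*| / (|E| + |E*|): normalized edge overlap
    overlap = len(E & E_star)
    return 2.0 * overlap / (len(E) + len(E_star))

def ed(u, v):
    # Edges as frozensets, so that direction is irrelevant
    return frozenset((u, v))

E_star = {ed(0, 1), ed(1, 2), ed(2, 3), ed(3, 4)}  # true contact graph
E = {ed(0, 1), ed(1, 3)}                           # candidate graph

base = f1_score(E, E_star)
# Adding an edge of E* not already in E strictly increases F1
better = f1_score(E | {ed(1, 2)}, E_star)
print(base, better)  # 1/3, then 4/7
```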
3 GRNN Architectures Inference and learning in the protein GIOHMMs we have described is computationally intensive due to the large number of undirected loops they contain. This problem can be addressed using a neural network reparameterization assuming that: (a) all the nodes in the graphs are associated with a deterministic vector (note that in the case of the output nodes this vector can represent a probability distribution so that the overall model remains probabilistic); (b) each vector is a deterministic function of its parents; (c) each function is parameterized using a neural network (or some other class of approximators); and (d) weight-sharing or stationarity is used between similar neural networks in the model. For example, in the 2D GIOHMM contact map predictor, we can use a total of 5 neural networks to recursively compute the four hidden states and the output in each column in the form:

  O_{i,j} = N_O(I_{i,j}, H^{NW}_{i,j}, H^{NE}_{i,j}, H^{SW}_{i,j}, H^{SE}_{i,j})
  H^{NE}_{i,j} = N_{NE}(I_{i,j}, H^{NE}_{i−1,j}, H^{NE}_{i,j−1})
  H^{NW}_{i,j} = N_{NW}(I_{i,j}, H^{NW}_{i+1,j}, H^{NW}_{i,j−1})
  H^{SW}_{i,j} = N_{SW}(I_{i,j}, H^{SW}_{i+1,j}, H^{SW}_{i,j+1})
  H^{SE}_{i,j} = N_{SE}(I_{i,j}, H^{SE}_{i−1,j}, H^{SE}_{i,j+1})   (3)

In the NE plane, for instance, the boundary conditions are set to H^{NE}_{i,j} = 0 for i = 0 or j = 0. The activity vector associated with the hidden unit H^{NE}_{i,j} depends on the local input I_{i,j}, and the activity vectors of the units H^{NE}_{i−1,j} and H^{NE}_{i,j−1}. Activity in the NE plane can be propagated row by row, West to East, and from the first row to the last (from South to North), or column by column South to North, and from the first column to the last. These GRNN architectures can be trained by gradient descent by unfolding the structures in space, leveraging the acyclic nature of the underlying GIOHMMs. 4 Data Many data sets are available or can be constructed for training and testing purposes, as described in the references.
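The plane-by-plane recursion of equation (3) above can be rendered as a short sketch. The code below (illustrative only) propagates a single NE hidden plane over a toy L × L map, with a random linear map plus tanh standing in for the learned network N_NE; all sizes and weights are made-up:

```python
import numpy as np

rng = np.random.default_rng(0)
L, d_in, d_h = 6, 4, 3  # toy sizes: L x L map, input and hidden dimensions

# One shared "network" for the whole plane (weight-sharing / stationarity);
# a random linear map + tanh is a stand-in for the learned N_NE.
W = rng.standard_normal((d_h, d_in + 2 * d_h)) * 0.1

def n_ne(i_vec, h_row, h_col):
    # H_NE[i, j] = N_NE(I[i, j], H_NE[i-1, j], H_NE[i, j-1])
    return np.tanh(W @ np.concatenate([i_vec, h_row, h_col]))

I = rng.standard_normal((L, L, d_in))
H = np.zeros((L, L, d_h))
zero = np.zeros(d_h)  # boundary condition: H_NE = 0 for i = 0 or j = 0
for i in range(L):    # propagate row by row towards the NE corner
    for j in range(L):
        h_row = H[i - 1, j] if i > 0 else zero
        h_col = H[i, j - 1] if j > 0 else zero
        H[i, j] = n_ne(I[i, j], h_row, h_col)
print(H.shape)
```

Because each unit depends only on already-computed neighbors, the sweep order is a topological order of the acyclic plane, which is also what makes unfolding-in-space gradient training possible.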
The data sets used in the present simulations are extracted from the publicly available Protein Data Bank (PDB) and then redundancy reduced, or from the non-homologous subset of PDB Select (ftp://ftp.emblheidelberg.de/pub/databases/). In addition, we typically exclude structures with poor resolution (worse than 2.5-3 Å), sequences containing less than 30 amino acids, and structures containing multiple sequences or sequences with chain breaks. For coarse contact maps, we use the DSSP program [9] (CMBI version) to assign secondary structures, and we also remove sequences for which DSSP crashes. The results we report for fine-grained contact maps are derived using 424 proteins with lengths in the 30-200 range for training and an additional non-homologous set of 48 proteins in the same length range for testing. For the coarse contact map, we use a set of 587 proteins of length less than 300. Because the average length of a secondary structure element is slightly above 7, the size of a coarse map is roughly 2% the size of the corresponding amino acid map.

5 Simulation Results and Conclusions

We have trained several 2D GIOHMM/GRNN models on the direct prediction of fine-grained contact maps. Training of a single model typically takes on the order of a week on a fast workstation. A sample of validation results is reported in Table 1 for four different distance cutoffs.

Table 1: Direct prediction of amino acid contact maps. Column 1: four distance cutoffs. Columns 2, 3, and 4: overall percentages of amino acids correctly classified as contacts, non-contacts, and in total. Column 5: precision percentage for distant contacts (|i - j| >= 8) with a threshold of 0.5. Single model results, except for the last line, which corresponds to an ensemble of 5 models.

Cutoff   Contact   Non-Contact   Total   Precision (P)
 6 Å      .714       .998        .985       .594
 8 Å      .638       .998        .970       .670
10 Å      .512       .993        .931       .557
12 Å      .433       .987        .878       .549
12 Å      .445       .990        .883       .717

Overall percentages of correctly predicted contacts and non-contacts at all linear distances, as well as precision results for distant contacts (|i - j| >= 8), are reported for a single GIOHMM/GRNN model. The model has k = 14 hidden units in the hidden and output layers of the four hidden networks, as well as in the hidden layer of the output network. In the last row, we also report as an example the results obtained at 12 Å by an ensemble of 5 networks with k = 11, 12, 13, 14 and 15. Note that the precision for distant contacts exceeds all previously reported results and is well above 50%.

For the prediction of coarse-grained contact maps, we use the indirect GIOHMM/GRNN strategy and compare different exploration/exploitation strategies: random exploration, pure exploitation, and their convex combination (semi-uniform exploitation). In the semi-uniform case we set the probability of random uniform exploration to ε = 0.4. In addition, we also try a fourth hybrid strategy in which the search proceeds greedily (i.e. the best successor is chosen at each step, as in pure exploitation), but the network is trained by randomly sub-sampling the successors of the current state. Eight numerical features encode the input label of each node: one-hot encoding of secondary structure classes; normalized linear distances from the N to C terminus; average, maximum and minimum hydrophobic character of the segment (based on the Kyte-Doolittle scale with a moving window of length 7). A sample of results obtained with 5-fold cross-validation is shown in Table 2. Hidden state vectors have dimension k = 5 with no hidden layers. For each strategy we measure performance by means of several indices: micro- and macro-averaged precision (mP, MP), recall (mR, MR) and F1 measure (mF1, MF1).
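The micro/macro distinction behind these indices can be sketched in a few lines. The per-protein `(tp, fp, fn)` input format below is our own illustrative assumption, not the paper's data layout:

```python
def micro_macro(counts):
    """counts: list of (tp, fp, fn) tuples, one per protein.

    Micro-averaging pools counts over all pairs before computing the index;
    macro-averaging computes per-protein indices first, then averages them.
    Returns ((mP, mR, mF1), (MP, MR, MF1)).
    """
    def prf(tp, fp, fn):
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        return p, r, f1

    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    micro = prf(tp, fp, fn)
    per_protein = [prf(*c) for c in counts]
    macro = tuple(sum(x) / len(x) for x in zip(*per_protein))
    return micro, macro
```

Note that a single large protein dominates the micro-average but counts the same as any other protein in the macro-average, which is why the two sets of columns in Table 2 can differ noticeably.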
Micro-averages are derived based on each pair of secondary structure elements in each protein, whereas macro-averages are obtained on a per-protein basis, by first computing precision and recall for each protein, and then averaging over the set of all proteins. In addition, we also measure the micro and macro averages for specificity, in the sense of the percentage of correct predictions for non-contacts (mP(nc), MP(nc)). Note the tradeoffs between precision and recall across the training methods, with the hybrid method achieving the best F1 results.

Table 2: Indirect prediction of coarse contact maps with dynamic sampling.

Strategy             mP    mP(nc)  mR    mF1   MP    MP(nc)  MR    MF1
Random exploration  .715   .769   .418  .518  .767   .709   .469  .574
Semi-uniform        .454   .787   .631  .526  .507   .767   .702  .588
Pure exploitation   .431   .806   .726  .539  .481   .793   .787  .596
Hybrid              .417   .834   .790  .546  .474   .821   .843  .607

We have presented two approaches, based on a very general IOHMM/RNN framework, that achieve state-of-the-art performance in the prediction of protein contact maps at fine- and coarse-grained levels of resolution. In principle both methods can be applied at both resolution levels, although indirect prediction is computationally too demanding for fine-grained prediction of large proteins. Several extensions are currently under development, including the integration of these methods into complete 3D structure predictors. While these systems require long training periods, once trained they can rapidly sift through large proteomic data sets.

Acknowledgments

The work of PB and GP is supported by a Laurel Wilkening Faculty Innovation award and awards from NIH, BREP, Sun Microsystems, and the California Institute for Telecommunications and Information Technology. The work of PF and AV is partially supported by a MURST grant.

References

[1] D. Baker and A. Sali. Protein structure prediction and structural genomics. Science, 294:93-96, 2001.
[2] P. Baldi, S. Brunak, P. Frasconi, G. Soda, and G.
Pollastri. Exploiting the past and the future in protein secondary structure prediction. Bioinformatics, 15(11):937-946, 1999.
[3] P. Baldi and Y. Chauvin. Hybrid modeling, HMM/NN architectures, and protein applications. Neural Computation, 8(7):1541-1565, 1996.
[4] P. Baldi and G. Pollastri. Machine learning structural and functional proteomics. IEEE Intelligent Systems, Special Issue on Intelligent Systems in Biology, 17(2), 2002.
[5] Y. Bengio and P. Frasconi. Input-output HMM's for sequence processing. IEEE Trans. on Neural Networks, 7:1231-1249, 1996.
[6] P. Fariselli, O. Olmea, A. Valencia, and R. Casadio. Prediction of contact maps with neural networks and correlated mutations. Protein Engineering, 14:835-843, 2001.
[7] P. Frasconi, M. Gori, and A. Sperduti. A general framework for adaptive processing of data structures. IEEE Trans. on Neural Networks, 9:768-786, 1998.
[8] Z. Ghahramani and M. I. Jordan. Factorial hidden Markov models. Machine Learning, 29:245-273, 1997.
[9] W. Kabsch and C. Sander. Dictionary of protein secondary structure: pattern recognition of hydrogen-bonded and geometrical features. Biopolymers, 22:2577-2637, 1983.
[10] A. M. Lesk, L. Lo Conte, and T. J. P. Hubbard. Assessment of novel fold targets in CASP4: predictions of three-dimensional structures, secondary structures, and interresidue contacts. Proteins, 45, S5:98-118, 2001.
[11] G. Pollastri and P. Baldi. Prediction of contact maps by GIOHMMs and recurrent neural networks using lateral propagation from all four cardinal corners. Proceedings of 2002 ISMB (Intelligent Systems for Molecular Biology) Conference. Bioinformatics, 18, S1:62-70, 2002.
[12] G. Pollastri, D. Przybylski, B. Rost, and P. Baldi. Improving the prediction of protein secondary structure in three and eight classes using recurrent neural networks and profiles. Proteins, 47:228-235, 2002.
[13] G. Pollastri, P. Baldi, P. Fariselli, and R. Casadio.
Prediction of coordination number and relative solvent accessibility in proteins. Proteins, 47:142-153, 2002.
[14] M. Vendruscolo, E. Kussell, and E. Domany. Recovery of protein structure from contact maps. Folding and Design, 2:295-306, 1997.
Mismatch String Kernels for SVM Protein Classification

Christina Leslie, Department of Computer Science, Columbia University, cleslie@cs.columbia.edu
Eleazar Eskin, Department of Computer Science, Columbia University, eeskin@cs.columbia.edu
Jason Weston, Max-Planck Institute, Tuebingen, Germany, weston@tuebingen.mpg.de
William Stafford Noble, Department of Genome Sciences, University of Washington, noble@gs.washington.edu

Abstract

We introduce a class of string kernels, called mismatch kernels, for use with support vector machines (SVMs) in a discriminative approach to the protein classification problem. These kernels measure sequence similarity based on shared occurrences of $k$-length subsequences, counted with up to $m$ mismatches, and do not rely on any generative model for the positive training sequences. We compute the kernels efficiently using a mismatch tree data structure and report experiments on a benchmark SCOP dataset, where we show that the mismatch kernel used with an SVM classifier performs as well as the Fisher kernel, the most successful method for remote homology detection, while achieving considerable computational savings.

1 Introduction

A fundamental problem in computational biology is the classification of proteins into functional and structural classes based on homology (evolutionary similarity) of protein sequence data. Known methods for protein classification and homology detection include pairwise sequence alignment [1, 2, 3], profiles for protein families [4], consensus patterns using motifs [5, 6] and profile hidden Markov models [7, 8, 9]. We are most interested in discriminative methods, where protein sequences are seen as a set of labeled examples — positive if they are in the protein family or superfamily and negative otherwise — and we train a classifier to distinguish between the two classes.
We focus on the more difficult problem of remote homology detection, where we want our classifier to detect (as positives) test sequences that are only remotely related to the positive training sequences. One of the most successful discriminative techniques for protein classification – and the best performing method for remote homology detection – is the Fisher-SVM [10, 11] approach of Jaakkola et al. (Footnote: William Stafford Noble was formerly William Noble Grundy; see http://www.cs.columbia.edu/˜noble/name-change.html.) In this method, one first builds a profile hidden Markov model (HMM) for the positive training sequences, defining a log likelihood function $\log P(x \mid \theta)$ for any protein sequence $x$. If $\hat\theta$ is the maximum likelihood estimate for the model parameters, then the gradient vector $\nabla_\theta \log P(x \mid \theta)\big|_{\theta = \hat\theta}$ assigns to each (positive or negative) training sequence $x$ an explicit vector of features called Fisher scores. This feature mapping defines a kernel function, called the Fisher kernel, that can then be used to train a support vector machine (SVM) [12, 13] classifier. One of the strengths of the Fisher-SVM approach is that it combines the rich biological information encoded in a hidden Markov model with the discriminative power of the SVM algorithm. However, one generally needs a lot of data or sophisticated priors to train the hidden Markov model, and because calculating the Fisher scores requires computing forward and backward probabilities from the Baum-Welch algorithm (quadratic in sequence length for profile HMMs), in practice it is very expensive to compute the kernel matrix. In this paper, we present a new string kernel, called the mismatch kernel, for use with an SVM for remote homology detection. The $(k,m)$-mismatch kernel is based on a feature map to a vector space indexed by all possible subsequences of amino acids of a fixed length $k$; each instance of a fixed $k$-length subsequence in an input sequence contributes to all feature coordinates differing from it by at most $m$ mismatches. Thus, the mismatch kernel adds the biologically important idea of mismatching to the computationally simpler spectrum kernel presented in [14]. In the current work, we also describe how to compute the new kernel efficiently using a mismatch tree data structure; for values of $(k,m)$ useful in this application, the kernel is fast enough to use on real datasets and is considerably less expensive than the Fisher kernel. We report results from a benchmark dataset on the SCOP database [15] assembled by Jaakkola et al. [10] and show that the mismatch kernel used with an SVM classifier achieves performance equal to the Fisher-SVM method while outperforming all other methods tested.
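Before the formal definitions, the mismatch-feature idea can be sketched by brute force: enumerate the mismatch neighborhood of each $k$-mer explicitly and accumulate counts. This naive version is only illustrative (it materializes the feature space, which the paper's mismatch tree deliberately avoids); all names are our own:

```python
from itertools import combinations, product

def mismatch_neighborhood(kmer, m, alphabet):
    """All strings of len(kmer) over `alphabet` within m mismatches of kmer."""
    k = len(kmer)
    nbrs = {kmer}
    for d in range(1, m + 1):
        for positions in combinations(range(k), d):
            for subs in product(alphabet, repeat=d):
                s = list(kmer)
                for pos, c in zip(positions, subs):
                    s[pos] = c
                nbrs.add("".join(s))
    return nbrs

def mismatch_features(x, k, m, alphabet):
    """(k, m)-mismatch feature map: each k-mer of x adds 1 to every
    coordinate in its mismatch neighborhood."""
    feats = {}
    for i in range(len(x) - k + 1):
        for beta in mismatch_neighborhood(x[i:i + k], m, alphabet):
            feats[beta] = feats.get(beta, 0) + 1
    return feats

def mismatch_kernel(x, y, k, m, alphabet):
    """Inner product of the two (sparse) feature vectors."""
    fx = mismatch_features(x, k, m, alphabet)
    fy = mismatch_features(y, k, m, alphabet)
    return sum(c * fy.get(b, 0) for b, c in fx.items())
```

With m = 0 this reduces to the spectrum kernel: plain counts of exactly matching k-mers.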
Finally, we note that the mismatch kernel does not depend on any generative model and could potentially be used in other sequence-based classification problems.

2 Spectrum and Mismatch String Kernels

The basis for our approach to protein classification is to represent protein sequences as vectors in a high-dimensional feature space via a string-based feature map. We then train a support vector machine (SVM), a large-margin linear classifier, on the feature vectors representing our training sequences. Since SVMs are a kernel-based learning algorithm, we do not calculate the feature vectors explicitly but instead compute their pairwise inner products using a mismatch string kernel, which we define in this section.

2.1 Feature Maps for Strings

The $(k,m)$-mismatch kernel is based on a feature map from the space of all finite sequences from an alphabet $\mathcal{A}$ of size $|\mathcal{A}| = l$ to the $l^k$-dimensional vector space indexed by the set of $k$-length subsequences ("$k$-mers") from $\mathcal{A}$. (For protein sequences, $\mathcal{A}$ is the alphabet of amino acids, $l = 20$.) For a fixed $k$-mer $\alpha = a_1 a_2 \cdots a_k$, with each $a_i$ a character in $\mathcal{A}$, the $(k,m)$-neighborhood generated by $\alpha$ is the set of all $k$-length sequences $\beta$ from $\mathcal{A}$ that differ from $\alpha$ by at most $m$ mismatches. We denote this set by $N_{(k,m)}(\alpha)$. We define our feature map $\Phi_{(k,m)}$ as follows: if $\alpha$ is a $k$-mer, then

$$\Phi_{(k,m)}(\alpha) = (\phi_\beta(\alpha))_{\beta \in \mathcal{A}^k} \qquad (1)$$

where $\phi_\beta(\alpha) = 1$ if $\beta$ belongs to $N_{(k,m)}(\alpha)$, and $\phi_\beta(\alpha) = 0$ otherwise. Thus, a $k$-mer contributes weight to all the coordinates in its mismatch neighborhood. For a sequence $x$ of any length, we extend the map additively by summing the feature vectors for all the $k$-mers in $x$:

$$\Phi_{(k,m)}(x) = \sum_{k\text{-mers } \alpha \text{ in } x} \Phi_{(k,m)}(\alpha)$$

Note that the $\beta$-coordinate of $\Phi_{(k,m)}(x)$ is just a count of all instances of the $k$-mer $\beta$ occurring with up to $m$ mismatches in $x$. The $(k,m)$-mismatch kernel $K_{(k,m)}$ is the inner product in feature space of feature vectors:

$$K_{(k,m)}(x, y) = \langle \Phi_{(k,m)}(x), \Phi_{(k,m)}(y) \rangle.$$

For $m = 0$, we retrieve the $k$-spectrum kernel defined in [14].

2.2 Fisher Scores and the Spectrum Kernel

While we define the spectrum and mismatch feature maps without any reference to a generative model for the positive class of sequences, there is some similarity between the $k$-spectrum feature map and the Fisher scores associated to an order $k-1$ Markov chain model. More precisely, suppose the generative model for the positive training sequences is given by

$$P(x \mid \theta) = P(x_1 \cdots x_{k-1})\, P(x_k \mid x_1 \cdots x_{k-1}) \cdots P(x_{|x|} \mid x_{|x|-k+1} \cdots x_{|x|-1})$$

for a string $x = x_1 x_2 \cdots x_{|x|}$, with parameters $\theta_{s \mid a_1 \cdots a_{k-1}} = P(x_i = s \mid x_{i-k+1} \cdots x_{i-1} = a_1 \cdots a_{k-1})$ for characters $s, a_1, \ldots, a_{k-1}$ in alphabet $\mathcal{A}$. Denote by $\hat\theta$ the maximum likelihood estimate for $\theta$ on the positive training set. To calculate the Fisher scores for this model, we follow [10] and define independent variables $\theta_{s, a_1 \cdots a_{k-1}}$ satisfying $\theta_{s \mid a_1 \cdots a_{k-1}} = \theta_{s, a_1 \cdots a_{k-1}} / \sum_t \theta_{t, a_1 \cdots a_{k-1}}$ and $\sum_s \hat\theta_{s, a_1 \cdots a_{k-1}} = 1$. Then the Fisher scores are given by

$$\frac{\partial \log P(x \mid \theta)}{\partial \theta_{s, a_1 \cdots a_{k-1}}}\bigg|_{\hat\theta} = \frac{n_{a_1 \cdots a_{k-1} s}}{\hat\theta_{s, a_1 \cdots a_{k-1}}} - n_{a_1 \cdots a_{k-1}},$$

where $n_{a_1 \cdots a_{k-1} s}$ is the number of instances of the $k$-mer $a_1 \cdots a_{k-1} s$ in $x$, and $n_{a_1 \cdots a_{k-1}}$ is the number of instances of the $(k-1)$-mer $a_1 \cdots a_{k-1}$. Thus the Fisher score captures the degree to which the $k$-mer $a_1 \cdots a_{k-1} s$ is over- or under-represented relative to the positive model. For the $k$-spectrum kernel, the corresponding feature coordinate looks similar but simply uses the unweighted count: $\phi_{a_1 \cdots a_{k-1} s}(x) = n_{a_1 \cdots a_{k-1} s}$.

3 Efficient Computation of the Mismatch Kernel

Unlike the Fisher vectors used in [10], our feature vectors are sparse vectors in a very high dimensional feature space. Thus, instead of calculating and storing the feature vectors, we directly and efficiently compute the kernel matrix for use with an SVM classifier.

3.1 Mismatch Tree Data Structure

We use a mismatch tree data structure (similar to a trie or suffix tree [16, 17]) to represent the feature space (the set of all $k$-mers) and perform a lexical traversal of all $k$-mers occurring in the sample dataset with up to $m$ mismatches; the entire kernel matrix $K(x_i, x_j)$, $i, j = 1, \ldots, M$, for the sample of $M$ sequences is computed in one traversal of the tree. A $(k,m)$-mismatch tree is a rooted tree of depth $k$ where each internal node has $|\mathcal{A}| = l$ branches and each branch is labeled with a symbol from $\mathcal{A}$. A leaf node represents a fixed $k$-mer in our feature space – obtained by concatenating the branch symbols along the path from root to leaf – and an internal node represents the prefix for those $k$-mer features which are its descendants in the tree. We use a depth-first search of this tree to store, at each node that we visit, a set of pointers to all instances of the current prefix pattern that occur with mismatches in the sample data. Thus at each node of depth $d$, we maintain pointers to all substrings from the sample data set whose $d$-length prefixes are within $m$ mismatches from the $d$-length prefix represented by the path down from the root. Note that the set of valid substrings at a node is a subset of the set of valid substrings of its parent. When we encounter a node with an empty list of pointers (no valid occurrences of the current prefix), we do not need to search below it in the tree.
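The traversal can be sketched as a recursive function that carries, at each node, the list of surviving (sequence, offset, mismatch-count) instances, pruning when the list becomes empty and updating the kernel matrix at leaves. This is a simplified illustration in our own notation, not the paper's implementation:

```python
def mismatch_kernel_matrix(seqs, k, m, alphabet):
    """Compute the (k, m)-mismatch kernel matrix by a depth-first traversal
    of the (implicit) mismatch tree, without storing feature vectors."""
    n = len(seqs)
    K = [[0] * n for _ in range(n)]
    # Instances: (sequence index, start offset, mismatches so far).
    root = [(s, i, 0) for s, x in enumerate(seqs)
            for i in range(len(x) - k + 1)]

    def expand(depth, instances):
        if not instances:               # prune: no valid occurrences below
            return
        if depth == k:                  # leaf: one feature coordinate
            counts = {}
            for s, _, _ in instances:
                counts[s] = counts.get(s, 0) + 1
            for s, cs in counts.items():
                for t, ct in counts.items():
                    K[s][t] += cs * ct  # contribution of this k-mer feature
            return
        for a in alphabet:              # branch on the next symbol
            nxt = []
            for s, i, mm in instances:
                mm2 = mm + (seqs[s][i + depth] != a)
                if mm2 <= m:
                    nxt.append((s, i, mm2))
            expand(depth + 1, nxt)

    expand(0, root)
    return K
```

At each leaf, `counts[s]` is exactly the feature value $\phi_\beta(x_s)$ for the current $k$-mer $\beta$, so the accumulated matrix equals the full inner products.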
When we reach a leaf node, we sum the contributions of all instances occurring in each source sequence to obtain feature values corresponding to the current $k$-mer, and we update the kernel matrix entry $K(x, y)$ for each pair of source sequences $x$ and $y$ having non-zero feature values.

Figure 1: A mismatch tree for a sequence AVLALKAVLL, showing the valid instances at each node down a path: (a) at the root node; (b) after expanding the path A; and (c) after expanding the path AL. The number of mismatches for each instance is also indicated.

3.2 Efficiency of the Kernel Computation

Since we compute the kernel in one depth-first traversal, we do not actually need to store the entire mismatch tree but instead compute the kernel using a recursive function, which makes more efficient use of memory and allows kernel computations for large datasets. The number of $k$-mers within $m$ mismatches of any given fixed $k$-mer is

$$N(k, m) = \sum_{i=0}^{m} \binom{k}{i} (l - 1)^i = O(k^m l^m).$$

Thus the effective number of $k$-mer instances that we need to traverse grows as $O(N k^m l^m)$, where $N$ is the total length of the sample data. At a leaf node, if exactly $q$ input sequences contain valid instances of the current $k$-mer, one performs $O(q^2)$ updates to the kernel matrix. For $M$ sequences each of length $n$ (total length $N = Mn$), the worst case for the kernel computation occurs when the feature vectors are all equal and have the maximal number of non-zero entries, giving worst case overall running time $O(M^2 n k^m l^m)$. For the application we discuss here, small values of $m$ are most useful, and the kernel calculations are quite inexpensive. When mismatch kernels are used in combination with SVMs, the learned classifier

$$f(x) = \sum_i y_i c_i \langle \Phi_{(k,m)}(x_i), \Phi_{(k,m)}(x) \rangle$$

(where the $x_i$ are the training sequences that map to support vectors, the $y_i$ are labels, and the $c_i$ are weights) can be implemented by pre-computing and storing per $k$-mer scores. Then the prediction $f(x)$ can be calculated in linear time by look-up of $k$-mer scores. In practice, one usually wants to use a normalized feature map, so one would also need to compute the norm of the vector $\Phi_{(k,m)}(x)$, with complexity $O(n k^m l^m)$ for a sequence of length $n$. Simple normalization schemes, like dividing by sequence length, can also be used.
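The per-$k$-mer score idea can be sketched as follows: fold the support-vector expansion into a table indexed by $k$-mers, after which prediction is a single pass over the test sequence. The helper names and the brute-force neighborhood enumeration below are our own illustrative assumptions (this toy form materializes all $l^k$ scores, so it is only practical for small alphabets and small $k$):

```python
from itertools import combinations, product

def neighborhood(kmer, m, alphabet):
    """k-mers within m mismatches of `kmer` (brute force, small m only)."""
    out = {kmer}
    for d in range(1, m + 1):
        for pos in combinations(range(len(kmer)), d):
            for subs in product(alphabet, repeat=d):
                s = list(kmer)
                for p, c in zip(pos, subs):
                    s[p] = c
                out.add("".join(s))
    return out

def precompute_scores(support, coeffs, k, m, alphabet):
    """Fold f(x) = sum_i c_i <Phi(x_i), Phi(x)> into one score per k-mer:
    score[alpha] = sum over beta in N(alpha) of w[beta], where
    w[beta] = sum_i c_i * Phi_beta(x_i)  (coeffs c_i = lambda_i * y_i)."""
    w = {}
    for x, c in zip(support, coeffs):
        for i in range(len(x) - k + 1):
            for beta in neighborhood(x[i:i + k], m, alphabet):
                w[beta] = w.get(beta, 0.0) + c
    score = {}
    for a in product(alphabet, repeat=k):
        alpha = "".join(a)
        score[alpha] = sum(w.get(b, 0.0) for b in neighborhood(alpha, m, alphabet))
    return score

def predict(x, score, k):
    """Linear-time prediction by looking up each k-mer of x."""
    return sum(score.get(x[i:i + k], 0.0) for i in range(len(x) - k + 1))
```

By construction, `predict` returns the same value as expanding the kernel against every support vector, but with cost linear in the test-sequence length.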
4 Experiments: Remote Protein Homology Detection

We test the mismatch kernel with an SVM classifier on the SCOP [15] (version 1.37) datasets designed by Jaakkola et al. [10] for the remote homology detection problem. In these experiments, remote homology is simulated by holding out all members of a target SCOP family from a given superfamily. Positive training examples are chosen from the remaining families in the same superfamily, and negative test and training examples are chosen from disjoint sets of folds outside the target family's fold. The held-out family members serve as positive test examples. In order to train HMMs, Jaakkola et al. used the SAM-T98 algorithm to pull in domain homologs from the non-redundant protein database and added these sequences as positive examples in the experiments. Details of the datasets are available at www.soe.ucsc.edu/research/compbio/discriminative. Because the test sets are designed for remote homology detection, we use small values of $k$. We tested $(k,m) = (5,1)$ and $(6,1)$, where we normalized the kernel via

$$K^{\mathrm{Norm}}_{(k,m)}(x, y) = \frac{K_{(k,m)}(x, y)}{\sqrt{K_{(k,m)}(x, x)}\,\sqrt{K_{(k,m)}(y, y)}}.$$

We found that $(k,m) = (5,1)$ gave slightly better performance, though results were similar for the two choices. (Data for $(k,m) = (6,1)$ not shown.) We use a publicly available SVM implementation (www.cs.columbia.edu/compbio/svm) of the soft margin optimization algorithm described in [10]. For comparison, we include results from three other methods. These include the original experimental results from Jaakkola et al. for two methods: the SAM-T98 iterative HMM, and the Fisher-SVM method. We also test PSI-BLAST [3], an alignment-based method widely used in the biological community, on the same data using the methodology described in [14]. Figure 2 illustrates the mismatch-SVM method's performance relative to three existing homology detection methods as measured by ROC scores. The figure includes results for all
33 SCOP families, and each series corresponds to one homology detection method. Qualitatively, the curves for Fisher-SVM and mismatch-SVM are quite similar. When we compare the overall performance of two methods using a two-tailed signed rank test [18, 19] based on ROC scores over the 33 families, with a p-value threshold of 0.05 and including a Bonferroni adjustment to account for multiple comparisons, we find only the following significant differences: Fisher-SVM and mismatch-SVM perform better than SAM-T98 (with p-values 1.3e-02 and 2.7e-02, respectively); and these three methods all perform significantly better than PSI-BLAST in this experiment. Figure 3 shows a family-by-family comparison of the performance of the (5,1)-mismatch-SVM and Fisher-SVM using ROC scores in plot (A) and ROC-50 scores in plot (B). (Footnote: The ROC-50 score is the area under the graph of the number of true positives as a function of false positives, up to the first 50 false positives, scaled so that both axes range from 0 to 1. This score is sometimes preferred in the computational biology community, motivated by the idea that a biologist might be willing to sift through about 50 false positives.) In both plots, the points fall approximately evenly above and below the diagonal, indicating little difference in performance between the two methods.

Figure 2: Comparison of four homology detection methods ((5,1)-Mismatch-SVM, Fisher-SVM, SAM-T98, PSI-BLAST). The graph plots the total number of families for which a given method exceeds an ROC score threshold.

Figure 4 shows the improvement provided by including mismatches in the SVM kernel. The figures plot ROC scores (plot (A)) and ROC-50 scores (plot (B)) for two string kernel SVM methods: using the $(k,m) = (5,1)$ mismatch kernel, and using the $k = 3$
(no mismatch) spectrum kernel, the best-performing choice with $m = 0$. Almost all of the families perform better with mismatching than without, showing that mismatching gives significantly better generalization performance.

5 Discussion

We have presented a class of string kernels that measure sequence similarity without requiring alignment or depending upon a generative model, and we have given an efficient method for computing these kernels. For the remote homology detection problem, our discriminative approach — combining support vector machines with the mismatch kernel — performs as well in the SCOP experiments as the most successful known method. A practical protein classification system would involve fast multi-class prediction – potentially involving thousands of binary classifiers – on massive test sets. In such applications, computational efficiency of the kernel function becomes an important issue. Chris Watkins [20] and David Haussler [21] have recently defined a set of kernel functions over strings, and one of these string kernels has been implemented for a text classification problem [22]. However, the cost of computing each kernel entry is quadratic in the length of the input sequences. Similarly, the Fisher kernel of Jaakkola et al. requires quadratic-time computation for each Fisher vector calculated. The $(k,m)$-mismatch kernel is relatively inexpensive to compute for values of $(k,m)$ that are practical in applications, allows computation of multiple kernel values in one pass, and significantly improves performance over the previously presented (mismatch-free) spectrum kernel. Many family-based remote homology detection algorithms incorporate a method for selecting probable domain homologs from unannotated protein sequence databases for additional training data.
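For reference, the ROC-50 computation described in the footnote above can be sketched as follows. Exact truncation and rescaling conventions vary; this is one reasonable reading, not necessarily the authors' implementation:

```python
def roc_n(labels_by_score, n_fp=50):
    """ROC-n score: area under the curve of true positives vs. false
    positives, truncated at the first n_fp false positives, with both
    axes rescaled to [0, 1].

    `labels_by_score`: true labels (1/0) sorted by classifier score,
    highest-scoring first. Behavior when there are no negatives at all
    is left undefined here (a convention choice in practice).
    """
    total_pos = sum(labels_by_score)
    if total_pos == 0:
        return 0.0
    tp = fp = 0
    area = 0
    for y in labels_by_score:
        if y == 1:
            tp += 1
        else:
            fp += 1
            area += tp          # one more FP step at the current TP height
            if fp == n_fp:
                break
    return area / (total_pos * min(n_fp, max(fp, 1)))
```

A perfect ranking (all positives before the first 50 false positives) gives 1.0; a ranking whose top is dominated by negatives gives a score near 0 even when the full-curve ROC would still look respectable, which is exactly why the truncated score is preferred for database-search settings.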
In these experiments, we used the domain homologs that were identified by SAM-T98 (an iterative HMM-based algorithm) as part of the Fisher-SVM method and included in the datasets; these homologs may be more useful to the Fisher kernel than to the mismatch kernel. We plan to extend our method by investigating semi-supervised techniques for selecting unannotated sequences for use with the mismatch-SVM.

Figure 3: Family-by-family comparison of (5,1)-mismatch-SVM with Fisher-SVM. The coordinates of each point in the plot are the ROC scores (plot (A)) or ROC-50 scores (plot (B)) for one SCOP family, obtained using the mismatch-SVM with $(k,m) = (5,1)$ (x-axis) and Fisher-SVM (y-axis). The dotted line is $y = x$.

Figure 4: Family-by-family comparison of (5,1)-mismatch-SVM with spectrum-SVM. The coordinates of each point in the plot are the ROC scores (plot (A)) or ROC-50 scores (plot (B)) for one SCOP family, obtained using the mismatch-SVM with $(k,m) = (5,1)$ (x-axis) and spectrum-SVM with $k = 3$ (y-axis). The dotted line is
$y = x$. Many interesting variations on the mismatch kernel can be explored using the framework presented here. For example, explicit $k$-mer feature selection can be implemented during calculation of the kernel matrix, based on a criterion enforced at each leaf or internal node. Potentially, a good feature selection criterion could improve performance in certain applications while decreasing kernel computation time. In biological applications, it is also natural to consider weighting each $k$-mer instance contribution to a feature coordinate by evolutionary substitution probabilities. Finally, one could use linear combinations of kernels $K_{(k,m)}$ to capture similarity of different length $k$-mers. We believe that further experimentation with mismatch string kernels could be fruitful for remote protein homology detection and other biological sequence classification problems.

Acknowledgments

CL is partially supported by NIH grant LM07276-02. WSN is supported by NSF grants DBI-0078523 and ISI-0093302. We thank Nir Friedman for pointing out the connection with Fisher scores for Markov chain models.

References

[1] M. S. Waterman, J. Joyce, and M. Eggert. Computer alignment of sequences, chapter Phylogenetic Analysis of DNA Sequences. Oxford, 1991.
[2] S. F. Altschul, W. Gish, W. Miller, E. W. Myers, and D. J. Lipman. A basic local alignment search tool. Journal of Molecular Biology, 215:403-410, 1990.
[3] S. F. Altschul, T. L. Madden, A. A. Schaffer, J. Zhang, Z. Zhang, W. Miller, and D. J. Lipman. Gapped BLAST and PSI-BLAST: A new generation of protein database search programs. Nucleic Acids Research, 25:3389-3402, 1997.
[4] Michael Gribskov, Andrew D. McLachlan, and David Eisenberg. Profile analysis: Detection of distantly related proteins. PNAS, pages 4355-4358, 1987.
[5] A. Bairoch. The PROSITE database, its status in 1995. Nucleic Acids Research, 24:189-196, 1995.
[6] T. K. Attwood, M. E. Beck, D. R. Flower, P. Scordis, and J. N. Selley.
The PRINTS protein fingerprint database in its fifth year. Nucleic Acids Research, 26(1):304-308, 1998.
[7] A. Krogh, M. Brown, I. Mian, K. Sjolander, and D. Haussler. Hidden Markov models in computational biology: Applications to protein modeling. Journal of Molecular Biology, 235:1501-1531, 1994.
[8] S. R. Eddy. Multiple alignment using hidden Markov models. In Proceedings of the Third International Conference on Intelligent Systems for Molecular Biology, pages 114-120. AAAI Press, 1995.
[9] P. Baldi, Y. Chauvin, T. Hunkapiller, and M. A. McClure. Hidden Markov models of biological primary sequence information. PNAS, 91(3):1059-1063, 1994.
[10] T. Jaakkola, M. Diekhans, and D. Haussler. A discriminative framework for detecting remote protein homologies. Journal of Computational Biology, 2000.
[11] T. Jaakkola, M. Diekhans, and D. Haussler. Using the Fisher kernel method to detect remote protein homologies. In Proceedings of the Seventh International Conference on Intelligent Systems for Molecular Biology, pages 149-158. AAAI Press, 1999.
[12] V. N. Vapnik. Statistical Learning Theory. Springer, 1998.
[13] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge, 2000.
[14] C. Leslie, E. Eskin, and W. S. Noble. The spectrum kernel: A string kernel for SVM protein classification. Proceedings of the Pacific Symposium on Biocomputing, 2002.
[15] A. G. Murzin, S. E. Brenner, T. Hubbard, and C. Chothia. SCOP: A structural classification of proteins database for the investigation of sequences and structures. Journal of Molecular Biology, 247:536-540, 1995.
[16] M. Sagot. Spelling approximate or repeated motifs using a suffix tree. Lecture Notes in Computer Science, 1380:111-127, 1998.
[17] G. Pavesi, G. Mauri, and G. Pesole. An algorithm for finding signals of unknown length in DNA sequences. Bioinformatics, 17:S207-S214, July 2001. Proceedings of the Ninth International Conference on Intelligent Systems for Molecular Biology.
[18] S.
Henikoff and J. G. Henikoff. Embedding strategies for effective use of information from multiple sequence alignments. Protein Science, 6(3):698-705, 1997.
[19] S. L. Salzberg. On comparing classifiers: Pitfalls to avoid and a recommended approach. Data Mining and Knowledge Discovery, 1:317-328, 1997.
[20] C. Watkins. Dynamic alignment kernels. Technical report, UL Royal Holloway, 1999.
[21] D. Haussler. Convolution kernels on discrete structures. Technical report, UC Santa Cruz, 1999.
[22] Huma Lodhi, John Shawe-Taylor, Nello Cristianini, and Chris Watkins. Text classification using string kernels. Preprint.
Adaptive Caching by Refetching Robert B. Gramacy , Manfred K. Warmuth , Scott A. Brandt, Ismail Ari Department of Computer Science, UCSC Santa Cruz, CA 95064 rbgramacy, manfred, scott, ari @cs.ucsc.edu Abstract We are constructing caching policies that have 13-20% lower miss rates than the best of twelve baseline policies over a large variety of request streams. This represents an improvement of 49–63% over Least Recently Used, the most commonly implemented policy. We achieve this not by designing a specific new policy but by using on-line Machine Learning algorithms to dynamically shift between the standard policies based on their observed miss rates. A thorough experimental evaluation of our techniques is given, as well as a discussion of what makes caching an interesting on-line learning problem. 1 Introduction Caching is ubiquitous in operating systems. It is useful whenever we have a small, fast main memory and a larger, slower secondary memory. In file system caching, the secondary memory is a hard drive or a networked storage server while in web caching the secondary memory is the Internet. The goal of caching is to keep within the smaller memory data objects (files, web pages, etc.) from the larger memory which are likely to be accessed again in the near future. Since the future request stream is not generally known, heuristics, called caching policies, are used to decide which objects should be discarded as new objects are retained. More precisely, if a requested object already resides in the cache then we call it a hit, corresponding to a low-latency data access. Otherwise, we call it a miss, corresponding to a high-latency data access as the data must be fetched from the slower secondary memory into the faster cache memory. In the case of a miss, room must be made in the cache memory for the new object. To accomplish this a caching policy discards from the cache objects which it thinks will cause the fewest or least expensive future misses. 
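The hit/miss mechanics described above can be sketched with a single fixed policy, LRU, the most commonly implemented one. This toy cache is our own illustration (it assumes unit-size objects and ignores fetch costs):

```python
from collections import OrderedDict

class LRUCache:
    """Fixed-capacity cache that discards the least recently used object."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()      # keys ordered from LRU to MRU
        self.hits = self.misses = 0

    def request(self, key):
        """Return True on a hit, False on a miss (fetch + possible eviction)."""
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)     # mark as most recently used
            return True
        self.misses += 1
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict the least recently used
        self.store[key] = None              # payload omitted in this sketch
        return False

    def miss_rate(self):
        total = self.hits + self.misses
        return self.misses / total if total else 0.0
```

Each of the other baseline policies differs only in which object `request` evicts on a miss (e.g. FIFO ignores the `move_to_end` recency update, SIZE and the GreedyDual variants weigh object size and fetch cost).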
In this work we consider twelve baseline policies including seven common policies (RAND, FIFO, LIFO, LRU, MRU, LFU, and MFU), and five more recently developed and very successful policies (SIZE and GDS [CI97], GD* [JB00], GDSF and LFUDA [ACD+99]). These algorithms employ a variety of directly observable criteria including recency of access, frequency of access, size of the objects, cost of fetching the objects from secondary memory, and various combinations of these. The primary difficulty in selecting the best policy lies in the fact that each of these policies may work well in different situations or at different times due to variations in workload, system architecture, request size, type of processing, CPU speed, relative speeds of the different memories, load on the communication network, etc. Thus the difficult question is: In a given situation, which policy should govern the cache? For example, the request stream from disk accesses on a PC is quite different from the request stream produced by web-proxy accesses via a browser, or that of a file server on a local network. The relative performance of the twelve policies varies greatly depending on the application. Furthermore, the characteristics of a single request stream can vary temporally for a fixed application. For example, a file server can behave quite differently during the middle of the night while making tape archives in order to back up data, whereas during the day its purpose is to serve file requests to and from other machines and/or users. Because of their differing decision criteria, different policies perform better given different workload characteristics. The request streams become even more difficult to characterize when there is a hierarchy or a network of caches handling a variety of file-type requests.
[Footnote: Partial support from NSF grant CCR 9821087. Supported by Hewlett Packard Labs, Storage Technologies Department.]
In these cases, choosing a fixed policy for each cache in advance is doomed to be sub-optimal.
Figure 1: Miss rates (y axis) of a) the twelve fixed policies (calculated w.r.t. a window of 300 requests) over 30,000 requests (x axis), b) the same policies on a random permutation of the data set, c) and d) the policies with the lowest miss rates in the figures above. In c) and d) the lowest-miss-rate policy switches between SIZE, GDS, GDSF, and GD*.
The usual answer to the question of which policy to employ is either to select one that works well on average, or to select one that provides the best performance on some past workload that is believed to be representative. However, these strategies have two inherent costs. First, the selection (and perhaps tuning) of the single policy to be used in any given situation is done by hand and may be both difficult and error-prone, especially in complex system architectures with unknown and/or time-varying workloads. And second, the performance of the chosen policy with the best expected average case performance may in fact be worse than that achievable by another policy at any particular moment. Figure 1 (a) shows the miss rate of the twelve policies described above on a representative portion of one of our data sets (described below in Section 3) and Figure 1 (b) shows the miss rate of the same policies on a random permutation of the request stream. As can clearly be seen, the miss rates on the permuted data set are quite different from those of the original data set, and it is this difference that our algorithms aim to exploit.
Figures 1 (c) and (d) show which policy is best at each instant of time for the data segment and the permuted data segment. It is clear from these (representative) figures that the best policy changes over time. To avoid the perils associated with trying to hand-pick a single policy, one would like to be able to automatically and dynamically select the best policy for any given situation. In other words, one wants a cache replacement policy which is "adaptive". In our Storage Systems Research Group, we have identified the need for such a solution in the context of complex network architectures and time-varying workloads and suggested a preliminary framework in which a solution could operate [AAG+ar], but without giving specific algorithmic solutions to the adaptation problem. This paper presents specific algorithmic solutions that address the need identified in that work. It is difficult to give a precise definition of "adaptive" when the data stream is continually changing. We use the term "adaptive" only informally, and when we want to be precise we use off-line comparators to judge the performance of our on-line algorithms, as is commonly done in on-line learning [LW94, CBFH+97, KW97]. An on-line algorithm is called adaptive if it performs well when measured up against off-line comparators. In this paper we use two off-line comparators: BestFixed and BestShifting(K). BestFixed is the a posteriori selected policy with the lowest miss rate on the entire request stream among our twelve policies. BestShifting(K) considers all possible partitions of the request stream into at most K segments along with the best policy for each segment. BestShifting(K) chooses the partition with the lowest total miss rate over the entire dataset and can be computed efficiently with dynamic programming in the total number of requests, the bound K on the number of segments, and the number of baseline policies.
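A hedged sketch of the BestShifting(K) computation: given precomputed per-request miss indicators for each policy (a toy 0/1 matrix below; the real comparator simulates the twelve actual policies), dynamic programming finds the minimum-miss partition into at most K segments. For simplicity this sketch ignores cache-state carry-over between segments.

```python
def best_shifting(miss, K):
    """miss[p][t] = 1 if policy p misses request t; returns min total misses
    over all partitions into at most K segments, one policy per segment."""
    P, T = len(miss), len(miss[0])
    # prefix[p][t] = misses of policy p on requests 0..t-1
    prefix = [[0] * (T + 1) for _ in range(P)]
    for p in range(P):
        for t in range(T):
            prefix[p][t + 1] = prefix[p][t] + miss[p][t]
    seg = lambda p, i, j: prefix[p][j] - prefix[p][i]   # misses of p on [i, j)
    INF = float("inf")
    # best[k][j] = min misses covering requests [0, j) with at most k segments
    best = [[INF] * (T + 1) for _ in range(K + 1)]
    for k in range(K + 1):
        best[k][0] = 0
    for k in range(1, K + 1):
        for j in range(1, T + 1):
            best[k][j] = min(best[k - 1][i] + seg(p, i, j)
                             for i in range(j) for p in range(P))
    return best[K][T]

# Two toy policies, each good on one half of the stream:
miss = [[0, 0, 0, 1, 1, 1],
        [1, 1, 1, 0, 0, 0]]
```

With one segment the best single policy still incurs 3 misses; allowing one shift (K = 2) drops the total to 0, mirroring the advantage of shifting seen in Figure 2.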
Figure 2: Optimal offline comparators for the Work-Week dataset: miss rates (%) versus the number of shifts K, showing BestFixed (= SIZE), BestShifting(K), and All Virtual Caches (AllVC).
Figure 2 shows graphically each of the comparators mentioned above. Notice that BestFixed ≥ BestShifting(K) in miss rate, and that most of the advantage of shifting policies occurs with relatively few shifts in roughly 300,000 requests. Rather than developing a new caching policy (well-plowed ground, to say the least), this paper uses a master policy to dynamically determine the success rate of all the other policies and switch among them based on their relative performance on the current request stream. We show that with no additional fetches, the master policy works about as well as BestFixed. We define a refetch as a fetch of a previously seen object that was favored by the current policy but discarded from the real cache by a previously active policy. With refetching, it can outperform BestFixed. In particular, when all required objects are refetched instantly, this policy has a 13–20% lower miss rate than BestFixed, and almost the same performance as BestShifting(K) for modest K. For reference, when compared with LRU, this policy has a 49–63% lower miss rate. Disregarding misses on objects never seen before (compulsory misses), the performance improvements are even greater. Because refetches are themselves potentially costly, it is important to note that they can be done in the background. Our preliminary experiments show this to be both feasible and effective, capturing most of the advantage of instant refetching. A more detailed discussion of our results is given in Section 3.
2 The Master Policy
We seek to develop an on-line master policy that determines which of a set of baseline policies should govern the real cache at any time. Appropriate switch points need to be found and switches must be facilitated. Our key idea is "virtual caches". A virtual cache simulates the operation of a single baseline policy. Each virtual cache records a few bytes of metadata about each object in its cache: ID, size, and calculated priority.
Object data is only kept in the real cache, making the cost of maintaining the virtual caches negligible¹.
Figure 3: Virtual caches embedded in the cache memory.
Via the virtual caches, the master policy can observe the miss rates of each policy on the actual request stream in order to determine their performance on the current workload. To be fair, virtual caches reside in the memory space which could have been used to cache real objects, as is illustrated in Figure 3. Thus, the space used by the real cache is reduced by the space occupied by the virtual caches. We set the virtual size of each virtual cache equal to the size of the full cache. The caches used for computing the comparators BestFixed and BestShifting(K) are based on caches of the full size. A simple heuristic the master policy can use to choose which caching policy should control at any given time is to continuously monitor the number of misses incurred by each policy in a past window of, for example, 300 requests (depicted in Figure 1 (a)). The master policy then gives control of the real cache to the policy with the least misses in this window (shown in Figure 1 (c)). While this works well in practice, maintaining such a window for many fixed policies is expensive, further reducing the space for the real cache. It is also hard to tune the window size. A better master policy keeps just one weight for each policy (non-negative and summing to one) which represents an estimate of its current relative performance. The master policy is always governed by the policy with the maximum weight². Weights are updated by using the combined loss and share updates of Herbster and Warmuth [HW98] and Bousquet and Warmuth [BW02] from the expert framework [CBFH+97] for on-line learning. Here the experts are the caching policies. This technique is preferred to the window-based master policy because it uses much less memory, and because the parameters of the weight updates are easier to tune than the window size.
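A minimal sketch of these combined loss and share updates, a hedged approximation of the Fixed-Share to Uniform Past variant described below: the parameter values beta and alpha are illustrative, not the ones used in the paper, and the exact mixing scheme in [BW02] differs in detail.

```python
def fsup_update(w, past_avg, misses, t, beta=0.5, alpha=0.05):
    """w: current weights; past_avg: running average of past weight vectors;
    misses[i] = 1 if policy i missed the request. Returns (w', past_avg')."""
    # Loss update: multiply by beta on a miss, then renormalize.
    loss = [wi * (beta if m else 1.0) for wi, m in zip(w, misses)]
    z = sum(loss)
    loss = [li / z for li in loss]
    # Share update: mix in the past average so policies that did well
    # before keep some weight and can recover quickly.
    new_avg = [(pa * t + li) / (t + 1) for pa, li in zip(past_avg, loss)]
    new_w = [(1 - alpha) * li + alpha * pa for li, pa in zip(loss, new_avg)]
    return new_w, new_avg

n = 3
w = [1.0 / n] * n
avg = [1.0 / n] * n
for t in range(10):
    # Policy 1 hits every request; policies 0 and 2 always miss.
    w, avg = fsup_update(w, avg, misses=[1, 0, 1], t=t)
governing = max(range(n), key=lambda i: w[i])
```

The policy with the maximum weight (here `governing`) would control the real cache; note the weights remain a probability vector after every update.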
This also makes the resulting master policy more robust (not shown).
2.1 The Weight Updates
Updating the weight vector
after each trial is a two-part process. First, the weights of all policies that missed the new request are multiplied by a factor β and then renormalized. We call this the loss update. Since the weights are renormalized, they remain unchanged if all policies miss the new request. As noticed by Herbster and Warmuth [HW98], multiplicative updates drive the weights of poor experts to zero so quickly that it becomes difficult for them to recover if their experts subsequently start doing well. Therefore, the second share update prevents the weights of experts that did well in the past from becoming too small, allowing them to recover quickly, as shown in Figure 4. Figure 1(a) shows the current absolute performance of the policies in a rolling window (of 300 requests), whereas Figure 4 depicts relative performance and shows how the policies compete over time. (Recall that the policy with the highest weight always controls the real cache.) There are a number of share updates [HW98, BW02] with various recovery properties. We chose the FIXED SHARE TO UNIFORM PAST (FSUP) update because of its simplicity and efficiency. Note that the loss bounds proven in the expert framework for the combined loss and share update do not apply in this context. This is because we use the mixture weights only to select the best policy. However, our experimental results suggest that we are exploiting the recovery properties of the combined update that are discussed extensively by Bousquet and Warmuth [BW02].
Figure 4: Weights of the twelve baseline policies (FSUP weight versus requests over time).
Formally, for each trial t, the loss update is

v^m_{t,i} = v_{t,i} β^{miss_t(i)} / Σ_j v_{t,j} β^{miss_t(j)},   for 1 ≤ i ≤ n,

where β is a parameter in [0, 1) and miss_t(i) is 1 if the t-th object is missed by policy i and 0 otherwise. The initial distribution is uniform, i.e. v_{1,i} = 1/n. The Fixed-Share to Uniform Past update mixes the current weight vector with the past average weight vector r_t, which is easy to maintain:

v_{t+1,i} = (1 − α) v^m_{t,i} + α r_{t,i},

where α is a parameter in [0, 1]. A small parameter β causes a high weight to decay quickly if its corresponding policy starts incurring more misses than other policies with high weights. The higher the α, the more quickly past good policies will recover. In our experiments we used fixed values of β and α.
¹As an additional optimization, we record the id and size of each object only once, regardless of the number of virtual caches it appears in.
²This can be sub-optimal in the worst case since it is always possible to construct a data stream where two policies switch back and forth after each request. However, real request streams appear to be divided into segments that favor one of the twelve policies for a substantial number of requests (see Figure 1).
³We update the virtual caches before the real cache, so there are always objects in the real cache that are not in the governing virtual cache when the master policy goes to find space for a new request.
2.2 Demand vs. Instantaneous Rollover
When space is needed to cache a new request, the master policy discards objects not present in the governing policy's virtual cache³. This causes the content of the real cache to "roll over" to the content of the current governing virtual cache. We call this demand rollover because objects in the governing virtual cache are refetched into the real cache on demand. While this master policy works almost as well as BestFixed, we were not satisfied and wanted to do as well as BestShifting(K) (for a reasonably large bound K on the number of segments). We noticed that the content of the real cache lagged behind the content of the governing virtual cache and had more misses, and conjectured that "quicker" rollover strategies would improve overall performance. Our search for a better master policy began by considering an extreme and unrealistic rollover strategy that assures no lag time: after each switch, instantaneously refetch all
By appropriate tuning of the update parameters and the number of instantaneous rollovers can be kept reasonably small and the miss rates of our master policy are almost as good as BestShifting( ) for much larger than the actual number of shifts used on-line. Note that the comparator BestShifting( ) is also not penalized for its instantaneous rollovers. While this makes sense for defining a comparator, we now give more realistic rollover strategies that reduce the lag time. 2.3 Background Rollover Because instantaneous rollover immediately refetches everything in the governing virtual cache that is not already in the real cache, it may cause a large number of refetches even when the number of policy switches is kept small. If all refetches are counted as misses, then the miss rate of such a master policy is comparable to that of BestFixed. The same holds for BestShifting. However, from a user perspective, refetching is advantageous because of the latency advantage gained by having required objects in memory before they are needed. And from a system perspective, refetches can be “free” if they are done when the system is idle. To take advantage of these “free” refetches, we introduce the concept of background rollover. The exact criteria for when to refetch each missing object will depend heavily on the system, workload, and expected cost and benefit of each object. To characterize the performance of background rollover without addressing these architectural details, the following background refetching strategies were examined: 1 refetch for every cache miss; 1 for every hit; 1 for every request; 2 for every request; 1 for every hit and 5 for every miss, etc. Each background technique gave fewer misses than BestFixed, approaching and nearly matching the performance obtained by the master policy using instantaneous rollover. Of course, techniques which reduce the number of policy switches (by tuning and ) also reduce the number of refetches. 
Figure 5 compares the performance of each master policy with that of BestFixed and shows that the three master policies almost always outperform BestFixed.
Figure 5: BestFixed − P, where P ∈ {Instantaneous, Demand, Background Rollover 2}. The baseline is BestFixed. Deviations from the baseline show how the performance of our on-line shifting policies differs in miss rate. Above (below) corresponds to fewer (more) misses than BestFixed.
3 Data and Results
Figure 6 shows how the master policy with instantaneous rollover (labeled 'roll') "tracks" the baseline policy with the lowest miss rate over the representative data segment used in previous figures. Figure 7 shows the performance of our master policies with respect to BestFixed, BestShifting(K), and LRU. It shows that demand rollover does slightly worse than BestFixed, while background 1 (1 refetch every request) and background 2 (1 refetch every hit and 5 every miss) do better than BestFixed and almost as well as instantaneous, which itself does almost as well as BestShifting. All of the policies do significantly better than LRU. Discounting the compulsory misses, our best policies have 1/3 fewer "real" misses than BestFixed and 1/2 the "real" misses of LRU. Figure 8 summarizes the performance of our algorithms over three large datasets. These were gathered using Carnegie Mellon University's DFSTrace system [MS96] and had durations ranging from a single day to over a year. The traces we used represent a variety of workloads including a personal workstation (Work-Week), a single user (User-Month), and a remote storage system with a large number of clients, filtered by LRU on the clients' local caches (Server-Month-LRU).
For each data set, the table shows the number of requests, % of requests skipped (size > cache size), the number of compulsory misses of objects not previously seen, and the number of rollovers. For each policy (including BestShifting(K)), the table shows the miss rate, and the % improvement over BestFixed (labeled '% BestF') and over LRU. In each case all 12 virtual caches consumed on average less than 2% of the real cache space. We fixed the parameters β and α for all experiments. As already mentioned, BestShifting(K) is never penalized for rollovers.
Figure 6: "Tracking" the best policy: miss rates over time for the twelve baseline policies and the instantaneous-rollover master policy ('roll') under FSUP.
Figure 7: Online shifting policies against offline comparators and LRU for the Work-Week dataset: miss rates (%) versus the number of shifts K, for LRU, BestFixed (= SIZE), BestShifting(K), All Virtual Caches, the compulsory miss rate, and the Demand, Background 1, Background 2, and Instantaneous master policies (K = 76 marked).
Figure 8: Performance Summary.
                          Work-Week   User-Month   Server-Month-LRU
  #Requests               138k        382k         48k
  Cache size              900KB       2MB          4MB
  %Skipped                6.5%        12.8%        15.7%
  #Compuls                0.020       0.015        0.152
  #Shifts                 88          485          93
  LRU         Miss Rate   0.088       0.076        0.450
  BestFixed   Policy      SIZE        GDS          GDSF
              Miss Rate   0.055       0.075        0.399
              % LRU       36.8%       54.7%        54.2%
  Demand      Miss Rate   0.061       0.076        0.450
              % BestF     -9.6%       -0.5%        -12.8%
              % LRU       30.9%       54.4%        48.5%
  Backgrnd 1  Miss Rate   0.053       0.068        0.401
              % BestF     5.1%        9.8%         -0.7%
              % LRU       40.1%       59.4%        55.5%
  Backgrnd 2  Miss Rate   0.047       0.067        0.349
              % BestF     15.4%       11.9%        12.4%
              % LRU       46.6%       60.1%        60.3%
  Instant     Miss Rate   0.044       0.065        0.322
              % BestF     19.7%       13.4%        19.3%
              % LRU       49.2%       60.8%        63%
  BestShift   Miss Rate   0.042       0.039        0.312
              % BestF     23.6%       48.0%        21.8%
              % LRU       52.2%       48.7%        30.1%
4 Conclusion
Operating systems have many hidden parameter tweaking problems which are ideal applications for on-line Machine Learning algorithms. These parameters are often set to values which provide good average case performance on a test workload. For example, we have identified candidate parameters in device management, file systems, and network protocols. Previously the on-line algorithms for predicting as well as the best shifting expert were used to tune the time-out for spinning down the disk of a PC [HLSS00]. In this paper we use the weight updates of these algorithms for dynamically determining the best caching policy. This application is more elaborate because we needed to actively gather performance information about the caching policies via virtual caches. In future work we plan to do a more thorough study of the feasibility of background rollover by building actual systems.
Acknowledgements: Thanks to David P. Helmbold for an efficient dynamic programming approach to BestShifting(K), Ahmed Amer for data, and Ethan Miller for many helpful insights.
References
[AAG+ar] Ismail Ari, Ahmed Amer, Robert Gramacy, Ethan Miller, Scott Brandt, and Darrell D. E. Long. ACME: Adaptive caching using multiple experts. In Proceedings of the 2002 Workshop on Distributed Data and Structures (WDAS 2002). Carleton Scientific, (to appear).
[ACD+99] Martin Arlitt, Ludmilla Cherkasova, John Dilley, Rich Friedrich, and Tai Jin. Evaluating content management techniques for Web proxy caches. In Proceedings of the Workshop on Internet Server Performance (WISP99), May 1999.
[BW02] O. Bousquet and M. K. Warmuth. Tracking a small set of experts by mixing past posteriors. Journal of Machine Learning Research, 3(Nov):363–396, 2002. Special issue for COLT01.
[CBFH+97] N. Cesa-Bianchi, Y. Freund, D. Haussler, D. P. Helmbold, R. E. Schapire, and M. K. Warmuth. How to use expert advice. Journal of the ACM, 44(3):427–485, 1997.
[CI97] Pei Cao and Sandy Irani. Cost-aware WWW proxy caching algorithms.
In Proceedings of the 1997 Usenix Symposium on Internet Technologies and Systems (USITS-97), 1997.
[HLSS00] David P. Helmbold, Darrell D. E. Long, Tracey L. Sconyers, and Bruce Sherrod. Adaptive disk spin-down for mobile computers. ACM/Baltzer Mobile Networks and Applications (MONET), pages 285–297, 2000.
[HW98] M. Herbster and M. K. Warmuth. Tracking the best expert. Machine Learning, 32(2):151–178, August 1998. Special issue on concept drift.
[JB00] Shudong Jin and Azer Bestavros. GreedyDual* web caching algorithm: Exploiting the two sources of temporal locality in web request streams. Technical Report 2000-011, 2000.
[KW97] J. Kivinen and M. K. Warmuth. Additive versus exponentiated gradient updates for linear prediction. Information and Computation, 132(1):1–64, January 1997.
[LW94] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.
[MS96] Lily Mummert and Mahadev Satyanarayanan. Long term distributed file reference tracing: Implementation and experience. Software - Practice and Experience (SPE), 26(6):705–736, June 1996.
|
2002
|
22
|
2,224
|
Timing and Partial Observability in the Dopamine System Nathaniel D. Daw1,3, Aaron C. Courville2,3, and David S. Touretzky1,3 1Computer Science Department, 2Robotics Institute, 3Center for the Neural Basis of Cognition Carnegie Mellon University, Pittsburgh, PA 15213 {daw,aaronc,dst}@cs.cmu.edu Abstract According to a series of influential models, dopamine (DA) neurons signal reward prediction error using a temporal-difference (TD) algorithm. We address a problem not convincingly solved in these accounts: how to maintain a representation of cues that predict delayed consequences. Our new model uses a TD rule grounded in partially observable semi-Markov processes, a formalism that captures two largely neglected features of DA experiments: hidden state and temporal variability. Previous models predicted rewards using a tapped delay line representation of sensory inputs; we replace this with a more active process of inference about the underlying state of the world. The DA system can then learn to map these inferred states to reward predictions using TD. The new model can explain previously vexing data on the responses of DA neurons in the face of temporal variability. By combining statistical model-based learning with a physiologically grounded TD theory, it also brings into contact with physiology some insights about behavior that had previously been confined to more abstract psychological models. 1 Introduction A series of models [1, 2, 3, 4, 5] based on temporal-difference (TD) learning [6] has explained most responses of primate dopamine (DA) neurons during conditioning [7] as an error signal for predicting reward, and has also identified the DA system as a substrate for conditioning behavior [8]. We address a troublesome issue from these models: how to maintain a representation of cues that predict delayed consequences. For this, we use a formalism that extends the Markov processes in which previous models were grounded. 
Even in the laboratory, the world is often poorly described as Markov in immediate sensory observations. In trace conditioning, for instance, nothing observable spans the delay between a transient stimulus and the reward it predicts. For DA models, this raises problems of coping with hidden state and of tracking temporal intervals. Most previous models address these issues using a tapped delay line representation of the world's state. This augments the representation of current sensory observations with remembered past observations, dividing temporal intervals into a series of states to mark the passage of time. But linear combinations of tapped delay lines do not properly model variability in the intervals between events. Also, the augmented representation may poorly match the contingency structure of the experimental situation: for instance, depending on the amount of history retained, it may be insufficient to span delays, or it may contain old, irrelevant data. We propose a model that better reflects experimental situations by using a formalism that explicitly incorporates hidden state and temporal variability: a partially observable semi-Markov process. The proposal envisions the interaction between a cortical perceptual system that infers the world's hidden state using an internal world model, and a dopaminergic TD system that learns reward predictions for these inferred states. This model improves on its predecessors' descriptions of neuronal firing in situations involving temporal variability, and suggests additional connections with animal behavior.
2 DA models and temporal variability
Figure 1: S: stimulus; R: reward. (a,b) State spaces for the Markov tapped delay line (a) and our semi-Markov (b) TD models of a trace conditioning experiment. (c,d) Modeled DA activity (TD error) when an expected reward is delivered early (top), on time (middle) or late (bottom). The tapped delay line model (c) produces spurious negative error after an early reward, while, in accord with experiments, our semi-Markov model (d) does not. Shaded stripes under (d) and (f) track the model's belief distribution over the world's hidden state (given a one-timestep backward pass), with the ISI in white, the ITI in black, and gray for uncertainty between the two. (e,f) Modeled DA activity when reward timing varies uniformly over a range. The tapped delay line model (e) incorrectly predicts identical excitation to rewards delivered at all times, while, in accord with experiment, our model (f) predicts a response that declines with delay.
Several models [1, 2, 3, 4, 5] identify the firing of DA neurons with the reward prediction error signal δt of a TD algorithm [6]. In the models, DA neurons are excited by positive error in reward prediction (caused by unexpected rewards or reward-predicting stimuli) and inhibited by negative prediction error (caused by the omission of expected reward). If a reward arrives as expected, the models predict no change in firing rate. These characteristics have been demonstrated in recordings of primate DA neurons [7].
In idealized form (neglecting some instrumental contingencies), these experiments and the others that we consider here are all variations on trace conditioning, in which a phasic stimulus such as a flash of light signals that reward will be delivered after a delay. TD systems map a representation of the state of the world to a prediction of future reward, but previous DA modeling exploited few experimental constraints on the form of this representation. Houk et al. [1] computed values using only immediately observable stimuli and allowed learning about rewards to accrue to previously observed stimuli using eligibility traces. But in trace conditioning, DA neurons show a timed pause in their background firing when an expected reward fails to arrive [7]. Because the Houk et al. [1] model does not learn temporal relationships, it cannot produce well timed inhibition. Montague et al. [2] and Schultz et al. [3] addressed these data using a tapped delay line representation of stimulus history [8]: at time t, each stimulus is represented by a vector whose nth element codes whether the stimulus was observed at time t −n. This representation allows the models to learn the temporal relationship between stimulus and reward, and to correctly predict phasic inhibition timelocked to omitted rewards. These models, however, mispredict the behavior of DA neurons when the interval between stimulus and reward varies. In one experiment [9], animals were trained to expect a constant stimulus-reward interval, which was later varied. When a reward is delivered earlier than expected, the tapped delay line models correctly predict that it should trigger positive error (dopaminergic excitation), but also incorrectly predict a further burst of negative error (inhibition, not seen experimentally) when the reward fails to arrive at the time it was originally expected (Figure 1c, top). 
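The tapped delay line representation just described can be sketched as follows (the function name and parameters are ours):

```python
def tapped_delay_line(observations, n_taps):
    """observations: 0/1 indicator of the stimulus at each time step.
    Returns, for each time t, a vector whose n-th element codes whether
    the stimulus was observed at time t - n."""
    reps = []
    for t in range(len(observations)):
        reps.append([observations[t - n] if t - n >= 0 else 0
                     for n in range(n_taps)])
    return reps

# A stimulus at t=1 marches through the delay line on later steps:
reps = tapped_delay_line([0, 1, 0, 0], n_taps=3)
```

Each successive vector activates the next tap, which is exactly the series-of-substates marking of time that the critique below targets.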
In part, this occurs because the models do not represent the reward as an observation, so its arrival can have no effect on later predictions. More fundamentally, this is a problem with how the models partition events into a state space. Figure 1a illustrates how the tapped delay lines mark time in the interval between stimulus and reward using a series of states, each of which learns its own reward prediction. After the stimulus occurs, the model's representation marches through each state in succession. But this device fails to capture a distribution over the interval between two events. If the second event has occurred, the interval is complete and the system should not expect reward again, but the tapped delay line continues to advance. This may be correctable, though awkwardly, by representing the reward with its own delay line, which can then learn to suppress further reward expectation after a reward occurs [10]. However, to our knowledge it is experimentally unclear whether the suppression of this response requires repeated experience with the situation, as this account predicts. Also, whether this works depends on how information from multiple cues is combined into an aggregate reward prediction (i.e. on the function approximator used: it is easy to verify that a standard linear combination of the delay lines does not suffice). The models have a similar problem with a related experiment [11] (Figure 1e) where the stimulus-reward interval varied uniformly over a range of delays throughout training. In this case, all substates within the interval see reward with the same (low) probability, so each produces identical positive error when reward occurs there. In animal experiments, however, stronger dopaminergic activity is seen for earlier rewards [11].
3 A new model
Both of these experiments demonstrate that current TD models of DA do not adequately treat variability in event timing.
We address them with a TD model grounded in a formalism that incorporates temporal variability, a partially observable [12] semi-Markov [13] process. Such a process is described by three functions, $O$, $Q$, and $D$, operating over two sets: the hidden states $S$ and observations $O$. $Q$ associates each state with a probability distribution over possible successors. If the process is in state $s \in S$, then the next state is $s'$ with probability $Q_{ss'}$. These discrete state transitions can occur irregularly in continuous time (which we approximate to arbitrarily fine discretization). The dwell time $\tau$ spent in $s$ before making a transition is distributed with probability $D_{s\tau}$; we define the indicator $\phi_t$ as one if the state transitioned between $t$ and $t+1$ and zero otherwise. On entering $s$, the process emits some observation $o \in O$ with probability $O_{so}$. Some observations are distinguished as rewarding; we separately write the reward magnitude of an observation as $r$. Note that the processes we consider in this paper do not contain decisions. In this formalism, a trace conditioning experiment can be treated as alternation between two states (Figure 1b). The states correspond to the intervals between stimulus and reward (interstimulus interval: ISI) and between reward and stimulus (intertrial interval: ITI). A stimulus is the likely observation when entering the ISI and a reward when entering the ITI. We will index variables both by the time $t$ and by a discrete index $n$ which counts state transitions; e.g. the $n$th state, $s_n$, is entered at time $t = \sum_{k=1}^{n-1} \tau_k$ and can thus also be written as $s_t$. If $\phi_t = 0$ (if the state did not transition between $t$ and $t+1$) then $s_{t+1} = s_t$, $o_{t+1}$ is null and $r_{t+1} = 0$ (i.e., nonempty observations and rewards occur only on transitions). State transitions may be unsignaled: $o_{t+1}$ may be null even if $\phi_t = 1$. An unsignaled transition into the ITI state occurs in our model when reward is omitted, a common experimental manipulation [7].
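A process of this form is straightforward to simulate. The following sketch (our illustration, not the authors' code; the state names, dwell-time sets, and omission probability are made-up stand-ins) draws a sequence of transitions and observations from a two-state ISI/ITI alternation, including unsignaled transitions in which the expected observation is omitted:

```python
import random

# Illustrative two-state partially observable semi-Markov process:
# ISI (stimulus -> reward interval) alternates with ITI (reward -> stimulus).
Q = {"ISI": {"ITI": 1.0}, "ITI": {"ISI": 1.0}}   # successor distribution Q_ss'
D = {"ISI": [4, 5, 6], "ITI": [8, 9, 10]}        # possible dwell times (timesteps)
O = {"ISI": "stimulus", "ITI": "reward"}         # observation emitted on *entering* a state

def simulate(n_transitions, p_omit=0.1, seed=0):
    """Return a list of (time, observation) events; None marks an
    unsignaled transition (e.g. an omitted reward)."""
    rng = random.Random(seed)
    t, s, events = 0, "ITI", []
    for _ in range(n_transitions):
        t += rng.choice(D[s])                                          # tau ~ D_s
        s = rng.choices(list(Q[s]), weights=list(Q[s].values()))[0]    # s' ~ Q_ss'
        obs = O[s] if rng.random() > p_omit else None                  # possibly omitted
        events.append((t, obs))
    return events
```

With `p_omit > 0`, an occasional `None` event reproduces the experimentally common case of an unsignaled transition into the ITI when reward is withheld.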
This example demonstrates the relationship between temporal variability and partial observability: if reward timing can vary, nothing in the observable state reveals whether a late reward is still coming or has been omitted completely. TD algorithms [6] approximate a function mapping each state to its value, defined as the expectation (with respect to variability in reward magnitude, state succession, and dwell times) of summed, discounted future reward, starting from that state. In the semi-Markov case [13], a state's value is defined as the reward expectation at the moment it is entered; we do not count rewards received on the transition in. The value of the $n$th state entered is:

$$V_{s_n} = E\left[\gamma^{\tau_n} r_{n+1} + \gamma^{\tau_n + \tau_{n+1}} r_{n+2} + \cdots\right] = E\left[\gamma^{\tau_n}\left(r_{n+1} + V_{s_{n+1}}\right)\right]$$

where $\gamma < 1$ is a discounting parameter. We address partial observability by using model-based inference to determine a distribution over the hidden states, which then serves as a basis over which a modified TD algorithm can learn values. The approach is similar to the Q-learning algorithm of Chrisman [14]. In our setting, however, values can in principle be learned exactly, since without decisions, they are linear in the space of hidden states. For state inference, we assume that the brain's sensory processing systems use an internal model of the semi-Markov process — that is, the functions $O$, $Q$, and $D$. Here we take the model as given, though we have treated parts of the problem of learning such models elsewhere [15]. A key assumption about this internal model is that its distributions over intervals, rewards and observations contain asymptotic uncertainty, that is, they are not arbitrarily sharp. When learning internal models, such uncertainty can result from an assumption that parameters of the world are constantly changing [16]. Thus, in the inference model for the trace conditioning experiment, the ISI duration is modeled with a probability distribution with some nonzero variance rather than an impulse function.
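As a concrete check of this definition, the value recursion can be solved by fixed-point iteration. The sketch below (ours, with illustrative parameters) handles the two-state trace-conditioning process with deterministic dwell times $a$ and $b$, where the recursion reduces to $V_{ISI} = \gamma^a (r + V_{ITI})$ and $V_{ITI} = \gamma^b V_{ISI}$, with analytic solution $V_{ISI} = \gamma^a r / (1 - \gamma^{a+b})$:

```python
# Fixed-point iteration for semi-Markov values V_s = E[gamma^tau (r' + V_s')].
# Two states with deterministic dwell times a (ISI) and b (ITI); the reward r
# arrives on the ISI -> ITI transition, and none on the ITI -> ISI transition.
def semi_markov_values(a, b, r, gamma, n_iter=2000):
    v_isi, v_iti = 0.0, 0.0
    for _ in range(n_iter):
        v_isi, v_iti = gamma**a * (r + v_iti), gamma**b * v_isi
    return v_isi, v_iti
```

With stochastic dwell times, the scalars $\gamma^a$ and $\gamma^b$ would be replaced by the expectations $E[\gamma^\tau]$ under the dwell-time distribution $D$.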
The model likewise assigns a small probability to anomalous transitions and observations (e.g. unrewarded transitions into the ITI state). This uncertainty is present only in the internal model: most anomalous events never occur in our simulations. Given the model and a series of observations $o_1 \ldots o_t$, we can determine the likelihood that each hidden state is active using a standard forward-backward algorithm for hidden semi-Markov models [17]. The important quantity is the probability, for each state, that the system left that state at time $t$. With a one-timestep backward pass (to match the one-timestep value backups in the TD rule), this is:

$$\beta_{s,t} = P(s_t = s, \phi_t = 1 \mid o_1 \ldots o_{t+1})$$

By Bayes' theorem, $\beta_{s,t} \propto P(o_{t+1} \mid s_t = s, \phi_t = 1) \cdot P(s_t = s, \phi_t = 1 \mid o_1 \ldots o_t)$. The first term can be computed by integrating over $s_{t+1}$ in the model: $P(o_{t+1} \mid s_t = s, \phi_t = 1) = \sum_{s' \in S} Q_{ss'} \cdot O_{s'o_{t+1}}$; the second requires integrating over possible state sequences and dwell times:

$$P(s_t = s, \phi_t = 1 \mid o_1 \ldots o_t) = \sum_{\tau=1}^{d_{\mathrm{lastO}}} D_{s\tau} \cdot O_{so_{t-\tau+1}} \cdot P(s_{t-\tau+1} = s, \phi_{t-\tau} = 1 \mid o_1 \ldots o_{t-\tau})$$

where $d_{\mathrm{lastO}}$ is the number of timesteps since the last non-null observation and $P(s_{t-\tau+1} = s, \phi_{t-\tau} = 1 \mid o_1 \ldots o_{t-\tau})$, the chance that the process entered $s$ at $t - \tau + 1$, equals $\sum_{s' \in S} Q_{s's} \cdot P(s_{t-\tau} = s', \phi_{t-\tau} = 1 \mid o_1 \ldots o_{t-\tau})$, allowing recursive computation. $\beta$ is used for TD learning because it represents the probability of a transition, which is the event that triggers a value update in fully observable semi-Markov TD. Due to partial observability, we may not be certain when transitions have occurred or from which states, so we perform TD updates to every state at every timestep, weighted by $\beta$. We denote our estimate of the value of state $s$ as $\hat{V}_s$, to distinguish it from the true value $V_s$. The update to $\hat{V}_s$ at time $t$ is proportional to the TD error:

$$\delta_{s,t} = \beta_{s,t}\left(E[\gamma^\tau] \cdot (r_{t+1} + E[\hat{V}_{s'}]) - \hat{V}_s\right)$$

where $E[\gamma^\tau] = \sum_k \gamma^k P(\tau_t = k \mid s_t = s, \phi_t = 1, o_1 \ldots o_{t+1})$ is the expected discounting (since dwell time may be uncertain) and $E[\hat{V}_{s'}] = \sum_{s' \in S} \hat{V}_{s'} P(s_{t+1} = s' \mid s_t = s, \phi_t = 1, o_{t+1})$ is the expected subsequent value. Both expectations are conditioned on the process having left state $s$ at time $t$, and computed using the internal world model. As in previous models, we associate the error signal $\delta$ with DA activity. However, because of uncertainty as to the state of the world, the TD error signal is vector-valued rather than scalar. DA neurons could code this vector in a distributed manner, which might explain experimentally observed response variability between neurons [7]. Alternatively, $\delta_{s,t}$ can be approximated with a scalar, which performs well if the inferred state occupancy is sharply peaked. In our figures, we use such an approximation, plotting DA activity as the cumulative TD error over states (implicitly weighted by $\beta$): $\delta_t = \sum_{s \in S} \delta_{s,t}$. An approximate version of the vector signal could be reconstructed at target areas by multiplying by $\beta_{s,t} / \sum_{s' \in S} \beta_{s',t}$. Note that with full observability, the (vector) learning rule reduces to standard semi-Markov TD, and conversely with full unobservability, it nudges states in the direction of a value iteration backup. In fact, the algorithm is exact in that it has the same fixed point as value iteration, assuming the inference model matches the contingencies of the world. (Due to uncertainty it does so only approximately in our simulations.) We sketch the proof. With each TD update, $\hat{V}_s$ is nudged toward some target value with some step size $\beta_{s,t}$; the fixed point is the average of the targets, weighted by their probabilities and their step sizes. Fixing some arbitrary $t$, the update targets and $\beta$ are functions of the observations $o_1 \ldots o_{t+1}$, which are generated according to $P(o_1 \ldots o_{t+1})$. The fixed point is:

$$\hat{V}_s = \frac{\sum_{o_1 \ldots o_{t+1}} P(o_1 \ldots o_{t+1}) \cdot \beta_{s,t} \cdot E[\gamma^\tau] \cdot (r_{t+1} + E[\hat{V}_{s'}])}{\sum_{o_1 \ldots o_{t+1}} P(o_1 \ldots o_{t+1}) \cdot \beta_{s,t}}$$

Marginalizing out the observations reduces this to Bellman's equation for $\hat{V}_s$, which is also, of course, the fixed-point equation for value iteration.

4 Results

When expected reward is delivered early, the semi-Markov model assumes that this signals an early transition into the ITI state, and it thus does not expect further reward or produce spurious negative error (Figure 1d, top). Because of variability in the model's ISI estimate, an early transition, while improbable, better explains the data than some other path through the state space. The early reward is worth more than expected, due to reduced discounting, and is thus accompanied by positive error. The model can also infer a state transition from the passage of time, absent any observations. In Figure 1d (bottom), when the reward is delivered late, the system infers that the world has entered the ITI state without reward, producing negative error. Figure 1f shows our model's behavior when the ISI is uniformly distributed [11]. (The dwell time distribution $D$ in the inference model was changed to reflect this distribution, as an animal should learn a different model here.) Earlier-than-average rewards are worth more than expected (due to discounting) and cause positive prediction error, while later-than-average rewards cause negative error because they are more heavily discounted. This is broadly consistent with the experimental finding of decreasing response with increasing delay [11]. Inhibition at longer delays has not so far been observed in this experiment, though inhibition is in general difficult to detect. If discovered, such inhibition would support the semi-Markov model. Because it combines a conditional probability model with TD learning, our approach can incorporate insights from previous behavioral theories into a physiological model.
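The belief-weighted TD update described in the previous section can be sketched compactly. In this toy code (ours; $\beta$, the expected discount $E[\gamma^\tau]$, and the expected successor value are supplied as precomputed inputs rather than inferred from a world model, and all numbers are illustrative), every state receives an update proportional to the probability that the process just left it, and the scalar "dopamine" readout is the cumulative error over states:

```python
# Belief-weighted semi-Markov TD update:
#   delta_{s,t} = beta_{s,t} * (E[gamma^tau] * (r_{t+1} + E[V_{s'}]) - V_s)
# beta[s] holds P(process left state s at time t | observations).
def td_step(V, beta, exp_discount, r_next, exp_next_value, lr=0.1):
    delta = {}
    for s in V:
        delta[s] = beta[s] * (exp_discount[s] * (r_next + exp_next_value[s]) - V[s])
        V[s] += lr * delta[s]
    # Scalar approximation of the vector signal: cumulative TD error over states.
    return sum(delta.values())

V = {"ISI": 0.5, "ITI": 0.2}
beta = {"ISI": 0.9, "ITI": 0.0}              # nearly certain we just left the ISI
exp_discount = {"ISI": 0.9, "ITI": 0.9}      # E[gamma^tau] given a transition out of s
exp_next_value = {"ISI": 0.2, "ITI": 0.5}    # E[V_{s'}] given a transition out of s
dopamine = td_step(V, beta, exp_discount, r_next=1.0, exp_next_value=exp_next_value)
```

Here only the ISI entry moves, because the ITI's belief weight is zero; with full observability ($\beta$ an indicator) the rule collapses to standard semi-Markov TD, as noted above.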
Our state inference approach is based on a hidden Markov model (HMM) account we previously advanced to explain animal learning about the temporal relationships of events [15]. The present theory (with the model learning scheme from that paper) would account for the same data. Our model also accommodates two important theoretical ideas from more abstract models of animal learning that previous TD models cannot. One is the notion of uncertainty in some of its internal parameters, which Kakade and Dayan [16] use to explain interval timing and attentional effects in learning. Second, Gallistel has suggested that animal learning processes are timescale invariant. For example, altering the speed of events has no effect on the number of trials it takes animals to learn a stimulus-reward association [18]. This is not true of Markov TD models because their transitions are clocked to a fixed timescale. With tapped delay lines, timescale dilation increases the number of marker states in Figure 1a and slows learning. But our semi-Markov model is timescale invariant: learning is induced by state transitions which in turn are triggered by events or by the passage of time on a scale controlled by the internal model. (The form of temporal discounting we use is not timescale invariant, but this can be corrected as in [5].)

5 Discussion

We have presented a model of the DA system that improves on previous models' accounts of data involving temporal variability and partial observability, because, unlike prior models, it is grounded in a formalism that explicitly incorporates these considerations. Like previous models, ours identifies the DA response with reward prediction error, but it differs in the representational systems driving the predictions. Previous models assumed that tapped delay lines transcribed raw sensory events; ours envisions that these events inform a more active process of inference about the underlying state of the world.
This is a principled approach to the problem of representing state when events can be separated by delays. Simpler schemes may capture the neuronal data, which are sparse, but without addressing the underlying computational issues we identify, they are unlikely to generalize. For instance, Suri and Schultz [4] propose that reward delivery overrides stimulus representations, canceling pending predictions and eliminating the spurious negative error in Figure 1c (top). But this would disrupt the behaviorally demonstrated ability of animals to learn that a stimulus predicts a series of rewards. Such static representational rules are insufficient since different tasks have different mnemonic requirements. In our account, unlike more ad-hoc theories, the problem of learning an appropriate representation for a task is well specified: it is the problem of modeling the task. Though we have not simulated model learning here (this is an important area for future work), it is possible using online HMM learning, and we have used this technique in a model of conditioning [15]. Another issue for the future is extending our theory to encompass action selection. DA models often assume an actor-critic framework [1] in which reward predictions are used to evaluate action selection policies. Partial observability complicates such an extension here, since policies must be defined over belief states (distributions over the hidden states S) to accommodate uncertainty; our use of S as a linear basis for value predictions is thus an oversimplification. Puzzlingly, the data we consider suggest that animals build internal models but also use sample-based TD methods to predict values. Given a full world model (which could in principle be solved directly for V ), it seems unclear why TD learning should be necessary. But since the world model must be learned incrementally online, it may be infeasible to continually re-solve it, and parts of the model may be poorly specified. 
In this case, TD learning in the inferred state space could maintain a reasonably current and observationally grounded value function. (Our particular formulation, which relies extensively on the model in the TD rule, may not be ideal from this perspective.) Suri [19] and Dayan [20] have also proposed TD theories of DA that incorporate world models to explain behavioral effects, though they do not address the theoretical issues or dopaminergic data considered here. While those accounts use the world model for directly anticipating future events, we have proposed another role for it in state inference. Also unlike our theory, the others cannot explain the experiments discussed in [15] because their internal models cannot represent simultaneous or backward contingencies. However, they treat the two major issues we have neglected: world model learning and action planning. The formal models in question have roughly equivalent explanatory power: a semi-Markov model can be simulated (to arbitrarily fine temporal discretization) by a Markov model that subdivides its states by dwell time. There is also an isomorphism between higher-order and partially observable Markov models. Thus it would be possible to devise a state representation for a Markov model that copes properly with temporal variability. But doing so by elaborating the tapped delay line architecture would amount to building a clockwork engine for the inference process we describe, without the benefit of useful abstractions such as distributions over intervals; a clearer approach would subdivide the states in our model. Though there exist isomorphisms between the formal models, there are algorithmic differences that may make our proposal experimentally distinguishable from others. The inhibitory responses in Figure 1f reflect the way semi-Markov models account for the costs of delays; they would not be seen in a Markov model with subdivided states.
Such inhibition is somewhat parameter-dependent, since if inference parameters assign high probability to unsignaled transitions the decrease in reward value with delay can be mitigated by increasing uncertainty about the hidden state. Nonetheless, should data not uphold our prediction of inhibitory responses to late rewards, they would suggest a different definition of a state’s value. One choice would be the subdivision of our semi-Markov states by dwell time discussed above, which in the experiment of Figure 1f would decrease TD error toward but not past zero for longer delays. In this case, later rewards are less surprising because the conditional probability of reward increases as time passes without reward. A related prediction suggested by our model is that DA responses not just to rewards but also to stimuli that signal reward might be modulated by their timing relative to expectation. Responses to reward-predicting stimuli disappear in overtrained animals, presumably because the stimuli come to be predicted by events in the previous trial [7]. In tapped delay line models, this is possible only for a constant ITI (since if expectancy is divided between a number of states, stimulus delivery in any one of them cannot be completely predicted away). But the response to a stimulus in the semi-Markov model can show behavior exactly analogous to the reward response in Figure 1f — positive or negative error depending on the time of delivery relative to expectation. So, even in an experiment involving a randomized ITI, the net stimulus response (averaged over the range of ITIs) could be attenuated. Such behavior occurred in our simulations; the modeled DA responses to the stimuli in Figures 1d and 1f are positive because they were taken after shorter-than-average ITIs. It is difficult to evaluate this observation against available data, since the experiment involving overtrained monkeys [7] contained minimal ITI variability. 
We have suggested that the TD error may be a vector signal, with different neurons signaling errors for different elements of a state distribution. This could be investigated experimentally by recording DA neurons as a situation of ambiguous reward expectancy (e.g. one reward or three) resolved into a situation of intermediate, determinate reward expectancy (e.g. two rewards). Neurons carrying an aggregate error should uniformly report no error, but with a vector signal, different neurons might report both positive and negative error.

Acknowledgments

This work was supported by National Science Foundation grants IIS-9978403 and DGE-9987588. Aaron Courville was funded in part by a Canadian NSERC PGS B fellowship. We thank Sham Kakade and Peter Dayan for helpful discussions.

References

[1] JC Houk, JL Adams, and AG Barto. A model of how the basal ganglia generate and use neural signals that predict reinforcement. In JC Houk, JL Davis, and DG Beiser, editors, Models of Information Processing in the Basal Ganglia, pages 249–270. MIT Press, 1995.
[2] PR Montague, P Dayan, and TJ Sejnowski. A framework for mesencephalic dopamine systems based on predictive Hebbian learning. J Neurosci, 16:1936–1947, 1996.
[3] W Schultz, P Dayan, and PR Montague. A neural substrate of prediction and reward. Science, 275:1593–1599, 1997.
[4] RE Suri and W Schultz. A neural network with dopamine-like reinforcement signal that learns a spatial delayed response task. Neurosci, 91:871–890, 1999.
[5] ND Daw and DS Touretzky. Long-term reward prediction in TD models of the dopamine system. Neural Comp, 14:2567–2583, 2002.
[6] RS Sutton. Learning to predict by the method of temporal differences. Machine Learning, 3:9–44, 1988.
[7] W Schultz. Predictive reward signal of dopamine neurons. J Neurophys, 80:1–27, 1998.
[8] RS Sutton and AG Barto. Time-derivative models of Pavlovian reinforcement. In M Gabriel and J Moore, editors, Learning and Computational Neuroscience: Foundations of Adaptive Networks, pages 497–537. MIT Press, 1990.
[9] JR Hollerman and W Schultz. Dopamine neurons report an error in the temporal prediction of reward during learning. Nature Neurosci, 1:304–309, 1998.
[10] DS Touretzky, ND Daw, and EJ Tira-Thompson. Combining configural and TD learning on a robot. In ICDL 2, pages 47–52. IEEE Computer Society, 2002.
[11] CD Fiorillo and W Schultz. The reward responses of dopamine neurons persist when prediction of reward is probabilistic with respect to time or occurrence. In Soc. Neurosci. Abstracts, volume 27: 827.5, 2001.
[12] LP Kaelbling, ML Littman, and AR Cassandra. Planning and acting in partially observable stochastic domains. Artif Intell, 101:99–134, 1998.
[13] SJ Bradtke and MO Duff. Reinforcement learning methods for continuous-time Markov decision problems. In NIPS 7, pages 393–400. MIT Press, 1995.
[14] L Chrisman. Reinforcement learning with perceptual aliasing: The perceptual distinctions approach. In AAAI 10, pages 183–188, 1992.
[15] AC Courville and DS Touretzky. Modeling temporal structure in classical conditioning. In NIPS 14, pages 3–10. MIT Press, 2001.
[16] S Kakade and P Dayan. Acquisition in autoshaping. In NIPS 12, pages 24–30. MIT Press, 2000.
[17] Y Guedon and C Cocozza-Thivent. Explicit state occupancy modeling by hidden semi-Markov models: Application of Derin's scheme. Comp Speech and Lang, 4:167–192, 1990.
[18] CR Gallistel and J Gibbon. Time, rate and conditioning. Psych Rev, 107(2):289–344, 2000.
[19] RE Suri. Anticipatory responses of dopamine neurons and cortical neurons reproduced by internal model. Exp Brain Research, 140:234–240, 2001.
[20] P Dayan. Motivated reinforcement learning. In NIPS 14, pages 11–18. MIT Press, 2001.
Multiple Cause Vector Quantization

David A. Ross and Richard S. Zemel
Department of Computer Science
University of Toronto
{dross,zemel}@cs.toronto.edu

Abstract

We propose a model that can learn parts-based representations of high-dimensional data. Our key assumption is that the dimensions of the data can be separated into several disjoint subsets, or factors, which take on values independently of each other. We assume each factor has a small number of discrete states, and model it using a vector quantizer. The selected states of each factor represent the multiple causes of the input. Given a set of training examples, our model learns the association of data dimensions with factors, as well as the states of each VQ. Inference and learning are carried out efficiently via variational algorithms. We present applications of this model to problems in image decomposition, collaborative filtering, and text classification.

1 Introduction

Many collections of data exhibit a common underlying structure: they consist of a number of parts or factors, each of which has a small number of discrete states. For example, in a collection of facial images, every image contains eyes, a nose, and a mouth (except under occlusion), each of which has a range of different appearances. A specific image can be described as a composite sketch: a selection of the appearance of each part, depending on the individual depicted. In this paper, we describe a stochastic generative model for data of this type. This model is well-suited to decomposing images into parts (it can be thought of as a Mr. Potato Head model), but also applies to domains such as text and collaborative filtering in which the parts correspond to latent features, each having several alternative instantiations.
This representational scheme is powerful due to its combinatorial nature: while a standard clustering/VQ method containing $N$ states can represent at most $N$ items, if we divide the $N$ states into $j$-state VQs, we can represent $j^{N/j}$ items. MCVQ is also especially appropriate for high-dimensional data in which many values may be unspecified for a given input case.

2 Generative Model

In MCVQ we assume there are $K$ factors, each of which is modeled by a vector quantizer with $J$ states. To generate an observed data example of $D$ dimensions, $x \in \Re^D$, we stochastically select one state for each VQ, and one VQ for each dimension. Given these selections, a single state from a single VQ determines the value of each data dimension $x_d$.

Figure 1: Graphical model representation of MCVQ. We let $r_{d=1}$ represent all the variables $r_{d=1,k}$, which together select a VQ for $x_1$. Similarly, $s_{k=1}$ represents all $s_{k=1,j}$, which together select a state of VQ 1. The plates depict repetitions across the appropriate dimensions for each of the three variables: the $K$ VQs, the $J$ states (codebook vectors) per VQ, and the $D$ input dimensions.

The selections are represented as binary latent variables, $S = \{s_{kj}\}$, $R = \{r_{dk}\}$, for $d = 1 \ldots D$, $k = 1 \ldots K$, and $j = 1 \ldots J$. The variable $s_{kj} = 1$ if and only if state $j$ has been selected from VQ $k$. Similarly $r_{dk} = 1$ when VQ $k$ has been selected for data dimension $d$. These variables can be described equivalently as multinomials, $s_k \in 1 \ldots J$, $r_d \in 1 \ldots K$; their values are drawn according to their respective priors, $b_k$ and $a_d$. The graphical model representation of MCVQ is given in Fig. 1.
Assuming each VQ state specifies the mean as well as the standard deviation of a Gaussian distribution, and the noise in the data dimensions is conditionally independent, we have (where $\theta = \{\mu_{dkj}, \sigma_{dkj}\}$):

$$P(x \mid R, S, \theta) = \prod_d \prod_{k,j} N(x_d \,;\, \mu_{dkj}, \sigma_{dkj})^{r_{dk} s_{kj}}$$

The resulting model can be thought of as a two-dimensional mixture model, in which $J * K$ possible states exist for each data dimension ($x_d$). The selections of states for the different data dimensions are joined along the $J$ dimension and occur independently along the $K$ dimension.

3 Learning and Inference

The joint distribution over the observed vector $x$ and the latent variables is

$$P(x, R, S \mid \theta) = P(R \mid \theta) P(S \mid \theta) P(x \mid R, S, \theta) = \prod_{d,k} a_{dk}^{r_{dk}} \prod_{k,j} b_{kj}^{s_{kj}} \prod_{d,k,j} N(x_d \,;\, \theta)^{r_{dk} s_{kj}}$$

Given an input $x$, the posterior distribution over the latent variables, $P(R, S \mid x, \theta)$, cannot tractably be computed, since all the latent variables become dependent. We apply a variational EM algorithm to learn the parameters $\theta$, and infer hidden variables given observations. We approximate the posterior distribution using a factored distribution, where $g$ and $m$ are variational parameters related to $r$ and $s$ respectively:

$$Q(R, S \mid x, \theta) = \prod_{d,k} g_{dk}^{r_{dk}} \prod_{k,j} m_{kj}^{s_{kj}}$$

The variational free energy, $F(Q, \theta) = E_Q\left[-\log P(x, R, S \mid \theta) + \log Q(R, S \mid x, \theta)\right]$, is:

$$F = E_Q\left[\sum_{d,k} r_{dk} \log(g_{dk}/a_{dk}) + \sum_{k,j} s_{kj} \log(m_{kj}/b_{kj}) - \sum_{d,k,j} r_{dk} s_{kj} \log N(x_d \,;\, \theta)\right] = \sum_{k,j} m_{kj} \log m_{kj} + \sum_{d,k} g_{dk} \log g_{dk} + \sum_{d,k,j} g_{dk} m_{kj} \epsilon_{dkj}$$

where $\epsilon_{dkj} = \log \sigma_{dkj} + \frac{(x_d - \mu_{dkj})^2}{2\sigma_{dkj}^2}$, and we have assumed uniform priors for the selection variables. The negative of the free energy $-F$ is a lower bound on the log likelihood of generating the observations. The variational EM algorithm improves this bound by iteratively improving $-F$ with respect to $Q$ (E-step) and to $\theta$ (M-step). Let $C$ be the set of training cases, and $Q^c$ be the approximation to the posterior distribution over latent variables given the training case (observation) $c \in C$.
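To make the generative model concrete, the following sketch (ours; dimensions, priors, and parameters are made up) draws a data vector by selecting one state per VQ and one VQ per dimension, then sampling each $x_d$ from the chosen Gaussian:

```python
import numpy as np

def sample_mcvq(mu, sigma, a, b, rng):
    """Draw one data vector from the MCVQ generative model.
    mu, sigma: (D, K, J) Gaussian parameters; a: (D, K) prior over VQs
    per dimension; b: (K, J) prior over states per VQ."""
    D, K, J = mu.shape
    s = np.array([rng.choice(J, p=b[k]) for k in range(K)])   # one state per VQ
    r = np.array([rng.choice(K, p=a[d]) for d in range(D)])   # one VQ per dimension
    x = np.array([rng.normal(mu[d, r[d], s[r[d]]], sigma[d, r[d], s[r[d]]])
                  for d in range(D)])
    return x, r, s

rng = np.random.default_rng(0)
D, K, J = 6, 2, 3
mu = rng.normal(size=(D, K, J))
sigma = np.full((D, K, J), 0.1)
a = np.full((D, K), 1.0 / K)       # uniform priors, as assumed in the paper
b = np.full((K, J), 1.0 / J)
x, r, s = sample_mcvq(mu, sigma, a, b, rng)
```

Note the combinatorial capacity: the pair of selections $(r, s)$ ranges over $K^D \cdot J^K$ configurations even though each dimension is ultimately explained by a single Gaussian.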
We further constrain this variational approach, forcing the $\{g^c_{dk}\}$ to be consistent across all observations $x^c$. Hence these parameters, relating to the gating variables that govern the selection of a factor for a given observation dimension, are not dependent on the observation. This approach encourages the model to learn representations that conform to this constraint. That is, if there are several posterior distributions consistent with an observed data vector, it favours distributions over $\{r_d\}$ that are consistent with those of other observed data vectors. Under this formulation, only the $\{m^c_{kj}\}$ parameters are updated during the E step for each observation $c$:

$$m^c_{kj} = \exp\left(-\sum_d g_{dk} \, \epsilon^c_{dkj}\right) \Big/ \sum_{\alpha=1}^{J} \exp\left(-\sum_d g_{dk} \, \epsilon^c_{dk\alpha}\right)$$

The M step updates the parameters, $\mu$ and $\sigma$, from each hidden state $kj$ to each input dimension $d$, and the gating variables $\{g_{dk}\}$:

$$g_{dk} = \exp\left(-\frac{1}{C}\sum_{c,j} m^c_{kj} \, \epsilon^c_{dkj}\right) \Big/ \sum_{\beta=1}^{K} \exp\left(-\frac{1}{C}\sum_{c,j} m^c_{\beta j} \, \epsilon^c_{d\beta j}\right)$$

$$\mu_{dkj} = \sum_c m^c_{kj} x^c_d \Big/ \sum_c m^c_{kj} \qquad \sigma^2_{dkj} = \sum_c m^c_{kj} (x^c_d - \mu_{dkj})^2 \Big/ \sum_c m^c_{kj}$$

A slightly different model formulation restricts the selections of VQs, $\{r_{dk}\}$, to be the same for each training case. Variational EM updates for this model are identical to those above, except that the $\frac{1}{C}$ terms in the updates for $g_{dk}$ disappear. In practice, we obtain good results by replacing this $\frac{1}{C}$ term with an inverse temperature parameter that is annealed during learning. This can be thought of as gradually moving from a generative model in which the $r_{dk}$'s can vary across examples, to one in which they are the same for each example. The inferred values of the variational parameters specify a posterior distribution over the VQ states, which in turn implies a mixture of Gaussians for each input dimension. Below we use the mean of this mixture, $\hat{x}^c_d = \sum_{k,j} m^c_{kj} \, g_{dk} \, \mu_{dkj}$, to measure the model's reconstruction error on case $c$.
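These updates amount to softmaxes over negative expected costs $\epsilon$, followed by weighted Gaussian re-estimation. A minimal numpy sketch of one EM step (ours; it assumes uniform priors, omits the annealing schedule, and adds small constants for numerical safety):

```python
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mcvq_em_step(X, g, mu, sigma):
    """One variational EM step. X: (C, D) data; g: (D, K) gating params;
    mu, sigma: (D, K, J) state parameters. Returns updated (g, mu, sigma, m)."""
    C = X.shape[0]
    # eps[c,d,k,j] = log sigma_dkj + (x_cd - mu_dkj)^2 / (2 sigma_dkj^2)
    diff = X[:, :, None, None] - mu[None]
    eps = np.log(sigma)[None] + diff**2 / (2 * sigma[None]**2)
    # E step: m^c_kj proportional to exp(-sum_d g_dk eps^c_dkj)
    m = softmax(-np.einsum('dk,cdkj->ckj', g, eps), axis=2)
    # M step for the gating variables g_dk
    g = softmax(-np.einsum('ckj,cdkj->dk', m, eps) / C, axis=1)
    # M step for the Gaussian parameters (weighted means and variances)
    w = m.sum(axis=0) + 1e-9                       # (K, J) responsibilities
    mu = np.einsum('ckj,cd->dkj', m, X) / w[None]
    diff = X[:, :, None, None] - mu[None]
    sigma = np.sqrt(np.einsum('ckj,cdkj->dkj', m, diff**2) / w[None] + 1e-9)
    return g, mu, sigma, m
```

Iterating `mcvq_em_step` to convergence, with the $1/C$ factor replaced by an annealed inverse temperature, corresponds to the training procedure described above.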
4 Related models

MCVQ falls into the expanding class of unsupervised algorithms known as factorial methods, in which the aim of the learning algorithm is to discover multiple independent causes, or factors, that can well characterize the observed data. Its direct ancestor is Cooperative Vector Quantization [1, 2, 3], which models each data vector as a linear combination of VQ selections. Another part-seeking algorithm, non-negative matrix factorization (NMF) [4], utilizes a non-negative linear combination of non-negative basis functions. MCVQ entails another round of competition, from amongst the VQ selections rather than the linear combination of CVQ and NMF, which leads to a division of input dimensions into separate causes. The contrast between these approaches mirrors the development of the competitive mixture-of-experts algorithm, which grew out of the inability of a cooperative, linear combination of experts to decompose inputs into separable experts. MCVQ also resembles a wide range of generative models developed to address image segmentation [5, 6, 7]. These are generally complex, hierarchical models designed to focus on a different aspect of this problem than that of MCVQ: to dynamically decide which pixels belong to which objects. The chief obstacle faced by these models is the unknown pose (primarily limited to position) of an object in an image, and they employ learned object models to find the single object that best explains each pixel. MCVQ adopts a more constrained solution w.r.t. part locations, assuming that these are consistent across images, and instead focuses on the assembling of input dimensions into parts, and the variety of instantiations of each part. The constraints built into MCVQ limit its generality, but also lead to rapid learning and inference, and enable it to scale up to high-dimensional data.
Finally, MCVQ also closely relates to sparse matrix decomposition techniques, such as the aspect model [8], a latent variable model which associates an unobserved class variable, the aspect $z$, with each observation. Observations consist of co-occurrence statistics, such as counts of how often a specific word occurs in a document. The latent Dirichlet allocation model [9] can be seen as a proper generative version of the aspect model: each document/input vector is not represented as a set of labels for a particular vector in the training set, and there is a natural way to examine the probability of some unseen vector. MCVQ shares the ability of these models to associate multiple aspects with a given document, yet it achieves this by sampling from multiple aspects in parallel, rather than repeated sampling of an aspect within a document. It also imposes the additional selection of an aspect for each input dimension, which leads to a soft decomposition of these dimensions based on their choice of aspect. Below we present some initial experiments examining whether MCVQ can match the successful application of the aspect model to information retrieval and collaborative filtering problems, after evaluating it on image data.

5 Experimental Results

5.1 Parts-based Image Decomposition: Shapes and Faces

The first dataset used to test our model consisted of 11 × 11 gray-scale images, as pictured in Fig. 2a. Each image in the set contains three shapes: a box, a triangle, and a cross. The horizontal position of each shape is fixed, but the vertical position is allowed to vary, uniformly and independently of the positions of the other shapes. A model containing 3 VQs, 5 states each, was trained on a set of 100 shape images. In this experiment, and all experiments reported herein, annealing proceeded linearly from an integer less than $C$ to 1. The learned representation, pictured in Fig. 2b, clearly shows the specialization of each VQ to one of the shapes.
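A synthetic dataset of this kind is easy to regenerate. In the sketch below (ours; the paper does not specify the exact shape geometry, so the box, triangle, and cross drawn here are stand-ins), each 11 × 11 image places the three shapes at fixed horizontal positions with independent, uniformly random vertical positions:

```python
import numpy as np

def make_shape_image(y_box, y_tri, y_cross):
    """Render an 11x11 image (background -1, foreground 1) with a box,
    triangle, and cross whose top rows sit at the given vertical offsets."""
    img = -np.ones((11, 11))
    img[y_box:y_box + 3, 0:3] = 1.0        # box occupies columns 0-2
    img[y_tri, 5] = 1.0                    # triangle occupies columns 4-6
    img[y_tri + 1:y_tri + 3, 4:7] = 1.0
    img[y_cross, 9] = 1.0                  # cross occupies columns 8-10
    img[y_cross + 1, 8:11] = 1.0
    img[y_cross + 2, 9] = 1.0
    return img

def make_dataset(n, rng):
    # Vertical positions vary uniformly and independently per shape.
    ys = rng.integers(0, 9, size=(n, 3))
    return np.stack([make_shape_image(*y) for y in ys]), ys
```

Because the three shapes never overlap horizontally, each pixel column is explained by exactly one shape, which is the structure the gating variables $g_{dk}$ are expected to recover.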
The training set was selected so that none of the examples depict cases in which all three shapes are located near the top of the image. Despite this handicap, MCVQ is able to learn the full range of shape positions, and can accurately reconstruct such an image (Fig. 2c). In contrast, standard unsupervised methods such as Vector Quantization (Fig. 3a) and Principal Component Analysis (Fig. 3b) produce holistic representations of the data, in which each basis vector tries to account for variation observed across the entire image. Non-negative matrix factorization does produce a parts-based representation (Fig. 3c), but captures less of the data's structure. Unlike MCVQ, NMF does not group related parts, and its generative model does not limit the combination of parts to only produce valid images. As an empirical comparison, we tested the reconstruction error of each of the aforementioned methods on an independent test set of 629 images. Since each method has one or more free parameters (e.g. the number of principal components), we chose to relate models with similar description lengths¹. Using a description length of about 5.9 × 10^5 bits, and pixel values ranging from -1 to 1, the average r.m.s. reconstruction error was 0.21 for MCVQ (3 VQs), 0.22 for PCA, 0.35 for NMF, and 0.49 for VQ. Note that this metric may be useful in determining the number of VQs, e.g., MCVQ with 6 VQs had an error of 0.6.

¹We define description length to be the number of bits required to represent the model, plus the number of bits to encode all the test examples using the model. This metric balances the large model cost and small encoding cost of VQ/MCVQ with the small model cost and large encoding cost of PCA/NMF.

Figure 2: a) A sample of 24 training images from the Shapes dataset. b) A typical representation learned by MCVQ with 3 VQs and 5 states per VQ. c) Reconstruction of a test image: original (left) and reconstruction (right).

Figure 3: Other methods trained on shape images: a) VQ, b) PCA, and c) NMF. d) Reconstruction of a test image by the three methods (cf. Fig. 2c).

As a more interesting visual application, we trained our model on a database of face images (www.ai.mit.edu/cbcl/projects). The dataset consists of 19 × 19 gray-scale images, each containing a single frontal or near-frontal face. A model of 6 VQs with 12 states each was trained on 2000 images, requiring 15 iterations of EM to converge. As with the shape images, the model learned a parts-based representation of the faces. The reconstruction of two test images, along with the specific parts used to generate each, is illustrated in Fig. 4. It is interesting to note that the pixels comprising a single part need not be physically adjacent (e.g. the eyes) as long as their appearances are correlated. We again compared the reconstruction error of MCVQ with VQ, PCA, and NMF. The training and testing sets contained 1800 and 629 images respectively. Using a description length of 1.5 × 10^6 bits, and pixel values ranging from -1 to 1, the average r.m.s. reconstruction error was 0.12 for PCA, 0.20 for NMF, 0.23 for MCVQ (both 3 and 6 VQs), and 0.28 for VQ.

Figure 4: The reconstruction of two test images from the Faces dataset. Beside each reconstruction are the parts (the most active state in each of six VQs) used to generate it. Each part j of VQ k is represented by its gated prediction (g_dk * m_kj) at each image pixel.

5.2 Collaborative Filtering

The application of MCVQ to image data assumes that the images are normalized, i.e., that the head is in a similar pose in each image. Normalization can be difficult to achieve in some image contexts; however, in many other types of applications, the input representation is more stable. For example, many information retrieval applications employ bag-of-words representations, in which a given word always occupies the same input element.
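The r.m.s. comparison above can be reproduced in outline. The sketch below is our illustration with toy random data (not the Shapes or Faces sets): it computes the per-pixel-component r.m.s. reconstruction error for simple PCA and VQ reconstructions of a test set.

```python
import numpy as np

def rms_error(X, X_hat):
    """Average root-mean-square reconstruction error per pixel component."""
    return np.sqrt(np.mean((X - X_hat) ** 2))

rng = np.random.default_rng(1)
Xtr = rng.uniform(-1, 1, size=(1000, 121))   # toy training set, pixel values in [-1, 1]
X = rng.uniform(-1, 1, size=(629, 121))      # toy test set

# PCA reconstruction: project onto the top 10 principal components (illustrative count)
mean = Xtr.mean(axis=0)
_, _, Vt = np.linalg.svd(Xtr - mean, full_matrices=False)
W = Vt[:10]
X_pca = (X - mean) @ W.T @ W + mean

# VQ reconstruction: nearest of 10 codebook vectors (here simply random training rows)
codebook = Xtr[:10]
nearest = ((X[:, None, :] - codebook[None]) ** 2).sum(-1).argmin(axis=1)
X_vq = codebook[nearest]

print(rms_error(X, X_pca), rms_error(X, X_vq))
```

On the papers' real data the same metric is what produces the 0.21/0.22/0.35/0.49 figures quoted above; here the numbers are meaningless beyond illustrating the protocol.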
We test MCVQ on a collaborative filtering task, utilizing the EachMovie dataset, where the input vectors are ratings by users of movies, and a given element always corresponds to the same movie. The original dataset contains ratings, on a scale from 1 to 6, of a set of 1649 movies, by 74,424 users. In order to reduce the sparseness of the dataset, since many users rated only a few movies, we only included users who rated at least 75 movies and movies rated by at least 126 users, leaving a total of 1003 movies and 5831 users. The remaining dataset was still very sparse, as the most prolific user rated 928 movies, and the most frequently rated movie was rated by 5401 users. We split the data randomly into a training set of 4831 users and a test set of 1000 users. We ran MCVQ with 8 VQs and 6 states per VQ on this dataset. An example of the results, after 18 iterations of EM, is shown in Fig. 5. Note that in the MCVQ graphical model (Fig. 1), all the observation dimensions are leaves, so an input variable whose value is not specified in a particular observation vector will not play a role in inference or learning. This makes inference and learning with sparse data rapid and efficient. We compare the performance of MCVQ on this dataset to the aspect model. We implemented a version of the aspect model, with 50 aspects and truncated Gaussians for ratings, and used "tempered EM" (with smoothing) to fit the parameters [10]. For both models, we train on the 4831 users in the training set, and then, for each test user, we let the model observe some fixed number of ratings and hold out the rest. We evaluate the models by measuring the absolute difference between their predictions for a held-out rating and the user's true rating, averaged over all held-out ratings for all test users (Fig. 6).
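The evaluation protocol can be sketched as follows. The predictor here is a simple per-movie-mean stand-in, not MCVQ or the aspect model, and all sizes are toy values; only the observe-some/hold-out-the-rest structure matches the text.

```python
import numpy as np

rng = np.random.default_rng(2)
n_users, n_movies = 200, 50
R = rng.integers(1, 7, size=(n_users, n_movies)).astype(float)   # ratings on a 1-6 scale
observed = rng.random((n_users, n_movies)) < 0.3                 # sparse observation mask

# Placeholder predictor: each movie's mean observed rating.
# (MCVQ would instead infer VQ states from the observed entries and
# predict with a convex combination of state predictions.)
col_sum = np.where(observed, R, 0.0).sum(axis=0)
col_cnt = observed.sum(axis=0).clip(min=1)
pred = np.broadcast_to(col_sum / col_cnt, R.shape)

held_out = ~observed
mae = np.abs(pred - R)[held_out].mean()
print(f"mean absolute error on held-out ratings: {mae:.2f}")
```

The curves in Fig. 6 are exactly this mean absolute error, computed as a function of how many ratings each test user is allowed to reveal.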
Figure 5: The MCVQ representation of two test users in the EachMovie dataset. The 3 most conspicuously high-rated (bold) and low-rated movies by the most active states of 4 of the 8 VQs are shown, where conspicuousness is the deviation from the mean rating for a given movie. Each state's predictions, µ_dkj, can be compared to the test user's true ratings (in parentheses); the model's prediction is a convex combination of state predictions. Note the intuitive decomposition of movies into separate VQs, and that different states within a VQ may predict very different rating patterns for the same movies.

Figure 6: The average absolute deviation of predicted and true values of held-out ratings is compared for MCVQ and the aspect model.
Note that the number of users per x-bin decreases with increasing x, as a user must rate at least x+1 movies to be included.

5.3 Text Classification

MCVQ can also be used for information retrieval from text documents, by employing the bag-of-words representation. We present preliminary results on the NIPS corpus (available at www.cs.toronto.edu/˜roweis/data.html), which consists of the full text of the NIPS conference proceedings, volumes 1 to 12. The data was pre-processed to remove common words (e.g. the), and those appearing in fewer than five documents, resulting in a vocabulary of 14,265 words. For each of the 1740 papers in the corpus, we generated a vector containing the number of occurrences of each word in the vocabulary. These vectors were normalized so that each contained the same number of words. A model of 8 VQs, 8 states each, was trained on the data, converging after 15 iterations of EM. A sample of the results is shown in Fig. 7. When trained on text data, the values of {g_dk} provide a segmentation of the vocabulary into subsets of words with correlated frequencies. Within a particular subset, the words can be positively correlated, indicating that they tend to appear in the same documents, or negatively correlated, indicating that they seldom appear together.

Figure 7: The representation of two documents by an MCVQ model with 8 VQs and 8 states per VQ. For each document we show the states selected for it from 4 VQs. The bold (plain) words for each state are those most conspicuous by their above (below) average predicted frequency. (The two documents shown are "Predictive Sequence Learning in Recurrent Neocortical Circuits" by R. P. N. Rao & T. J. Sejnowski and "The Relevance Vector Machine" by Michael E. Tipping.)

6 Conclusion

We have presented a novel method for learning factored representations of data that can be learned efficiently and employed across a wide variety of problem domains. MCVQ combines the cooperative nature of some methods, such as CVQ, NMF, and LSA, that use multiple causes to generate input, with competitive aspects of clustering methods. In addition, it gains combinatorial power by splitting the input into subsets, and can readily handle sparse, high-dimensional data. One direction of further research involves extending the applications described above, including applying MCVQ to other dimensions of the NIPS corpus, such as authors, to find groupings of authors based on word-use frequency. An important theoretical direction is to incorporate Bayesian learning for selecting the number and size of each VQ.

References

[1] R.S. Zemel. A Minimum Description Length Framework for Unsupervised Learning. PhD thesis, Dept. of Computer Science, University of Toronto, Toronto, Canada, 1993.
[2] G. Hinton and R.S. Zemel. Autoencoders, minimum description length, and Helmholtz free energy. In G. Tesauro, J.D. Cowan, and J. Alspector, editors, Advances in Neural Information Processing Systems 6. Morgan Kaufmann Publishers, San Mateo, CA, 1994.
[3] Z. Ghahramani. Factorial learning and the EM algorithm. In G. Tesauro, D.S. Touretzky, and T.K.
Leen, editors, Advances in Neural Information Processing Systems 7. MIT Press, Cambridge, MA, 1995. [4] D.D. Lee and H.S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401:788–791, October 1999. [5] C. Williams and N. Adams. DTs: Dynamic trees. In M.J. Kearns, S.A. Solla, and D.A. Cohn, editors, Advances in Neural Information Processing Systems 11. MIT Press, Cambridge, MA, 1999. [6] G.E. Hinton, Z. Ghahramani, and Y.W. Teh. Learning to parse images. In S.A. Solla, T.K. Leen, and K.R. Muller, editors, Advances in Neural Information Processing Systems 12. MIT Press, Cambridge, MA, 2000. [7] N. Jojic and B.J. Frey. Learning flexible sprites in video layers. In CVPR, 2001. [8] T. Hofmann. Probabilistic latent semantic analysis. In Proc. of Uncertainty in Artificial Intelligence, UAI’99, Stockholm, 1999. [9] D.M. Blei, A.Y. Ng, and M.I. Jordan. Latent Dirichlet allocation. In T.K. Leen, T. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13. MIT Press, Cambridge, MA, 2001. [10] T. Hofmann. Learning what people (don’t) want. In European Conference on Machine Learning, 2001.
Unsupervised Color Constancy

Kinh Tieu
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
tieu@ai.mit.edu

Erik G. Miller
Computer Science Division
UC Berkeley
Berkeley, CA 94720
egmil@cs.berkeley.edu

Abstract

In [1] we introduced a linear statistical model of joint color changes in images due to variation in lighting and certain non-geometric camera parameters. We did this by measuring the mappings of colors in one image of a scene to colors in another image of the same scene under different lighting conditions. Here we increase the flexibility of this color flow model by allowing flow coefficients to vary according to a low order polynomial over the image. This allows us to better fit smoothly varying lighting conditions as well as curved surfaces without endowing our model with too much capacity. We show results on image matching and shadow removal and detection.

1 Introduction

The number of possible images of an object or scene, even when taken from a single viewpoint with a fixed camera, is very large. Light sources, shadows, camera aperture, exposure time, transducer non-linearities, and camera processing (such as auto-gain-control and color balancing) can all affect the final image of a scene. These effects have a significant impact on the images obtained with cameras and hence on image processing algorithms, often hampering or eliminating our ability to produce reliable recognition algorithms. Addressing the variability of images due to these photic parameters has been an important problem in machine vision. We distinguish photic parameters from geometric parameters, such as camera orientation or blurring, that affect which parts of the scene a particular pixel represents. We also note that photic parameters are more general than "lighting parameters" and include anything which affects the final RGB values in an image given that the geometric parameters and the objects in the scene have been fixed.
We present a statistical linear model of color change space that is learned by observing how the colors in static images change jointly under common, naturally occurring lighting changes. Such a model can be used for a number of tasks, including synthesis of images of new objects under different lighting conditions, image matching, and shadow detection. Results for each of these tasks will be reported. Several aspects of our model merit discussion. First, it is obtained from video data in a completely unsupervised fashion. The model uses no prior knowledge of lighting conditions, surface reflectances, or other parameters during data collection and modeling. It also has no built-in knowledge of the physics of image acquisition or of "typical" image color changes, such as brightness changes. Second, it is a single global model and does not need to be re-estimated for new objects or scenes. While it may not apply to all scenes equally well, it is a model of frequently occurring joint color changes, which is meant to apply to all scenes. Third, while our model is linear in color change space, each joint color change that we model (a 3-D vector field) is completely arbitrary, and is not itself restricted to being linear. This gives us great modeling power, while capacity is controlled through the number of basis fields allowed. After discussing previous work in Section 2, we introduce the color flow model and how it is obtained from observations in Section 3. In Section 4, we show how the model and a single observed image can be used to generate a large family of related images. We also give an efficient procedure for finding the best fit of the model to the difference between two images. In Section 5 we give preliminary results for image matching (object recognition) and shadow detection.

2 Previous work

The color constancy literature contains a large body of work on estimating surface reflectances and various photic parameters from images.
A common approach is to use linear models of reflectance and illuminant spectra [2]. Gray world algorithms [3] assume the average reflectance of all the surfaces in a scene is gray. White world algorithms [4] assume the brightest pixel corresponds to a scene point with maximal reflectance. Brainard and Freeman attacked this problem probabilistically [5] by defining prior distributions on particular illuminants and surfaces. They used a new, maximum local mass estimator to choose a single best estimate of the illuminant and surface. Another technique is to estimate the relative illuminant, or mapping of colors under an unknown illuminant to a canonical one. Color gamut mapping [6] uses the convex hull of all achievable RGB values to represent an illuminant. The intersection of the mappings for each pixel in an image is used to choose a "best" mapping. [7] trained a back-propagation multi-layer neural network to estimate the parameters of a linear color mapping. The approach in [8] works in the log color spectra space, where the effect of a relative illuminant is a set of constant shifts in the scalar coefficients of linear models for the image colors and illuminant. The shifts are computed as differences between the modes of the distribution of coefficients of randomly selected pixels of some set of representative colors. [9] bypasses the need to predict specific scene properties by proving that the set of images of a gray Lambertian convex object under all lighting conditions forms a convex cone.¹ We wanted a model which, based upon a single image (instead of the three required by [9]), could make useful predictions about other images of the same scene. This work is in the same spirit, although we use a statistical method rather than a geometric one.

3 Color flows

In the following, let C = {(r, g, b)^T ∈ R^3 : 0 ≤ r ≤ 255, 0 ≤ g ≤ 255, 0 ≤ b ≤ 255} be the set of all possible observable image color 3-vectors. Let the vector-valued color of an image pixel p be denoted by c(p) ∈ C.
Suppose we are given two P-pixel RGB color images I1 and I2 of the same scene taken under two different photic parameters θ1 and θ2 (the images are registered). Each pair of corresponding image pixels p_1^k and p_2^k, 1 ≤ k ≤ P, in the two images represents a single color mapping c(p_1^k) → c(p_2^k) that is conveniently represented by the vector difference:

d(p_1^k, p_2^k) = c(p_2^k) − c(p_1^k).   (1)

By computing P vector differences (one for each pair of pixels) and placing each at the point c(p_1^k) in color space C, we have a partially observed color flow:

Φ′(c(p_1^k)) = d(p_1^k, p_2^k),  1 ≤ k ≤ P,   (2)

defined at points in C for which there are colors in image I1. To obtain a full color flow (i.e. a vector field Φ defined at all points in C) from a partially observed color flow Φ′, we must address two issues. First, there will be many points in C at which no vector difference is defined. Second, there may be multiple pixels of a particular color in image I1 that are mapped to different colors in image I2. We use a radial basis function estimator which defines the flow at a color point (r, g, b)^T as the weighted proximity-based average of nearby observed "flow vectors". We found empirically that σ² = 16 (with colors on a 0–255 scale) worked well.

¹This result depends upon the important assumption that the camera, including the transducers, the aperture, and the lens, introduces no non-linearities into the system. The authors' results on color images also do not address the issue of metamers, and assume that light is composed of only the wavelengths red, green, and blue.

Figure 1: Matching non-linear color changes. b is the result of squaring the value of a (in HSV) and re-normalizing it to 255. c–f are attempts to match b with a using four different algorithms. Our algorithm (f) was the only one to capture the non-linearity.
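A minimal sketch of the radial-basis-function estimator described above, assuming a Gaussian kernel with σ² = 16 and normalized weights; the exact kernel and normalization used in [1] may differ, and the function name is ours:

```python
import numpy as np

def full_flow(colors_obs, diffs_obs, grid, sigma2=16.0):
    """Extend a partially observed color flow to arbitrary grid points in RGB space.

    colors_obs: (P, 3) colors c(p_1^k) at which differences were observed
    diffs_obs:  (P, 3) observed difference vectors d(p_1^k, p_2^k)
    grid:       (G, 3) RGB points at which the full flow Phi is wanted
    Returns the Gaussian-weighted average of nearby observed flow vectors
    (sigma^2 = 16 with colors on a 0-255 scale, as in the text).
    """
    d2 = ((grid[:, None, :] - colors_obs[None, :, :]) ** 2).sum(-1)  # (G, P) squared dists
    w = np.exp(-d2 / (2.0 * sigma2))
    w /= w.sum(axis=1, keepdims=True)      # proximity-based weights, normalized per point
    return w @ diffs_obs                    # (G, 3) interpolated flow vectors

# toy example: one observation darkens mid-gray, another barely reddens a distant color
obs_c = np.array([[100.0, 100.0, 100.0], [200.0, 50.0, 30.0]])
obs_d = np.array([[-10.0, -10.0, -10.0], [-5.0, 0.0, 0.0]])
grid = np.array([[101.0, 100.0, 100.0]])
print(full_flow(obs_c, obs_d, grid))       # dominated by the nearby observation
```

This also shows how the estimator resolves the second issue above: multiple observations of the same (or nearby) colors are simply averaged with proximity weights.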
Note that color flows are defined so that a color point with only a single nearby neighbor will inherit a flow vector that is nearly parallel to its neighbor's. The idea is that if a particular color, under a photic parameter change θ1 → θ2, is observed to get a little bit darker and a little bit bluer, for example, then its neighbors in color space are also defined to exhibit this behavior.

3.1 Structure in the space of color flows

Consider a flat Lambertian surface that may have different reflectances as a function of the wavelength. While in principle it is possible for a change in lighting to map any color from such a surface to any other color independently of all other colors², we know from experience that many such joint maps are unlikely. This suggests that while the marginal distribution of mappings for a particular color is broadly distributed, the space of possible joint color maps (i.e., color flows) is much more compact³. In learning a statistical model of color flows, many common color flows can be anticipated, such as ones that make colors a little darker, lighter, or more red. These types of flows can be well modeled with a simple global 3×3 matrix A that maps a color c1 in image I1 to a color c2 in image I2 via

c2 = A c1.   (3)

However, there are many effects which linear maps cannot model. Perhaps the most significant is the combination of a large brightness change coupled with a non-linear gain-control adjustment or brightness re-normalization by the camera. Such photic changes will tend to leave the bright and dim parts of the image alone, while spreading the central colors of color space toward the margins. For a linear imaging process, the ratio of the brightnesses of two images, or quotient image [12], should vary smoothly except at surface normal boundaries. However, as shown in Figure 2, the quotient image is a function not only of surface normal, but also of albedo: direct evidence of a non-linear imaging process. Another pair of images exhibiting a non-linear color flow is shown in Figures 1a and b. Notice that the brighter areas of the original image get brighter and the darker portions get darker.

²By carefully choosing properties such as the surface reflectance of a point as a function of wavelength and lighting, any mapping Φ̃ can, in principle, be observed even on a flat Lambertian surface. However, the metamerism which would cause such effects is uncommon in practice [10, 11].

³We will address below the significant issue of non-flat surfaces and shadows, which can cause highly "incoherent" maps.

Figure 2: Evidence of non-linear color changes. The first two images are of the top and side of a box covered with multi-colored paper. The quotient image is shown next. The rightmost image is an ideal quotient image, corresponding to a linear lighting model.

Figure 3: Effects of the first three eigenflows. See text.

3.2 Color eigenflows

We wanted to capture the structure in color flow space by observing real-world data in an unsupervised fashion. A one-square-meter color palette was printed on standard non-glossy plotter paper using every color that could be produced by a Hewlett Packard DesignJet 650C. The poster was mounted on a wall in our office so that it was in the direct line of overhead lights and computer monitors but not the single office window. An inexpensive video camera (the PC-75WR, Supercircuits, Inc.) with auto-gain-control was aimed at the poster so that the poster occupied about 95% of the field of view. Images of the poster were captured using the video camera under a wide variety of lighting conditions, including various intervals during sunrise, sunset, at midday, and with various combinations of office lights and outdoor lighting (controlled by adjusting blinds). People used the office during the acquisition process as well, thus affecting the ambient lighting conditions.
It is important to note that a variety of non-linear normalization mechanisms built into the camera were operating during this process. We chose image pairs I^j = (I_1^j, I_2^j), 1 ≤ j ≤ 800, by randomly and independently selecting individual images from the set of raw images. Each image pair was then used to estimate a full color flow Φ(I^j). We used 4096 distinct RGB colors (equally spaced in RGB space), so Φ(I^j) was represented by a vector of 3 × 4096 = 12288 components. We modeled the space of color flows using principal components analysis (PCA) because: 1) the flows are well represented (in an L2 sense) by a small number of principal components, and 2) finding the optimal description of a difference image in terms of color flows is computationally efficient using this representation (see Section 4). We call the principal components of the color flow data "color eigenflows", or just eigenflows⁴, for short. We emphasize that these principal components of color flows have nothing to do with the distribution of colors in images, but only model the distribution of changes in color. This is a key and potentially confusing point. Our work is very different from approaches that compute principal components in the intensity or color space itself [14, 15]. Perhaps the most important difference is that our model is a global model for all images, while the above methods are models only for a particular set of images, such as faces.

⁴PCA has been applied to motion vector fields [13], and these have also been termed "eigenflows".

Figure 4: Image matching. Top row: original images. Bottom row: best approximation to the original images using eigenflows and the source image a. Reconstruction errors per pixel component for the four methods (color flow, linear, diagonal, and gray world) are shown in b.

4 Using color flows to synthesize novel images

How do we generate a new image from a source image and a color flow Φ?
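Computing eigenflows amounts to running PCA on the 12288-dimensional flow vectors. A hedged sketch using the SVD is given below; toy random flows stand in for the 800 flows observed from the palette data, and the function name is ours:

```python
import numpy as np

def color_eigenflows(flows, n_components=30):
    """PCA on a set of full color flows.

    flows: (N, 12288) array; each row is a flow over 4096 RGB grid points
           (3 components per point), matching the setup in the text.
    Returns the mean flow and the top principal directions ("eigenflows").
    """
    mean = flows.mean(axis=0)
    # SVD of the centered data: rows of Vt are orthonormal principal directions
    _, _, Vt = np.linalg.svd(flows - mean, full_matrices=False)
    return mean, Vt[:n_components]

rng = np.random.default_rng(3)
toy_flows = rng.normal(size=(50, 12288))     # stand-in for the 800 observed flows
mean_flow, eig = color_eigenflows(toy_flows, n_components=5)
print(eig.shape)                             # (5, 12288)
```

Note that this PCA is taken over flows, not over images: each "data point" is itself an entire vector field on color space, which is exactly the distinction the paragraph above emphasizes.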
For each pixel p in the new image, its color c′(p) can be computed as

c′(p) = c(p) + α Φ(ĉ(p)),   (4)

where c(p) is the color in the source image and α is a scalar multiplier that represents the "quantity of flow". ĉ(p) is interpreted to be the color vector closest to c(p) (in color space) at which Φ has been computed. RGB values are clipped to 0–255. Figure 3 shows the effect of the first three eigenflows on an image of a face. The original image is in the middle of each row, while the other images show the application of each eigenflow with α values between ±4 standard deviations. The first eigenflow (top row) represents a generic brightness change that could probably be represented well with a linear model. Notice, however, the third row. Moving right from the middle image, the contrast grows. The shadowed side of the face grows darker while the lighted part of the face grows lighter. This effect cannot be achieved with a simple matrix multiplication as given in Equation 3. It is precisely these types of non-linear flows we wish to model. We stress that the eigenflows were only computed once (on the color palette data), and that they were applied to the face image without any knowledge of the parameters under which the face image was taken.

4.1 Flowing one image to another

Suppose we have two images and we pose the question of whether they are images of the same object or scene. We suggest that if we can "flow" one image to another then the images are likely to be of the same scene. Let us treat an image I as a function that takes a color flow and returns a difference image D by placing at each (x, y) pixel in D the color change vector Φ(c(p_{x,y})). The difference image basis for I and a set of eigenflows Ψ_i, 1 ≤ i ≤ E, is D_i = I(Ψ_i). The set of images S that can be formed using a source image and a set of eigenflows is S = { S : S = I + Σ_{i=1}^{E} γ_i D_i }, where the γ_i's are scalars, and here I is just an image, and not a function.
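The synthesis rule of Equation (4) can be sketched as follows, with a nearest-neighbor lookup of ĉ(p) on the flow grid and clipping to 0–255; the function and variable names are ours:

```python
import numpy as np

def apply_flow(image, flow_grid, flow_vecs, alpha=1.0):
    """Synthesize a new image via c'(p) = c(p) + alpha * Phi(c_hat(p))  (Eq. 4).

    image:     (H, W, 3) uint8 RGB source image
    flow_grid: (G, 3) color points at which the flow Phi was computed
    flow_vecs: (G, 3) flow vector at each grid point
    c_hat(p) is the grid color nearest (in color space) to c(p).
    """
    pix = image.reshape(-1, 3).astype(float)
    # nearest flow-grid color for every pixel
    d2 = ((pix[:, None, :] - flow_grid[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)
    out = pix + alpha * flow_vecs[nearest]
    return np.clip(out, 0, 255).reshape(image.shape).astype(np.uint8)

# toy 2x2 image and a flow that darkens bright colors by 20
img = np.full((2, 2, 3), 200, dtype=np.uint8)
grid = np.array([[0.0, 0.0, 0.0], [200.0, 200.0, 200.0]])
vecs = np.array([[0.0, 0.0, 0.0], [-20.0, -20.0, -20.0]])
print(apply_flow(img, grid, vecs))   # every pixel becomes 180
```

Varying α sweeps out a one-parameter family of synthesized images for each eigenflow, which is how the rows of Figure 3 are generated.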
In our experiments, we used E = 30 of the top eigenvectors. We can only flow image I1 to another image I2 if it is possible to represent the difference image as a linear combination of the D_i's, i.e. if I2 ∈ S. We find the optimal (in the least-squares sense) γ_i's by solving the system

D = Σ_{i=1}^{E} γ_i D_i,   (5)

using the pseudo-inverse, where D = I2 − I1. The error residual represents a match score for I1 and I2. We point out again that this analysis ignores clipping effects. While clipping can only reduce the error between a synthetic image and a target image, it may change which solution is optimal in some cases.

Figure 5: Modeling lighting changes with color flows. a. Image with strong shadow. b. Same image under more uniform lighting conditions. c. Flow from a to b using eigenflows. d. Flow from a to b using the linear method. Evaluating the capacity of the color flow model: e. Mirror image of b. f. Failure to flow b to e implies that the model is not overparameterized.

5 Experiments

5.1 Image matching

One use of the color change model is image matching. An ideal system would flow matching images with zero error, and have large errors for non-matching images. We first examined our ability to flow a source image to a matching target image under different photic parameters. We compared our system to three other commonly used methods: linear, diagonal, and gray world. The linear method finds the matrix A in Equation 3 that minimizes the L2 error between the synthetic and target images; diagonal does the same with a diagonal A; gray world linearly matches the mean R, G, B values of the synthetic and target images. While our goal was to reduce the numerical difference between two images using flows, it is instructive to examine one example that was particularly visually compelling, shown in Figure 1. In a second experiment (Figure 4), we matched images of a face taken under various camera parameters but with constant lighting.
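The pseudo-inverse fit of Equation (5) can be sketched as an ordinary least-squares solve over flattened difference images; all names and sizes below are illustrative:

```python
import numpy as np

def fit_gammas(D, basis):
    """Least-squares fit of D = sum_i gamma_i * D_i  (Eq. 5).

    D:     (P,) flattened difference image I2 - I1
    basis: (E, P) flattened difference-image basis, D_i = I(Psi_i)
    Returns the gammas and the residual norm, which serves as the match score.
    """
    gammas, *_ = np.linalg.lstsq(basis.T, D, rcond=None)   # pseudo-inverse solution
    residual = np.linalg.norm(basis.T @ gammas - D)
    return gammas, residual

rng = np.random.default_rng(4)
basis = rng.normal(size=(30, 500))       # E = 30 eigenflow difference images, 500 "pixels"
true_g = rng.normal(size=30)
D = basis.T @ true_g                     # a difference image that lies in the basis span
g, res = fit_gammas(D, basis)
print(res)                               # near zero: D is exactly representable
```

A large residual means the difference between the two images cannot be explained by any combination of learned color flows, which is the cue that the images do not match.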
The color flow method outperforms the other methods in all but one task, on which it was second.

5.2 Local flows

In another test, the source and target images were taken under very different lighting conditions. Furthermore, shadowing effects and lighting direction changed between the two images. None of the methods could handle these effects when applied globally. Thus we repeatedly applied each method on small patches of the image. Our method again performed the best, with an RMS error of 13.8 per pixel component, compared with errors of 17.3, 20.1, and 20.6 for the other methods. Figure 5 shows obvious visual artifacts with the linear method, while our method seems to have produced a much better synthetic image, especially in the shadow region at the edge of the poster.

Figure 6: Backgrounding with color flows. a. A background image. b. A new object and shadow have appeared. c. For each of the two regions (from background subtraction), a "flow" was done between the original image and the new image based on the pixels in each region. d. The color flow of the original image using the eigenflow coefficients recovered from the shadow region. The color flow using the coefficients from the non-shadow region is unable to give a reasonable reconstruction of the new image.

Synthesis on patches of images greatly increases the capacity of the model. We performed one experiment to measure the over-fitting of our method versus the others by trying to flow an original image to its reflection (Figure 5). The RMS error per pixel component was 33.2 for our method versus 41.5, 47.3, and 48.7 for the other methods. Note that while our method had the lower error (which is undesirable here, since these images should not match), there was still a significant spread between matching images and non-matching images. We believe we can improve differentiation between matching and non-matching image pairs by assigning a cost to the change in γ_i across each image patch.
For non-matching images, we would expect the γi's to vary rapidly to accommodate the changing image. For matching images, sharp changes would only be necessary at shadow boundaries or changes in the surface orientation relative to directional light sources.

5.3 Shadows

Shadows confuse tracking algorithms [16], backgrounding schemes and object recognition algorithms. For example, shadows can have a dramatic effect on the magnitude of difference images, despite the fact that no "new objects" have entered a scene. Shadows can also move across an image and appear as moving objects. Many of these problems could be eliminated if we could recognize that a particular region of an image is equivalent to a previously seen version of the scene, but under a different lighting. Figure 6a shows how color flows may be used to distinguish between a new object and a shadow by flowing both regions.

A constant color flow across an entire region may not model the image change well. However, we can extend our basic model to allow linearly or quadratically (or other low-order polynomially) varying fields of eigenflow coefficients. That is, we can find the best least-squares fit of the difference image allowing our γ estimates to vary linearly or quadratically over the image. We implemented this technique by computing flows γ_{x,y} between corresponding image patches (indexed by x and y), and then minimizing the following form:

argmin_M Σ_{x,y} (γ_{x,y} − M c_{x,y})^T Σ_{x,y}^{−1} (γ_{x,y} − M c_{x,y}).    (6)

Here, each c_{x,y} is a vector polynomial of the form [x y 1]^T for the linear case and [x² xy y² x y 1]^T for the quadratic case. M is an E×3 matrix in the linear case and an E×6 matrix in the quadratic case. The Σ_{x,y}'s are the error covariances in the estimate of the γ_{x,y}'s for each patch.
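Eq. 6 is a weighted least-squares problem that is linear in the entries of M, so it can be solved in closed form via the normal equations on vec(M). A sketch (function and variable names are ours, and we assume per-patch covariances Σ_{x,y} are given):

```python
import numpy as np

def fit_coeff_field(gammas, cs, Sigmas):
    """Solve Eq. 6: find M minimizing
        sum_p (g_p - M c_p)^T Sigma_p^{-1} (g_p - M c_p).
    gammas: per-patch flow coefficients, each (E,);
    cs: per-patch polynomial vectors, e.g. [x, y, 1];
    Sigmas: per-patch (E, E) error covariances.
    Normal equations in vec(M):
        sum_p (c c^T kron S_p) vec(M) = sum_p (c kron S_p g_p),  S_p = Sigma_p^{-1}."""
    E, k = len(gammas[0]), len(cs[0])
    A = np.zeros((E * k, E * k))
    b = np.zeros(E * k)
    for g, c, Sig in zip(gammas, cs, Sigmas):
        S = np.linalg.inv(Sig)
        A += np.kron(np.outer(c, c), S)
        b += np.kron(c, S @ g)
    # vec(M) stacks columns of the E x k matrix M
    return np.linalg.solve(A, b).reshape(k, E).T
```

With identity covariances this reduces to ordinary least squares over all patches; at least three non-collinear patch centers are needed for the linear case to be well-posed.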
Allowing the γ's to vary over the image greatly increases the capacity of a matcher, but by limiting this variation to linear or quadratic variation, the capacity is still not able to qualitatively match "non-matching" images. Note that this smooth variation in eigenflow coefficients can model either a nearby light source or a smoothly curving surface, since either of these conditions will result in a smoothly varying lighting change.

Table 1: Error residuals for shadow and non-shadow regions after color flows.

              constant   linear   quadratic
  shadow          36.5     12.5        12.0
  non-shadow     110.6     64.8        59.8

We consider three versions of the experiment: 1) a single vector of flow coefficients, 2) linearly varying γ's, 3) quadratically varying γ's. In each case, the residual error for the shadow region is much lower than for the non-shadow region (Table 1).

5.4 Conclusions

Except for the synthesis experiments, most of the experiments in this paper are preliminary and only a proof of concept. Much larger experiments need to be performed to establish the utility of the color change model for particular applications. However, since the color change model represents a compact description of lighting changes, including nonlinearities, we are optimistic about these applications.

References

[1] E. Miller and K. Tieu. Color eigenflows: Statistical modeling of joint color changes. In IEEE ICCV, volume 1, pages 607–614, 2001.
[2] D. H. Marimont and B. A. Wandell. Linear models of surface and illuminant spectra. J. Opt. Soc. Amer., 11, 1992.
[3] G. Buchsbaum. A spatial processor model for object color perception. J. Franklin Inst., 310, 1980.
[4] J. J. McCann, J. A. Hall, and E. H. Land. Color mondrian experiments: The study of average spectral distributions. J. Opt. Soc. Amer., A(67), 1977.
[5] D. H. Brainard and W. T. Freeman. Bayesian color constancy. J. Opt. Soc. Amer., 14(7):1393–1411, 1997.
[6] D. A. Forsyth. A novel algorithm for color constancy. IJCV, 5(1), 1990.
[7] V. C. Cardei, B. V.
Funt, and K. Barnard. Modeling color constancy with neural networks. In Proc. Int. Conf. Vis., Recog., and Action: Neural Models of Mind and Machine, 1997.
[8] R. Lenz and P. Meer. Illumination independent color image representation using log-eigenspectra. Technical Report LiTH-ISY-R-1947, Linköping University, April 1997.
[9] P. N. Belhumeur and D. Kriegman. What is the set of images of an object under all possible illumination conditions? IJCV, 28(3):1–16, 1998.
[10] W. S. Stiles, G. Wyszecki, and N. Ohta. Counting metameric object-color stimuli using frequency limited spectral reflectance functions. J. Opt. Soc. Amer., 67(6), 1977.
[11] L. T. Maloney. Evaluation of linear models of surface spectral reflectance with small numbers of parameters. J. Opt. Soc. Amer., A1, 1986.
[12] A. Shashua and R. Riklin-Raviv. The quotient image: Class-based re-rendering and recognition with varying illuminations. IEEE PAMI, 3(2):129–130, 2001.
[13] J. J. Lien. Automatic Recognition of Facial Expressions Using Hidden Markov Models and Estimation of Expression Intensity. PhD thesis, Carnegie Mellon University, 1998.
[14] M. Turk and A. Pentland. Eigenfaces for recognition. J. Cog. Neuro., 3(1):71–86, 1991.
[15] M. Soriano, E. Marszalec, and M. Pietikainen. Color correction of face images under different illuminants by RGB eigenfaces. In Proc. 2nd Int. Conf. on Audio- and Video-Based Biometric Person Authentication, pages 148–153, 1999.
[16] K. Toyama, J. Krumm, B. Brumitt, and B. Meyers. Wallflower: Principles and practice of background maintenance. In IEEE CVPR, pages 255–261, 1999.
2002
Value-Directed Compression of POMDPs

Pascal Poupart, Department of Computer Science, University of Toronto, Toronto, ON, M5S 3H5, ppoupart@cs.toronto.edu
Craig Boutilier, Department of Computer Science, University of Toronto, Toronto, ON, M5S 3H5, cebly@cs.toronto.edu

Abstract

We examine the problem of generating state-space compressions of POMDPs in a way that minimally impacts decision quality. We analyze the impact of compressions on decision quality, observing that compressions that allow accurate policy evaluation (prediction of expected future reward) will not affect decision quality. We derive a set of sufficient conditions that ensure accurate prediction in this respect, illustrate interesting mathematical properties these confer on lossless linear compressions, and use these to derive an iterative procedure for finding good linear lossy compressions. We also elaborate on how structured representations of a POMDP can be used to find such compressions.

1 Introduction

Partially observable Markov decision processes (POMDPs) provide a rich framework for modeling a wide range of sequential decision problems in the presence of uncertainty. Unfortunately, the application of POMDPs to real world problems remains limited due to the intractability of current solution algorithms, in large part because of the exponential growth of state spaces with the number of relevant variables. Ideally, we would like to mitigate this source of intractability by compressing the state space as much as possible without compromising decision quality. Our aim in solving a POMDP is to maximize future reward based on our current beliefs about the world. By compressing its belief state, an agent may lose relevant information, which results in suboptimal policy choice. Thus an important aspect of belief state compression lies in distinguishing relevant information from that which can be safely discarded. A number of schemes have been proposed for either directly or indirectly compressing POMDPs.
For example, approaches using bounded memory [8, 10] and state aggregation—either dynamic [2] or static [5, 9]—can be viewed in this light. In this paper, we study the effect of static state-space compression on decision quality. We first characterize lossless compressions—those that do not lead to any error in expected value—by deriving a set of conditions that guarantee decision quality will not be impaired. We also characterize the specific case of linear compressions. This analysis leads to algorithms that find good compression schemes, including methods that exploit structure in the POMDP dynamics (as exhibited, e.g., in graphical models). We then extend these concepts to lossy compressions. We derive a (somewhat loose) upper bound on the loss in decision quality when the conditions for lossless compression (of some required dimensionality) are not met. Finally we propose a simple optimization program to find linear lossy compressions that minimizes this bound, and describe how structured POMDP models can be used to implement this scheme efficiently.

2 Background and Notation

2.1 POMDPs

A POMDP is defined by: a set of states S; a set of actions A; a set of observations Z; a transition function T, where T(s, a, s') denotes the transition probability Pr(s' | s, a); an observation function Z, where Z(s, z) denotes the probability of making observation z in state s; and a reward function R, where R(s) denotes the immediate reward associated with state s.[1] We assume discrete state, action and observation sets and we focus on discounted, infinite horizon POMDPs with discount factor 0 ≤ γ < 1. Policies and value functions for POMDPs are typically defined over belief space, where a belief state b is a distribution over S capturing an agent's knowledge about the current state of the world. Belief state b can be updated in response to a specific action-observation pair (a, z) using Bayes rule: b'(s') = c Σ_s b(s) T(s, a, s') Z(s', z) (c is a normalization constant). We denote the (unnormalized) mapping b ↦ b' by T^{a,z}, where, in matrix form, we have T^{a,z}_{s,s'} = T(s, a, s') Z(s', z). Note that a belief state b and reward function R can be viewed respectively as |S|-dimensional row and column vectors. We define R(b) = bR. Solving a POMDP consists of finding an optimal policy π mapping belief states to actions. The value V^π of a policy π is the expected sum of discounted rewards and is defined as:

V^π(b) = R(b) + γ Σ_z V^π(b T^{π(b),z})    (1)

A number of techniques [11] based on value iteration or policy iteration can be used to compute optimal or approximately optimal policies for POMDPs.

2.2 Conditional Independence and Additive Separability

When our state space is defined by a set of variables, POMDPs can often be represented concisely in a factored way by specifying the transition, observation and reward functions using a dynamic Bayesian network (DBN). Such representations exploit the fact that transitions associated with each variable depend only on a small subset of variables. These representations can often be exploited to solve POMDPs without state space enumeration [2]. Recently, Pfeffer [13] showed that conditional independence combined with some form of additive separability can enable efficient inference in many DBNs. Roughly, a function can be additively separated when it decomposes into a sum of smaller terms. For instance, Pr(X, Y) is separable if there exist conditional distributions Pr1(X) and Pr2(Y), and α ∈ [0, 1], such that Pr(X, Y) = α Pr1(X) + (1 − α) Pr2(Y). This ensures that one need only know the marginals of X and Y (instead of their joint distribution) to infer Pr. Pfeffer shows how additive separability in the CPTs of a DBN can be exploited to identify families of self-sufficient variables. A self-sufficient family consists of a set of subsets of variables such that the marginals of each subset are sufficient to predict the marginals of the same subsets at the next time step.
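The belief update and the unnormalized mapping T^{a,z} of Sec. 2.1 can be sketched directly in matrix form (the array conventions below — T[a][s, s'] = Pr(s'|s, a), Z[s', z] = Pr(z|s') — are our own):

```python
import numpy as np

def make_Taz(T, Z, a, z):
    """Unnormalized belief mapping T^{a,z}, with
    T^{a,z}[s, s'] = T(s, a, s') * Z(s', z)."""
    return T[a] * Z[:, z]          # broadcasts Z(., z) across each row

def belief_update(b, T, Z, a, z):
    """Bayes update of a belief row vector b after action a, observation z."""
    b_next = b @ make_Taz(T, Z, a, z)
    return b_next / b_next.sum()   # division plays the role of the constant c
```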
Hence, if we require the probabilities of a few variables, and can identify a self-sufficient family containing those variables, then we need only compute marginals over this family when monitoring belief state.

[1] The ideas presented in this paper generalize to cases when Z and R also depend on actions.

Figure 1: a) Functional flow of a POMDP (dotted arrows) and a compressed POMDP (solid arrows) where the next belief state is accurately predicted. b) Functional flow of a POMDP (dotted arrows) and a compressed POMDP (solid arrows) where the next compressed belief state is accurately predicted.

2.3 Invariant and Krylov Subspaces

We briefly review several linear algebraic concepts used later (see [15] for more details). Let V be a vector subspace. We say V is invariant with respect to matrix T if it is closed under multiplication by T (i.e., Tv ∈ V for all v ∈ V). A Krylov subspace K(v, T) is the smallest subspace that contains v and is invariant with respect to T. A basis for a Krylov subspace can easily be generated by repeatedly multiplying v by T (i.e., v, Tv, T²v, T³v, ...). If K(v, T) is k-dimensional, one can show that T^{k−1}v is the last linearly independent vector in this sequence and that all subsequent vectors are linear combinations of v, Tv, ..., T^{k−1}v.

In a DBN, families of self-sufficient variables naturally correspond to invariant subspaces. For instance, suppose f is a linear function that depends only on a self-sufficient family of variable subsets. If we regress f through the dynamics of the DBN—i.e., if we multiply f by the transition matrix T^{a,z}—the resulting function will also be defined over the truth values of those subsets. Hence, when a family of variables is self-sufficient, the subspace of linear functions defined over the truth values of that family is invariant w.r.t. T^{a,z}.

3 Lossless Compressions

If a compression of the state space of a POMDP allows us to accurately evaluate all policies, we say the compression is lossless, since we have sufficient information to select the optimal policy. We provide one characterization of lossless compressions. We then specialize this to the linear case, and discuss the use of compact POMDP representations.

Let f be a compression function that maps each belief state b into some lower dimensional compressed belief state b̃ (see Figure 1(a)). Here b̃ can be viewed as a bottleneck (e.g., in the sense of the information bottleneck [17]) that filters the information contained in b before it is used to estimate future rewards. We desire a compression such that b̃ corresponds to the smallest statistic sufficient for accurately predicting the current reward as well as the next belief state b' (since we can accurately predict all following rewards from b'). Such a compression exists if we can also find mappings g^{a,z} and R̃ such that:

R = R̃ ∘ f and T^{a,z} = g^{a,z} ∘ f, ∀a ∈ A, z ∈ Z    (2)

Since we are only interested in predicting future rewards, we don't really need to accurately estimate the next belief state b'; we could just predict the next compressed belief state b̃' since it captures all information in b' relevant for estimating future rewards. Figure 1(b) illustrates the resulting functional flow, where T̃^{a,z} represents the transition function that directly maps one compressed belief state to the next compressed belief state. Eq. 2 can then be replaced by the following weaker but still sufficient conditions:

R = R̃ ∘ f and f ∘ T^{a,z} = T̃^{a,z} ∘ f, ∀a ∈ A, z ∈ Z    (3)

Given an f, R̃ and T̃^{a,z} satisfying Eq. 3, we can evaluate a policy π using the compressed POMDP dynamics as follows:

Ṽ^π(b̃) = R̃(b̃) + γ Σ_z Ṽ^π(T̃^{π(b̃),z}(b̃))    (4)

Once Ṽ^π is found, we can recover the original value function V^π = Ṽ^π ∘ f. Indeed, Eq. 1 and Eq. 4 are equivalent:

Theorem 1 Let f, R̃ and T̃^{a,z} satisfy Eq. 3 and let V^π = Ṽ^π ∘ f. Then Eq. 1 holds iff Eq. 4 does.

Proof:
V^π(b) = R(b) + γ Σ_z V^π(b T^{π(b),z})
⟺ Ṽ^π(f(b)) = R̃(f(b)) + γ Σ_z Ṽ^π(f(b T^{π(b),z}))
⟺ Ṽ^π(f(b)) = R̃(f(b)) + γ Σ_z Ṽ^π(T̃^{π(b),z}(f(b)))
⟺ Ṽ^π(b̃) = R̃(b̃) + γ Σ_z Ṽ^π(T̃^{π(b̃),z}(b̃))

3.1 Linear compressions

We say f is a linear compression when f is a linear function, representable by some matrix F. In this case, the approximate transition and reward functions T̃^{a,z} and R̃ must also be linear (assuming Eq. 3 is satisfied). Eq. 3 can be rewritten in matrix notation:

R = F R̃ and T^{a,z} F = F T̃^{a,z}, ∀a ∈ A, z ∈ Z    (5)

In a linear compression, F can be viewed as effecting a change of basis for the value function, with the columns of F defining a subspace in which the compressed value function lies. Furthermore, the rank of F indicates the dimensionality of the compressed state space. When Eq. 5 is satisfied, the columns of F span a subspace that contains R and that is invariant with respect to each T^{a,z}. Intuitively, Eq. 5 says that a sufficient statistic must be able to "predict itself" at the next time step (hence the subspace is invariant), and that it must predict the current reward (hence the subspace contains R). Formally:

Theorem 2 Let T̃^{a,z}, R̃ and F satisfy Eq. 5. Then the range of F contains R and is invariant with respect to each T^{a,z}.

Proof: Eq. 5 ensures R is a linear combination of the columns of F, so it lies in the range of F. It also requires that the columns of each T^{a,z} F are linear combinations of the columns of F, so the range of F is invariant with respect to each T^{a,z}.

Thus, the best linear lossless compression corresponds to the smallest invariant subspace that contains R. This is by definition the Krylov subspace K(R, {T^{a,z} : a ∈ A, z ∈ Z}). Using this fact we can easily compute the best lossless linear compression by iteratively multiplying R by each T^{a,z} until the Krylov basis is obtained. We then let the Krylov basis form the columns of F, and compute R̃ and each T̃^{a,z} by solving each part of Eq. 5. Finally, we can solve the POMDP in the compressed state space by using R̃ and T̃^{a,z}. Note that this technique can be viewed as a generalization of Givan et al.'s MDP model minimization technique [3]. It is interesting to note that Littman et al. [9] proposed a similar iterative algorithm to compress POMDPs based on predicting future observations.[2]

[2] Assuming that rewards are functions of the observations.

3.2 Structured Linear Compressions

When a POMDP is specified compactly, say, using a DBN, the size of the state space may be exponentially larger than the specification. The practical need to avoid state enumeration is a key motivation for POMDP compression. However, the complexity of the search for a good compression must also be independent of the state space size. Unfortunately, the iterative Krylov algorithm involves repeatedly multiplying explicit transition matrices and basis vectors. We consider several ways in which a compact POMDP specification can be exploited to construct a linear compression without state enumeration. One solution lies in exploiting DBN structure and context-specific independence. If transition, observation and reward functions are represented using DBNs and structured CPTs (e.g., decision trees or algebraic decision diagrams), then the matrix operations required by the Krylov algorithm can be implemented effectively [1, 7]. Although this approach can offer substantial savings, the DTs or ADDs that represent the basis vectors of the Krylov subspace may still be much larger than the dimensionality of the compressed state space and the original DBN specifications.
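For intuition, the flat-state (enumerated) version of the Krylov procedure of Sec. 3.1 can be sketched as follows; the helper names are ours, and least squares is used both to test linear independence and to solve Eq. 5 once the basis is found:

```python
import numpy as np

def krylov_compression(R, Ts, tol=1e-8):
    """Grow the smallest subspace containing the reward vector R that is
    invariant under every matrix in Ts, then solve Eq. 5 for the
    compressed model. Returns F (basis as columns), R_t, and the T_t's."""
    basis = [R / np.linalg.norm(R)]
    frontier = [basis[0]]
    while frontier:
        v = frontier.pop()
        for T in Ts:
            w = T @ v
            B = np.column_stack(basis)
            # project out the span of the current basis; keep any new direction
            w = w - B @ np.linalg.lstsq(B, w, rcond=None)[0]
            if np.linalg.norm(w) > tol:
                w /= np.linalg.norm(w)
                basis.append(w)
                frontier.append(w)
    F = np.column_stack(basis)
    # Eq. 5: R = F R_t and T F = F T_t (exact, since range(F) is invariant)
    R_t, *_ = np.linalg.lstsq(F, R, rcond=None)
    T_ts = [np.linalg.lstsq(F, T @ F, rcond=None)[0] for T in Ts]
    return F, R_t, T_ts
```

On a chain whose dynamics and rewards treat two states identically, the basis stops growing before reaching the full dimension, giving a lossless compression.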
Alternatively, families of self-sufficient variables corresponding to invariant subspaces can be identified by exploiting additive separability. Starting with the variables upon which R depends, we can recursively grow a family of variables until it is self-sufficient with respect to each T^{a,z}. The corresponding subspace is invariant and necessarily contains R. Assuming a tractable self-sufficient family is found, a compact basis can then be constructed by using all indicator functions for each subset of variables in this family (e.g., if one such subset contains three binary variables, then eight basis vectors will correspond to this set). This approach allows us to quickly identify a good compression by a simple inspection of the additive separability structure of the DBN. The resulting compression is not necessarily optimal; however, it is the best among those corresponding to some such family. It is important to note that the dynamics T̃^{a,z} and reward R̃ of the compressed POMDP can be constructed easily (i.e., without state enumeration) from this family and the original DBN model. Pfeffer [13] notes that observations tend to reduce the amount of additive separability present in a DBN, thereby increasing the size of self-sufficient families. Therefore, we should point out that lossless compressions of POMDPs that exploit self-sufficiency and offer an acceptable degree of compression may not exist. Hence lossy compressions are likely to be required in many cases.

Finally, we ask whether the existence of lossless compressions requires some form of structure in the POMDP. We argue that this is almost always the case. Suppose a transition matrix T^{a,z} and a reward vector R are chosen uniformly at random. The odds that R falls into a proper invariant subspace of T^{a,z} are essentially zero since there are infinitely more vectors in the full space than in all the proper invariant subspaces put together. This means that if a POMDP can be compressed, it must almost certainly be because its dynamics exhibit some structure. We have described how context-specific independence and additive separability can be exploited to identify some linear lossless compressions. However they do not guarantee that the optimal compression will be found, so it remains an open question whether other types of structure could be used in similar ways.

4 Lossy compressions

Since we cannot generally find effective lossless compressions, we also consider lossy compressions.
We propose a simple approach to find linear lossy compressions that "almost satisfy" Eq. 5. Table 1 outlines a simple optimization program to find lossy compressions that minimize a weighted sum of the max-norm residual errors, ε_R and ε_T, in Eq. 5. Here α and β are weights that allow us to vary the degree to which the two components of Eq. 5 should be satisfied.

min α ε_R + β ε_T
s.t. ‖R − F R̃‖_∞ ≤ ε_R    (6)
     ‖T^{a,z} F − F T̃^{a,z}‖_∞ ≤ ε_T, ∀a ∈ A, z ∈ Z    (7)
     F1 = 1

Table 1: Optimization program for linear lossy compressions

The unknowns of the program are all the entries of R̃, T̃^{a,z} and F as well as ε_R and ε_T. The constraint F1 = 1 is necessary to preserve scale, otherwise ε_T could be driven down to 0 simply by setting all the entries of F to 0. Since T̃^{a,z} and R̃ multiply F, some constraints are nonlinear. However, it is possible to solve this optimization program by solving a series of LPs (linear programs). We alternate solving the LP that adjusts R̃ and T̃^{a,z} while keeping F fixed, and solving the LP that adjusts F while keeping R̃ and T̃^{a,z} fixed. This guarantees that the objective function decreases at each iteration and will converge, but not necessarily to a local optimum.

4.1 Max-norm Error Bound

The quality of the compression resulting from this program depends on the weights α and β. Ideally, we would like to set α and β in a way that α ε_R + β ε_T represents the loss in decision quality associated with compressing the state space. If we can bound the error ε of evaluating any policy using the compressed POMDP, then the difference in expected total return between the policy that is best w.r.t. the compressed POMDP and the true optimal policy is at most 2ε. Let ε be max_π ‖V^π − F Ṽ^π‖_∞. Theorem 3 gives an upper bound on ε as a linear combination of the max-norm residual errors in Eq. 5.

Theorem 3 Let ε = max_π ‖V^π − F Ṽ^π‖_∞, ε_R = ‖R − F R̃‖_∞, ε_T = max_{a,z} ‖T^{a,z} F − F T̃^{a,z}‖_∞, and Ṽ = max_π ‖Ṽ^π‖_∞. Then ε ≤ (ε_R + γ |Z| Ṽ ε_T) / (1 − γ).

We omit the proof due to lack of space. It essentially consists of a sequence of substitutions over inequalities of this form. We suspect that the above error bound will grossly overestimate the loss in decision quality, however we intend to use it mostly as a guide for setting α and β. Here γ |Z| Ṽ / (1 − γ) is typically much greater than 1 / (1 − γ) because of the factor Ṽ, which means that ε_T has a much higher impact on the loss in decision quality than ε_R. Intuitively, this makes sense because the error in predicting the next compressed belief state may compound over time, so we should set β significantly higher than α.

4.2 Structured Compressions

As with lossless compressions, solving the program in Table 1 may be intractable due to the size of F. There are O(|S| |A| |Z|) constraints and |S| |S̃| unknown entries in matrix F.[3] We describe several techniques that allow one to exploit problem structure to find an acceptable lossy compression without state space enumeration.

One approach is related to the basis function model proposed in [4], in which we restrict F to functions over some small set of factors (subsets of state variables). This ensures that the number of unknown parameters in any column of F (which we optimize in Table 1) is linear in the number of instantiations of each factor. By keeping factors small, we maintain a manageable set of unknowns. To deal with the O(|S| |A| |Z|) constraints, we can exploit the structure imposed on F and the DBN structure to reduce the number of constraints to something (in many cases) polynomial in the number of state variables. This can be achieved using the techniques described in [4, 16] to rewrite an LP with many fewer constraints or to generate small subsets of constraints incrementally. These techniques are rather involved, so we refer to the cited papers for details.

[3] Assuming |S̃| is small, the |S̃|² variables in each T̃^{a,z} and |S̃| variables in R̃ are unproblematic.
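For intuition, the alternating scheme behind the Table 1 program can be sketched in a simplified form. This is our own approximation, not the authors' formulation: the max-norm LPs are swapped for ordinary least squares, and the F1 = 1 scale constraint is dropped (the reward-fitting term anchors the scale instead):

```python
import numpy as np

def lossy_compress(R, Ts, k, beta=10.0, iters=50, seed=0):
    """Alternating least-squares sketch of the Table 1 program.
    F is |S| x k; we alternate (i) fitting R_t and each T_t given F, and
    (ii) refitting F given R_t, T_t, minimizing
    ||R - F R_t||^2 + beta * sum ||T F - F T_t||^2."""
    rng = np.random.default_rng(seed)
    n = len(R)
    F = rng.standard_normal((n, k))
    for _ in range(iters):
        # Step 1: compressed model given F (each a least-squares solve).
        R_t, *_ = np.linalg.lstsq(F, R, rcond=None)
        T_ts = [np.linalg.lstsq(F, T @ F, rcond=None)[0] for T in Ts]
        # Step 2: F given the model. Both residuals are linear in vec(F):
        # (R_t^T kron I) vecF ~ R, and (I kron T - T_t^T kron I) vecF ~ 0.
        A = [np.kron(R_t[None, :], np.eye(n))]
        b = [R]
        for T, T_t in zip(Ts, T_ts):
            A.append(np.sqrt(beta) * (np.kron(np.eye(k), T) - np.kron(T_t.T, np.eye(n))))
            b.append(np.zeros(n * k))
        vecF, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
        F = vecF.reshape(k, n).T
    err_R = np.abs(R - F @ R_t).max()
    err_T = max(np.abs(T @ F - F @ T_ts[i]).max() for i, T in enumerate(Ts))
    return F, R_t, T_ts, err_R, err_T
```

As in the paper's scheme, each half-step cannot increase the objective, so the procedure converges, though not necessarily to a global optimum.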
By searching within a restricted set of structured compressions and by exploiting DBN structure it is possible to efficiently solve the optimization program in Table 1. The question of factor selection remains: on what factors should F be defined? A version of this question has been tackled in [12, 14] in the context of selecting a basis to approximately solve MDPs. The techniques proposed in those papers could be adapted to our optimization program.

An alternative method for structuring the computation of F involves additive separability. Let Z_j be subsets of variables, and f_j(z_j, s̃) be a function over Z_j and the compressed state space S̃. We restrict each column of F to be a separable function of the Z_j; that is, the column corresponding to compressed state s̃ is Σ_j w_j f_j(z_j, s̃) for some parameters w_j. Here the w_j can be viewed as weights indicating the importance of the contribution of each Z_j in the separable function. Given a family of subsets, the parameters over which we optimize to determine F are now the w_j and the entries of each function f_j(z_j, s̃). While nonlinear, the same alternating minimization scheme described earlier can be used to optimize these two classes of parameters of F in turn. Note that the number of variables is dependent only on the size of the subsets Z_j and the compressed state space S̃. Furthermore, this form of additive separability lends itself to the same compact constraint generation techniques mentioned above. Finally, the (discrete) search for decent subsets Z_j can be interleaved with optimization of the compression mapping F for fixed sets Z_j.

5 Preliminary Experiments

We report on preliminary experiments with the coffee problem described in [2]. Given its relatively small size (32 states, 3 observations and 2 actions), these results should be viewed as simply illustrating the feasibility and potential of the algorithms proposed in Secs. 3.1 and 4.1. Further experiments for the structured versions (Secs. 3.2 and 4.2) are necessary to assess the degree of compression achievable with large, realistic problems.

The 32-dimensional belief space can be compressed without any loss to a 7-dimensional subspace using the Krylov subspace algorithm described in Section 3.1. For further compression, we applied the optimization program described in Table 1, setting the weight β much higher than α (as suggested by the analysis of Sec. 4.1). The alternating variable technique was iterated a fixed number of times, with the best solution chosen from several random restarts (to mitigate the effects of local optima). Figure 2 shows the loss in expected return (w.r.t. the optimal policy) when a policy computed using varying degrees of compression is executed over a finite horizon. The loss is sampled from 100,000 random initial belief states, averaged over 10 runs. These policies manage to achieve expected returns with only a small loss (see Figure 2). In contrast, the average loss of a random policy is substantially higher.

Figure 2: Average loss for various lossy compressions (absolute and relative average loss vs. dimensionality of the compressed space).

6 Concluding Remarks

We have presented an in-depth theoretical analysis of the impact of static compressions on decision quality. We derived a set of conditions that guarantee compression does not impair decision quality, leading to interesting mathematical properties for linear compressions that allow us to exploit structure in the POMDP dynamics. We also proposed a simple optimization program to search for good lossy compressions. Preliminary results suggest that significant compression can be achieved with little impact on decision quality.

This research can be extended in various directions. It would be interesting to carry out a similar analysis in terms of information theory (instead of linear algebra) since the problem of identifying information in a belief state relevant to predicting future rewards can be modeled naturally using information theoretic concepts [6]. Dynamic compressions could also be analyzed since, as we solve a POMDP, the set of reasonable policies shrinks, allowing greater compression.

References

[1] C. Boutilier, R. Dearden, and M. Goldszmidt. Stochastic dynamic programming with factored representations. Artificial Intelligence, 121:49–107, 2000.
[2] C. Boutilier and D. Poole. Computing optimal policies for partially observable decision processes using compact representations. Proc. AAAI-96, pp. 1168–1175, Portland, OR, 1996.
[3] R. Givan, T. Dean, and M. Greig. Equivalence notions and model minimization in Markov decision processes. Artificial Intelligence, to appear, 2002.
[4] C. Guestrin, D. Koller, and R. Parr. Max-norm projections for factored MDPs. Proc. IJCAI-01, pp. 673–680, Seattle, WA, 2001.
[5] C. Guestrin, D. Koller, and R. Parr. Solving factored POMDPs with linear value functions. IJCAI-01 Worksh. on Planning under Uncertainty and Inc.
Info., Seattle, WA, 2001.
[6] C. Guestrin and D. Ormoneit. Information-theoretic features for reinforcement learning. Unpublished manuscript.
[7] J. Hoey, R. St-Aubin, A. Hu, and C. Boutilier. SPUDD: Stochastic planning using decision diagrams. Proc. UAI-99, pp. 279–288, Stockholm, 1999.
[8] M. L. Littman. Memoryless policies: theoretical limitations and practical results. In D. Cliff, P. Husbands, J. Meyer, S. W. Wilson, eds., Proc. 3rd Intl. Conf. Sim. of Adaptive Behavior, Cambridge, 1994. MIT Press.
[9] M. L. Littman, R. S. Sutton, and S. Singh. Predictive representations of state. Proc. NIPS-02, Vancouver, 2001.
[10] R. A. McCallum. Hidden state and reinforcement learning with instance-based state identification. IEEE Transactions on Systems, Man, and Cybernetics, 26(3):464–473, 1996.
[11] K. Murphy. A survey of POMDP solution techniques. Technical Report, U.C. Berkeley, 2000.
[12] R. Patrascu, P. Poupart, D. Schuurmans, C. Boutilier, and C. Guestrin. Greedy linear value-approximation for factored Markov decision processes. AAAI-02, pp. 285–291, Edmonton, 2002.
[13] A. Pfeffer. Sufficiency, separability and temporal probabilistic models. Proc. UAI-01, pp. 421–428, Seattle, WA, 2001.
[14] P. Poupart, C. Boutilier, R. Patrascu, and D. Schuurmans. Piecewise linear value function approximation for factored MDPs. AAAI-02, pp. 292–299, Edmonton, 2002.
[15] Y. Saad. Iterative Methods for Sparse Linear Systems. PWS, Boston, 1996.
[16] D. Schuurmans and R. Patrascu. Direct value-approximation for factored MDPs. Proc. NIPS-01, Vancouver, 2001.
[17] N. Tishby, F. C. Pereira, and W. Bialek. The information bottleneck method. 37th Annual Allerton Conf. on Comm., Contr. and Computing, pp. 368–377, 1999.
Constraint Classification for Multiclass Classification and Ranking

Sariel Har-Peled, Dan Roth, Dav Zimak
Department of Computer Science, University of Illinois, Urbana, IL 61801
{sariel, danr, davzimak}@uiuc.edu

Abstract

The constraint classification framework captures many flavors of multiclass classification including winner-take-all multiclass classification, multilabel classification and ranking. We present a meta-algorithm for learning in this framework that learns via a single linear classifier in high dimension. We discuss distribution independent as well as margin-based generalization bounds and present empirical and theoretical evidence showing that constraint classification benefits over existing methods of multiclass classification.

1 Introduction

Multiclass classification is a central problem in machine learning, as applications that require a discrimination among several classes are ubiquitous. In machine learning, these include handwritten character recognition [LS97, LBD+89], part-of-speech tagging [Bri94, EZR01], speech recognition [Jel98] and text categorization [ADW94, DKR97]. While binary classification is well understood, relatively little is known about multiclass classification. Indeed, the most common approach to multiclass classification, the one-versus-all (OvA) approach, makes direct use of standard binary classifiers to encode and train the output labels. The OvA scheme assumes that for each class there exists a single (simple) separator between that class and all the other classes. Another common approach, all-versus-all (AvA) [HT98], is a more expressive alternative which assumes the existence of a separator between any two classes. OvA classifiers are usually implemented using a winner-take-all (WTA) strategy that associates a real-valued function with each class in order to determine class membership. Specifically, an example belongs to the class which assigns it the highest value (i.e., the "winner") among all classes.
While it is known that WTA is an expressive classifier [Maa00], it has limited expressivity when trained using the OvA assumption, since OvA assumes that each class can be easily separated from the rest. In addition, little is known about the generalization properties or convergence of the algorithms used. This work is motivated by several successful practical approaches, such as multiclass support vector machines (SVMs) and the sparse network of winnows (SNoW) architecture, that rely on the WTA strategy over linear functions. Our aim is to improve the understanding of such classifier systems and to develop more theoretically justifiable algorithms that realize the full potential of WTA. An alternative interpretation of WTA is that every example provides an ordering of the classes (sorted in descending order by the assigned values), where the "winner" is the first class in this ordering. It is thus natural to specify the ordering of the classes for an example directly, instead of implicitly through WTA. In Section 2, we introduce constraint classification, where each example is labeled with a set of constraints relating multiple classes. Each such constraint specifies the relative order of two classes for this example. The goal is to learn a classifier consistent with these constraints. Learning is made possible by a simple transformation mapping each example into a set of examples (one for each constraint) and the application of any binary classifier to the mapped examples. In Section 3, we present a new algorithm for constraint classification that takes on the properties of the binary classification algorithm used. Therefore, using the Perceptron algorithm, it is able to learn a consistent classifier if one exists; using the Winnow algorithm, it can learn attribute-efficiently; and using the SVM, it provides a simple implementation of multiclass SVM.
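As a concrete sketch of the transformation just described (illustrative Python; the helper names are ours, not the paper's, and class labels are taken to be 1-based): a k-class multiclass label y induces the constraints "y outranks every other class", and a full ranking induces one constraint per adjacent pair, with transitivity recovering the rest.

```python
def multiclass_to_constraints(y, k):
    """A k-class label y becomes k-1 constraints (y, j):
    class y must outrank every other class j."""
    return {(y, j) for j in range(1, k + 1) if j != y}

def ranking_to_constraints(order):
    """A full ranking (c1, ..., ck) becomes k-1 adjacent-pair
    constraints (c1, c2), (c2, c3), ...; the transitive closure
    recovers the full order."""
    return {(a, b) for a, b in zip(order, order[1:])}
```

For instance, `multiclass_to_constraints(3, 4)` yields `{(3, 1), (3, 2), (3, 4)}`, and `ranking_to_constraints((3, 2, 1, 4))` yields `{(3, 2), (2, 1), (1, 4)}`.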
The algorithm can be implemented with a subtle change to the standard (via OvA) approach to training a network of linear threshold gates. In Section 4, we discuss both VC-dimension and margin-based generalization bounds presented in a companion paper [HPRZ02]. Our generalization bounds apply to WTA classifiers over linear functions, for which VC-style bounds were not previously known. In addition to multiclass classification, constraint classification generalizes multilabel classification, ranking on labels, and of course, binary classification. As a result, our algorithm provides new insight into these problems, as well as new, powerful tools for solving them. For example, in Section , we show that the commonly used OvA assumption can cause learning to fail, even when a consistent classifier exists. Section 5 provides empirical evidence that constraint classification outperforms the OvA approach.

2 Constraint Classification

Learning problems often assume that examples, (x, y), are drawn from a fixed probability distribution, D, over X × Y. X is referred to as the instance space and Y is referred to as the output space (label set).

Definition 2.1 (Learning) Given examples, S = ((x_1, y_1), . . . , (x_m, y_m)), drawn i.i.d. from D over X × Y, a hypothesis class H, and an error function err : X × Y × H → {0, 1}, a learning algorithm L(S, H) attempts to output a function h ∈ H, where h : X → Y, that minimizes the expected error on a randomly drawn example.

Definition 2.2 (Permutations) Denote the set of full orders over {1, . . . , k} by S_k, consisting of all permutations of (1, . . . , k). Similarly, S_k^p denotes the set of all partial orders over {1, . . . , k}. A partial order c ∈ S_k^p defines a binary relation ≺_c and can be represented by the set of pairs on which ≺_c holds, c = {(i, j) | i ≺_c j}. In addition, for any set of pairs c = {(i_1, j_1), . . . , (i_l, j_l)}, we refer to c both as a set of pairs and as the partial order produced by the transitive closure of c with respect to ≺_c. Given two partial orders a, b ∈ S_k^p, a is consistent with b (denoted a ⊆ b) if for every (i, j) ∈ {1, . . . , k}^2, i ≺_b j holds whenever i ≺_a j. If c ∈ S_k is a full order, then it can be represented by a list of k integers where i ≺_c j if i precedes j in the list. The size of a partial order, |c|, is the number of pairs specified in c.

Definition 2.3 (Constraint Classification) Constraint classification is the learning problem where each example (x, c) ∈ X × S_k^p is labeled according to a partial order c ∈ S_k^p. A constraint classifier, h : X → S_k^p, is consistent with example (x, c) if c is consistent with h(x) (c ⊆ h(x)). When |c| ≤ l, we call it l-constraint classification.

Problem          Internal Representation       Output Space (Y)    Hypothesis            Size of Mapping
binary           w ∈ R^d                       {−1, 1}             sign(w · x)           1
multiclass       (w_1, …, w_k) ∈ R^{kd}        {1, …, k}           argmax_i w_i · x      k − 1
l-multilabel     (w_1, …, w_k) ∈ R^{kd}        {1, …, k}^l         argmax^l_i w_i · x    l(k − l)
ranking          (w_1, …, w_k) ∈ R^{kd}        S_k                 argsort_i w_i · x     k − 1
constraint*      (w_1, …, w_k) ∈ R^{kd}        S_k^p               argsort_i w_i · x     –
l-constraint*    (w_1, …, w_k) ∈ R^{kd}        S_k^p, |c| ≤ l      argsort_i w_i · x     l

Table 1: Definitions for various learning problems (notice that the hypothesis for constraint classification is always a full order) and the size of the resultant mapping to l-constraint classification. argmax^l is a variant of argmax that returns the l maximal indices with respect to w_i · x. argsort is a linear sorting function (see Definition 2.6).

Definition 2.4 (Error Indicator Function) For any (x, c) ∈ X × S_k^p and hypothesis h : X → S_k^p, the indicator function E(x, c, h) indicates an error on example x: E(x, c, h) = 1 if c ⊄ h(x), and 0 otherwise.

For example, if k = 4 and example (x, c) = (x, {(2, 3), (2, 4)}), h_1(x) = (2, 3, 1, 4), and h_2(x) = (4, 2, 3, 1), then h_1 is correct since 2 precedes 3 and 2 precedes 4 in the full order (2, 3, 1, 4), whereas h_2 is incorrect since 4 precedes 2 in (4, 2, 3, 1).

Definition 2.5 (Error) Given an example (x, c) drawn from D over X × S_k^p, the true error of h ∈ H, where h : X → S_k, is defined to be err_D(h) = E_{(x,c)∼D}[E(x, c, h)]. Given S = ((x_1, c_1), . . . , (x_m, c_m)), the empirical error of h ∈ H with respect to S is defined to be err_S(h) = (1/m) Σ_{(x,c)∈S} E(x, c, h).

In this paper, we consider constraint classification problems where hypotheses are functions from X to S_k that output a permutation of (1, . . . , k).

Definition 2.6 (Linear Sorting Function) Let w = (w_1, . . . , w_k) be a set of k vectors, where w_1, . . . , w_k ∈ R^d. Given x ∈ R^d, a linear sorting classifier is a function h : R^d → S_k computed in the following way: h(x) = argsort_{i=1,…,k} w_i · x, where argsort returns a permutation of (1, . . . , k) in which i precedes j if w_i · x > w_j · x. In the case that w_i · x = w_j · x, i precedes j if i < j.

Constraint classification can model many well-studied learning problems including multiclass classification, ranking and multilabel classification. Table 1 shows a few interesting classification problems expressible as constraint classification. It is easy to show:

Lemma 2.7 (Problem mappings) All of the learning problems in Table 1 can be expressed as constraint classification problems.

Consider a 4-class multiclass example, (x, 3). It is transformed into the k-constraint example (x, {(3, 1), (3, 2), (3, 4)}). If we find a constraint classifier that correctly labels x according to the given constraints, where w_3 · x > w_1 · x, w_3 · x > w_2 · x, and w_3 · x > w_4 · x, then 3 = argmax_{i=1,…,4} w_i · x. If instead we are given a ranking example (x, (3, 2, 1, 4)), it can be transformed into (x, {(3, 2), (2, 1), (1, 4)}).

3 Learning

In this section, k-class constraint classification is transformed into binary classification in higher dimension. Each example (x, c) ∈ R^d × S_k^p becomes a set of examples in R^{kd} × {−1, 1}, with each constraint (i, j) ∈ c contributing a single 'positive' and a single 'negative' example. Then, a separating hyperplane for the expanded example set (in R^{kd}) can be viewed as a linear sorting function over k linear functions, each in d-dimensional space.

3.1 Kesler's Construction

Kesler's construction for multiclass classification was first introduced by Nilsson in 1965 [Nil65, pp. 75–77] and can also be found more recently [DH73]. This subsection extends the Kesler construction to constraint classification.
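The linear sorting hypothesis of Definition 2.6 can be sketched as follows (a minimal illustration assuming NumPy; `W` holds the k weight vectors as rows, and classes are returned 1-based):

```python
import numpy as np

def linear_sorting(W, x):
    """h(x) = argsort_i w_i . x: order the k classes by decreasing
    score (Definition 2.6), breaking ties toward the smaller index."""
    scores = W @ x
    # np.argsort is ascending and stable, so sorting -scores breaks
    # ties in favor of the smaller class index, as the definition requires.
    return [int(i) + 1 for i in np.argsort(-scores, kind="stable")]

def winner_take_all(W, x):
    """The WTA prediction is simply the first class in the ordering."""
    return linear_sorting(W, x)[0]
```

With W = ((1, 0), (0, 1), (1, 1)) and x = (2, 1), the scores are (2, 1, 3), so the ordering is (3, 1, 2) and the WTA winner is class 3.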
Definition 3.1 (Chunk) A vector v = (v_1, . . . , v_{kd}) ∈ R^{kd} is broken into k chunks (v^1, . . . , v^k), where the i-th chunk is v^i = (v_{(i−1)d+1}, . . . , v_{id}).

Definition 3.2 (Expansion) Denote by 0^l the zero vector of length l. Let x_i = (0^{(i−1)d}, x, 0^{(k−i)d}) ∈ R^{kd} be the vector x ∈ R^d embedded in kd dimensions, written as the concatenation of three vectors so that the coordinates of x fall in the i-th chunk of a vector in R^{kd}. Finally, x_{ij} = x_i − x_j is the embedding of x in the i-th chunk and −x in the j-th chunk of a vector in R^{kd}.

Definition 3.3 (Expanded Example Sets) Given an example (x, c), where x ∈ R^d and c ∈ S_k^p, we define the expansion of (x, c) into a set of positive examples as follows: P(x, c) = {(x_{ij}, 1) | (i, j) ∈ c} ⊆ R^{kd} × {1}. A set of negative examples is defined as the reflection of each expanded example through the origin, specifically N(x, c) = {(−x_{ij}, −1) | (i, j) ∈ c} ⊆ R^{kd} × {−1}, and the set of both positive and negative examples is denoted by E(x, c) = P(x, c) ∪ N(x, c). The expansion of a set of examples, S, is defined as the union of the expansions of all examples in the set: E(S) = ∪_{(x,c)∈S} E(x, c) ⊆ R^{kd} × {−1, 1}.

Note that the original Kesler construction produces only P(S). We also create N(S) to simplify the analysis and to maintain consistency when learning non-linear functions (such as with the SVM).

3.2 Algorithm

Figure 1(a) shows a meta-learning algorithm for constraint classification that finds a linear sorting function by using any algorithm for learning a binary classifier.
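The expansion of Definitions 3.2 and 3.3 can be sketched as follows (illustrative Python assuming NumPy; chunk and class indices are 1-based as in the text):

```python
import numpy as np

def embed(x, i, k):
    """x_i of Definition 3.2: write x into the i-th of k chunks of a
    zero vector in R^{kd}."""
    d = len(x)
    v = np.zeros(k * d)
    v[(i - 1) * d : i * d] = x
    return v

def expand(x, constraints, k):
    """E(x, c) of Definition 3.3: each constraint (i, j) contributes the
    positive example (x_ij, 1) with x_ij = x_i - x_j, plus its reflection
    through the origin (-x_ij, -1) as a negative example."""
    out = []
    for i, j in constraints:
        x_ij = embed(x, i, k) - embed(x, j, k)
        out.append((x_ij, 1))
        out.append((-x_ij, -1))
    return out
```

For example, with d = 2 and k = 3, the constraint (2, 1) on x = (1, 2) yields the positive expanded example ((−1, −2, 1, 2, 0, 0), 1): x sits in chunk 2 and −x in chunk 1.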
Given a set of examples S ⊆ R^d × S_k^p, the algorithm simply finds a separating hyperplane h(x̄) = w̄ · x̄, w̄ ∈ R^{kd}, for E(S) ⊆ R^{kd} × {−1, 1}. Suppose h correctly classifies (x_{ij}, 1) ∈ P(x, c); then w̄ · x_{ij} = w^i · x − w^j · x > 0
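The meta-algorithm can be sketched end-to-end with the Perceptron as the binary learner (an illustrative, self-contained Python sketch; the function names and the fixed epoch cap are our assumptions, not the paper's notation):

```python
import numpy as np

def embed(x, i, k):
    """Write x into the i-th of k chunks of a zero vector in R^{kd}."""
    d = len(x)
    v = np.zeros(k * d)
    v[(i - 1) * d : i * d] = x
    return v

def expand(x, constraints, k):
    """Each constraint (i, j) yields (x_i - x_j, +1) and its reflection."""
    out = []
    for i, j in constraints:
        x_ij = embed(x, i, k) - embed(x, j, k)
        out.extend([(x_ij, 1), (-x_ij, -1)])
    return out

def train(S, k, d, epochs=200):
    """Run the Perceptron on the expanded set E(S) in R^{kd}; the learned
    weight vector, read off in chunks, gives (w_1, ..., w_k)."""
    w = np.zeros(k * d)
    for _ in range(epochs):
        for x, c in S:
            for xb, y in expand(x, c, k):
                if y * (w @ xb) <= 0:   # mistake on an expanded example
                    w += y * xb         # standard Perceptron update
    return w.reshape(k, d)              # row i-1 is the chunk w^i

def predict(W, x):
    """Predict a full order (1-based) by sorting classes by w_i . x."""
    return [int(i) + 1 for i in np.argsort(-(W @ x), kind="stable")]
```

On a linearly separable toy set the Perceptron converges to a weight vector whose chunks satisfy every constraint, so the first class in each predicted ordering matches the intended winner.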