Learning to Classify Galaxy Shapes Using the EM Algorithm

Sergey Kirshner, Information and Computer Science, University of California, Irvine, CA 92697-3425, skirshne@ics.uci.edu
Igor V. Cadez, Sparta Inc., 23382 Mill Creek Drive #100, Laguna Hills, CA 92653, igor_cadez@sparta.com
Padhraic Smyth, Information and Computer Science, University of California, Irvine, CA 92697-3425, smyth@ics.uci.edu
Chandrika Kamath, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, Livermore, CA 94551, kamath2@llnl.gov

Abstract

We describe the application of probabilistic model-based learning to the problem of automatically identifying classes of galaxies, based on both morphological and pixel-intensity characteristics. The EM algorithm can be used to learn how to spatially orient a set of galaxies so that they are geometrically aligned. We augment this "ordering model" with a mixture model on objects, and demonstrate how classes of galaxies can be learned in an unsupervised manner using a two-level EM algorithm. The resulting models provide highly accurate classification of galaxies in cross-validation experiments.

1 Introduction and Background

The field of astronomy is increasingly data-driven as new observing instruments permit the rapid collection of massive archives of sky image data. In this paper we investigate the problem of identifying bent-double radio galaxies in the FIRST (Faint Images of the Radio Sky at Twenty-cm) Survey data set [1]. FIRST produces large numbers of radio images of the deep sky using the Very Large Array at the National Radio Astronomy Observatory. It is scheduled to cover more than 10,000 square degrees of the northern and southern caps (skies). Of particular scientific interest to astronomers is the identification and cataloging of sky objects with a "bent-double" morphology, indicating clusters of galaxies ([8], see Figure 1).
Due to the very large number of observed deep-sky radio sources (on the order of 10^6 so far), it is infeasible for the astronomers to label all of them manually. The data from the FIRST Survey (http://sundog.stsci.edu/) is available in both raw image format and in the form of a catalog of features that have been automatically derived from the raw images by an image analysis program [8]. Each entry corresponds to a single detectable "blob" of bright intensity relative to the sky background: these entries are called components.

Figure 1: Four examples of radio-source galaxy images. The two on the left are labelled as "bent-doubles" and the two on the right are not. The configurations on the left have more "bend" and symmetry than the two non-bent-doubles on the right.

The "blob" of intensities for each component is fitted with an ellipse. The ellipses and intensities for each component are described by a set of estimated features such as the sky position of the centers (RA (right ascension) and Dec (declination)), peak flux density and integrated flux, root-mean-square noise in pixel intensities, lengths of the major and minor axes, and the position angle of the major axis of the ellipse counterclockwise from north. The goal is to find sets of components that are spatially close and that resemble a bent-double. In the results in this paper we focus on candidate sets of components that have been detected by an existing spatial clustering algorithm [3], where each set consists of three components from the catalog (three ellipses). As of the year 2000, the catalog contained over 15,000 three-component configurations and over 600,000 configurations total. The set which we use to build and evaluate our models consists of a total of 128 examples of bent-double galaxies and 22 examples of non-bent-double configurations. A configuration is labelled as a bent-double if two out of three astronomers agree to label it as such.
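The catalog entry described above can be pictured as a small record type; this is purely an illustrative sketch, and the field names are our assumptions, not the FIRST catalog's actual column names:

```python
from dataclasses import dataclass

# Illustrative sketch of one catalog entry: a detected component ("blob")
# fitted with an ellipse, carrying the features listed in the text.
@dataclass
class Component:
    ra: float          # right ascension of the center (degrees)
    dec: float         # declination of the center (degrees)
    peak_flux: float   # peak flux density
    int_flux: float    # integrated flux
    rms_noise: float   # root-mean-square noise in pixel intensities
    major_axis: float  # length of the major axis of the fitted ellipse
    minor_axis: float  # length of the minor axis
    pos_angle: float   # position angle of the major axis, CCW from north

# A candidate set detected by the spatial clustering algorithm is a
# configuration of three such components (three ellipses).
Configuration = tuple[Component, Component, Component]
```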
Note that the visual identification process is the bottleneck in the pipeline, since it requires significant time and effort from the scientists and is subjective and error-prone, motivating the creation of automated methods for identifying bent-doubles. Three-component bent-double configurations typically consist of a center or "core" component and two other side components called "lobes". Previous work on automated classification of three-component candidate sets has focused on the use of decision-tree classifiers using a variety of geometric and image-intensity features [3]. One of the limitations of the decision-tree approach is its relative inflexibility in handling uncertainty about the object being classified, e.g., the identification of which of the three components should be treated as the core of a candidate object. A bigger limitation is the fixed size of the feature vector. A primary motivation for the development of a probabilistic approach is to provide a framework that can handle uncertainties in a flexible, coherent manner.

2 Learning to Match Orderings using the EM Algorithm

We denote a three-component configuration by $C = (c_1, c_2, c_3)$, where the $c_i$'s are the components (or "blobs") described in the previous section. Each component $c_x$ is represented as a feature vector, where the specific features will be defined later. Our approach focuses on building a probabilistic model for bent-doubles: $p(C) = p(c_1, c_2, c_3)$, the likelihood of the observed $c_i$ under a bent-double model, where we implicitly condition (for now) on the class "bent-double." By looking at examples of bent-double galaxies and by talking to the scientists studying them, we have been able to establish a number of potentially useful characteristics of the components, the primary one being geometric symmetry. In bent-doubles, two of the components will look close to being mirror images of one another with respect to a line through the third component.
We will call the mirror-image components lobe components, and the other one the core component.

Figure 2: Possible orderings for a hypothetical bent-double (the six assignments of the three components to the labels core, lobe 1, and lobe 2). A good choice of ordering would be either 1 or 2.

It also appears that non-bent-doubles either don't exhibit such symmetry, or the angle formed at the core component is too straight: the configuration is not "bent" enough. Once the core component is identified, we can calculate symmetry-based features. However, identifying the most plausible core component requires either an additional algorithm or human expertise. In our approach we use a probabilistic framework that averages over different possible orderings weighted by their probability given the data. In order to define the features, we first need to determine the mapping of the components to the labels "core", "lobe 1", and "lobe 2" ($c$, $l_1$, and $l_2$ for short). We will call such a mapping an ordering. Figure 2 shows an example of possible orderings for a configuration. We can number the orderings $1, \ldots, 6$. We can then write

$$p(C) = \sum_{k=1}^{6} p\left(c_c, c_{l_1}, c_{l_2} \mid \Omega = k\right) p(\Omega = k), \qquad (1)$$

i.e., a mixture over all possible orientations. Each ordering is assumed a priori to be equally likely, i.e., $p(\Omega = k) = \frac{1}{6}$. Intuitively, for a configuration that clearly looks like a bent-double, the terms in the mixture corresponding to the correct ordering would dominate, while the other orderings would have much lower probability. We represent each component $c_x$ by $M$ features (we used $M = 3$). Note that the features can only be calculated conditioned on a particular mapping, since they rely on properties of the (assumed) core and lobe components. We denote by $f_{mk}(C)$ the values corresponding to the $m$th feature for configuration $C$ under the ordering $\Omega = k$, and by $f_{mkj}(C)$ the feature value of component $j$: $f_{mk}(C) = \left(f_{mk1}(C), \ldots, f_{mkB_m}(C)\right)$ (in our case, $B_m = 3$ is the number of components). Conditioned on a particular mapping $\Omega = k$, where $x \in \{c, l_1, l_2\}$ and $c, l_1, l_2$ are defined in cyclical order, our features are defined as:

• $f_{1k}(C)$: log-transformed angle, the angle formed at the center of the component (a vertex of the configuration) mapped to label $x$;
• $f_{2k}(C)$: logarithms of side ratios, $\frac{|\text{center of } x \text{ to center of } \mathrm{next}(x)|}{|\text{center of } x \text{ to center of } \mathrm{prev}(x)|}$;
• $f_{3k}(C)$: logarithms of intensity ratios, $\frac{\text{peak flux of } \mathrm{next}(x)}{\text{peak flux of } \mathrm{prev}(x)}$,

and so $(C \mid \Omega = k) = (f_{1k}(C), f_{2k}(C), f_{3k}(C))$ for a 9-dimensional feature vector in total. Other features are of course also possible. For our purposes in this paper, this particular set appears to capture the more obvious visual properties of bent-double galaxies. For a set $D = \{d_1, \ldots, d_N\}$ of configurations, under an i.i.d. assumption for configurations, we can write the likelihood as

$$P(D) = \prod_{i=1}^{N} \sum_{k=1}^{K} P(\Omega_i = k)\, P\left(f_{1k}(d_i), \ldots, f_{Mk}(d_i)\right),$$

where $\Omega_i$ is the ordering for configuration $d_i$. While in the general case one can model $P(f_{1k}(d_i), \ldots, f_{Mk}(d_i))$ as a full joint distribution, for the results reported in this paper we make a number of simplifying assumptions, motivated by the fact that we have relatively little labelled training data available for model building. First, we assume that the $f_{mk}(d_i)$ are conditionally independent. Second, we are also able to reduce the number of components for each $f_{mk}(d_i)$ by noting functional dependencies. For example, given two angles of a triangle, we can uniquely determine the third one. We also assume that the remaining components for each feature are conditionally independent. Under these assumptions, the multivariate joint distribution $P(f_{1k}(d_i), \ldots, f_{Mk}(d_i))$ is factored into a product of simple distributions, which (for the purposes of this paper) we model using Gaussians.
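As a concrete illustration of these feature definitions, the 9-dimensional vector under one ordering might be computed as in the sketch below; the geometry helpers and the use of natural logarithms are our assumptions, not the authors' code:

```python
import numpy as np

# Hypothetical sketch of the per-component features under one ordering:
# the (log-transformed) angle at each vertex, the log side ratio, and the
# log intensity ratio, computed cyclically for x in {core, lobe1, lobe2}.
def features_for_ordering(centers, peak_flux):
    """centers: (3, 2) component centers in the order (core, lobe1, lobe2);
    peak_flux: (3,) peak intensities.  Returns a (3 features, 3 components) array."""
    f_angle, f_side, f_flux = [], [], []
    for x in range(3):
        nxt, prv = (x + 1) % 3, (x - 1) % 3
        u = centers[nxt] - centers[x]
        v = centers[prv] - centers[x]
        angle = np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
        f_angle.append(np.log(angle))                                     # log-transformed angle
        f_side.append(np.log(np.linalg.norm(u) / np.linalg.norm(v)))      # log side ratio
        f_flux.append(np.log(peak_flux[nxt] / peak_flux[prv]))            # log intensity ratio
    return np.array([f_angle, f_side, f_flux])  # 3 features x 3 components = 9 values
```

For an equilateral configuration with equal fluxes, all angles are 60 degrees and both ratio features vanish, which makes a convenient sanity check.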
If we knew for every training example which component should be mapped to label $c$, we could then unambiguously estimate the parameters for each of these distributions. In practice, however, the identity of the core component is unknown for each object. Thus, we use the EM algorithm to automatically estimate the parameters of the above model. We begin by randomly assigning an ordering to each object. For each subsequent iteration, the E-step consists of estimating a probability distribution over possible orderings for each object, and the M-step estimates the parameters of the feature distributions using the probabilistic ordering information from the E-step. In practice we have found that the algorithm converges relatively quickly (in 20 to 30 iterations) on both simulated and real data. It is somewhat surprising that this algorithm can reliably "learn" how to align a set of objects, without using any explicit objective function for alignment, but instead based on the fact that feature values for certain orderings exhibit a certain self-consistency relative to the model. Intuitively, it is this self-consistency that leads to higher-likelihood solutions and that allows EM to effectively align the objects by maximizing the likelihood. After the model has been estimated, the likelihood of new objects can also be calculated under the model, where the likelihood now averages over all possible orderings weighted by their probability given the observed features. The problem described above is a specific instance of a more general feature-unscrambling problem. In our case, we assume that configurations of three 3-dimensional components (i.e., 3 features each) are generated by some distribution. Once the objects are generated, the orders of their components are permuted or scrambled. The task is then to simultaneously learn the parameters of the original distributions and the scrambling for each object.
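The E-step/M-step alternation described above can be sketched compactly; the following is an illustrative implementation under our own simplifying choices (a single diagonal Gaussian over features, precomputed per-ordering feature arrays), not the authors' code:

```python
import numpy as np

# Illustrative EM sketch for the ordering model: the E-step computes a
# posterior over the 6 orderings per object, the M-step re-estimates the
# Gaussian feature parameters from the responsibility-weighted features.
# F is an assumed (N objects, 6 orderings, D features) array.
def em_orderings(F, n_iter=30, seed=0):
    N, K, D = F.shape
    rng = np.random.default_rng(seed)
    R = rng.dirichlet(np.ones(K), size=N)        # random initial responsibilities
    for _ in range(n_iter):
        # M-step: responsibility-weighted mean and variance of the features
        w = R.reshape(N, K, 1)
        mu = (w * F).sum(axis=(0, 1)) / w.sum()
        var = (w * (F - mu) ** 2).sum(axis=(0, 1)) / w.sum() + 1e-9
        # E-step: Gaussian log-likelihood of each ordering's feature vector
        ll = -0.5 * (((F - mu) ** 2) / var + np.log(2 * np.pi * var)).sum(axis=2)
        ll -= ll.max(axis=1, keepdims=True)      # for numerical stability
        R = np.exp(ll)
        R /= R.sum(axis=1, keepdims=True)
    return R, mu, var
```

On synthetic data where one ordering per object yields self-consistent (tightly clustered) features while the other orderings are scattered, the responsibilities concentrate on the self-consistent ordering, mirroring the alignment behavior described in the text.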
In the more general form, each configuration consists of $L$ $M$-dimensional components. Since there are $L!$ possible orderings of $L$ components, the problem becomes computationally intractable if $L$ is large. One solution is to restrict the types of possible scrambles (to cyclic shifts, for example).

3 Automatic Galaxy Classification

We used the algorithm described in the previous section to estimate the parameters of features and orderings of the bent-double class from labelled training data, and then to rank candidate objects according to their likelihood under the model. We used leave-one-out cross-validation to test the classification ability of this supervised model: for each of the 150 examples we build a model using the positive examples from the set of 149 "other" examples, and then score the "left-out" example with this model. The examples are then sorted in decreasing order by their likelihood score (averaging over different possible orderings) and the results are analyzed using a receiver operating characteristic (ROC) methodology.

Figure 3: ROC plot for a model using angle, ratio of sides, and ratio of intensities as features, learned using ordering-EM with labelled data.

We use $A_{ROC}$, the area under the curve, as a measure of the goodness of the model, where a perfect model would have $A_{ROC} = 1$ and random performance corresponds to $A_{ROC} = 0.5$. The supervised model, using EM for learning ordering models, has a cross-validated $A_{ROC}$ score of 0.9336 (Figure 3) and appears to be quite useful at detecting bent-double galaxies.

4 Model-Based Galaxy Clustering

A useful technique in understanding astronomical image data is to cluster image objects based on their morphological and intensity properties.
For example, consider how one might cluster the image objects in Figure 1 into clusters, where we have features on angles, intensities, and so forth. Just as with classification, clustering of the objects is impeded by not knowing which of the "blobs" corresponds to the true "core" component. From a probabilistic viewpoint, clustering can be treated as introducing another level of hidden variables, namely the unknown class (or cluster) identity of each object. We can generalize the EM algorithm for orderings (Section 2) to handle this additional hidden level. The model is now a mixture of clusters where each cluster is modelled as a mixture of orderings. This leads to a more complex two-level EM algorithm than that presented in Section 2, where at the inner level the algorithm is learning how to orient the objects, and at the outer level the algorithm is learning how to group the objects into $C$ classes. Space does not permit a detailed presentation of this algorithm; however, the derivation is straightforward and produces intuitive update rules such as:

$$\hat{\mu}_{cmj} = \frac{1}{N \hat{P}(cl = c \mid \Theta)} \sum_{i=1}^{N} \sum_{k=1}^{K} P(cl_i = c \mid \Omega_i = k, D, \Theta)\, P(\Omega_i = k \mid D, \Theta)\, f_{mkj}(d_i),$$

where $\mu_{cmj}$ is the mean for the $c$th cluster ($1 \le c \le C$), the $m$th feature ($1 \le m \le M$), and the $j$th component of $f_{mk}(d_i)$, and $\Omega_i = k$ corresponds to ordering $k$ for the $i$th object. We applied this algorithm to the data set of 150 sky objects, where, unlike the results in Section 3, the algorithm now had no access to the class labels. We used the Gaussian conditional-independence model as before, and grouped the data into $K = 2$ clusters. Figures 4 and 5 show the highest-likelihood objects, out of 150 total objects, under the models for the larger cluster and the smaller cluster respectively.

Figure 4: The 8 objects with the highest likelihood conditioned on the model for the larger of the two clusters learned by the unsupervised algorithm (all eight labelled "bent-double").

Figure 5: The 8 objects with the highest likelihood conditioned on the model for the smaller of the two clusters learned by the unsupervised algorithm (six labelled "non-bent-double", two labelled "bent-double").

Figure 6: A scatter plot of the ranking from the unsupervised model versus that of the supervised model.

The larger cluster is clearly a bent-double cluster: 89 of the 150 objects are more likely to belong to this cluster under the model, and 88 out of the 89 objects in this cluster have the bent-double label. In other words, the unsupervised algorithm has discovered a cluster that corresponds to "strong examples" of bent-doubles relative to the particular feature space and model. In fact, the non-bent-double that is assigned to this group may well have been mislabelled (image not shown here). The objects in Figure 5 are clearly inconsistent with the general visual pattern of bent-doubles, and this cluster consists of a mixture of non-bent-double and "weaker" bent-double galaxies. The objects in Figure 5 that are labelled as bent-doubles seem quite atypical compared to the bent-doubles in Figure 4. A natural hypothesis is that cluster 1 (88 bent-doubles) in the unsupervised model is in fact very similar to the supervised model learned using the labelled set of 128 bent-doubles in Section 3. Indeed, the parameters of the two Gaussian models agree quite closely, and the similarity of the two models is illustrated clearly in Figure 6, where we plot the likelihood-based ranks of the unsupervised model versus those of the supervised model. Both models are in close agreement and both are clearly performing well in terms of separating the objects in terms of their class labels.
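The two-level mean update quoted in Section 4 can be written down directly once the E-step posteriors are in hand; the following sketch assumes those posteriors and the per-ordering feature arrays are given as inputs (the array names are ours):

```python
import numpy as np

# Illustrative sketch of the update for mu_{cmj}: a weighted average of the
# feature values f_{mkj}(d_i), with weights given by the joint posterior
# over cluster membership and ordering.
# P_cluster_given_ordering: (N, C, K) = P(cl_i = c | Omega_i = k, D, Theta)
# P_ordering:               (N, K)    = P(Omega_i = k | D, Theta)
# F:                        (N, K, M, J) = f_{mkj}(d_i)
def update_means(P_cluster_given_ordering, P_ordering, F):
    N, C, K = P_cluster_given_ordering.shape
    # joint posterior P(cl_i = c, Omega_i = k | D, Theta)
    joint = P_cluster_given_ordering * P_ordering[:, None, :]   # (N, C, K)
    P_cluster = joint.sum(axis=2).mean(axis=0)                  # (C,) estimate of P(cl = c)
    # sum over objects i and orderings k of the weighted feature values
    weighted = np.einsum('nck,nkmj->cmj', joint, F)
    return weighted / (N * P_cluster)[:, None, None]            # (C, M, J)
```

In the degenerate single-cluster case the update reduces to an ordinary responsibility-weighted mean over orderings, which is a quick way to check the bookkeeping.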
5 Related Work and Future Directions

A related earlier paper is Kirshner et al. [6], where we presented a heuristic algorithm for solving the orientation problem for galaxies. The generalization to an EM framework in this paper is new, as is the two-level EM algorithm for clustering objects in an unsupervised manner. There is a substantial body of work in computer vision on solving a variety of different object matching problems using probabilistic techniques; see Mjolsness [7] for early ideas and Chui et al. [2] for a recent application in medical imaging. Our work here differs in a number of respects. One important difference is that we use EM to learn a model for the simultaneous correspondence of N objects, using both geometric and intensity-based features, whereas prior work in vision has primarily focused on matching one object to another (essentially the N = 2 case). An exception is the recent work of Frey and Jojic [4, 5], who used a similar EM-based approach to simultaneously cluster images and estimate a variety of local spatial deformations. The work described in this paper can be viewed as an extension and application of this general methodology to a real-world problem in galaxy classification. Earlier work on bent-double galaxy classification used decision-tree classifiers based on a variety of geometric and intensity-based features [3]. In future work we plan to compare the performance of this decision-tree approach with the probabilistic model-based approach proposed in this paper. The model-based approach has some inherent advantages over a decision-tree model for these types of problems. For example, it can directly handle objects in the catalog with only 2 blobs or with 4 or more blobs by integrating over missing intensities and over missing correspondence information, using mixture models that allow for missing or extra "blobs". Being able to classify such configurations automatically is of significant interest to the astronomers.
Acknowledgments

This work was performed under a sub-contract from the ASCI Scientific Data Management Project of the Lawrence Livermore National Laboratory. The work of S. Kirshner and P. Smyth was also supported by research grants from NSF (award IRI-9703120), the Jet Propulsion Laboratory, IBM Research, and Microsoft Research. I. Cadez was supported by a Microsoft Graduate Fellowship. The work of C. Kamath was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48. We gratefully acknowledge our FIRST collaborators, in particular Robert H. Becker, for sharing his expertise on the subject.

References

[1] R. H. Becker, R. L. White, and D. J. Helfand. The FIRST Survey: Faint Images of the Radio Sky at Twenty-cm. Astrophysical Journal, 450:559, 1995.
[2] H. Chui, L. Win, R. Schultz, J. S. Duncan, and A. Rangarajan. A unified feature registration method for brain mapping. In Proceedings of Information Processing in Medical Imaging, pages 300–314. Springer-Verlag, 2001.
[3] I. K. Fodor, E. Cantú-Paz, C. Kamath, and N. A. Tang. Finding bent-double radio galaxies: A case study in data mining. In Proceedings of the Interface: Computer Science and Statistics Symposium, volume 33, 2000.
[4] B. J. Frey and N. Jojic. Estimating mixture models of images and inferring spatial transformations using the EM algorithm. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1999.
[5] N. Jojic and B. J. Frey. Topographic transformation as a discrete latent variable. In Advances in Neural Information Processing Systems 12. MIT Press, 2000.
[6] S. Kirshner, I. V. Cadez, P. Smyth, C. Kamath, and E. Cantú-Paz. Probabilistic model-based detection of bent-double radio galaxies. In Proceedings of the 16th International Conference on Pattern Recognition, volume 2, pages 499–502, 2002.
[7] E. Mjolsness.
Bayesian inference on visual grammars by neural networks that optimize. Technical Report YALEU/DCS/TR-854, Department of Computer Science, Yale University, May 1991.
[8] R. L. White, R. H. Becker, D. J. Helfand, and M. D. Gregg. A catalog of 1.4 GHz radio sources from the FIRST Survey. Astrophysical Journal, 475:479, 1997.
Information Regularization with Partially Labeled Data

Martin Szummer, MIT AI Lab & CBCL, Cambridge, MA 02139, szummer@ai.mit.edu
Tommi Jaakkola, MIT AI Lab, Cambridge, MA 02139, tommi@ai.mit.edu

Abstract

Classification with partially labeled data requires using a large number of unlabeled examples (or an estimated marginal P(x)) to further constrain the conditional P(y|x) beyond a few available labeled examples. We formulate a regularization approach to linking the marginal and the conditional in a general way. The regularization penalty measures the information that is implied about the labels over covering regions. No parametric assumptions are required and the approach remains tractable even for continuous marginal densities P(x). We develop algorithms for solving the regularization problem for finite covers, establish a limiting differential equation, and exemplify the behavior of the new regularization approach in simple cases.

1 Introduction

Many modern classification problems are rife with unlabeled examples. To benefit from such examples, we must exploit, either implicitly or explicitly, the link between the marginal density P(x) over examples x and the conditional P(y|x) representing the decision boundary for the labels y. High-density regions or clusters in the data, for example, can be expected to fall solely in one or another class. Most discriminative methods do not attempt to explicitly model or incorporate information from the marginal density P(x). However, many discriminative algorithms such as SVMs exploit the notion of margin that effectively relates P(x) to P(y|x); the decision boundary is biased to fall preferentially in low-density regions of P(x) so that only a few points fall within the margin band. The assumptions relating P(x) to P(y|x) are seldom made explicit. In this paper we appeal to information theory to explicitly constrain P(y|x) on the basis of P(x) in a regularization framework.
The idea is in broad terms related to a number of previous approaches, including maximum entropy discrimination [1], data clustering by information bottleneck [2], and minimum entropy data partitioning [3]. See also [4].

Figure 1: Mutual information I(x; y), measured in bits, for four regions with different configurations of labels y ∈ {+, −} (0, 0.65, 1, and 1 bit respectively). The marginal P(x) is discrete and uniform across the points. The mutual information is low when the labels are homogeneous in the region, and high when labels vary. The mutual information is invariant to the spatial configuration of points within the neighborhood.

2 Information Regularization

We begin by showing how to regularize a small region of the domain X. We will subsequently cover the domain (or any chosen subset) with multiple small regions, and describe criteria that ensure regularization of the whole domain on the basis of the individual regions.

2.1 Regularizing a Single Region

Consider a small contiguous region Q in the domain X (e.g., an ϵ-ball). We will regularize the conditional probability P(y|x) by penalizing the amount of information the conditionals imply about the labels within the region. The regularizer is a function of both P(y|x) and P(x), and will penalize changes in P(y|x) more in regions with high P(x). Let L be the set of labeled points (of size N_L) and L ∪ U be the set of labeled and unlabeled points (of size N_LU). The marginal P(x) is assumed to be given, and may be available directly in terms of a continuous density, or as an empirical density

$$P(x) = \frac{1}{N_{LU}} \sum_{i \in L \cup U} \delta(x - x_i)$$

corresponding to a set of points {x_i} that may not have labels (δ(·) is the Dirac delta function, integrating to 1). As a measure of information, we employ mutual information [5], which is the average number of bits that x contains about the label in region Q (see Figure 1).
The measure depends both on the marginal density P(x) (specifically its restriction to x ∈ Q, namely P(x|Q) = P(x) / ∫_Q P(x) dx) and the conditional P(y|x). Equivalently, we can interpret mutual information as a measure of disagreement among the P(y|x), x ∈ Q. The measure is zero for any constant P(y|x). More precisely, the mutual information in region Q is

$$I_Q(x; y) = \sum_y \int_{x \in Q} P(x|Q)\, P(y|x) \log \frac{P(y|x)}{P(y|Q)}\, dx, \qquad (1)$$

where $P(y|Q) = \int_{x \in Q} P(x|Q)\, P(y|x)\, dx$. The densities conditioned on Q are normalized to integrate to 1 within the region Q. Note that the mutual information is invariant to permutations of the elements of X within Q, which suggests that the regions must be small enough to preserve locality. The regularization penalty has to further scale with the number of points in the region (or the probability mass). We introduce the following regularization principle:

Information regularization: penalize (M_Q / V_Q) · I_Q(x; y), which is the information about the labels within a local region Q, weighted by the overall probability mass M_Q in the region, and normalized by a measure of variability V_Q (variance) of x in the region.

Here M_Q = ∫_{x ∈ Q} P(x) dx. The mutual information I_Q(x; y) measures the information per point, and to obtain the total mutual information contained in a region, we must multiply by the probability mass M_Q. The regularization will be stronger in regions with high P(x). V_Q is a measure of the variance of x restricted to the region, and is introduced to remove the overall dependence on the size of the region. In one dimension, V_Q = var(x|Q). When the region is small, the marginal will be close to uniform over the region and V_Q ∝ R², where R is, e.g., the radius for spherical regions. We omit here the analysis of the d-dimensional case and only note that we may choose V_Q = tr Σ_Q, where the covariance Σ_Q = ∫_{x ∈ Q} (x − E_Q(x))(x − E_Q(x))^T P(x|Q) dx. The choice of V_Q is based on the limiting argument discussed next.
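For a discrete, uniform marginal, the per-region quantities above reduce to finite sums; the following sketch (inputs illustrative, with M_Q taken proportional to the point count) computes I_Q(x; y) in bits and the penalty (M_Q / V_Q) · I_Q(x; y) in one dimension:

```python
import numpy as np

# Sketch of the per-region quantities for a discrete uniform marginal.
def region_info(p_y1):
    """I_Q(x; y) in bits, from the conditionals P(y=1|x) of the points in Q."""
    p_y1 = np.asarray(p_y1, float)
    p = np.stack([p_y1, 1.0 - p_y1], axis=1)        # P(y|x) for y in {1, 0}
    p_y_Q = p.mean(axis=0)                          # P(y|Q) under uniform P(x|Q)
    with np.errstate(divide='ignore', invalid='ignore'):
        terms = np.where(p > 0, p * np.log2(p / p_y_Q), 0.0)
    return terms.sum(axis=1).mean()

def region_penalty(xs, p_y1):
    """(M_Q / V_Q) * I_Q(x; y), with M_Q = point count and V_Q = var(x|Q)."""
    xs = np.asarray(xs, float)
    return len(xs) / xs.var() * region_info(p_y1)
```

Homogeneous hard labels give 0 bits and an even split of hard labels gives 1 bit; a six-point region with a single disagreeing hard label carries about 0.65 bits, matching the range of values quoted in Figure 1.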
2.2 Limiting Behavior for Vanishing-Size Regions

When the size of the region is scaled down, the mutual information will go to zero for any continuous P(y|x). We derive here the appropriate regularization penalty in the limit of vanishing regions. For simplicity, we only consider the one-dimensional case. Within a small region Q we can (under mild continuity assumptions) approximate P(y|x) by a Taylor expansion around the mean point x_0 ∈ Q, obtaining P(y|Q) ≈ P(y|x_0) to first order. By using log(1 + z) ≈ z − z²/2 and substituting the approximate P(y|x) and P(y|Q) into I_Q(x; y), we get the following first-order expression for the mutual information:

$$I_Q(x; y) = \underbrace{\tfrac{1}{2}\,\mathrm{var}(x|Q)}_{\text{size-dependent}}\;\underbrace{\sum_y P(y|x_0) \left( \left. \frac{d \log P(y|x)}{dx} \right|_{x_0} \right)^{2}}_{\text{size-independent}} \qquad (2)$$

var(x|Q) depends on the size (and, more generally, the shape) of region Q, while the remaining part is independent of the size (and shape). The regularization penalty should not scale with the resolution at which we penalize information, and we thus divide out the size-dependent part. The size-independent part is the Fisher information [5], where we think of P(y|x) as parameterized by x. The expression d log P(y|x)/dx is known as the Fisher score.

2.3 Regularizing the Domain

We want to regularize the conditional P(y|x) across the domain X (or any subset of interest). Since individual regions must be relatively small to preserve locality, we need multiple regions to cover the domain. The cover is the set C of these regions. Since the regularization penalty is assigned to each region, the regions must overlap to ensure that the conditionals in different regions become functionally dependent. See Figure 2. In general, all areas with significant marginal density P(x) should be included in the cover, or they will not be regularized (areas of zero marginal need not be considered).
The cover should generally be connected (with respect to the neighborhood relations of the regions) so that labeled points have the potential to influence all conditionals. The amount of overlap between any two regions in the cover determines how strongly the corresponding conditionals are tied to each other. On the other hand, the regions should be small to preserve locality. The limit of a large number of small overlapping regions can be defined, and we ensure continuity of P(y|x) when the offset between regions vanishes relative to their size (in all dimensions).

3 Classification with Information Regularization

Information regularization across multiple regions can be performed, for example, by minimizing the maximum information per region, subject to correct classification of the labeled points. Specifically, we constrain each region in the cover (Q ∈ C) to carry at most γ units of information:

$$\min_{P(y|x_k),\ \gamma}\ \ \gamma \qquad (3a)$$
$$\text{s.t.}\quad (M_Q / V_Q) \cdot I_Q(x; y) \le \gamma \quad \forall Q \in C \qquad (3b)$$
$$P(y|x_k) = \delta(y, \tilde{y}_k) \quad \forall k \in L \qquad (3c)$$
$$0 \le P(y|x_k) \le 1,\quad \textstyle\sum_y P(y|x_k) = 1 \quad \forall k \in L \cup U,\ \forall y. \qquad (3d)$$

We have incorporated the labeled points by constraining their conditionals to the observed values (eq. 3c) (see below for other ways of incorporating label information). The solution P(y|x) to this optimization problem is unique in regions that achieve the information constraint with equality (as long as P(x) > 0). (Uniqueness follows from the strict convexity of mutual information as a function of P(y|x) for nonzero P(x).) Define an atomic subregion as a non-empty intersection of regions that cannot be further intersected by any region (Figure 2). All unlabeled points in an atomic subregion belong to the same set of regions, and therefore participate in exactly the same constraints. They will be regularized in the same way, and since mutual information is a convex function, it will be minimized when the conditionals P(y|x) are equal across the atomic subregion.
We can therefore parsimoniously represent the conditionals of atomic subregions, instead of individual points, merely by treating such atomic subregions as "merged points" and weighting the associated constraint by the probability mass contained in the subregion.

3.1 Incorporating Noisy Labels

Labeled points participate in the information regularization in the same way as unlabeled points. However, their conditionals have additional constraints, which incorporate the label information. In equation 3c we used the constraint P(y|x_k) = δ(y, ỹ_k) for all labeled points. This constraint does not permit noise in the labels (and cannot be used when two points at the same location have disagreeing labels). Alternatively, we can apply either of the constraints

(fix-lbl): $P(y|x_i) = (1 - b)^{\delta(y, \tilde{y}_i)}\, b^{1 - \delta(y, \tilde{y}_i)} \quad \forall i \in L$

(exp-lbl): $E_{P(i)}\left[ P(\tilde{y}_i \mid x_i) \right] \ge 1 - b.$

The expectation is over the labeled set L, where P(i) = 1/N_L. The parameter b ∈ [0, 0.5) models the amount of label noise, and is determined from prior knowledge or can be optimized via cross-validation. Constraint (fix-lbl) is written out for the binary case for simplicity. The conditionals of the labeled points are directly determined by their labels, and are treated as fixed constants. Since b < 0.5, the thresholded conditional classifies labeled points in the observed class. In constraint (exp-lbl), the conditionals for labeled points can have an average error of at most b, where the average is over all labeled points. Thus, a few points may have conditionals that deviate significantly from their observed labels, giving robustness against mislabeled points and outliers. To obtain classification decisions, we simply choose the class with the maximum posterior, y_k = argmax_y P(y|x_k). Working with binary-valued P(y|x) ∈ {0, 1} directly would yield a more difficult combinatorial optimization problem.

3.2 Continuous Densities

Information regularization is also computationally feasible for continuous marginal densities, known or estimated.
For example, we may be given a continuous unlabeled data distribution P(x) and a few discrete labeled points, and regularize across a finite set of covering regions. The conditionals are uniform inside atomic subregions (except at labeled points), requiring estimates of only a finite number of conditionals.

3.3 Implementation

Firstly, we choose appropriate regions forming a cover, and find the atomic subregions. The choices differ depending on whether the data is all discrete or whether continuous marginals P(x) are given. Secondly, we perform a constrained optimization to find the conditionals. If the data is all discrete, create a spherical region centered at every labeled and unlabeled point (or over some reduced set still covering all the points). We have used regions of fixed radius R, but the radius could also be set adaptively at each point to the distance of its K-th nearest neighbor. The union of such regions is our cover, and we choose the radius R (or K) large enough to create a connected cover. The cover induces a set of atomic subregions, and we merge the parameters P(y|x) of points inside individual atomic subregions (atomic subregions with no observed points can be ignored). The marginal of each atomic subregion is proportional to the number of (merged) points it contains. If continuous marginals are given, they will put probability mass in all atomic subregions where the marginal is non-zero. To avoid considering an exponential number of subregions, we can limit the overlap between the regions by creating a sparser cover. Given the cover, we now regularize the conditionals P(y|x) in the regions, according to eq. 3a. This is a convex minimization problem with a global minimum, since mutual information is convex in P(y|x). It can be solved directly in the given primal form, using a quasi-Newton BFGS method. For eq.
3a, the required gradients of the constraints for the binary-class case (y ∈ {±1}; region Q, atomic subregion r) are:

(M_Q / V_Q) · dI_Q(x; y) / dP(y = 1|x_r)
    = (M_Q / V_Q) · P(x_r|Q) log [ (P(y = 1|x_r) / P(y = −1|x_r)) · (P(y = −1|Q) / P(y = 1|Q)) ].   (4)

The Matlab BFGS-based implementation fmincon can solve 100-subregion problems in a few minutes.

3.4 Minimize Average Information

An alternative regularization criterion minimizes the average mutual information across regions. When calculating the average, we must correct for the overlaps of intersecting regions to avoid double-counting (in contrast, the previous regularization criterion (eq. 3b) avoided double-counting by restricting the information in each region individually). The influence of a region is proportional to the probability mass M_Q contained in it. However, a point x may belong to N(x) regions. We define an adjusted density P*(x) = P(x)/N(x) to calculate an adjusted probability mass M*_Q which discounts overlap. We can then minimize the average mutual information according to

min_{P(y|x_k)}  Σ_Q (M*_Q / V_Q) I_Q(x; y)                            (5a)
s.t.  P(y|x_k) = δ(y, ỹ_k)                  ∀k ∈ L                    (5b)
      0 ≤ P(y|x_k) ≤ 1,  Σ_y P(y|x_k) = 1   ∀k ∈ L ∪ U, ∀y,           (5c)

with similar necessary adjustments to incorporate noisy labels.

3.4.1 Limiting Behavior

The above average information criterion is a discrete version of a continuous regularization criterion. In the limit of a large number of small regions in the cover (where the spacing of the regions vanishes relative to their size), we obtain a well-defined regularization criterion resulting in continuous P(y|x):

min_{P(y|x)}  ∫ Σ_y P(x′) P(y|x′) [ (d/dx) log P(y|x) |_{x=x′} ]² dx′
s.t.  P(y|x_k) = δ(y, ỹ_k)  ∀k ∈ L.                                   (6)

The regularizer can also be seen as the average Fisher information (see section 2.2). More generally, we can formulate the regularization problem as a Tikhonov regularization, where the loss is the negative log-probability of labels:

min_{P(y|x)}  (1/N_L) Σ_{k∈L} −log P(ỹ_k|x_k) + λ ∫ Σ_y P(x′) P(y|x′) [ (d/dx) log P(y|x) |_{x=x′} ]² dx′.
(7)

3.4.2 Differential Equation Characterizing the Solution

The optimization problem (eq. 6) can be solved using calculus of variations. Consider the one-dimensional binary-class case and write the problem as min_{P(y=1|x)} ∫ f(x, P(y = 1|x), P′(y = 1|x)) dx, where f(·) = P(x) P′(y = 1|x)² / [P(y = 1|x)(1 − P(y = 1|x))]. Necessary conditions for the solution P(y = 1|x) are provided by the Euler–Lagrange equations [6]:

∂f / ∂P(y = 1|x) − (d/dx) ∂f / ∂P′(y = 1|x) = 0   ∀x                  (8)

(natural boundary conditions apply since we can assume P(x) = 0 and P′(y|x) = 0 at the boundary of the domain X). After substituting f and simplifying we have

P′′(y = 1|x) = P′(y = 1|x)² (1 − 2P(y = 1|x)) / [2P(y = 1|x)(1 − P(y = 1|x))] − P′(x) P′(y = 1|x) / P(x).   (9)

This differential equation governs the solution and we solve it numerically. The labeled points provide boundary conditions, e.g. P(y = ỹ_k|x_k) = 1 − b for some small fixed b ≥ 0. We must search for initial values of P′(ỹ_k|x_k) to match the boundary conditions of P(ỹ_k|x_k). The solution is continuous and piecewise differentiable.

4 Results and Discussion

We have experimentally studied the behavior of the regularizer with different marginal densities P(x). Figure 3 shows the one-dimensional case with a continuous marginal density (a mixture of two Gaussians) and two discrete labeled points.

Figure 2: (Left) Three intersecting regions, and their atomic subregions (numbered). P(y|x) for unlabeled points will be constant in atomic subregions.

Figure 3: (Right) The conditional (solid line) for a continuous marginal P(x) (dotted line) consisting of a mixture of two Gaussians, and two labeled points at (x = −0.8, y = −1) and (x = 0.8, y = 1). The row of circles at the top depicts the region structure used (a rendering of overlapping one-dimensional intervals).
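The search over initial slopes just described is a shooting method. Below is a minimal sketch for the uniform-marginal special case (P′(x) = 0, so the last term of eq. 9 drops out), with our own discretization choices; it bisects on the initial slope until the boundary value at x = 1 is matched. For a uniform marginal one can check that P(y=1|x) = sin²(θ(x)) with linear θ(x) satisfies eq. (9), which gives a closed-form slope to compare against.

```python
import math

def shoot_uniform(b=0.05, n=4000):
    """Solve p'' = p'^2 (1 - 2p) / (2 p (1 - p)) (eq. 9, uniform P(x))
    with p(0) = b, p(1) = 1 - b; returns the fitted initial slope p'(0)."""
    h = 1.0 / n

    def endpoint(slope):                     # forward-Euler integration
        p, dp = b, slope
        for _ in range(n):
            if p >= 1.0 - 1e-9:              # overshot the upper boundary
                return 2.0
            ddp = dp * dp * (1.0 - 2.0 * p) / (2.0 * p * (1.0 - p))
            p, dp = p + h * dp, dp + h * ddp
        return p

    lo, hi = 0.0, 10.0                       # bisect on the initial slope
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if endpoint(mid) < 1.0 - b:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With b = 0.05 the closed-form slope is sin(2θ₀)(θ₁ − θ₀) ≈ 0.49, where θ₀ = arcsin√b and θ₁ = arcsin√(1 − b); the shooting solution matches it to the integrator's accuracy.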
We choose N_Q = 40 regions centered at uniform intervals of [−1, 1], overlapping each other half-way, creating N_Q + 1 atomic subregions. We show the solution attained by minimizing the maximum information (eq. 3a), using the (fix-lbl) constraint with label noise b = 0.05. The conditional varies smoothly between the labeled points of opposite classes. Note the dependence on the marginal density P(x). The conditional is smoother in high-density regions, and changes more rapidly in low-density regions, as expected. Figure 4 shows more examples, and Figure 5 illustrates solutions obtained via the differential equation (eq. 6).

Figure 4: Conditionals (solid lines) for two continuous marginals (dotted lines) plus two labeled points. Left: the marginal is uniform, and the conditional approaches a straight line. Right: the marginal is a mixture of two Gaussians (with lower variance and shifted compared to Figure 3). The conditional changes slowly in regions of high density.

Figure 5: Conditionals for two other continuous marginals plus two labeled points (marked as crosses, located at x = −1, 2 in the left figure and x = −2, 2 in the right), solved via the differential equation (eq. 6). The conditionals are continuous but non-differentiable at the two labeled points.

5 Conclusion

We have presented an information-theoretic regularization framework for combining conditional and marginal densities in a semi-supervised estimation setting. The framework admits both discrete and continuous (known or estimated) densities.
The tractability is largely a function of the number of nonempty intersections of chosen covering regions. The principle extends beyond the presented scope. It provides flexible means of tailoring the regularizer to particular needs. The shape and structure of the regions give direct ways of imposing relations between particular variables or values of those variables. The regions can be easily defined on low-dimensional data manifolds. In future work we will try the regularizer on large high-dimensional datasets and explore theoretical connections to network information theory. Acknowledgements The authors gratefully acknowledge support from Nippon Telegraph & Telephone (NTT) and NSF ITR grant IIS-0085836. Tommi Jaakkola also acknowledges support from the Sloan Foundation in the form of the Sloan Research Fellowship. Martin Szummer would like to thank Thomas Minka for valuable comments. References [1] Tommi Jaakkola, Marina Meila, and Tony Jebara. Maximum entropy discrimination. Technical Report AITR-1668, Mass. Inst. of Technology AI lab, 1999. http://www.ai.mit.edu/. [2] Naftali Tishby and Noam Slonim. Data clustering by markovian relaxation and the information bottleneck method. In Advances in Neural Information Processing Systems (NIPS), volume 13, pages 640–646. MIT Press, 2001. [3] Stephen Roberts, C. Holmes, and D. Denison. Minimum-entropy data partitioning using reversible jump Markov chain Monte Carlo. IEEE Trans. Pattern Analysis and Mach. Intell. (PAMI), 23(8):909–914, 2001. [4] Matthias Seeger. Input-dependent regularization of conditional density models. Unpublished. http://www.dai.ed.ac.uk/homes/seeger/, 2001. [5] Thomas Cover and Joy Thomas. Elements of Information Theory. Wiley, 1991. [6] Robert Weinstock. Calculus of Variations. Dover, 1974.
2002
FloatBoost Learning for Classification

Stan Z. Li, Microsoft Research Asia, Beijing, China
ZhenQiu Zhang, Institute of Automation, CAS, Beijing, China
Heung-Yeung Shum, Microsoft Research Asia, Beijing, China
HongJiang Zhang, Microsoft Research Asia, Beijing, China

Abstract

AdaBoost [3] minimizes an upper error bound which is an exponential function of the margin on the training set [14]. However, the ultimate goal in applications of pattern classification is always the minimum error rate. On the other hand, AdaBoost needs an effective procedure for learning weak classifiers, which by itself is difficult, especially for high-dimensional data. In this paper, we present a novel procedure, called FloatBoost, for learning a better boosted classifier. FloatBoost uses a backtrack mechanism after each iteration of AdaBoost to remove weak classifiers which cause higher error rates. The resulting float-boosted classifier consists of fewer weak classifiers yet achieves lower error rates than AdaBoost in both training and test. We also propose a statistical model for learning weak classifiers, based on a stagewise approximation of the posterior using an overcomplete set of scalar features. Experimental comparisons of FloatBoost and AdaBoost are provided through a difficult classification problem, face detection, where the goal is to learn from training examples a highly nonlinear classifier to differentiate between face and nonface patterns in a high-dimensional space. The results clearly demonstrate the promise of FloatBoost over AdaBoost.

1 Introduction

Nonlinear classification of high-dimensional data is a challenging problem. While designing such a classifier is difficult, AdaBoost learning methods, introduced by Freund and Schapire [3], provide an effective stagewise approach: they learn a sequence of more easily learnable “weak classifiers”, and boost them into a single strong classifier by a linear combination of them.
It is shown that the AdaBoost learning minimizes an upper error bound which is an exponential function of the margin on the training set [14]. Boosting learning originated from the PAC (probably approximately correct) learning theory [17, 6]. Given that weak classifiers can perform slightly better than random guessing on every distribution over the training set, AdaBoost can provably achieve arbitrarily good bounds on its training and generalization errors [3, 15]. (http://research.microsoft.com/ szli — the work presented in this paper was carried out at Microsoft Research Asia.) It is shown that such simple weak classifiers, when boosted, can capture complex decision boundaries [1]. Relationships of AdaBoost [3, 15] to functional optimization and statistical estimation have been established recently. A number of gradient-boosting algorithms have been proposed [4, 8, 21]. A significant advance was made by Friedman et al. [5], who show that the AdaBoost algorithms minimize an exponential loss function which is closely related to the Bernoulli likelihood. In this paper, we address the following problems associated with AdaBoost:

1. AdaBoost minimizes an exponential (or some other form of) function of the margin over the training set. This is for convenience of theoretical and numerical analysis. However, the ultimate goal in applications is always the minimum error rate. A strong classifier learned by AdaBoost may not necessarily be best by this criterion. This problem has been noted, e.g. by [2], but no solutions have been found in the literature.

2. An effective and tractable algorithm for learning weak classifiers is needed. Learning the optimal weak classifier, such as the log posterior ratio given in [15, 5], requires estimation of densities in the input data space. When the dimensionality is high, this is a difficult problem by itself.

We propose a method, called FloatBoost (Section 3), to overcome the first problem.
FloatBoost incorporates into AdaBoost the idea of Floating Search, originally proposed in [11] for feature selection. A backtrack mechanism therein allows deletion of those weak classifiers that are ineffective or unfavorable in terms of the error rate. This leads to a strong classifier consisting of fewer weak classifiers. Because the deletions during backtracking are performed according to the error rate, an improvement in classification error is also obtained. To solve the second problem above, we provide a statistical model (Section 4) for learning weak classifiers and effective feature selection in a high-dimensional feature space. A base set of weak classifiers, defined as the log posterior ratio, is derived based on an overcomplete set of scalar features. Experimental results are presented in Section 5 using a difficult classification problem, face detection. Comparisons are made between FloatBoost and AdaBoost in terms of the error rate and complexity of the boosted classifier. Results clearly show that FloatBoost yields a strong classifier consisting of fewer weak classifiers yet achieving lower error rates.

2 AdaBoost Learning

In this section, we give a brief description of the AdaBoost algorithm, in the notation of RealBoost [15, 5], as opposed to the original discrete AdaBoost [3]. For two-class problems, a set of labelled training examples is given as {(x_1, y_1), ..., (x_N, y_N)}, where y_i ∈ {+1, −1} is the class label associated with example x_i. A strong classifier is a linear combination of M weak classifiers:

H_M(x) = Σ_{m=1}^{M} h_m(x).   (1)

In this real version of AdaBoost, the weak classifiers can take a real value, h_m(x) ∈ R, and have absorbed the coefficients needed in the discrete version (there, h_m(x) ∈ {−1, +1}). The class label for x is obtained as y(x) = sign[H_M(x)], while the magnitude |H_M(x)| indicates the confidence. Every training example is associated with a weight.
During the learning process, the weights are updated dynamically in such a way that more emphasis is placed on hard examples which were erroneously classified previously. This re-weighting is important for the original AdaBoost. However, recent studies [4, 8, 21] show that the artificial operation of explicit re-weighting is unnecessary and can be incorporated into a functional optimization procedure of boosting.

0. (Input) (1) Training examples {(x_1, y_1), ..., (x_N, y_N)}, where N = a + b; of which a examples have y_i = +1 and b examples have y_i = −1; (2) The maximum number M_max of weak classifiers to be combined;
1. (Initialization) w_0^(i) = 1/(2a) for those examples with y_i = +1, or w_0^(i) = 1/(2b) for those examples with y_i = −1. M = 0;
2. (Forward Inclusion) while M < M_max: (1) M ← M + 1; (2) Choose h_M according to Eq. (4); (3) Update w_M^(i) ∝ exp[−y_i H_M(x_i)], and normalize so that Σ_i w_M^(i) = 1;
3. (Output) H(x) = sign[Σ_{m=1}^{M} h_m(x)].

Figure 1: RealBoost Algorithm.

An error occurs when H(x) ≠ y, i.e. when y H_M(x) < 0. The “margin” of an example (x, y) achieved by h(x) ∈ R on the training set examples is defined as y h(x). This can be considered as a measure of the confidence of h’s prediction. The upper bound on classification error achieved by H_M can be derived as the following exponential loss function [14]:

J(H_M) = Σ_i exp[−y_i H_M(x_i)].   (2)

AdaBoost constructs h_M(x) by stagewise minimization of Eq. (2). Given the current H_{M−1}(x) = Σ_{m=1}^{M−1} h_m(x), the best h_M for the new strong classifier H_M(x) = H_{M−1}(x) + h_M(x) is the one which leads to the minimum cost:

h_M = arg min_h J(H_{M−1}(x) + h(x)).   (3)

It is shown in [15, 5] that the minimizer is

h_M(x) = (1/2) log [ P(y = +1 | x, w^(M−1)) / P(y = −1 | x, w^(M−1)) ],   (4)

where w^(M−1) are the weights given at time M.
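The stagewise loop of Eqs. (1)–(4) can be sketched in a few lines. This is our own simplification, not the paper's implementation: a single scalar feature, histogram-based weak classifiers in the spirit of Eq. (4), and illustrative choices of binning and smoothing constant.

```python
import math

def realboost(x, y, n_rounds=10, n_bins=8, eps=1e-6):
    """Minimal RealBoost sketch: weak classifier h_m is (1/2) log of the
    ratio of class-conditional weight mass per feature bin (cf. Eq. 4)."""
    n = len(x)
    w = [1.0 / n] * n                       # example weights
    lo, hi = min(x), max(x)
    span = (hi - lo) or 1.0
    def bin_of(v):
        return min(n_bins - 1, int((v - lo) / span * n_bins))
    H = [0.0] * n                           # strong-classifier scores H_M(x_i)
    learners = []
    for _ in range(n_rounds):
        wp = [eps] * n_bins                 # weighted histogram, y = +1
        wn = [eps] * n_bins                 # weighted histogram, y = -1
        for i in range(n):
            (wp if y[i] > 0 else wn)[bin_of(x[i])] += w[i]
        h = [0.5 * math.log(wp[k] / wn[k]) for k in range(n_bins)]
        learners.append(h)
        for i in range(n):                  # multiplicative re-weighting rule
            H[i] += h[bin_of(x[i])]
            w[i] *= math.exp(-y[i] * h[bin_of(x[i])])
        s = sum(w)
        w = [wi / s for wi in w]
    return H, learners
```

On a linearly separable toy set the first round already drives every margin y_i H(x_i) positive; harder data would need several rounds, with the re-weighting focusing later rounds on the mistakes.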
Using P(y|x, w) = P(x|y, w) P(y) / P(x, w) and letting

L_M(x) = (1/2) log [ P(x | y = +1, w^(M−1)) / P(x | y = −1, w^(M−1)) ],   (5)
T = −(1/2) log [ P(y = +1) / P(y = −1) ],   (6)

we arrive at

h_M(x) = L_M(x) − T.   (7)

The half log-likelihood ratio L_M(x) is learned from the training examples of the two classes, and the threshold T is determined by the log ratio of the prior probabilities. T can be adjusted to balance between detection rate and false alarm (ROC curve). The algorithm is shown in Fig. 1. (Note: the re-weighting formula in this description is equivalent to the multiplicative rule in the original form of AdaBoost [3, 15].) In Section 4, we will present a model for approximating P(x | y, w^(M−1)).

3 FloatBoost Learning

FloatBoost backtracks after the newest weak classifier h_M is added and deletes unfavorable weak classifiers h_m from the ensemble (1), following the idea of Floating Search [11]. Floating Search [11] was originally aimed at dealing with the non-monotonicity of straight sequential feature selection, non-monotonicity meaning that adding an additional feature may lead to a drop in performance. When a new feature is added, backtracks are performed to delete those features that cause performance drops. Limitations of sequential feature selection are thus amended, with the improvement gained at the cost of increased computation due to the extended search.

0. (Input) (1) Training examples {(x_1, y_1), ..., (x_N, y_N)}, where N = a + b; of which a examples have y_i = +1 and b examples have y_i = −1; (2) The maximum number M_max of weak classifiers; (3) The error rate ε(H_M), and the acceptance threshold ε*.
1. (Initialization) (1) w_0^(i) = 1/(2a) for those examples with y_i = +1, or w_0^(i) = 1/(2b) for those examples with y_i = −1; (2) ε^min_m = max-value (for m = 1, ..., M_max); M = 0; H_0 = {};
2. (Forward Inclusion) (1) M ← M + 1; (2) Choose h_M according to Eq. (4); (3) Update w_M^(i) ∝ exp[−y_i H_M(x_i)], and normalize so that Σ_i w_M^(i) = 1; (4) H_M = H_{M−1} ∪ {h_M}; if ε^min_M > ε(H_M), then ε^min_M = ε(H_M);
3.
(Conditional Exclusion) (1) h′ = arg min_{h ∈ H_M} ε(H_M − {h}); (2) If ε(H_M − {h′}) < ε^min_{M−1}, then (a) H_{M−1} = H_M − {h′}; ε^min_{M−1} = ε(H_M − {h′}); M ← M − 1; (b) goto 3.(1); (3) else (a) if M = M_max or ε(H_M) < ε*, then goto 4; (b) update w_M^(i) ∝ exp[−y_i H_M(x_i)]; goto 2.(1);
4. (Output) H(x) = sign[Σ_{h_m ∈ H_M} h_m(x)].

Figure 2: FloatBoost Algorithm.

The FloatBoost procedure is shown in Fig. 2. Let H_M = {h_1, ..., h_M} be the so-far-best set of M weak classifiers; ε(H_M) be the error rate achieved by H_M(x) = Σ_{m=1}^{M} h_m(x) (or a weighted sum of the missing rate and false-alarm rate, which is usually the criterion in one-class detection problems); and ε^min_m be the minimum error rate achieved so far with an ensemble of m weak classifiers. In Step 2 (forward inclusion), given the classifiers already selected, the best weak classifier is added one at a time, which is the same as in AdaBoost. In Step 3 (conditional exclusion), FloatBoost removes the least significant weak classifier from H_M, subject to the condition that the removal leads to an error rate lower than ε^min_{M−1}. These steps are repeated until no more removals can be done. The procedure terminates when the risk on the training set is below ε* or the maximum number M_max is reached. By incorporating the conditional exclusion, FloatBoost renders both effective feature selection and classifier learning. It usually needs fewer weak classifiers than AdaBoost to achieve the same error rate ε.

4 Learning Weak Classifiers

This section presents a method for computing the log-likelihood ratio in Eq. (5) required in learning optimal weak classifiers. Since deriving a weak classifier in a high-dimensional space is a non-trivial task, here we provide a statistical model for stagewise learning of weak classifiers based on scalar features. A scalar feature z of x is computed by a transform from the n-dimensional data space to the real line, z = z(x) ∈ R. A feature can be the coefficient of, say, a wavelet transform in signal and image processing.
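Stepping back to the control flow of Fig. 2: the forward-inclusion / conditional-exclusion loop can be sketched abstractly, with all of the AdaBoost machinery hidden behind an `error(S)` oracle that returns the training error of an ensemble S. This is our simplification for illustration, not the paper's implementation, and a real version would also distinguish the weight-update bookkeeping of steps 2–3.

```python
def floatboost_select(candidates, error, m_max, target=0.0):
    """Floating-search selection: greedily add the best candidate, then
    drop members while that beats the best error ever recorded at the
    smaller ensemble size (cf. Fig. 2)."""
    S, best = [], {}
    for _ in range(10 * m_max):              # guard against add/delete cycles
        if len(S) >= m_max or (S and error(S) <= target):
            break
        # forward inclusion: greedily add the best remaining candidate
        c = min((c for c in candidates if c not in S),
                key=lambda c: error(S + [c]))
        S.append(c)
        best[len(S)] = min(best.get(len(S), float("inf")), error(S))
        # conditional exclusion: backtrack while a deletion improves on the
        # best error seen so far at that ensemble size
        while len(S) > 1:
            j = min(range(len(S)), key=lambda j: error(S[:j] + S[j + 1:]))
            e = error(S[:j] + S[j + 1:])
            if e < best.get(len(S) - 1, float("inf")):
                del S[j]
                best[len(S)] = e
            else:
                break
    return S
```

On a toy oracle where the greedy first pick is a trap, the backtracking recovers the better pair of weak classifiers that plain sequential (AdaBoost-style) selection would never reach.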
If projection pursuit is used as the transform, z_j(x) is simply the j-th coordinate of x. A dictionary of D candidate scalar features can be created, Z = {z_j(x) | j = 1, ..., D}. In the following, we use z^(m) to denote the feature selected in the m-th stage, while z_j is the feature computed from x using the j-th transform. Assuming that Z is an over-complete basis, a set of candidate weak classifiers for the optimal weak classifier (7) can be designed in the following way. First, at stage M, where M − 1 features z^(1), z^(2), ..., z^(M−1) have been selected and the weight is given as w^(M−1), we can approximate P(x | y, w^(M−1)) by using the distributions of M features:

P(x | y, w^(M−1)) ≈ P(z^(1), z^(2), ..., z^(M−1), z | y, w^(M−1))   (8)
= P(z^(1) | y, w^(M−1)) · P(z^(2) | z^(1), y, w^(M−1)) ··· P(z^(M−1) | z^(1), ..., z^(M−2), y, w^(M−1)) · P(z | z^(1), ..., z^(M−1), y, w^(M−1)).   (9)

Because Z is an over-complete basis set, the approximation is good enough for large enough M and when the M features are chosen appropriately. Note that P(z^(m) | z^(1), ..., z^(m−1), y) is actually P(z^(m) | y, w^(m−1)), because w^(m−1) contains the information about the entire history of the weights and accounts for the dependencies on z^(1), ..., z^(m−1). Therefore, we have

P(x | y, w^(M−1)) ≈ P(z^(1) | y, w^(0)) · P(z^(2) | y, w^(1)) ···   (10)
P(z^(M−1) | y, w^(M−2)) · P(z | y, w^(M−1)).   (11)

On the right-hand side of the above equation, all conditional densities are fixed except the last one, P(z | y, w^(M−1)). Learning the best weak classifier at stage M amounts to choosing the best feature z^(M) for z such that J is minimized according to Eq. (3). The conditional probability densities P(z | y, w^(M−1)) for the positive class (y = +1) and the negative class (y = −1) can be estimated using histograms computed from the weighted voting of the training examples using the weights w^(M−1). Let

L^(M)(x) = (1/2) log [ P(z | y = +1, w^(M−1)) / P(z | y = −1, w^(M−1)) ]   (12)

and h^(M)(x) = L^(M)(x) − T.
We can derive the set of candidate weak classifiers as

H^(M) = { h^(M)(x) | z ∈ Z }.   (13)

Recall that the best h_M among all candidates in H^(M) for the new strong classifier H_M(x) = H_{M−1}(x) + h_M(x) is given by Eq. (3), for which the optimal weak classifier has been derived as (7). According to the theory of gradient-based boosting [4, 8, 21], we can choose the optimal weak classifier by finding the h_M that best fits the negative gradient of the cost, −∇J(H_{M−1}), where ∇J(H_{M−1}) is the derivative of J(H) evaluated at H = H_{M−1}. (14) In our stagewise approximation formulation, this can be done by first finding the h_M ∈ H^(M) that best fits −∇J in direction and then scaling it so that the two have the same (re-weighted) norm. An alternative selection scheme is simply to choose z so that the error rate (or some risk), computed from the two histograms P(z | y = ±1, w^(M−1)), is minimized.

5 Experimental Results

Face Detection. The face detection problem here is to classify an image of standard size (e.g. 20×20 pixels) into either face or nonface (impostor). This is essentially a one-class problem in that everything that is not a face is a nonface. It is a very hard problem. Learning-based methods have been the main approach for solving it, e.g. [13, 16, 9, 12]. Experiments here follow the framework of Viola and Jones [19, 18]. There, AdaBoost is used for learning face detection; it performs two important tasks: feature selection from a large collection of features, and constructing classifiers using the selected features.

Data Sets. A set of 5000 face images was collected from various sources. The faces were cropped and re-scaled to the size of 20×20. Another set of 5000 nonface examples of the same size was collected from images containing no faces. The 5000 examples in each set are divided into a training set of 4000 examples and a test set of 1000 examples. See Fig. 3 for a random sample of 10 face and 10 nonface examples.

Figure 3: Face (top) and nonface (bottom) examples.
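Features of the kind shown in Fig. 4 below (rectangular block differences) are typically evaluated in constant time via an integral image (summed-area table). The following is a sketch under our own helper names, using a two-rectangle feature as the example; it is not the paper's code.

```python
def integral_image(img):
    """ii[y][x] = sum of img[v][u] over v < y, u < x (summed-area table)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = img[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x]
    return ii

def rect_sum(ii, x, y, w, h):
    """Pixel sum over the w-by-h rectangle with top-left corner (x, y),
    from four table lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def two_rect_feature(ii, x, y, w, h):
    """Difference of two horizontally adjacent w-by-h blocks."""
    return rect_sum(ii, x, y, w, h) - rect_sum(ii, x + w, y, w, h)
```

Because every rectangle sum costs four lookups regardless of its size, the hundreds of thousands of candidate features can be re-evaluated cheaply at every boosting stage.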
Scalar Features. Three basic types of scalar features z are derived from each example, as shown in Fig. 4, for constructing weak classifiers. These block differences are an extended set of the steerable filters used in [10, 20]. There are hundreds of thousands of different z for admissible block sizes and positions. Each candidate weak classifier is constructed as the log-likelihood ratio (12) computed from the two histograms P(z | y, w^(M−1)) of a scalar feature z for the face (y = +1) and nonface (y = −1) examples (cf. the last part of the previous section).

Figure 4: The three types of simple Haar-wavelet-like features z(x) defined on a sub-window. The rectangles are of varying sizes and at varying distances apart. Each feature takes a value calculated by the weighted sum of the pixels in the rectangles.

Performance Comparison. The same data sets are used for evaluating FloatBoost and AdaBoost. The performance is measured by the false-alarm error rate with the detection rate fixed at 99.5%. While a cascade of strong classifiers is needed to achieve a very low false alarm [19, 7], here we present the learning curves for the first strong classifier, composed of up to one thousand weak classifiers. This is because what we aim to evaluate here is the contrast between the FloatBoost and AdaBoost learning algorithms, rather than the complete system. The interested reader is referred to [7] for a complete system, which achieved a very low false-alarm rate with a detection rate of 95%. (A live demo of a multi-view face detection system, the first real-time system of the kind, is being submitted to the conference.)

Figure 5: Error rates of FloatBoost vs. AdaBoost for frontal face detection (training and test curves as a function of the number of weak classifiers).

The training and testing error curves for FloatBoost and AdaBoost are shown in Fig. 5, with the detection rate fixed at 99.5%.
The following conclusions can be made from these curves: (1) Given the same number of learned features or weak classifiers, FloatBoost always achieves lower training error and lower test error than AdaBoost. For example, on the test set, by combining 1000 weak classifiers, the false alarm of FloatBoost is 0.427 versus 0.485 for AdaBoost. (2) FloatBoost needs many fewer weak classifiers than AdaBoost in order to achieve the same false alarm. For example, the lowest test error for AdaBoost is 0.481 with 800 weak classifiers, whereas FloatBoost needs only 230 weak classifiers to achieve the same performance. This clearly demonstrates the strength of FloatBoost in learning to achieve a lower error rate.

6 Conclusion and Future Work

By incorporating the idea of Floating Search [11] into AdaBoost [3, 15], FloatBoost effectively improves the learning results. It needs fewer weak classifiers than AdaBoost to achieve a similar error rate, or achieves a lower error rate with the same number of weak classifiers. This performance improvement comes at the cost of longer training time, about 5 times longer for the experiments reported in this paper. The boosting algorithm may need substantial computation for training. Several methods can be used to make the training more efficient with little drop in training performance. Noticing that only examples with large weight values are influential, Friedman et al. [5] propose to select examples with large weights, i.e. those which in the past have been wrongly classified by the learned weak classifiers, for training the weak classifier in the next round. Top examples within a fraction of the total weight mass are used.

References

[1] L. Breiman. “Arcing classifiers”. The Annals of Statistics, 26(3):801–849, 1998.

[2] P. Buhlmann and B. Yu. “Invited discussion on ‘Additive logistic regression: a statistical view of boosting (Friedman, Hastie and Tibshirani)’”.
The Annals of Statistics, 28(2):377–386, April 2000. [3] Y. Freund and R. Schapire. “A decision-theoretic generalization of on-line learning and an application to boosting”. Journal of Computer and System Sciences, 55(1):119–139, Aug 1997. [4] J. Friedman. “Greedy function approximation: A gradient boosting machine”. The Annals of Statistics, 29(5), October 2001. [5] J. Friedman, T. Hastie, and R. Tibshirani. “Additive logistic regression: a statistical view of boosting”. The Annals of Statistics, 28(2):337–374, April 2000. [6] M. J. Kearns and U. Vazirani. An Introduction to Computational Learning Theory. MIT Press, Cambridge, MA, 1994. [7] S. Z. Li, L. Zhu, Z. Q. Zhang, A. Blake, H. Zhang, and H. Shum. “Statistical learning of multi-view face detection”. In Proceedings of the European Conference on Computer Vision, page ???, Copenhagen, Denmark, May 28 - June 2 2002. [8] L. Mason, J. Baxter, P. Bartlett, and M. Frean. Functional gradient techniques for combining hypotheses. In A. Smola, P. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 221–247. MIT Press, Cambridge, MA, 1999. [9] E. Osuna, R. Freund, and F. Girosi. “Training support vector machines: An application to face detection”. In CVPR, pages 130–136, 1997. [10] C. P. Papageorgiou, M. Oren, and T. Poggio. “A general framework for object detection”. In Proceedings of IEEE International Conference on Computer Vision, pages 555–562, Bombay, India, 1998. [11] P. Pudil, J. Novovicova, and J. Kittler. “Floating search methods in feature selection”. Pattern Recognition Letters, (11):1119–1125, 1994. [12] D. Roth, M. Yang, and N. Ahuja. “A snow-based face detector”. In Proceedings of Neural Information Processing Systems, 2000. [13] H. A. Rowley, S. Baluja, and T. Kanade. “Neural network-based face detection”. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1):23–28, 1998. [14] R. Schapire, Y. Freund, P. Bartlett, and W. S. Lee.
“Boosting the margin: A new explanation for the effectiveness of voting methods”. The Annals of Statistics, 26(5):1651–1686, October 1998. [15] R. E. Schapire and Y. Singer. “Improved boosting algorithms using confidence-rated predictions”. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 80–91, 1998. [16] K.-K. Sung and T. Poggio. “Example-based learning for view-based human face detection”. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1):39–51, 1998. [17] L. Valiant. “A theory of the learnable”. Communications of ACM, 27(11):1134–1142, 1984. [18] P. Viola and M. Jones. “Asymmetric AdaBoost and a detector cascade”. In Proceedings of Neural Information Processing Systems, Vancouver, Canada, December 2001. [19] P. Viola and M. Jones. “Rapid object detection using a boosted cascade of simple features”. In Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, Hawaii, December 12-14 2001. [20] P. Viola and M. Jones. “Robust real time object detection”. In IEEE ICCV Workshop on Statistical and Computational Theories of Vision, Vancouver, Canada, July 13 2001. [21] R. Zemel and T. Pitassi. “A gradient-based boosting algorithm for regression problems”. In Advances in Neural Information Processing Systems, volume 13, Cambridge, MA, 2001. MIT Press.
Adaptive Scaling for Feature Selection in SVMs

Yves Grandvalet, Heudiasyc, UMR CNRS 6599, Université de Technologie de Compiègne, Compiègne, France (Yves.Grandvalet@utc.fr)
Stéphane Canu, PSI, INSA de Rouen, St Etienne du Rouvray, France (Stephane.Canu@insa-rouen.fr)

Abstract

This paper introduces an algorithm for the automatic relevance determination of input variables in kernelized Support Vector Machines. Relevance is measured by scale factors defining the input-space metric, and feature selection is performed by assigning zero weights to irrelevant variables. The metric is automatically tuned by the minimization of the standard SVM empirical risk, where scale factors are added to the usual set of parameters defining the classifier. Feature selection is achieved by constraints encouraging the sparsity of scale factors. The resulting algorithm compares favorably to state-of-the-art feature selection procedures and demonstrates its effectiveness on a demanding facial expression recognition problem.

1 Introduction

In pattern recognition, the problem of selecting relevant variables is difficult. Optimal subset selection is attractive as it yields simple and interpretable models, but it is a combinatorial and acknowledgedly unstable procedure [2]. In some problems, it may be better to resort to stable procedures penalizing irrelevant variables. This paper introduces such a procedure applied to Support Vector Machines (SVM). The relevance of input features may be measured by continuous weights or scale factors, which define a diagonal metric in input space. Feature selection then consists in determining a sparse diagonal metric, and sparsity can be encouraged by constraining an appropriate norm on the scale factors. Our approach can be summarized as the setting of a global optimization problem pertaining to 1) the parameters of the SVM classifier, and 2) the parameters of the feature-space mapping defining the metric in input space.
As in standard SVMs, only two tunable hyper-parameters are to be set: the penalization of training errors, and the magnitude of the kernel bandwidths. In this formalism we derive an efficient algorithm to monitor slack variables when optimizing the metric. The resulting algorithm is fast and stable. After presenting previous approaches to hard and soft feature selection procedures in the context of SVMs, we present our algorithm. This exposition is followed by an experimental section illustrating its performance, and by concluding remarks.

2 Feature Selection via adaptive scaling

Scaling is a usual preprocessing step, which has important outcomes in many classification methods including SVM classifiers [9, 3]. It is defined by a linear transformation within the input space: $x \mapsto \Sigma x$, where $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_d)$ is a diagonal matrix of scale factors. Adaptive scaling consists in letting $\sigma$ be adapted during the estimation process, with the explicit aim of achieving a better recognition rate. For kernel classifiers, $\sigma$ is a set of hyper-parameters of the learning process. According to the structural risk minimization principle [8], $\sigma$ can be tuned in two ways:

1. estimate the parameters of the classifier by empirical risk minimization for several values of $\sigma$, to produce a structure of classifiers multi-indexed by $\sigma$. Select one element of the structure by finding the value of $\sigma$ minimizing some estimate of generalization error.

2. estimate the parameters of the classifier and the hyper-parameters $\sigma$ by empirical risk minimization, while a second-level hyper-parameter, say $\sigma_0$, constrains $\sigma$ in order to avoid overfitting. This procedure produces a structure of classifiers indexed by $\sigma_0$, whose value is computed by minimizing some estimate of generalization error.

The usual paradigm consists in computing the estimate of generalization error for regularly spaced hyper-parameter values and picking the best solution among all trials.
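To make the role of the scale factors concrete, here is a minimal sketch (not the authors' code; the function name and the Gaussian kernel form with squared scale factors are illustrative assumptions) showing that a zero scale factor makes a feature irrelevant to the kernel:

```python
import math

def scaled_gaussian_kernel(x, z, sigma):
    """Gaussian kernel evaluated on scaled inputs diag(sigma) x,
    i.e. k(x, z) = exp(-sum_k (sigma_k * (x_k - z_k))**2)."""
    return math.exp(-sum((s * (a - b)) ** 2 for s, a, b in zip(sigma, x, z)))

# The two points below differ only in their second coordinate.
k_full = scaled_gaussian_kernel([1.0, 2.0], [1.0, 5.0], sigma=[1.0, 1.0])
k_masked = scaled_gaussian_kernel([1.0, 2.0], [1.0, 5.0], sigma=[1.0, 0.0])
```

With the second scale factor set to zero, the kernel treats the two points as identical even though they differ in that coordinate, which is exactly how a sparse $\sigma$ performs feature selection.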
Hence, the first approach requires intensive computation, since the trials should be completed over a $d$-dimensional grid of $\sigma$ values. Several authors suggested addressing this problem by optimizing an estimate of generalization error with respect to the hyper-parameters. For SVM classifiers, Cristianini et al. [4] first proposed to apply an iterative optimization scheme to estimate a single kernel width hyper-parameter. Weston et al. [9] and Chapelle et al. [3] generalized this approach to multiple hyper-parameters in order to perform adaptive scaling and variable selection. The experimental results in [9, 3] show the benefits of this optimization. However, relying on the optimization of generalization error estimates over many hyper-parameters is hazardous. Once optimized, the unbiased estimates become down-biased, and the bounds provided by VC-theory usually hold only for kernels defined a priori (see the proviso on the radius/margin bound in [8]). Optimizing these criteria may thus result in overfitting. In the second solution considered here, the estimate of generalization error is minimized with respect to $\sigma_0$, a single (second-level) hyper-parameter, which constrains $\sigma$. The role of this constraint is twofold: control the complexity of the classifier, and encourage variable selection in input space. This approach is related to some successful soft-selection procedures, such as the lasso and the bridge [5] in the frequentist framework and Automatic Relevance Determination (ARD) [7] in the Bayesian framework. Note that this type of optimization procedure has been proposed for linear SVMs in both frequentist [1] and Bayesian [6] frameworks. Our method generalizes this approach to nonlinear SVMs.
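The sparsifying effect of the lasso-type constraints mentioned above can be illustrated with a small sketch (a generic illustration, not part of the paper's algorithm): Euclidean projection of a non-negative vector onto an $\ell_1$ ball sets small components exactly to zero, whereas an $\ell_2$ constraint would merely rescale all of them.

```python
def project_l1_ball(v, s):
    """Euclidean projection of a non-negative vector v onto
    {x : x >= 0, sum(x) <= s}, using the standard sort-based method."""
    if sum(v) <= s:
        return list(v)
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for j, uj in enumerate(u, start=1):
        css += uj
        t = (css - s) / j
        if uj - t > 0:       # largest index j with a positive residual
            theta = t
    return [max(x - theta, 0.0) for x in v]

sparse = project_l1_ball([3.0, 1.0, 0.1], s=2.0)  # small components vanish
```

Here the projection keeps only the dominant component, mimicking how a tight $\ell_1$ budget on the scale factors drives the weakest features to exactly zero.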
3 Algorithm

3.1 Support Vector Machines

The decision function provided by the SVM is $\mathrm{sign}(f(x))$, where $f$ is defined as

$f(x) = w^{\mathrm{T}} \phi_\sigma(x) + b, \qquad (1)$

and where the parameters $(w, b)$ are obtained by solving the following optimization problem:

$\min_{w, b, \xi} \; \tfrac{1}{2}\, w^{\mathrm{T}} w + C \sum_i \xi_i \quad \text{subject to} \quad y_i \big( w^{\mathrm{T}} \phi_\sigma(x_i) + b \big) \ge 1 - \xi_i, \;\; \xi_i \ge 0, \qquad (2)$

with $\phi_\sigma$ defined as $\phi_\sigma(x) = \phi(\Sigma x)$. In this problem setting, $C$ and the parameters of the feature space mapping (typically a kernel bandwidth) are tunable hyper-parameters which need to be determined by the user.

3.2 A global optimization problem

In [9, 3], adaptive scaling is performed by iteratively finding the parameters $(w, b)$ of the SVM classifier for a fixed value of $\sigma$ and minimizing a bound on the estimate of generalization error with respect to the hyper-parameters $\sigma = (\sigma_1, \ldots, \sigma_d)$. That algorithm minimizes 1) the SVM empirical criterion with respect to parameters and 2) an estimate of generalization error with respect to hyper-parameters. In the present approach, we avoid the enlargement of the set of hyper-parameters by letting $\sigma$ be standard parameters of the classifier. Complexity is controlled by $C$ and by constraining the magnitude of $\sigma$. The latter defines the single hyper-parameter $\sigma_0$ of the learning process related to scaling variables. The learning criterion is defined as follows:

$\min_{w, b, \xi, \sigma} \; \tfrac{1}{2}\, w^{\mathrm{T}} w + C \sum_i \xi_i \quad \text{subject to} \quad y_i \big( w^{\mathrm{T}} \phi_\sigma(x_i) + b \big) \ge 1 - \xi_i, \;\; \xi_i \ge 0, \;\; \sum_k \sigma_k^p \le \sigma_0^p, \;\; \sigma_k \ge 0. \qquad (3)$

Like in standard SVM classification, the minimization of an estimate of generalization error is postponed to a later step, which consists in picking the best solution among all trials on the two-dimensional grid of hyper-parameters $(C, \sigma_0)$. In (3), the constraint on $\sigma$ should favor sparse solutions. To allow $\sigma_k$ to go to zero, $p$ should be positive. To encourage sparsity, zeroing a small $\sigma_k$ should allow a high increase of the remaining scale factors, hence $p$ should be small. In the limit of $p \to 0$
, the constraint counts the number of non-zero scale parameters, resulting in a hard selection procedure. This choice might seem appropriate for our purpose, but it amounts to attempting to solve a highly non-convex optimization problem, where the number of local minima grows exponentially with the input dimension $d$. To avoid this problem, we suggest using $p = 1$, which is the smallest value for which the problem is convex with the linear mapping $\phi(x) = x$. Indeed, for linear kernels, the constraint on $\sigma$ amounts to minimizing the standard SVM criterion where the penalization of the $\ell_2$ norm of $w$ is replaced by a penalization of its $\ell_1$ norm. Hence, setting $p = 1$ provides the solution of the $\ell_1$ SVM classifier described in [1]. For non-linear kernels, however, the two solutions differ notably, since the present algorithm modifies the metric in input space, while the $\ell_1$ SVM classifier modifies the metric in feature space. Finally, note that uniqueness can be guaranteed for $p = 1$ and Gaussian kernels with large bandwidths.

3.3 An alternated optimization scheme

Problem (3) is complex; we propose to iteratively solve a series of simpler problems. The criterion is first optimized with respect to the parameters $(w, b)$ for a fixed mapping $\phi_\sigma$ (the standard SVM problem). Then, the parameters of the feature space mapping are optimized while some characteristics of the classifier are kept fixed: at step $t$, starting from a given $\sigma^{(t)}$, the optimal $(w^{(t)}, b^{(t)})$ are computed; then $\sigma^{(t+1)}$ is determined by a descent algorithm. In this scheme, $(w^{(t)}, b^{(t)})$ are computed by solving the standard quadratic optimization problem (2). Our implementation, based on an interior point method, will not be detailed here. Several SVM retrainings are necessary, but they are faster than the usual training since the algorithm is initialized appropriately with the solutions of the preceding round. For solving the minimization problem with respect to $\sigma$, we use a reduced conjugate gradient technique.
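The flavor of such an alternated scheme can be mimicked on a toy ridge-regression objective (everything below — data, objective, step size — is a made-up illustration, not the paper's SVM implementation): an exact inner fit of the weight for fixed scales alternates with a projected gradient step on the scales, kept on the simplex $\sigma_1 + \sigma_2 = 1$, $\sigma_k \ge 0$.

```python
# Toy alternated optimization: the second feature is pure noise.
X = [(1.0, 0.3), (2.0, -0.4), (3.0, 0.8)]
Y = [1.0, 2.0, 3.0]

def cost(w, s):
    return sum((y - w * (s[0]*a + s[1]*b))**2 for (a, b), y in zip(X, Y)) + w*w

def best_w(s):                      # inner problem: closed-form ridge fit
    z = [s[0]*a + s[1]*b for a, b in X]
    return sum(y*zi for y, zi in zip(Y, z)) / (sum(zi*zi for zi in z) + 1.0)

s, lr, h = [0.5, 0.5], 0.02, 1e-5
c0 = cost(best_w(s), s)
for _ in range(200):
    w = best_w(s)                   # 1) fit the weight for fixed scales
    g = []
    for k in range(2):              # 2) finite-difference gradient in s
        sp, sm = list(s), list(s)
        sp[k] += h
        sm[k] -= h
        g.append((cost(w, sp) - cost(w, sm)) / (2*h))
    s = [max(sk - lr*gk, 0.0) for sk, gk in zip(s, g)]
    total = sum(s)                  # 3) renormalize onto the simplex
    s = [sk / total for sk in s]
c_final = cost(best_w(s), s)
```

Since only the first feature carries signal, the scale of the noisy feature shrinks while the overall cost decreases, which is the qualitative behavior the alternated scheme aims for.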
The optimization problem was simplified by assuming that some of the other variables are fixed. We tried several versions: 1) $w$ fixed; 2) Lagrange multipliers fixed; 3) set of support vectors fixed. For the three versions, the optimal value of $b$, or at least the optimal value of the slack variables, can be obtained by solving a linear program, whose optimum is computed directly (in a single iteration). We do not detail our first version here, since the last two performed much better. The main steps of the last two versions are sketched below.

3.4 Scaling parameters update

Starting from an initial solution $(\sigma, w, b)$, our goal is to update $\sigma$ by solving a simple intermediate problem providing an improved solution to the global problem (3). We first assume that the Lagrange multipliers $\alpha$ defining $w$ are not affected by updates of $\sigma$, so that $w$ is defined as $w = \sum_i \alpha_i y_i \phi_\sigma(x_i)$. Regarding problem (3), this $w$ is sub-optimal when $\sigma$ varies; nevertheless it is guaranteed to be an admissible solution. Hence, we minimize an upper bound of the original primal cost, which guarantees that any admissible update (providing a decrease of the cost) of the intermediate problem will provide a decrease of the cost of the original problem. The intermediate optimization problem is stated as follows:

$\min_{\sigma, b, \xi} \; \tfrac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j\, k_\sigma(x_i, x_j) + C \sum_i \xi_i \quad \text{subject to} \quad y_i \Big( \sum_j \alpha_j y_j\, k_\sigma(x_j, x_i) + b \Big) \ge 1 - \xi_i, \;\; \xi_i \ge 0, \;\; \sum_k \sigma_k \le \sigma_0, \;\; \sigma_k \ge 0. \qquad (4)$

Solving this problem is still difficult, since the cost is a complex non-linear function of the scale factors. Hence, as stated above, $\sigma$ will be updated by a descent algorithm. The latter requires the evaluation of the cost and of its gradient with respect to $\sigma$; in particular, this means that we should be able to compute the slack variables $\xi_i$ and their derivatives with respect to $\sigma$ for any value of $\sigma$. For given values of $\alpha$ and $\sigma$, $\xi$ is the solution of the following linear program:

$\min_{b, \xi} \; \sum_i \xi_i \quad \text{subject to} \quad y_i \Big( \sum_j \alpha_j y_j\, k_\sigma(x_j, x_i) + b \Big) \ge 1 - \xi_i, \;\; \xi_i \ge 0, \qquad (5)$
whose dual formulation is also a linear program, with one multiplier per example (6). This linear problem is solved directly by the following algorithm: 1) sort the values $y_i \sum_j \alpha_j y_j\, k_\sigma(x_j, x_i)$ in descending order, for all positive examples on the one side and for all negative examples on the other side; 2) compute the pairwise sums of the sorted values; 3) set $\xi_i$ for all positive and negative examples whose sum is positive. With $\xi$ determined, $\sum_i \xi_i$ and its derivative with respect to $\sigma$ are easily computed. The parameters $\sigma$ are then updated by a reduced conjugate gradient technique, i.e. a conjugate gradient algorithm ensuring that the set of constraints on $\sigma$ is always verified.

3.5 Updating Lagrange multipliers

Assume now that only the support vectors remain fixed while optimizing $\sigma$. This assumption is used to derive a rule for updating, at reasonable computing cost, the Lagrange multipliers together with $\sigma$, by computing $\partial \alpha / \partial \sigma$. At $(\sigma, w, b)$, the following holds [3]:

1. for support vectors of the first category, $y_i \Big( \sum_j \alpha_j y_j\, k_\sigma(x_j, x_i) + b \Big) = 1$; (7)
2. for support vectors of the second category (such that $\xi_i > 0$), $\alpha_i = C$.

From these equations, and the assumption that support vectors remain support vectors (and that their category does not change), one derives a system of linear equations defining the derivatives of $\alpha$ and $b$ with respect to $\sigma$ [3]:

1. for support vectors of the first category, $\sum_j \frac{\partial \alpha_j}{\partial \sigma}\, y_j\, k_\sigma(x_j, x_i) + \sum_j \alpha_j y_j\, \frac{\partial k_\sigma(x_j, x_i)}{\partial \sigma} + \frac{\partial b}{\partial \sigma} = 0$; (8)
2. for support vectors of the second category, $\frac{\partial \alpha_i}{\partial \sigma} = 0$;
3. finally, the system is completed by stating that the Lagrange multipliers should obey the constraint $\sum_j \frac{\partial \alpha_j}{\partial \sigma}\, y_j = 0$. (9)

The value of $\alpha$ is updated from these equations, and the step size is limited to ensure that $0 \le \alpha_i \le C$ holds for support vectors of the first category. Hence, in this version, $w$ is also an admissible sub-optimal solution regarding problem (3).

4 Experiments

In the experiments reported below, we used $p = 1$ for the constraint on $\sigma$ in (3).
The scale parameters were optimized with the last version, where the set of support vectors is assumed to be fixed. Finally, the hyper-parameters $(C, \sigma_0)$ were chosen using the span bound [3]. Although the value of the bound itself was not a faithful estimate of test error, the average loss induced by using the minimizer of these bounds was quite small.

4.1 Toy experiment

In [9], Weston et al. compared two versions of their feature selection algorithm to standard SVMs and filter methods (i.e. preprocessing methods selecting features based on Pearson correlation coefficients, the Fisher criterion score, or the Kolmogorov-Smirnov statistic). Their artificial data benchmarks provide a basis for comparing our approach with theirs, which is based on the minimization of error bounds. Two types of distributions are provided, whose detailed characteristics are not given here. In the linear problem, 6 dimensions out of 202 are relevant. In the nonlinear problem, two features out of 52 are relevant. For each distribution, 30 experiments are conducted, and the average test recognition rate measures the performance of each method. For both problems, standard SVMs achieve a 50% error rate in the considered range of training set sizes. Our results are shown in Figure 1.

Figure 1: Results obtained on the benchmarks of [9]. Left: linear problem; right: nonlinear problem. The number of training examples is represented on the x-axis, and the average test error rate on the y-axis.

Our test performances are qualitatively similar to the ones obtained by gradient descent on the radius/margin bound in [9], which are only improved by the forward selection algorithm minimizing the span bound. Note however that Weston et al.'s results are obtained after the correct number of features was specified by the user, whereas the present results were obtained fully automatically.
Knowing the number of features that should be selected by the algorithm is somewhat similar to selecting the optimal value of the parameter $\sigma_0$ for each sample size. In the non-linear problem, an average of 26.5 features are selected for the smaller training sets, and an average of 6.6 features for the larger ones. These figures show that although our feature selection scheme is effective, it should be more stringent: a smaller value of $\sigma_0$ would be more appropriate for this type of problem. The two relevant variables are selected in an increasing fraction of cases as the sample size grows, and for the two largest sample sizes they are even always ranked first and second. Regarding training times, the optimization of $\sigma$ required an average of over 100 times more computing time than standard SVM fitting for the linear problem, and 40 times for the nonlinear problem. These increases scale less than linearly with the number of variables, and are certainly yet to be improved.

4.2 Expression recognition

We also tested our algorithm on a more demanding task, to test its ability to handle a large number of features. The considered problem consists in recognizing the happiness expression among the five other facial expressions corresponding to universal emotions (disgust, sadness, fear, anger, and surprise). The data sets are made of gray level images of frontal faces, with standardized positions of eyes, nose and mouth. The training and test sets each comprise positive and negative examples. We used the raw pixel representation of the images, resulting in 4200 highly correlated features. For this task, the accuracy of standard SVMs is 92.6% (11 test errors). The recognition rate is not significantly affected by our feature selection scheme (10 errors), but more than 1300 pixels are considered to be completely irrelevant at the end of the iterative procedure (estimating $\sigma$ required about 80 times more computing time than the standard SVM).
This selection brings some important clues for building relevant attributes for the facial expression recognition task. Figure 2 represents the scaling factors $\sigma$, where black is zero and white represents the highest value. We see that, according to the classifier, the relevant areas for recognizing the happiness expression are mainly in the mouth area, especially the mouth wrinkles, and to a lesser extent in the white of the eyes (which detects open eyes) and the outer eyebrows. On the right-hand side of this figure, we display masked support faces, i.e. support faces scaled by the expression mask. Although many important features regarding the identity of people are lost, the expression is still visible on these faces. Areas irrelevant to the recognition task (forehead, nose, and upper cheeks) have been erased or softened by the expression mask.

5 Conclusion

We have introduced a method to perform automatic relevance determination and feature selection in nonlinear SVMs. Our approach considers that the metric in input space defines a set of parameters of the SVM classifier. The update of the scale factors is performed by iteratively minimizing an approximation of the SVM cost, which is efficiently minimized with respect to the slack variables when the metric varies. The approximation of the cost function is tight enough to allow large updates of the metric when necessary. Furthermore, because at each step our algorithm guarantees that the global cost decreases, it is stable.

Figure 2: Left: expression mask of happiness provided by the scaling factors $\sigma$. Right, top row: the two positive masked support faces; right, bottom row: four negative masked support faces.

Preliminary experimental results show that the method provides sensible results in a reasonable time, even in very high dimensional spaces, as illustrated on a facial expression recognition task. In terms of test recognition rates, our method is comparable with [9, 3].
Further comparisons are still needed to demonstrate the practical merits of each paradigm. Finally, it may also be beneficial to mix the two approaches: the method of Cristianini et al. [4] could be used to determine $C$ and $\sigma_0$. The resulting algorithm would differ from [9, 3], since the relative relevance of each feature (as measured by $\sigma$) would be estimated by empirical risk minimization, instead of being driven by an estimate of generalization error.

References

[1] P. S. Bradley and O. L. Mangasarian. Feature selection via concave minimization and support vector machines. In Proc. 15th International Conf. on Machine Learning, pages 82–90. Morgan Kaufmann, San Francisco, CA, 1998.
[2] L. Breiman. Heuristics of instability and stabilization in model selection. The Annals of Statistics, 24(6):2350–2383, 1996.
[3] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee. Choosing multiple parameters for support vector machines. Machine Learning, 46(1):131–159, 2002.
[4] N. Cristianini, C. Campbell, and J. Shawe-Taylor. Dynamically adapting kernels in support vector machines. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11. MIT Press, 1999.
[5] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics. Springer, 2001.
[6] T. Jebara and T. Jaakkola. Feature selection and dualities in maximum entropy discrimination. In Uncertainty in Artificial Intelligence, 2000.
[7] R. M. Neal. Bayesian Learning for Neural Networks, volume 118 of Lecture Notes in Statistics. Springer, 1996.
[8] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer Series in Statistics. Springer, 1995.
[9] J. Weston, S. Mukherjee, O. Chapelle, M. Pontil, T. Poggio, and V. Vapnik. Feature selection for SVMs. In Advances in Neural Information Processing Systems 13. MIT Press, 2000.
Adaptive Classification by Variational Kalman Filtering Peter Sykacek Department of Engineering Science University of Oxford Oxford, OX1 3PJ, UK psyk@robots.ox.ac.uk Stephen Roberts Department of Engineering Science University of Oxford Oxford, OX1 3PJ, UK sjrob@robots.ox.ac.uk Abstract We propose in this paper a probabilistic approach for adaptive inference of generalized nonlinear classification that combines the computational advantage of a parametric solution with the flexibility of sequential sampling techniques. We regard the parameters of the classifier as latent states in a first order Markov process and propose an algorithm which can be regarded as a variational generalization of standard Kalman filtering. The variational Kalman filter is based on two novel lower bounds that enable us to use a non-degenerate distribution over the adaptation rate. An extensive empirical evaluation demonstrates that the proposed method is capable of inferring competitive classifiers both in stationary and non-stationary environments. Although we focus on classification, the algorithm is easily extended to other generalized nonlinear models. 1 Introduction The demand for adaptive learning methods, e.g. for use in brain computer interfaces (BCIs) [15], has recently triggered a considerable interest in such algorithms. We may approach adaptive learning with algorithms that were designed for stationary environments and use learning rates to make these methods adaptive. These approaches can be traced back to early work on learning algorithms (e.g. [1]). A more recent account of this approach is [17], which combines the probabilistic method of sequential variational inference ([9]) with a forgetting factor to obtain an adaptive learning method. Probabilistic or Bayesian methods also allow for a completely different interpretation of adaptive learning. We may regard the model coefficients as latent (i.e. unobserved) states of a first order Markov process:
$w_n = w_{n-1} + v_n, \qquad v_n \sim \mathcal{N}(0, \lambda^{-1} I). \qquad (1)$

The posterior distribution, $p(w_n \mid D_n)$, at state $n$ summarizes all information obtained about the model. This posterior and the conditional distribution, $p(w_{n+1} \mid w_n, \lambda)$, represent the prior for the following state. The conditional distribution can be thought of as additive process or state noise with precision $\lambda$. Predictions are obtained through a probabilistic observation model, $p(y_n \mid x_n, w_n)$. Using this model, we obtain an appropriate adaptation rate by hierarchical Bayesian inference of the process noise precision $\lambda$. Equation (1) suggests that we may interpret adaptive Bayesian inference as a generalization of the well known Kalman filter ([12]). This view of adaptive learning has been used by [6], who use extended Kalman filtering to obtain a Laplace approximation of the posterior over $w_n$, and maximum likelihood II ([3]) for inference of the adaptation rate. Another generalization of Kalman filtering is the recently quite popular particle filter (e.g. [7]). Being Monte Carlo methods, particle filters have, over Laplace approximations, the advantage of much greater flexibility. This comes, however, at the expense of a higher representational and computational complexity. To combine the flexibility of particle filtering with the computational advantage of parametric methods, we propose a variational approximation (e.g. [11], [2] and [8]) for inference of the Markov process in Equation (1). Unlike maximum likelihood II, the variational Kalman filter allows us to have a non-degenerate distribution over the process noise precision. We derive in this paper a variational Kalman filter classifier and show with an extensive empirical evaluation that the resulting classifiers obtain excellent generalization accuracies both in stationary and non-stationary domains.

2 Methods

2.1 A generalized nonlinear classifier

Classification is a prediction problem, where some regressor, $x_n$, predicts the expectation of a response variable $y_n$.
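For a single Gaussian weight, the Markov model of Equation (1) reduces to the textbook Kalman filter predict/update cycle (a generic sketch with made-up numbers, not the variational algorithm derived below):

```python
def kalman_step(m, v, y, lam, r):
    """One predict/update cycle for a scalar random-walk state.
    m, v : posterior mean and variance at step n-1
    lam  : precision of the state noise (variance 1/lam per step)
    y, r : observation and observation-noise variance
    """
    v_pred = v + 1.0 / lam        # predict: w_n = w_{n-1} + noise
    k = v_pred / (v_pred + r)     # Kalman gain
    m_new = m + k * (y - m)       # correct with the new observation
    v_new = (1.0 - k) * v_pred
    return m_new, v_new

m, v = kalman_step(0.0, 1.0, y=2.0, lam=1.0, r=1.0)
```

A large state-noise precision $\lambda$ makes the filter nearly static, which is how the static baseline of Section 3 is recovered by letting $\lambda \to \infty$.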
Since a categorical polytomous solution is easily recovered from dichotomous solutions ([16], pages 44-45), we restrict all further discussions to dichotomous classification using binary responses. We thus have only one degree of freedom and predict the binary probability, $P(y_n = 1 \mid x_n, w_n)$, which depends on the model parameters $w_n$. To obtain a flexible discriminant, we use a generalized nonlinear model, i.e. a radial basis function (RBF) network ([14] and [5]), with logistic output transformation (Equation (3)):

$a_n = w_n^{\mathrm{T}} \phi_\mu(x_n), \qquad (2)$
$P(y_n = 1 \mid x_n, w_n) = \frac{1}{1 + \exp(-a_n)}. \qquad (3)$

The classifier has a nonlinear feature space $\phi_\mu(x_n)$, which for reasons of adaptivity depends on the basis centers $\mu$, and a linear mapping into the latent activation $a_n$. We allow for Gaussian basis functions, i.e. $\phi_k(x) = \exp(-\kappa \lVert x - \mu_k \rVert^2)$, or thin plate splines, i.e. $\phi_k(x) = \lVert x - \mu_k \rVert^2 \log \lVert x - \mu_k \rVert$. Both basis functions are parameterized by their center locations $\mu_k$. Since we want to have a simple unimodal posterior over model parameters, we update the coefficients of the basis set $\mu$ randomly according to a Metropolis Hastings kernel ([13]) and solve for the conditional posterior over $w_n$ analytically.

2.2 The variational Kalman filter

In order to ease discussion of adaptive inference, we illustrate the dependencies implied by Equation (1) in Figure 1 as a directed acyclic graph (DAG). In accordance with Kalman filtering, we assume a Gaussian posterior at time $n-1$ with mean $m_{n-1}$ and precision $\Lambda_{n-1}$, and zero mean Gaussian state noise with isotropic precision $\lambda$. Inference of $\lambda$ is based on a "flat" proper Gamma prior specified by parameters $\alpha$ and $\beta$. In order to obtain reasonable posteriors over $\lambda$, we follow [10] and assume constant adaptation within a window of size $W$. The proposed variational Bayesian approach ignores the anti-causal information flow and is thus based on maximizing a lower bound on the logarithmic model evidence of a windowed Kalman filter.
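The two basis function choices can be written down directly (a minimal sketch; the precision $\kappa$ and the centers are illustrative assumptions):

```python
import math

def gaussian_basis(x, mu, kappa=1.0):
    """phi_k(x) = exp(-kappa * ||x - mu_k||^2); equals 1 at the center."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, mu))
    return math.exp(-kappa * d2)

def thin_plate_basis(x, mu):
    """phi_k(x) = ||x - mu_k||^2 * log ||x - mu_k||; 0 at the center
    (the limit r^2 log r -> 0 as r -> 0) and at unit distance."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, mu))
    if d2 == 0.0:
        return 0.0
    return d2 * math.log(math.sqrt(d2))
```

Both are radial: they depend on the input only through the distance to the center, which is what makes a Metropolis Hastings move of a center $\mu_k$ a simple way to track drifting inputs.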
Following these assumptions, we obtain the expression for the log evidence in Equation (4) by substituting the generalized nonlinear model (Equations (2) and (3)) into the formulation of adaptive Bayesian learning (1). We then have to make all distributions explicit and integrate over all model coefficients, which is done analytically over all prior states $w_{n-1}$: the log evidence of the window is the logarithm of the integral, over $w_n$ and $\lambda$, of the product of the $W$ logistic likelihoods $P(y_t \mid x_t, w_n)$, the Gaussian transition term obtained by marginalizing the prior state, and the Gamma prior on $\lambda$ with parameters $\alpha$ and $\beta$.

Figure 1: This figure illustrates adaptive inference as a directed acyclic graph. The coefficients of the classifier, $w_n$, are assumed to be Gaussian, following a first order Markov process with state noise precision $\lambda$. The hyper-parameter $\lambda$ is given a Gamma prior specified by parameters $\alpha$ and $\beta$.

The structure of Equation (4) suggests that the approximate posterior $q(w_n)$ can be chosen to be Gaussian and the approximate posterior $q(\lambda)$ can be chosen to be a Gamma distribution. These functional forms do not, however, simply result from a mean field approximation of the posterior as $q(w_n)\, q(\lambda)$. In order to obtain the required conjugacy, we have to use lower bounds for the probability of the target label, $P(y_t \mid x_t, w_n)$, and for the two terms of the Gaussian transition density that are non-conjugate in $\lambda$.

2.3 Variational lower bounds

In order to achieve conjugacy with a Gaussian distribution, we use the lower bound for the logistic sigmoid proposed in [9]:

$\log P(y_t \mid a_t) \ge \log g(\xi_t) + \frac{y_t a_t - \xi_t}{2} - \eta(\xi_t)\,(a_t^2 - \xi_t^2), \qquad \eta(\xi) = \frac{\tanh(\xi/2)}{4 \xi}, \qquad (5)$

in which $g$ is the logistic sigmoid, $y_t \in \{-1, 1\}$, and the $\xi_t$ are the variational parameters of a locally linear expansion in $a_t^2$ of every prediction contained in the window. In order to get expressions that are conjugate with a Gamma distribution over the process noise precision $\lambda$, we derive two novel lower bounds.
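The sigmoid bound of Equation (5) is the Jaakkola-Jordan bound; a small numeric check (a sketch, with the label folded into the activation) confirms that it is tight at $\xi = a$ and a valid lower bound elsewhere:

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def eta(xi):
    return math.tanh(xi / 2.0) / (4.0 * xi)

def sigmoid_lower_bound(a, xi):
    """sigma(a) >= sigma(xi) * exp((a - xi)/2 - eta(xi) * (a^2 - xi^2))."""
    return sigmoid(xi) * math.exp((a - xi) / 2.0 - eta(xi) * (a * a - xi * xi))

tight = sigmoid_lower_bound(1.0, 1.0)   # equals sigma(1) when xi = a
loose = sigmoid_lower_bound(1.0, 2.5)   # still a valid lower bound
```

Because the bound is quadratic in $a$ (and hence in $w$), taking its expectation under a Gaussian keeps the posterior updates in closed form, which is the whole point of the construction.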
Assuming a $d$-dimensional parameter vector $w_n$, we derive lower bounds for the log-normalizer and for the quadratic form of the Gaussian transition density (Equations (6) and (7)). Both are expressions in $\lambda$ and $\log \lambda$ and are thus conjugate with a Gamma distribution. Both bounds are expanded at the identical point $\bar\lambda$, which is justified since both are linear expansions in $\lambda$, and maximization must thus lead to identical values. Using these lower bounds together with a mean field assumption, $q(w_n, \lambda) \approx q(w_n)\, q(\lambda)$, and the usual Jensen inequalities, we immediately obtain a negative free energy as a lower bound of the log evidence in Equation (4). For reasons of brevity we do not include this expression here.

2.4 Parameter updates

In order to distinguish between the parameters of the prior and posterior distributions, we henceforth denote the latter with a superscript $*$. Inference requires maximizing the negative free energy with respect to all variational parameters. These are the coefficients of the Gaussian distribution $q(w_n)$, the $W$ parameters $\xi_t$ in the bounds of the logistic sigmoid, the coefficients of the Gamma posterior over the noise process precision, $q(\lambda)$, and the parameter $\bar\lambda$ in the Gamma conjugacy bounds. Maximization with respect to $q(w_n)$ results in a Gaussian distribution with precision $\Lambda_n^*$ and mean $m_n^*$:

$\Lambda_n^* = \bar\Lambda + \sum_t 2\,\eta(\xi_t)\, \phi(x_t)\phi(x_t)^{\mathrm{T}}, \qquad m_n^* = \Lambda_n^{*-1} \Big( \bar\Lambda\, m_{n-1} + \sum_t \tfrac{y_t}{2}\, \phi(x_t) \Big), \qquad (8)$

where $\bar\Lambda$ is the effective prior precision of the current window, obtained from $\Lambda_{n-1}$ and the bounds on the state noise. Maximization with respect to $q(\lambda)$ results in a Gamma distribution with location parameter $\alpha^*$ and scale parameter $\beta^*$, where $\alpha^*$ augments $\alpha$ in proportion to the dimension of $w_n$, and $\beta^*$ augments $\beta$ by half the expected squared step between consecutive states (Equation (9)). According to [9], maximization with respect to $\xi_t$ leads to
$\xi_t^2 = \phi(x_t)^{\mathrm{T}} \big( m_n^* m_n^{*\mathrm{T}} + \Lambda_n^{*-1} \big)\, \phi(x_t). \qquad (10)$

Maximization with respect to the variational parameter $\bar\lambda$ leads, for both bounds, to

$\bar\lambda = \alpha^* / \beta^*. \qquad (11)$

In order to allow the basis mapping in Equation (2) to track modifications in the input data distributions, we propose the perturbation $\mu' = \mu + \epsilon$, where $\epsilon$ is drawn from a Gaussian, and accept the proposal with probability

$\min\left(1, \frac{p(D_n \mid \mu')}{p(D_n \mid \mu)}\right), \qquad (12)$

where each marginal likelihood is approximated by the exponentiated negative free energy obtained with the corresponding basis set. If we assume that the negative free energy describes the log evidence exactly, this is a Metropolis Hastings kernel ([13]) that leaves the marginal posterior over $\mu$ invariant. We could thus represent the marginal posterior with random samples. For computational reasons, however, we use the scheme only for random updates of $\mu$. An algorithm for parameter inference will first propose a random update of $\mu$ and then iterate maximizations according to Equations (8) to (11) until we observe convergence of the negative free energy. Alternatively, we can use a fixed number of iterations, for which our experiments suggest that a few iterations suffice.

2.5 Model predictions

Since we do not know the response when predicting, we have to sum the negative free energy over $y_n$. This results in a new expression for $m_n^*$, which we obtain from Equation (8) by dropping the term that depends on $y_n$. Due to the dependency on $\xi_t$, maximization with respect to $q(w_n)$ has to alternate with maximization with respect to $\xi_t$, the latter again being done according to Equation (10). Having reached convergence, we obtain an approximate log probability for $y_n$ by taking the expectation of the bound of the sigmoid in Equation (5) with respect to $q(w_n)$ and maximizing with respect to $\xi_n$.
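The Metropolis Hastings step for the basis centers needs only the ratio of (approximate) evidences; with a symmetric Gaussian proposal, the acceptance rule of Equation (12) reduces to the following sketch (generic, with a toy log evidence standing in for the negative free energy):

```python
import math
import random

def mh_accept(log_ev_new, log_ev_old, rng=random):
    """Accept a symmetric proposal with probability
    min(1, exp(log_ev_new - log_ev_old)); returns (accepted, probability)."""
    a = math.exp(min(0.0, log_ev_new - log_ev_old))
    return rng.random() < a, a

# A proposal with higher approximate evidence is always accepted:
accepted, prob = mh_accept(-10.0, -12.0)
```

Working with log evidences, and clamping the exponent at zero, avoids overflow and directly implements the min(1, ratio) rule.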
$\log \tilde{P}(y_n \mid x_n) = \Big\langle \log g(\xi_n) + \tfrac{y_n a_n - \xi_n}{2} - \eta(\xi_n)\,(a_n^2 - \xi_n^2) \Big\rangle_{q(w_n)}. \qquad (13)$

Exponentiating the approximate log probabilities results in a sub-probability measure over $y_n$, with $\sum_y \tilde{P}(y \mid x_n) \le 1$; the difference $1 - \sum_y \tilde{P}(y \mid x_n)$ represents an additional uncertainty about $y_n$, introduced by the approximation of the logistic sigmoid.

3 Experiments

All experiments reported in this section use a model with Gaussian basis functions of fixed precision $\kappa$. For updating the basis centers, we use zero mean Gaussian random variates. The initial prior over parameters is a zero mean Gaussian with isotropic precision. For maximizing the negative free energy we use a fixed number of iterations. The first experiment aims at obtaining a parametrization for $\alpha$, $\beta$ and the window length $W$ that allows us to make inferences of the process noise $\lambda$ that are insensitive to the actual "drift" of the problem. We use for that purpose the test set from the synthetic problem in [16]¹. The samples of this balanced problem are reshuffled such that consecutive class labels differ. In order to get a non-stationarity, we swap the class labels in the second half of the data. The results shown in Figure 2 are obtained with a flat Gamma prior over $\lambda$. We propose these settings together with a moderate window size $W$, because this is a good compromise between fast tracking and high stationary accuracy. We are now ready to compare the algorithm with an equivalent static classifier, using several public data sets and classification of single trial EEG which, due to learning effects in humans, is known to be non-stationary.

¹This data set can be obtained at http://www.stats.ox.ac.uk/pub/PRNN/.

In order to avoid that the model has an influence on
15 window sz. 20 Figure 2: Results obtained on Ripleys’ synthetic data set with swapped class labels after sample 500. The top graph shows the expected value of the precision of the noise process,     !      for different window sizes (i.e. for different numbers of samples used for infering the adaptation rate). The bottom graph shows the instantaneous generalization accuracy estimated in a window of size * . The prior over  is a Gamma distribution with expectation   and variance  . the results, we compare the generalization accuracy of the variational Kalman filter classifier (vkf) with an identical non-adaptive model. Inference of the static model is based on sequential variational learning ([9]). We obtain sequential variational inference (svi) from our approach by setting  in Equation (1) to infinity. The comparisons are evaluated for significance using McNemar’s test, a method for analyzing paired results that is suggested in [16]. The comparison uses vehicle data2, satellite image data, Johns Hopkins University ionosphere data, balance scale weight and distance data and the wine recognition database, all taken from the StatLog database which is available at the UCI repository ([4]). The satellite image data set is used as is provided with 4435 samples in the training and 2000 samples in the test set. Vehicle data are merged such that we have 500 samples in the training and 252 in the test set. The other data were split into two equal sized data sets, which were both used as training and independent test sets respectively. We also use the pima diabetes data set from [16]3. Table 1 compares the generalization accuracies (in fractions) obtained with the variational Kalman filter with generalization accuracies obtained with sequential variational inference. 
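The McNemar test used for these comparisons looks only at the discordant test cases (those that exactly one of the two classifiers gets right); a generic sketch of the continuity-corrected normal-approximation variant (our own illustration, not the authors' code):

```python
from math import erf, sqrt

def mcnemar_p(n01, n10):
    """Two-sided McNemar test on discordant pair counts, using the
    normal approximation with continuity correction.
    n01: cases only classifier A got right; n10: only classifier B."""
    if n01 + n10 == 0:
        return 1.0
    z = (abs(n01 - n10) - 1) / sqrt(n01 + n10)
    phi = 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF at z
    return min(1.0, 2 * (1 - phi))

p_significant = mcnemar_p(30, 12)   # clearly asymmetric disagreements
p_tied = mcnemar_p(10, 10)          # symmetric disagreements
```

Concordant cases (both right or both wrong) carry no information about which classifier is better, which is why they do not enter the statistic.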
The probability p(H0) of the null hypothesis that both classifiers perform equally well suggests that only the differences for the balance-scale and the Pima diabetes data sets are significant, with either method being better in one case. Since the generalization accuracies of both methods are almost identical, we conclude that, if applied to stationary problems, we may expect the variational Kalman filter to obtain generalization accuracies that are similar to those of static methods. 2Vehicle data was donated to StatLog by the Turing Institute, Glasgow, Scotland. 3This data set can be obtained at http://www.stats.ox.ac.uk/pub/PRNN/.

Data set            vkf    svi    p(H0)
J.H.U. ionosphere   0.87   0.88   0.41
Satellite image     0.81   0.81   0.29
Balance scale       0.89   0.87   0.03
Pima diabetes       0.76   0.80   0.03
Vehicle             0.77   0.77   0.42
Wine                0.97   0.95   0.25

Table 1: Generalization accuracies obtained with the variational Kalman filter (vkf) and sequential variational inference (svi).

Cognitive task            vkf    svi    p(H0)
rest/move, no feedback    0.69   0.61   0.00
rest/move, feedback       0.71   0.70   0.39
move/math, no feedback    0.69   0.62   0.00
move/math, feedback       0.64   0.60   0.00

Table 2: Generalization accuracies obtained for classification of single-trial EEG show that the variational Kalman filter significantly improves the results in three out of four cases.

In order to assess the variational Kalman filter on a non-stationary problem, we apply it to classification of single-trial EEG, a problem which is part of brain-computer interfaces (BCIs). The data for this experiment were obtained from eight untrained subjects who performed two different task combinations (rest EEG vs. imagined movements, and imagined movements vs. a mathematical task), once without and once with visual feedback. For one cognitive experiment, each pair of tasks is repeated ten times. We classify on a one-second basis and thus obtain, per subject and task combination, one sample per second of recording.
The regressors in this experiment are three reflection coefficients (a parametrization of autoregressive models, see e.g. [18]). The comparison in Table 2 reports within-subject results obtained by two-fold cross-testing. Using half of the data, we allow for convergence of the methods before estimating the generalization accuracy on the other half of the data. The generalization accuracies in Table 2 are averaged across subjects. We obtain in three out of four experiments a significant improvement with the variational Kalman filter. 4 Discussion We propose in this paper a parametric approach for adaptive inference of nonlinear classifiers. Our algorithm can be regarded as a variational generalization of Kalman filtering, which we obtain by using two novel lower bounds that allow us to have a non-degenerate distribution over the adaptation rate. Inference is done by iteratively maximizing a lower bound of the log evidence. As a result we obtain an approximate posterior that is a product of a multivariate Gaussian and a Gamma distribution. Our simulations have shown that the approach is capable of inferring classifiers that have good generalization performance in both stationary and non-stationary domains. In situations with moderate-sized latent spaces, e.g. in the BCI experiments reported above, prediction and parameter updates can be done in real time on conventional PCs. Although we focus on classification, the algorithm is based on general ideas and is thus easily applicable to other generalized nonlinear models. Acknowledgements We would like to express gratitude to the anonymous reviewers of this paper for their valuable suggestions for improving the paper. Peter Sykacek is currently supported by grant Nr. F46/399 kindly provided by the BUPA foundation. References [1] S.-I. Amari. A theory of adaptive pattern classifiers. IEEE Transactions on Electronic Computers, 16:299–307, 1967. [2] H. Attias.
Inferring parameters and structure of latent variable models by variational Bayes. In Proc. 15th Conf. on Uncertainty in AI, 1999. [3] J. O. Berger. Statistical Decision Theory and Bayesian Analysis. Springer, New York, 1985. [4] C.L. Blake and C.J. Merz. UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html, 1998. University of California, Irvine, Dept. of Information and Computer Sciences. [5] D. S. Broomhead and D. Lowe. Multivariable functional interpolation and adaptive networks. Complex Systems, 2:321–355, 1988. [6] J.F.G. de Freitas, M. Niranjan, and A.H. Gee. Regularisation in Sequential Learning Algorithms. In M. Jordan, M. Kearns, and S. Solla, editors, Advances in Neural Information Processing Systems (NIPS 10), pages 458–464, 1998. [7] A. Doucet, J. F. G. de Freitas, and N. Gordon, editors. Sequential Monte Carlo Methods in Practice. Springer-Verlag, 2001. [8] Z. Ghahramani and M. J. Beal. Variational inference for Bayesian mixtures of factor analysers. In Advances in Neural Information Processing Systems 12, pages 449–455, 2000. [9] T. S. Jaakkola and M. I. Jordan. Bayesian parameter estimation via variational methods. Statistics and Computing, 10:25–37, 2000. [10] A.H. Jazwinski. Adaptive filtering. Automatica, pages 475–485, 1969. [11] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. In M. I. Jordan, editor, Learning in Graphical Models. MIT Press, Cambridge, MA, 1999. [12] R. E. Kalman. A new approach to linear filtering and prediction problems. Trans. ASME, J. Basic Eng., 82:35–45, 1960. [13] N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller. Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21:1087–1091, 1953. [14] J. Moody and C. J. Darken. Fast learning in networks of locally-tuned processing units. Neural Computation, 1:281–294, 1989. [15] W. Penny, S. Roberts, E.
Curran, and M. Stokes. EEG-based communication: a pattern recognition approach. IEEE Trans. Rehab. Eng., pages 214–216, 2000. [16] B. D. Ripley. Pattern Recognition and Neural Networks. Cambridge University Press, Cambridge, 1996. [17] Masa-aki Sato. Online model selection based on the variational Bayes. Neural Computation, pages 1649–1681, 2001. [18] P. Sykacek and S. Roberts. Bayesian time series classification. In T.G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 937–944. MIT Press, 2002.
2002
A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages Dörthe Malzahn, Manfred Opper. Informatics and Mathematical Modelling, Technical University of Denmark, R.-Petersens-Plads Building 321, DK-2800 Lyngby, Denmark; Neural Computing Research Group, School of Engineering and Applied Science, Aston University, Birmingham B4 7ET, United Kingdom. dm@imm.dtu.dk opperm@aston.ac.uk Abstract We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages obtained by Monte-Carlo sampling. 1 Introduction The application of tools from Statistical Mechanics to analyzing the average-case performance of learning algorithms has a long tradition in the Neural Computing and Machine Learning community [1, 2]. When data are generated from a highly symmetric distribution and the dimension of the data space is large, methods of the statistical mechanics of disordered systems allow for the computation of learning curves for a variety of interesting and nontrivial models, ranging from simple perceptrons to Support Vector Machines. Unfortunately, the specific power of this approach, namely its ability to give explicit, distribution-dependent results, also represents a major drawback for practical applications. In general, data distributions are unknown, and their replacement by simple model distributions might only reveal some qualitative behavior of the true learning performance. In this paper we suggest a novel application of Statistical Mechanics techniques to a topic within Machine Learning for which the distribution over data is well known and controlled by the experimenter: the resampling of an existing dataset in the so-called bootstrap approach [3].
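The Monte-Carlo bootstrap that the paper's analytical approach approximates can be sketched in a few lines (our own toy illustration, with the sample mean standing in for a trained model):

```python
import numpy as np

def bootstrap_stats(data, estimator, n_boot=2000, rng=None):
    """Monte-Carlo bootstrap: resample the data with replacement,
    re-apply the estimator to each sample, and return the mean and
    variance of the resulting estimates."""
    rng = rng or np.random.default_rng(0)
    data = np.asarray(data)
    n = len(data)
    est = np.array([estimator(data[rng.integers(0, n, size=n)])
                    for _ in range(n_boot)])
    return est.mean(), est.var()

data = np.random.default_rng(1).normal(loc=2.0, size=200)
boot_mean, boot_var = bootstrap_stats(data, np.mean)
# boot_var approximates the sampling variance of the sample mean
```

For a real learning machine, `estimator` would retrain the model on each resampled set, which is exactly the cost the analytical approximation below tries to avoid.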
Creating bootstrap samples of the original dataset by random resampling with replacement, and retraining the statistical model on each bootstrap sample, is a widely applicable statistical technique. By replacing averages over the true, unknown distribution of the data with suitable averages over bootstrap samples, one can estimate various properties such as the bias, the variance and the generalization error of a statistical model. While in general bootstrap averages can be approximated by Monte-Carlo sampling, it is useful also to have analytical approximations which avoid the time-consuming retraining of the model for each sample. Existing analytical approximations (based on asymptotic techniques), such as the delta method and the saddle-point method (see e.g. [5]), usually require explicit analytical formulas for the estimators of the parameters of a trained model. These may not be easily obtained for more complex models in Machine Learning. In this paper, we discuss an application of the replica method of Statistical Physics [4] which, combined with a variational method [6], can produce approximate averages over the random drawings of bootstrap samples. Explicit formulas for parameter estimates are avoided and replaced by the implicit condition that such estimates are expectations with respect to a certain Gibbs distribution, to which the methods of Statistical Physics can be well applied. We demonstrate the method for the case of regression with Gaussian processes (GP) (a kernel method that has gained high popularity in the Machine Learning community in recent years [7]) and compare our analytical results with results obtained by Monte-Carlo sampling. 2 Basic setup and Gibbs distribution We will keep the notation in this section fairly general, indicating that most of the theory can be developed for a broader class of models. We assume that a fixed set of N data points z_1, ..., z_N is modeled by a likelihood of the type
" #%$ '&     # )(* (1) where the “training error” & is parametrized by a parameter  (which can be a finite or even infinite dimensional object) which must be estimated from the data. We will later specialize to supervised learning problems where each data point   ,+ . consists of an input + (usually a finite dimensional vector) and a real label - . In this case,  stands for a function  ,+' which models the outputs, or for the parameters (like the weights of a neural network) which parameterize such functions. We will later apply our approach to the mean square error given by &     #   / 0 1    + #  #   (2) The first basic ingredient of our approach is the assumption that the estimator for the unknown “true” function  can be represented as the mean with respect to a posterior distribution over all possible  ’s. This avoids the problem of writing down explicit, complicated formulas for estimators. To be precise, we assume that the statistical estimator 2  354 (which is based on the training set 6 ) can be represented as the expectation of  with respect to the measure 758 ! :9  / ;=< 8  9    " #%$ >&     # )(* (3) which is constructed from a suitable prior distribution < 8  9 and the likelihood (1). ; @?BAC< 8  9    " #%$  &     #  (* (4) denotes a normalizing partition function. Our choice of (3) does not mean that we restrict ourselves to Bayesian estimators. By introducing specific (“temperature” like) parameters in the prior and the likelihood, the measure (3) can be strongly concentrated at its mean such that maximum likelihood/MAP estimators can be included in our framework. 3 Bootstrap averages We will explain our analytical approximation to resampling averages for the case of supervised learning problems. 
If we are interested in, say, estimating the expected error on test points1 which are not contained in the training set D of size N, and if we have no hold-out data, we can create artificial data sets by resampling (with replacement) m data points from the original set D, where each data point z_i is drawn with equal probability 1/N. Hence, some of the z_i's will appear several times in the bootstrap sample and others not at all. A proxy for the true average test error can be obtained by retraining the model on each bootstrap training set D*, calculating the test error only on those points which are not contained in D*, and finally averaging over many sets D*. In practice, the case m = N may be of main importance, but we will also allow for estimating a larger part of the "learning curve" by allowing m < N and m > N. We will not discuss the statistical properties of such bootstrap estimates and their refinements (such as Efron's .632 estimate) in this paper, but refer the reader to the standard literature [3, 5]. For any given set D*, we represent a bootstrap sample by the vector of "occupation" numbers s = (s_1, ..., s_N) with \sum_{i=1}^{N} s_i = m, where s_i is the number of times example z_i appears in the set D*. Denoting the expectation over random bootstrap samples by E_s[...], Efron's estimator for the bootstrap generalization error is

\varepsilon(m) = \frac{1}{N} \sum_{i=1}^{N} \frac{E_s[\delta_{s_i,0} (\hat{f}_{D^*}(x_i) - y_i)^2]}{E_s[\delta_{s_i,0}]},   (5)

where we have specialized to the square error for testing. Eq. (5) computes the average bootstrap test error at each data point of D. The Kronecker symbol, defined by \delta_{a,b} = 1 for a = b and 0 else, guarantees that only realizations of bootstrap training sets contribute which do not contain the test point. Introducing the abbreviation

\Delta_i[f] = f(x_i) - y_i   (6)

(which is a linear function of f), and using the definition of the estimator \hat{f}_{D^*} as an average of f over the Gibbs distribution (3) (with each energy term weighted by the occupation number s_j, and with corresponding partition function Z_s), the bootstrap estimate (5) can be rewritten as

\varepsilon(m) = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{E_s[\delta_{s_i,0}]} E_s\left[ \delta_{s_i,0} \, Z_s^{-2} \int d\mu[f^1] \, d\mu[f^2] \, \Delta_i[f^1] \, \Delta_i[f^2] \, \exp\left( -\sum_{j=1}^{N} s_j \left( E[f^1; z_j] + E[f^2; z_j] \right) \right) \right],   (7)
which involves two copies (or replicas), f^1 and f^2, of the variable f. More complicated types of test errors, which are polynomials or can be approximated by polynomials in \hat{f}_{D^*}, can be rewritten in a similar way, involving more replicas of the variable f. 4 Analytical averages using the "replica trick" For fixed m, the distribution of the s_i's is multinomial. It is simpler (and does not make a big difference when N is sufficiently large) to work with a Poisson distribution for the size of the bootstrap set, with m as the mean number of data points in the sample. In this case we get the simpler, factorizing joint distribution

P(s_1, ..., s_N) = \prod_{i=1}^{N} \frac{\mu^{s_i} e^{-\mu}}{s_i!}   (8)

for the occupation numbers s_i, where \mu = m/N. With Eq. (8) follows E_s[\delta_{s_i,0}] = e^{-m/N}. 1The average is over the unknown distribution of training data sets. To enable the analytical average over the vector s (which is the "quenched disorder" in the language of Statistical Physics), it is necessary to introduce the auxiliary quantity

\Phi(n) = \frac{1}{N} e^{m/N} \sum_{i=1}^{N} E_s\left[ \delta_{s_i,0} \, Z_s^{\,n-2} \int d\mu[f^1] \, d\mu[f^2] \, \Delta_i[f^1] \, \Delta_i[f^2] \, \exp\left( -\sum_{j=1}^{N} s_j \left( E[f^1; z_j] + E[f^2; z_j] \right) \right) \right]   (9)

for n real, which allows us to write \varepsilon(m) = \lim_{n \to 0} \Phi(n). The advantage of this definition is that for integers n \geq 2, \Phi(n) can be represented in terms of n replicas of the original variable f, for which an explicit average over the s_i's is possible. At the end of all calculations, an analytical continuation to arbitrary real n and the limit n \to 0 must be performed. Using the definition of the partition function (4), we get for integer n \geq 2

\Phi(n) = \frac{1}{N} e^{m/N} \sum_{i=1}^{N} E_s\left[ \delta_{s_i,0} \int \prod_{a=1}^{n} d\mu[f^a] \, \Delta_i[f^1] \, \Delta_i[f^2] \, \exp\left( -\sum_{j=1}^{N} s_j \sum_{a=1}^{n} E[f^a; z_j] \right) \right].   (10)

Exchanging the expectation over datasets with the expectation over the f^a's and using the explicit form of the distribution (8), we obtain

\Phi(n) = \frac{1}{N} \sum_{i=1}^{N} \left\langle \Delta_i[f^1] \, \Delta_i[f^2] \, e^{\mu \left( 1 - e^{-\sum_{a=1}^{n} E[f^a; z_i]} \right)} \right\rangle,   (11)

where the brackets \langle \cdot \rangle
denote an average with respect to a Gibbs measure for replicas, which is given by

\xi[\{f^a\}] = \frac{1}{\Xi} \prod_{a=1}^{n} \mu[f^a] \, e^{-H[\{f^a\}]},   (12)

where

H[\{f^a\}] = \mu \sum_{j=1}^{N} \left( 1 - e^{-\sum_{a=1}^{n} E[f^a; z_j]} \right)   (13)

and where the partition function \Xi has been introduced for convenience, to normalize the measure for n \neq 0. In most nontrivial cases, averages with respect to the measure (12) cannot be calculated exactly. Hence, we have to apply a sensible approximation. Our idea is to use techniques which have been frequently applied to probabilistic models [10], such as the variational approximation, the mean-field approximation and the TAP approach. In this paper, we restrict ourselves to a variational Gaussian approximation. More advanced approximations will be given elsewhere. 5 Variational approximation A method frequently used in Statistical Physics, which has also attracted considerable interest in the Machine Learning community, is the variational approximation [8]. Its goal is to replace an intractable distribution like (12) by a different, sufficiently close distribution from a tractable class, which we will write in the form

\xi_0[\{f^a\}] = \frac{1}{\Xi_0} \prod_{a=1}^{n} \mu[f^a] \, e^{-H_0[\{f^a\}]}.   (14)

\xi_0 will be used in (11) instead of \xi to approximate the average. H_0 will be chosen (see e.g. [10]) to minimize the relative entropy between \xi_0 and \xi, resulting in a minimization of the variational free energy

F = -\ln \Xi_0 + \langle H - H_0 \rangle_0,   (15)

which is an upper bound to the true free energy -\ln \Xi for any integer n. The brackets \langle \cdot \rangle_0 denote averages with respect to the variational distribution (14). For our application to Gaussian process models, we will now specialize to Gaussian priors \mu[f]. For H_0, we choose the quadratic expression

H_0 = \sum_{j=1}^{N} \left[ \frac{1}{2} \sum_{a,b=1}^{n} R_{ab}(x_j) \, f^a(x_j) \, f^b(x_j) + \sum_{a=1}^{n} r_a(x_j) \, f^a(x_j) \right]   (16)

as a suitable trial Hamiltonian, leading to a Gaussian distribution (14). The functions R_{ab}(x_j) and r_a(x_j) are the variational parameters to be optimized.
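The Poisson form (8) and the resulting identity E_s[δ_{s_i,0}] = e^{-m/N} are easy to check numerically; a small sketch (ours, not the paper's code), which verifies that for m = N roughly a fraction e^{-1} of the data points are left out of a bootstrap sample:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
N, m, trials = 500, 500, 400
left_out = 0.0
for _ in range(trials):
    sample = rng.integers(0, N, size=m)      # resample with replacement
    s = np.bincount(sample, minlength=N)     # occupation numbers s_i
    left_out += np.mean(s == 0)              # fraction with s_i = 0
left_out /= trials
# for m = N this fraction approaches e^{-1} ~ 0.368
```

The left-out fraction is exactly what the Kronecker delta in Efron's estimator (5) singles out as usable test points.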
To continue the variational solutions to arbitrary real n, we assume that the optimal parameters should be replica symmetric, i.e. we set r_a(x_j) = r(x_j) for all a, as well as R_{ab}(x_j) = R(x_j) for a \neq b and R_{aa}(x_j) = \hat{R}(x_j). The variational free energy can then be expressed by the local moments ("order parameters" in the language of Statistical Physics) \langle f^a(x_j) \rangle_0, the off-diagonal moments \langle f^a(x_j) f^b(x_j) \rangle_0 for a \neq b, and the diagonal moments \langle (f^a(x_j))^2 \rangle_0, which have the same replica-symmetric structure. Since each of the n x n matrices (such as R_{ab}) is assumed to have only two types of entries, it is possible to obtain variational equations which contain the number n of replicas as a simple parameter, for which the limit n \to 0 can be explicitly performed (see appendix). In this limit, the limiting order parameters are found to have simple interpretations: the first and second moments become the (approximate) mean and variance of the predictor \hat{f}_{D^*}(x_j) with respect to the average over bootstrap data sets, while the difference between the diagonal and off-diagonal moments becomes the (approximate) bootstrap-averaged posterior covariance. 6 Explicit results for regression with Gaussian processes We consider a GP model for regression with training energy given by Eq. (2). In this case, the prior measure \mu[f] can be simply represented by an N-dimensional Gaussian distribution for the vector (f(x_1), ..., f(x_N)) having zero mean and covariance matrix K(x_i, x_j), where K is the covariance kernel of the GP. Using the limiting (for n \to 0) values of the order parameters, and approximating \xi by \xi_0 in Eq. (11), we obtain the explicit result, Eq. (17), for the bootstrap mean square generalization error in terms of the bootstrap mean and variance of the predictor at each data point. The entire analysis can be repeated for testing (keeping the training energy fixed) with a general loss function of the type g(\hat{f}_{D^*}(x_i), y_i); the result is Eq. (18).
Figure 1: Average bootstrapped generalization error on Abalone data using square error loss (left) and epsilon-insensitive loss (right). Simulation (circles) and theory (lines) based on the same data set D with N = 1000 data points; the point m = N is marked in each panel. The GP model uses an RBF kernel on whitened inputs, with fixed length scale and fixed data noise \sigma^2.

We have applied our theory to the Abalone data set [11], where we have computed the approximate bootstrapped generalization errors for the square error loss and for the so-called \varepsilon-insensitive loss, which vanishes whenever the prediction \hat{f}_{D^*}(x) lies within a tolerance region around the target y and grows linearly with the deviation outside this region (Eq. (19)); the loss parameters were kept at fixed values. The bootstrap average from our theory is obtained from Eq. (18). Figure 1 shows the generalization error measured by the square error loss (Eq. (17), left panel) as well as the one measured by the \varepsilon-insensitive loss (right panel). Our theory (lines) is compared with simulations (circles), which were based on Monte-Carlo sampling averages computed using the same data set D with N = 1000. The Monte-Carlo training sets of size m are obtained by sampling from D with replacement. We find good agreement between theory and simulations in the region where m \leq N. When we oversample the data set D, however, the agreement is not as good, and corrections to our variational Gaussian approximation would be required. Figure 2 shows the bootstrap average of the posterior variance over the whole data set D, N = 1000, and compares our theory (line) with simulations (circles) which were based on Monte-Carlo sampling averages. The overall approximation looks better than for the bootstrap generalization error.
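The ε-insensitive loss can be sketched in its standard two-branch form (our generic version; the paper's exact variant in Eq. (19) may include an additional threshold parameter, which we omit, and the ε value here is illustrative):

```python
import numpy as np

def eps_insensitive_loss(h, y, eps=0.1):
    """epsilon-insensitive loss: zero inside a tube of half-width eps
    around the target, growing linearly outside it."""
    return np.maximum(0.0, np.abs(np.asarray(h) - np.asarray(y)) - eps)

losses = eps_insensitive_loss([1.0, 1.05, 2.0], [1.0, 1.0, 1.0])
# small deviations inside the tube cost nothing; large ones cost |h-y|-eps
```

Unlike the square error, this loss ignores small prediction errors entirely, which is why the two panels of Figure 1 have different scales.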
Finally, it is important to note that all displayed theoretical learning curves were obtained computationally much faster than their respective simulated learning curves. 7 Outlook The replica approach to bootstrap averages can be extended in a variety of different directions. Besides the average generalization error, one can compute its bootstrap sample fluctuations by introducing more complicated replica expressions. It is also straightforward to apply the approach to more complex problems in supervised learning which are related to Gaussian processes, such as GP classifiers or Support Vector Machines.

Figure 2: Bootstrap averaged posterior variance for Abalone data. Simulation (circles) and theory (line) based on the same data set D with N = 1000 data points.

Since our method requires the solution of a set of variational equations of the size of the original training set, we can expect that its computational complexity should be similar to the one needed for making the actual predictions with the basic model. This will also apply to the problem of very large datasets, where one may use a variety of well-known sparse approximations (see e.g. [9] and references therein). It will also be important to assess the quality of the approximation introduced by the variational method and to compare it to alternative approximation techniques in the computation of the replica average (11), such as the mean-field method and its more complex generalizations (see e.g. [10]). Acknowledgement We would like to thank Lars Kai Hansen for stimulating discussions. DM thanks the Copenhagen Image and Signal Processing Graduate School for financial support. Appendix: Variational equations For reference, we give the explicit form of the equations for the variational and order parameters in the limit n \to 0. The derivations will be given elsewhere.
We obtain the order-parameter equations (20) and (21), which express the bootstrap mean of the predictor and its covariance at the data points through a matrix built from the kernel matrix K_{ij} = K(x_i, x_j) and the variational parameters (Eq. (22)). These must be solved together with the variational equations (23)-(25) for the linear and quadratic variational parameters. Combining Eqs. (22) and (23), a self-consistent matrix equation is obtained in which the unknown matrix depends on its own diagonal elements. Its iterative solution (based on a good initial guess for the diagonal) usually requires only a few iterations. The order parameters can then be obtained subsequently from Eqs. (20) and (21) together with (24) and (25). References [1] A. Engel and C. Van den Broeck, Statistical Mechanics of Learning (Cambridge University Press, 2001). [2] H. Nishimori, Statistical Physics of Spin Glasses and Information Processing (Oxford Science Publications, 2001). [3] B. Efron and R. J. Tibshirani, An Introduction to the Bootstrap, Monographs on Statistics and Applied Probability 57 (Chapman & Hall, 1993). [4] M. Mézard, G. Parisi, and M. A. Virasoro, Spin Glass Theory and Beyond, Lecture Notes in Physics 9 (World Scientific, 1987). [5] J. Shao and D. Tu, The Jackknife and Bootstrap, Springer Series in Statistics (Springer Verlag, 1995). [6] D. Malzahn and M. Opper, A variational approach to learning curves, NIPS 14, Editors: T.G. Dietterich, S. Becker, Z. Ghahramani (MIT Press, 2002). [7] R. Neal, Bayesian Learning for Neural Networks, Lecture Notes in Statistics 118 (Springer, 1996). [8] R. P. Feynman and A. R. Hibbs, Quantum Mechanics and Path Integrals (McGraw-Hill Inc., 1965). [9] L. Csató and M. Opper, Sparse Gaussian Processes, Neural Computation 14, No 3, 641–668 (2002). [10] M. Opper and D.
Saad (editors), Advanced Mean Field Methods: Theory and Practice (MIT Press, 2001). [11] From http://www1.ics.uci.edu/~mlearn/MLSummary.html. The data set contains 4177 examples. We used a representative fraction (the fourth block of 1000 data points from the list).
Developing Topography and Ocular Dominance Using two aVLSI Vision Sensors and a Neurotrophic Model of Plasticity Terry Elliott Dept. Electronics & Computer Science University of Southampton Highfield Southampton, SO17 1BJ United Kingdom te@ecs.soton.ac.uk Jörg Kramer Institute of Neuroinformatics University of Zürich and ETH Zürich Winterthurerstrasse 190 8057 Zürich Switzerland kramer@ini.phys.ethz.ch Abstract A neurotrophic model for the co-development of topography and ocular dominance columns in the primary visual cortex has recently been proposed. In the present work, we test this model by driving it with the output of a pair of neuronal vision sensors stimulated by disparate moving patterns. We show that the temporal correlations in the spike trains generated by the two sensors elicit the development of refined topography and ocular dominance columns, even in the presence of significant amounts of spontaneous activity and fixed-pattern noise in the sensors. 1 Introduction A large body of evidence suggests that the development of the retinogeniculocortical pathway, which leads in higher vertebrates to the emergence of eye-specific laminae in the lateral geniculate nucleus (LGN), the formation of ocular dominance columns (ODCs) in the striate cortex and the establishment of retinotopic representations in both structures, is a competitive, activity-dependent process (see Ref. [1] for a review). Experimental findings indicate that at least in the case of ODC formation, this competition may be mediated by retrograde neurotrophic factors (NTFs) [2]. A computational model for synaptic plasticity based on this hypothesis has recently been proposed [1]. This model has successfully been applied to the development and refinement of retinotopic representations in the LGN and striate cortex, and to the formation of ODCs in the striate cortex due to competition between the eye-specific laminae of the LGN.
In this model, the activity within the afferent cell sheets was simulated either as interocularly uncorrelated spontaneous retinal waves or, as a coarse model of visually evoked activity, as interocularly correlated Gaussian noise. Gaussian noise, however, is not a realistic model of evoked retinal activity, nor do the interocular correlations introduced adequately capture the correlations that arise due to the spatial disparity between the two retinas. For this study, we tested the ability of the plasticity model to generate topographic refinement and ODCs in response to afferent activity provided by a pair of biologically-inspired artificial vision sensors. These sensors capture some of the properties of biological retinas. They convert optical images into analog electrical signals and perform brightness adaptation and logarithmic contrast-encoding. Their output is encoded in asynchronous, binary spike trains, as provided by the retinal ganglion cells of biological retinas. Mismatch of processing elements and temporal noise are a natural by-product of biological retinas and such vision sensors alike. One goal of this work was to determine the robustness of the model towards such nonidealities. While the refinement of topography from the temporal correlations provided by one vision sensor in response to moving stimuli has already been explored [3], the present work focuses on the co-development of topography and ODCs in response to the correlations between the signals from two vision sensors stimulated by disparate moving bars. In particular, the dependence of ODC formation on disparity and noise is considered. 2 Vision Sensor The vision sensor used in the experiments is a two-dimensional array of 16 × 16 pixels fabricated with standard CMOS technology, where each pixel performs a two-way rectified temporal high-pass filtering operation on the incoming visual signal in the focal plane [4, 5].
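A software analogue of this per-pixel operation, a leaky adaptation state whose two-way rectified residual triggers ON and OFF events, can be sketched as follows (our own illustration, not the chip's circuit; the filter constant and threshold are arbitrary):

```python
import numpy as np

def on_off_events(signal, alpha=0.9, threshold=0.05):
    """Two-way rectified temporal high-pass filter for one pixel:
    emit ON (+1) / OFF (-1) events when the high-passed intensity
    crosses a contrast threshold."""
    events = []
    lowpass = signal[0]
    for t, s in enumerate(signal[1:], start=1):
        lowpass = alpha * lowpass + (1 - alpha) * s   # adaptation state
        residual = s - lowpass                        # high-pass output
        if residual > threshold:
            events.append((t, +1))                    # ON transient
        elif residual < -threshold:
            events.append((t, -1))                    # OFF transient
    return events

# A step up then down in intensity produces ON events, then OFF events.
sig = np.concatenate([np.zeros(20), np.ones(20), np.zeros(20)])
ev = on_off_events(sig)
```

Because the filter adapts toward the current input, a static scene eventually produces no events at all, mirroring the sensor's behavior described below.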
The sensor adapts to background illuminance and responds to local positive and negative illuminance transients at separately coded terminals. The transients are converted into a stream of asynchronous binary pulses, which are multiplexed onto a common, arbitrated address bus, where the address encodes the location of the sending pixel and the sign of the transient. In the absence of any activity on the communication bus for a few hundred milliseconds, the bus address decays to zero. A block diagram of a reduced-resolution array of pixels with peripheral arbitration and communication circuitry is shown in Fig. 1. Handshaking with external data-acquisition circuitry is provided via the request (REQ) and acknowledge (ACK) terminals.

Figure 1: Block diagram of the sensor architecture (reduced resolution), showing the pixel array with ON/OFF outputs, the X and Y arbiter trees, the address encoders and the handshaking (REQ/ACK) circuitry.

If the array is used for imaging purposes under constant or slowly-varying ambient lighting conditions, it only responds to boundaries or edges of moving objects, or to shadows of sufficient contrast, and not to static scenes. Depending on the settings of different bias controls, the imager can be used in different modes. Separate gain controls for ON and OFF transients permit the imager to respond to only one type of transient, or to both types with adjustable weighting. Together with these gain controls, a threshold bias sets the contrast response threshold and the rate of spontaneous activity. For sufficiently large thresholds, spontaneous activity is completely suppressed. Another bias control sets a refractory period that limits the maximum spike rate of each pixel. For short refractory periods, each contrast transient at a given pixel triggers a burst of spikes; for long refractory periods, a typical transient triggers only a single spike in the pixel, resulting in a very efficient one-bit edge coding.
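The address-event communication can be illustrated with a toy encoder/decoder (our own sketch, not the chip's circuitry; the bit layout of the address word is an arbitrary choice):

```python
def encode_event(x, y, on, rows=16, cols=16):
    """Pack one address event for a 16x16 array: pixel (x, y) plus a
    polarity bit (ON/OFF). The word layout (y | x | polarity) is an
    illustrative choice, not the chip's actual encoding."""
    assert 0 <= x < cols and 0 <= y < rows
    return (y << 5) | (x << 1) | int(on)

def decode_event(word):
    """Inverse of encode_event: recover (x, y, on)."""
    return (word >> 1) & 0xF, word >> 5, bool(word & 1)
```

Each spike thus travels as a single word on the shared bus, and the arbiter's job is only to serialize simultaneous requests.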
3 Sensor-Computer Interface The two vision sensors were coupled to a computer via two parallel ports. The handshaking terminals of each chip were shorted, so that the sensors could operate at their own speed without being artificially slowed down by the computer. This avoided the risk of overloading the multiplexer and thereby distorting the data. Furthermore, this scheme was simpler to implement than a handshaking scheme. The lack of synchronization entailed several problems: missing out on events, reading events more than once, and reading spurious zero addresses in the absence of recent activity in the sensors. The first two problems could satisfactorily be solved by choosing a long refractory period, so that each moving-edge stimulus only evoked a single spike per pixel. For a typical stimulus this resulted in interspike intervals on the multiplexed bus of a few milliseconds, which made it unlikely that events would be missed. Furthermore, the refractory period prevented any given pixel from spiking more than once in a row in response to a moving edge, so that multiple reads of the same address were always due to the same event being read several times and therefore could be discarded. The ambiguity of the (0,0) address readings, namely whether such a reading meant that the (0,0) pixel was active or that the address on the bus had decayed to zero due to lack of activity, could not be resolved. It was therefore decided to ignore the (0,0) address and to exclude the (0,0) cell from each map. Using this strategy it was found that the data read by the computer reflected the optical stimuli with a small error rate. 4 Visual Stimulation Two separate windows within the display of the LCD monitor of the computer used for data acquisition were each imaged onto one of the vision chips via a lens to provide the optical stimulation. 
The stimuli in each window consisted of eight separate sequences of images that were played without interruption, each new sequence being selected randomly after the completion of the previous one. Each sequence simulated a white bar sweeping across a black background. The sequences were distinguished only by the orientation and direction of motion of the bar, while the speed, as measured perpendicularly to the bar's orientation, was constant and identical for each sequence. The bar could have four different orientations, aligned to the rows or columns of the vision sensor or to one of the two diagonals, and move in either direction. The bars had a finite width of 20 pixels on the LCD display, corresponding to about 8 pixel periods on the image sensors, and they were sufficiently long to fill the field of view of the chips entirely. The displays in the two windows stimulating the two chips were identical save for a fixed relative displacement between the bars along the direction of motion during the entire run, simulating the disparity seen by two eyes looking at the same object. The displacements used were 0, 10, and 15 pixels on the LCD display, corresponding to no disparity and to disparities of 1/2 the bar width (4 sensor pixels) and 3/4 of the bar width (6 sensor pixels), respectively. The speed of the bar was largely unimportant, because the output spikes of the chip were sampled into bins of fixed sizes, rather than bins representing fixed time windows. The chosen white bar on a black background stimulated the vision sensor with a leading ON edge and a trailing OFF edge. However, because the spurious activity of the chip, mainly in the form of crosstalk, was increased if both ON and OFF responses were activated, and because only the response to one edge type was required for this work, the ON responses from the chip were suppressed.
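The stimulus protocol just described is easy to reproduce in software. A minimal sketch of one sequence generator follows; all names and dimensions are ours, not the paper's, and the two diagonal orientations are omitted for brevity. Disparity is modelled, as in the experiments, as a fixed shift of the sweep along the direction of motion:

```python
import numpy as np

def bar_sequence(size=64, bar_width=8, orientation='rows', reverse=False,
                 disparity=0):
    """Frames of a white bar sweeping across a black background.

    orientation: 'rows' (horizontal bar moving vertically) or 'cols';
    reverse flips the direction of motion; disparity delays the sweep,
    as used for the second eye's display.
    """
    frames = []
    for pos in range(-bar_width, size):
        frame = np.zeros((size, size), dtype=np.uint8)
        lead = pos - disparity                       # shifted bar position
        band = slice(max(lead, 0), max(lead + bar_width, 0))
        if orientation == 'rows':
            frame[band, :] = 255
        else:
            frame[:, band] = 255
        if reverse:  # mirroring every frame reverses the apparent motion
            frame = frame[::-1] if orientation == 'rows' else frame[:, ::-1]
        frames.append(frame)
    return frames
```

Playing the zero-disparity and shifted sequences into the two sensors reproduces the "same object seen by two eyes" geometry of the experiment.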
5 Neurotrophic Model of Plasticity Let the letters $i$ and $j$ label afferent cells within an afferent sheet, the letters $m$ and $n$ label the afferent sheets, and the letters $x$ and $y$ label target cells. The two afferent sheets represent the two chips' arrays of pixels and are therefore 16 × 16 square arrays of cells. For convenience, the target array is also a 16 × 16 square array of cells. Let $a^m_i$ denote an afferent cell's activity. For each time step of simulated development, we capture a fixed number of spikes from each chip. A pixel that has not spiked gives $a^m_i = 0$, while one that has gives $a^m_i = 1$. If $s^m_{ix}$ represents the number of synapses projected from cell $i$ in afferent sheet $m$ to target cell $x$, then $s^m_{ix}$ evolves according to the equation
$$\frac{\mathrm{d}s^m_{ix}}{\mathrm{d}t} = \epsilon\, s^m_{ix}\, f^m_i \left[ \sum_y \frac{\Delta(x,y)}{\sum_{n,j} s^n_{jy} f^n_j} \left( T_0 + T_1 \frac{\sum_{n,j} s^n_{jy} a^n_j}{\sum_{n,j} s^n_{jy}} \right) - 1 \right] \quad (1)$$
Here, $T_0$ and $T_1$ represent, respectively, an activity-independent and a maximum activity-dependent release of NTF from target cells; the parameter $c$ is a resting NTF uptake capacity of afferent cells; $\Delta$ is a function characterising NTF diffusion between target cells, which we take for convenience to be a Gaussian of width $\sigma$. The function $f^m_i = c + a^m_i / \langle a^m_i \rangle$ is a simple model for the number of NTF receptors supported by an afferent cell, where $\langle a^m_i \rangle$ denotes average afferent activity. The parameter $\epsilon$ sets the overall rate of development. Consistent with previous work [3], the parameters $c$, $\sigma$, $T_0$, $T_1$ and $\epsilon$ are fixed at the values used there. Although this model appears complex, it can be shown to be equivalent to a non-linear Hebbian rule with competition implemented via multiplicative synaptic normalisation [6]. For a full discussion, derivation and justification of the model, see Ref. [7]. Both afferent sheets initially project roughly equally to all cells in the target sheet. The initial pattern of connectivity between the sheets is established following Goodhill's method [8].
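Goodhill's initialisation scheme (detailed around Eq. (2) below) can be sketched as follows. This is a toy rendering under our own notation: a topographic bias of strength `p` blended with uniform noise, with grid size and parameter defaults chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def initial_synapses(n=16, p=0.5):
    """Initial synapse counts s[i, x] between an n x n afferent sheet and an
    n x n target sheet (both flattened to length n*n): a topographic bias
    of strength p blended with uniform random noise."""
    coords = np.array([(r, c) for r in range(n) for c in range(n)], dtype=float)
    # distance of each target cell from the topographically 'correct' one
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    return p * (1.0 - d / d.max()) + (1.0 - p) * rng.random(d.shape)

s0 = initial_synapses()
```

With `p = 1` each afferent cell projects maximally to its topographically preferred target cell; with `p = 0` the projection is completely random, matching the two limits described in the text.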
For a given afferent cell, let $d$ be the distance between some target cell and the target cell to which the afferent cell would project were topography perfect; let $d_{\max}$ be the maximum such distance. Then the number of synapses projected by the afferent cell to this target cell is initially set to be proportional to
$$p \left( 1 - \frac{d}{d_{\max}} \right) + (1 - p)\, \eta \quad (2)$$
where $\eta \in [0, 1]$ is a randomly selected number for each such pair of afferent and target cells. The parameter $p \in [0, 1]$ determines the quality of the projections, with $p = 1$ giving initially greatest topographical bias, so that an afferent cell projects maximally to its topographically preferred target cell, and $p = 0$ giving initially completely random projections. Here we use an intermediate value of $p$; the impact of decreasing $p$ on the final structure of the topographic map has been thoroughly explored elsewhere [3]. The topographic representation of an afferent sheet on the target sheet is depicted using standard methods [1, 8]: the centres of mass of afferent projections to all target cells are calculated, and these are then connected by lines that preserve the neighbourhood relations among the target cells. 6 Results For each iteration step of the algorithm a fixed number of spikes was captured. The bin size determines the correlation space constants of the afferent cell sheets and therefore influences the final quality of the topographic mapping [3]. Unless otherwise noted, the bin size was 32 spikes per sensor, which corresponds to about two successive pixel rows stimulated by a moving contrast boundary. The presented simulations were performed for 15,000 to 20,000 iteration steps, sufficient for map development to be largely complete. Figure 2: Distribution of ODCs in the target cell sheet for different disparities between the bar stimuli driving the two afferent sheets.
The gray level of each target cell indicates the relative strengths of projections from the two afferent sheets, where ‘black’ represents one and ‘white’ the other afferent sheet. (a) No disparity; (b) disparity: 50% of bar width (4 sensor pixels); (c) disparity: 75% of bar width (6 sensor pixels). Several runs were performed for the three different disparities of the stimuli presented to the two sensors. Since the results for a given disparity were all qualitatively similar, we only show the results of one representative run for each value. The distribution of the formed ODCs in the target sheet is shown in Fig. 2, where the shading of each neuron indicates the relative numbers of projections from the two afferent sheets. In the absence of any disparity the formation of ODCs was suppressed. The residual ocular dominance modulations may be attributed to a small misalignment of the two chips with respect to the display. With the introduction of a disparity a very clear structure of ODCs emerges. The distribution of ODCs strongly depends on the disparity and does not vary significantly between runs for a given disparity. With increasing disparity the boundaries between ODCs become more distinct [9, 10]. The obtained maps are qualitatively similar to those obtained with simulated afferent inputs [1]. Figure 3: Power spectra (power versus spatial frequency) of the spatial frequency distribution of ODCs in the target cell sheet for different disparities and data sets. A ‘solid’ line denotes data with disparity of 75% of bar width (6 sensor pixels); a ‘dashed’ line denotes a disparity of 50% of bar width (4 sensor pixels); a ‘dotted’ line denotes no disparity. The power spectra obtained from two-dimensional Fourier transforms of the ODC distributions, represented in Fig.
3, show that the spatial frequency content of the ODCs is a function of disparity, consistent with experimental findings in the cat [8, 11, 12, 13], and that its variability between different runs of the same disparity is significantly smaller than between different disparities. The principal spatial frequency along each dimension of the target sheet is mainly determined by the NTF diffusion parameter [1] and the disparity. For the NTF diffusion parameter used here, it ranges between two and four cycles; increasing (decreasing) the diffusion parameter decreases (increases) the spatial frequency. The heights of the peaks show the degree of segregation, which increases with disparity, as already mentioned. Figure 4: Topographic mapping between afferent sheets and target sheet for different disparities between the stimuli driving the two afferent sheets. The data are from the same runs as the ODC data of Fig. 2. (a) No disparity; (b) disparity: 50% of bar width (4 sensor pixels); (c) disparity: 75% of bar width (6 sensor pixels). The resulting topographic maps for the same runs are shown in Fig. 4. In the absence of disparity the topographic map is almost perfect, with a nearly one-to-one mapping between the afferent sheets and the target sheet, apart from remaining edge effects. However, disruptions appear at ODC boundaries in the runs with disparate stimuli, these disruptions becoming more distinct with increasing disparity due to the increasing sharpness of ODC boundaries. The data presented above were obtained under suppression of spontaneous firing, so that each pixel generated exactly one spike in response to each moving bright-to-dark contrast boundary, with an error rate of about 5%. By turning up the spontaneous firing rate we can test the robustness of the system to increased noise levels. We set the spontaneous firing rate to approximately 50%, so that roughly half of all spikes are not associated with an edge event.
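The spectral analysis behind Fig. 3 (a two-dimensional Fourier transform of the ocular-dominance map, collapsed along one axis) can be sketched as follows. The stripe pattern used here is synthetic, purely to exercise the computation; function and variable names are ours:

```python
import numpy as np

def odc_power_spectrum(od_map):
    """Power of the 2-D Fourier transform of an ocular-dominance map,
    averaged over one axis, as a function of spatial frequency."""
    centered = od_map - od_map.mean()
    return (np.abs(np.fft.fft2(centered)) ** 2).mean(axis=0)

# synthetic ODC map: a sinusoidal stripe pattern with 4 cycles across
# a 16 x 16 target sheet
x = np.arange(16)
od = np.tile(np.sin(2 * np.pi * 4 * x / 16.0), (16, 1))
spectrum = odc_power_spectrum(od)
```

For this synthetic map the spectrum peaks at 4 cycles per sheet width; for the measured maps the peak position tracks the disparity and the NTF diffusion width, as described in the text.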
We also increased the bin size from 32 to 48 spikes per chip to compensate for the reduced intraocular correlations resulting from the increased noise [3]. Fig. 5 shows a typical pattern of ODCs and the corresponding topographic map in the presence of 50% spontaneous activity. Although there are some distortions in the topographic map, in general it compares very favourably to maps developed in the absence of spontaneous activity. At a noise level of approximately 60%, major disruptions appear in topographic map formation and ODC development is attenuated. Increasing the level of noise still further causes a complete breakdown of topographic and ODC map formation (data not shown). Figure 5: The pattern of ODCs and the topographic map that develop in the presence of approximately 50% noise. (a) The OD map; (b) the topographic map. The disparity is 50% of the bar width (4 sensor pixels).
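The noise conditions of these experiments can be mimicked in software by diluting a clean event stream with spontaneous spikes. A minimal sketch, with our own function name and event format (row, column address tuples), not taken from the actual acquisition code:

```python
import random

random.seed(2)

def add_spontaneous(events, rate, n=16):
    """Dilute a clean event stream with spontaneous spikes so that a
    fraction `rate` of all events in the output is stimulus-unrelated.

    events: list of (row, col) pixel addresses; n: sheet side length."""
    n_noise = int(len(events) * rate / (1.0 - rate))
    noise = [(random.randrange(n), random.randrange(n)) for _ in range(n_noise)]
    out = events + noise
    random.shuffle(out)          # interleave noise with real events
    return out
```

Setting `rate=0.5` reproduces the "roughly half of all spikes are not associated with an edge event" condition (though, unlike the real chips, this sketch draws noise uniformly rather than with the fixed-pattern anisotropy discussed later).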
The noise induced by the vision sensors manifests itself in occasionally missing responses of some pixels to a moving edge, in temporal jitter and a tunable level of spontaneous activity. With an optimal suppression of spontaneous firing, the error rate (number of missed and spurious events divided by total number of events) can be reduced to approximately 5%. Increased spontaneous activity levels show a strongly anisotropic distribution across the sensing arrays because of the inherent fixed-pattern noise present in the integrated sensors due to random mismatches in the fabricated circuits. This type of inhomogeneity has not been modeled in previous work. Spontaneous activity and mismatches between cells with the same functional role are prominent features of biological neural systems and biological information processing systems therefore have to deal with these nonidealities. The plasticity algorithm proves to be sufficiently robust with respect to these types of noise. The developed ODC and topographic maps depend quite strongly on the disparity between the two sensors. At zero disparity, the formation of ODCs is practically suppressed and topography becomes very smooth. As the disparity increases, the period of the resulting ODCs increases, consistent with experimental results in the cat [8, 11, 12, 13], and, as expected, the degree of segregation also increases [9, 10]. In the presence of high levels of spontaneous activity in the afferent pathways, with as much as half of all spikes not being stimulus–related, the maps continue to exhibit well developed ODCs and topography. Although there are indications of distortions in the topographic maps in the presence of approximately 50% spontaneous activity, the maps remain globally well structured. As spontaneous activity is increased further, map development becomes increasingly disrupted until it breaks down completely. 
8 Conclusions We examined the refinement of topographic mappings and the formation of ocular dominance columns by coupling a pair of integrated vision sensors to a neurotrophic model of synaptic plasticity. We have shown that afferent input from real sensors viewing moving bar stimuli yields results similar to those obtained with simulated, partially randomized input, and that these results are insensitive to the presence of significant noise levels. Acknowledgments Tragically, Jörg Kramer died in July, 2002. TE dedicates this work to his memory. TE thanks the Royal Society for the support of a University Research Fellowship. JK was supported in part by the Swiss National Foundation Research SPP grant. We thank David Lawrence of the Institute of Neuroinformatics for his invaluable help with interfacing the chip to the PC. References [1] T. Elliott and N. R. Shadbolt, “A neurotrophic model of the development of the retinogeniculocortical pathway induced by spontaneous retinal waves,” Journal of Neuroscience, vol. 19, pp. 7951–7970, 1999. [2] A.K. McAllister, L.C. Katz, and D.C. Lo, “Neurotrophins and synaptic plasticity,” Annual Review of Neuroscience, vol. 22, pp. 295–318, 1999. [3] T. Elliott and J. Kramer, “Coupling an aVLSI neuromorphic vision chip to a neurotrophic model of synaptic plasticity: the development of topography,” Neural Computation, vol. 14, pp. 2353–2370, 2002. [4] J. Kramer, “An integrated optical transient sensor,” IEEE Trans. Circuits and Systems II: Analog and Digital Signal Processing, 2002, submitted. [5] J. Kramer, “An on/off transient imager with event-driven, asynchronous read-out,” in Proc. 2002 IEEE Int. Symp. on Circuits and Systems, Phoenix, AZ, May 2002, vol. II, pp. 165–168, IEEE Press. [6] T. Elliott and N. R. Shadbolt, “Multiplicative synaptic normalization and a nonlinear Hebb rule underlie a neurotrophic model of competitive synaptic plasticity,” Neural Computation, vol. 14, pp. 1311–1322, 2002. [7] T. Elliott and N. R.
Shadbolt, “Competition for neurotrophic factors: Mathematical analysis,” Neural Computation, vol. 10, pp. 1939–1981, 1998. [8] G.J. Goodhill, “Topography and ocular dominance: a model exploring positive correlations,” Biological Cybernetics, vol. 69, pp. 109–118, 1993. [9] D.H. Hubel and T.N. Wiesel, “Binocular interaction in striate cortex of kittens reared with artificial squint,” Journal of Neurophysiology, vol. 28, pp. 1041–1059, 1965. [10] C.J. Shatz, S. Lindström, and T.N. Wiesel, “The distribution of afferents representing the right and left eyes in the cat’s visual cortex,” Brain Research, vol. 131, pp. 103–116, 1977. [11] S. Löwel, “Ocular dominance column development: Strabismus changes the spacing of adjacent columns in cat visual cortex,” Journal of Neuroscience, vol. 14, pp. 7451–7468, 1994. [12] G.J. Goodhill and S. Löwel, “Theory meets experiment: correlated neural activity helps determine ocular dominance column periodicity,” Trends in Neurosciences, vol. 18, pp. 437–439, 1995. [13] S.B. Tieman and N. Tumosa, “Alternating monocular exposure increases the spacing of ocularity domains in area 17 of cats,” Visual Neuroscience, vol. 14, pp. 929–938, 1997.
2002
Gaussian Process Priors With Uncertain Inputs Application to Multiple-Step Ahead Time Series Forecasting Agathe Girard Department of Computing Science University of Glasgow Glasgow, G12 8QQ agathe@dcs.gla.ac.uk Carl Edward Rasmussen Gatsby Unit University College London London, WC1N 3AR edward@gatsby.ucl.ac.uk Joaquin Quiñonero Candela Informatics and Mathematical Modelling Technical University of Denmark Richard Petersens Plads, Building 321 DK-2800 Kongens Lyngby, Denmark jqc@imm.dtu.dk Roderick Murray-Smith Department of Computing Science University of Glasgow, Glasgow, G12 8QQ & Hamilton Institute National University of Ireland, Maynooth rod@dcs.gla.ac.uk Abstract We consider the problem of multi-step ahead prediction in time series analysis using the non-parametric Gaussian process model. $k$-step ahead forecasting of a discrete-time non-linear dynamic system can be performed by doing repeated one-step ahead predictions. For a state-space model of the form $y_{t+1} = f(y_t, \dots, y_{t-L+1})$, the prediction of $y$ at time $t + k$ is based on the point estimates of the previous outputs. In this paper, we show how, using an analytical Gaussian approximation, we can formally incorporate the uncertainty about intermediate regressor values, thus updating the uncertainty on the current prediction. 1 Introduction One of the main objectives in time series analysis is forecasting, and in many real life problems one has to predict ahead in time, up to a certain time horizon (sometimes called lead time or prediction horizon). Furthermore, knowledge of the uncertainty of the prediction is important. Currently, the multiple-step ahead prediction task is achieved either by explicitly training a direct model to predict $k$ steps ahead, or by doing repeated one-step ahead predictions up to the desired horizon, which we call the iterative method. There are a number of reasons why the iterative method might be preferred to the ‘direct’ one.
Firstly, the direct method makes predictions for a fixed horizon only, making it computationally demanding if one is interested in different horizons. Furthermore, the larger $k$, the more training data we need in order to achieve a good predictive performance, because of the larger number of ‘missing’ data between $t$ and $t + k$. On the other hand, the iterative method provides any $k$-step ahead forecast, up to the desired horizon, as well as the joint probability distribution of the predicted points. In the Gaussian process modelling approach, one computes predictive distributions whose means serve as output estimates. Gaussian processes (GPs) for regression were first introduced by O’Hagan [1] but became a popular non-parametric modelling approach after the publication of [7]. In [10], it is shown that GPs can achieve a predictive performance comparable to (if not better than) other modelling approaches like neural networks or local learning methods. We will show that a $k$-step ahead prediction which ignores the accumulating prediction variance is not conservative enough, with unrealistically small uncertainty attached to the forecast. An alternative solution is presented for iterative $k$-step ahead prediction, with propagation of the prediction uncertainty. 2 Gaussian Process modelling We briefly recall some fundamentals of Gaussian processes. For a comprehensive introduction, please refer to [5], [11], or the more recent review [12]. 2.1 The GP prior model Formally, the random function, or stochastic process, $f(x)$ is a Gaussian process, with mean $m(x)$ and covariance function $C(x^p, x^q)$, if its values at a finite number of points, $f(x^1), \dots, f(x^n)$, are seen as the components of a normally distributed random vector. We further assume that the process is stationary: it has a constant mean and a covariance function depending only on the distance between the inputs.
For any $n$, we have
$$(f(x^1), \dots, f(x^n)) \sim \mathcal{N}(\mathbf{0}, \Sigma) \quad (1)$$
with $\Sigma_{pq} = C(x^p, x^q)$ giving the covariance between the points $f(x^p)$ and $f(x^q)$, which is a function of the inputs corresponding to the same cases $p$ and $q$. A common choice of covariance function is the Gaussian kernel¹
$$C(x^p, x^q) = v \exp \left[ -\frac{1}{2} \sum_{d=1}^{D} w_d \left( x^p_d - x^q_d \right)^2 \right] \quad (2)$$
where $D$ is the input dimension. The $w_d$ parameters (inverse squared correlation lengths) allow a different distance measure for each input dimension $d$. For a given problem, these parameters will be adjusted to the data at hand and, for irrelevant inputs, the corresponding $w_d$ will tend to zero. The role of the covariance function in the GP framework is similar to that of the kernels used in the Support Vector Machines community. This particular choice corresponds to a prior assumption that the underlying function $f$ is smooth and continuous. It accounts for a high correlation between the outputs of cases with nearby inputs. ¹This choice was motivated by the fact that, in [8], we were aiming at unified expressions for the GPs and the Relevance Vector Machines models, which employ such a kernel. More discussion about possible covariance functions can be found in [5]. 2.2 Predicting with Gaussian Processes Given this prior on the function $f$ and a set of data $\{x^i, t^i\}_{i=1}^{n}$, our aim, in this Bayesian setting, is to get the predictive distribution of the function value $f(x^*)$ corresponding to a new (given) input $x^*$. If we assume that an additive uncorrelated Gaussian white noise, with variance $v_0$, relates the targets (observations) to the function outputs, the distribution over the targets is Gaussian, with zero mean and covariance matrix $K$ such that $K_{pq} = \Sigma_{pq} + v_0 \delta_{pq}$. We then adjust the vector of hyperparameters $\Theta = [w_1, \dots, w_D, v, v_0]^T$ so as to maximise the log-likelihood $\mathcal{L}(\Theta) = \log p(\mathbf{t} \,|\, \Theta)$, where $\mathbf{t}$ is the vector of observations. In this framework, for a new $x^*$, the predictive distribution is simply obtained by conditioning on the training data.
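The Gaussian kernel of Eq. (2) translates directly into code. A sketch with our own variable names (`v` is the signal variance, `w` holds one inverse squared correlation length per input dimension):

```python
import numpy as np

def gaussian_kernel(X1, X2, v, w):
    """Gaussian kernel of Eq. (2):
    C(x^p, x^q) = v * exp(-0.5 * sum_d w_d (x^p_d - x^q_d)^2).

    X1: (n1, D) and X2: (n2, D) input arrays; v: signal variance;
    w: (D,) per-dimension weights; w_d -> 0 switches dimension d off."""
    diff = X1[:, None, :] - X2[None, :, :]          # (n1, n2, D)
    return v * np.exp(-0.5 * (w * diff ** 2).sum(-1))
```

Setting a weight to zero makes the kernel, and hence the GP, ignore that input, which is the automatic-relevance behaviour described in the text.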
The joint distribution of the variables being Gaussian, this conditional distribution, $p(f(x^*) \,|\, \mathbf{t})$, is also Gaussian, with mean and variance
$$\mu(x^*) = k(x^*)^T K^{-1} \mathbf{t} \quad (3)$$
$$\sigma^2(x^*) = C(x^*, x^*) - k(x^*)^T K^{-1} k(x^*) \quad (4)$$
where $k(x^*) = [C(x^*, x^1), \dots, C(x^*, x^n)]^T$ is the $n \times 1$ vector of covariances between the new point and the training targets, and $K_{pq} = \Sigma_{pq} + v_0 \delta_{pq}$, with $C(\cdot, \cdot)$ as given by (2). The predictive mean serves as a point estimate of the function output, $\hat{f}(x^*) = \mu(x^*)$, with uncertainty $\sigma^2(x^*)$. It is also a point estimate for the target, $\hat{t}^* = \mu(x^*)$, with variance $\sigma^2(x^*) + v_0$. 3 Prediction at a random input If we now assume that the input distribution is Gaussian, $x^* \sim \mathcal{N}(u, \Sigma_x)$, the predictive distribution is obtained by integrating over the input distribution:
$$p(f(x^*) \,|\, u, \Sigma_x) = \int p(f(x^*) \,|\, x^*)\, p(x^*)\, \mathrm{d}x^* \quad (5)$$
where $p(f(x^*) \,|\, x^*)$ is Normal, as specified by (3) and (4). 3.1 Gaussian approximation Given that this integral is analytically intractable ($p(f(x^*) \,|\, x^*)$ is a complicated function of $x^*$), we opt for an analytical Gaussian approximation and only compute the mean and variance of $p(f(x^*) \,|\, u, \Sigma_x)$. Using the law of iterated expectations and of conditional variance, the ‘new’ mean and variance are given by
$$m(u, \Sigma_x) = E_{x^*}[\mu(x^*)] \quad (6)$$
$$v(u, \Sigma_x) = E_{x^*}[\sigma^2(x^*)] + \mathrm{Var}_{x^*}(\mu(x^*)) \quad (7)$$
where $E_{x^*}$ indicates the expectation under the input distribution $p(x^*)$. In our initial development, we made additional approximations ([2]): first and second order Taylor expansions of $\mu(x^*)$ and $\sigma^2(x^*)$ respectively, around $u$, led to
$$m(u, \Sigma_x) \approx \mu(u) \quad (8)$$
$$v(u, \Sigma_x) \approx \sigma^2(u) + \frac{1}{2} \mathrm{Tr} \left[ \left. \frac{\partial^2 \sigma^2(x^*)}{\partial x^* \partial x^{*T}} \right|_{x^* = u} \Sigma_x \right] + \left. \frac{\partial \mu(x^*)}{\partial x^*} \right|_{x^* = u}^{T} \Sigma_x \left. \frac{\partial \mu(x^*)}{\partial x^*} \right|_{x^* = u} \quad (9)$$
The detailed calculations can be found in [2]. In [8], we derived the exact expressions of the first and second moments. Rewriting the predictive mean $\mu(x^*)$
as a linear combination of the covariances between the new point and the training points (as suggested in [12]), $\mu(x^*) = \sum_{i=1}^{n} \beta_i C(x^*, x^i)$ with $\beta = K^{-1} \mathbf{t}$, the calculation of $m(u, \Sigma_x)$, with our choice of covariance function, then involves the product of two Gaussian functions:
$$m(u, \Sigma_x) = \sum_{i=1}^{n} \beta_i \int C(x^*, x^i)\, p(x^*)\, \mathrm{d}x^* \quad (10)$$
This leads to (refer to [9] for details)
$$m(u, \Sigma_x) = q^T \beta \quad (11)$$
with $q_i = v\, |I + W^{-1} \Sigma_x|^{-1/2} \exp \left[ -\frac{1}{2} (u - x^i)^T (W + \Sigma_x)^{-1} (u - x^i) \right]$, where $W = \mathrm{diag}(w_1^{-1}, \dots, w_D^{-1})$ and $I$ is the $D \times D$ identity matrix. In the same manner, we obtain for the variance
$$v(u, \Sigma_x) = v - \mathrm{Tr} \left[ \left( K^{-1} - \beta \beta^T \right) Q \right] - m(u, \Sigma_x)^2 \quad (12)$$
with
$$Q_{ij} = v^2\, |I + 2 W^{-1} \Sigma_x|^{-1/2} \exp \left[ -\frac{1}{2} (u - \bar{x}^{ij})^T \left( \tfrac{1}{2} W + \Sigma_x \right)^{-1} (u - \bar{x}^{ij}) - \frac{1}{4} (x^i - x^j)^T W^{-1} (x^i - x^j) \right] \quad (13)$$
where $\bar{x}^{ij} = \frac{1}{2}(x^i + x^j)$. 3.2 Monte-Carlo alternative Equation (5) can be solved by performing a numerical approximation of the integral, using a simple Monte-Carlo approach:
$$p(f(x^*) \,|\, u, \Sigma_x) \simeq \frac{1}{T} \sum_{t=1}^{T} p(f(x^*) \,|\, x^{*t}) \quad (14)$$
where the $x^{*t}$ are (independent) samples from $p(x^*)$. 4 Iterative $k$-step ahead prediction of time series For the multiple-step ahead prediction task of time series, the iterative method consists in making repeated one-step ahead predictions, up to the desired horizon. Consider the time series $y_1, \dots, y_t$ and the state-space model $y_{t+1} = f(x_t) + \epsilon_{t+1}$, where $x_t = [y_t, \dots, y_{t-L+1}]^T$ is the state at time $t$ (we assume that the lag $L$ is known) and the (white) noise has variance $v_0$. Then the “naive” iterative $k$-step ahead prediction method works as follows: it predicts only one time step ahead, using the estimate of the output of the current prediction, as well as previous outputs (up to the lag $L$), as the input to the prediction of the next time step, until the prediction $k$ steps ahead is made. That way, only the output estimates are used and the uncertainty induced by each successive prediction is not accounted for. Using the results derived in the previous section, we suggest to formally incorporate the uncertainty information about the intermediate regressor.
That is, as we predict ahead in time, we now view the lagged outputs as random variables. In this framework, the input at a given time step is a random vector with mean formed by the predicted means of the lagged outputs, given by (11). The $L \times L$ input covariance matrix has the different predicted variances on its diagonal (with the estimated noise variance $v_0$ added to them), computed with (12); the off-diagonal elements are the cross-covariances between the lagged outputs which, in the case of the exact solution, are also available in closed form [9]. 4.1 Illustrative examples The first example is intended to provide a basis for comparing the approximate and exact solutions, within the Gaussian approximation of (5), to the numerical solution (Monte-Carlo sampling from the true distribution), when the uncertainty is propagated as we predict ahead in time. We use the second example, inspired from real-life problems, to show that iteratively predicting ahead in time without taking account of the uncertainty induced by each successive prediction leads to inaccurate results, with unrealistically small error bars. We then assess the predictive performance of the different methods by computing the average absolute error ($E_1$), the average squared error ($E_2$) and the average minus log predictive density² ($E_3$), which measures the density of the actual true test output under the Gaussian predictive distribution, using its negative log as a measure of loss. 4.1.1 Forecasting the Mackey-Glass time series The Mackey-Glass chaotic time series constitutes a well-known benchmark and a challenge for the multiple-step ahead prediction task, due to its strong non-linearity [4]:
$$\frac{\mathrm{d}z(t)}{\mathrm{d}t} = -b z(t) + a \frac{z(t - \tau)}{1 + z(t - \tau)^{10}}$$
with $a = 0.2$, $b = 0.1$ and $\tau = 17$. The series is re-sampled and normalized. We choose $L$ lagged outputs for the state vector, $x_t = [y_t, y_{t-1}, \dots, y_{t-L+1}]$,
and the targets, $y_{t+1}$, are corrupted by a white noise of fixed variance. We train a GP model with a Gaussian kernel such as (2) on points taken at random from the series. Figure 1 shows the mean predictions with their uncertainties, given by the exact and approximate methods, and samples from the Monte-Carlo numerical approximation, from $k = 1$ to $k = 100$ steps ahead, for different starting points. Figure 2 shows the plot of the 100-step ahead mean predictions (left) and their $2\sigma$ uncertainties (right), given by the exact and approximate methods, as well as the sample mean and sample variance obtained with the numerical solution. These figures show the better performance of the exact method over the approximate one. Also, they allow us to validate the Gaussian approximation, noticing that the error bars encompass the samples from the true distribution. Table 1 provides a quantitative confirmation. Table 1: Average (over the test points) absolute error ($E_1$), squared error ($E_2$) and minus log predictive density ($E_3$) of the 100-step ahead predictions obtained using the exact method, the approximate one, and sampling from the true distribution. ²To evaluate these losses in the case of Monte-Carlo sampling, we use the sample mean and sample variance. Figure 1: Iterative method in action: simulation from 1 to 100 steps ahead for different starting points in the test series. Mean predictions with $2\sigma$ error bars given by the exact (dash) and approximate (dot) methods.
Also plotted, samples obtained using the numerical approximation. Figure 2: 100-step ahead mean predictions (left) and uncertainties (right) obtained using the exact method (dash), the approximate one (dot), and the sample mean and variance of the numerical solution (dash-dot). 4.1.2 Prediction of a pH process simulation We now compare the iterative $k$-step ahead prediction results obtained when propagating the uncertainty (using the approximate method) and when using the output estimates only (the naive approach). For doing so, we use the pH neutralisation process benchmark presented in [3]. The training and test data consist of pH values (outputs $y$ of the process) and a control input signal ($u$). With a model of the form $y_{t+1} = f(y_t, \dots, u_t, \dots)$, we train our GP on a set of training examples and consider a separate test set (all data have been normalized). Figure 3 shows the 10-step ahead predicted means and variances obtained when propagating the uncertainty and when using information on the past predicted means only. On the computed losses, the approximate method clearly outperforms the naive one: in particular, the minus log predictive density ($E_3$) of the naive method is larger by orders of magnitude, reflecting its unrealistically small predictive variances. Figure 3: Predictions from 1 to 10 steps ahead (left); 10-step ahead mean predictions with the corresponding variances, when propagating the uncertainty (dot) and when using the previous point estimates only (dash).
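The qualitative effect just reported, naive error bars far too small while propagated ones grow realistically with the horizon, can be reproduced with a toy one-step model iterated $k$ steps ahead. Here the Monte-Carlo route of Eq. (14) stands in for the analytical approximation, and the dynamics are purely illustrative, not the pH process:

```python
import numpy as np

rng = np.random.default_rng(3)

def one_step(x):
    """Toy nonlinear one-step predictor standing in for the GP mean."""
    return np.sin(2.0 * x)

def naive_k_step(x0, k):
    """Iterate point estimates only: no accumulated variance at all."""
    x = x0
    for _ in range(k):
        x = one_step(x)
    return x, 0.0

def mc_k_step(x0, k, noise_var=0.01, n_samples=3000):
    """Propagate uncertainty by sampling the process noise at every step."""
    x = np.full(n_samples, x0)
    for _ in range(k):
        x = one_step(x) + rng.normal(0.0, np.sqrt(noise_var), n_samples)
    return x.mean(), x.var()

m_naive, v_naive = naive_k_step(0.4, 10)
m_mc, v_mc = mc_k_step(0.4, 10)
```

The Monte-Carlo variance exceeds the one-step noise variance after a few iterations, while the naive recursion reports none at all, which is exactly the over-confidence visible in Fig. 3.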
5 Conclusions We have presented a novel approach which allows us to use knowledge of the variance on inputs to Gaussian process models to achieve more realistic prediction variance in the case of noisy inputs. Iterating this approach allows us to use it as a method for efficient propagation of uncertainty in the multi-step ahead prediction task for non-linear time series. In experiments on simulated dynamic systems, comparing our Gaussian approximation to Monte Carlo simulations, we found that the propagation method closely matches the Monte Carlo results, and that both approaches achieve more realistic error bars than a naive approach which ignores the uncertainty on the current state. This method can help in understanding the underlying dynamics of a system, as well as being useful, for instance, in a model predictive control framework where knowledge of the accuracy of the model predictions over the whole prediction horizon is required (see [6] for a model predictive control law based on Gaussian processes taking account of the prediction uncertainty). Note that this method is also useful in its own right in the case of noisy model inputs, assuming they have a Gaussian distribution. Acknowledgements Many thanks to Mike Titterington for his useful comments. The authors gratefully acknowledge the support of the Multi-Agent Control Research Training Network - EC TMR grant HPRN-CT-1999-00107, and RM-S is grateful for EPSRC grant "Modern statistical approaches to off-equilibrium modelling for nonlinear system control" GR/M76379/01. References [1] O'Hagan, A. (1978) Curve fitting and optimal design for prediction. Journal of the Royal Statistical Society B 40:1-42. [2] Girard, A. & Rasmussen, C. E. & Murray-Smith, R. (2002) Gaussian Process Priors With Uncertain Inputs: Multiple-Step Ahead Prediction. Technical Report, TR-2002-119, Dept. of Computing Science, University of Glasgow. [3] Henson, M. A. & Seborg, D. E.
(1994) Adaptive nonlinear control of a pH neutralisation process. IEEE Transactions on Control Systems Technology 2:169-183. [4] Mackey, M. C. & Glass, L. (1977) Oscillation and Chaos in Physiological Control Systems. Science 197:287-289. [5] MacKay, D. J. C. (1997) Gaussian Processes - A Replacement for Supervised Neural Networks? Lecture notes for a tutorial at NIPS 1997. [6] Murray-Smith, R. & Sbarbaro-Hofer, D. (2002) Nonlinear adaptive control using non-parametric Gaussian process prior models. 15th IFAC World Congress on Automatic Control, Barcelona. [7] Neal, R. M. (1995) Bayesian Learning for Neural Networks. PhD thesis, Dept. of Computer Science, University of Toronto. [8] Quiñonero Candela, J. & Girard, A. & Larsen, J. (2002) Propagation of Uncertainty in Bayesian Kernel Models - Application to Multiple-Step Ahead Forecasting. Submitted to ICASSP 2003. [9] Quiñonero Candela, J. & Girard, A. (2002) Prediction at an Uncertain Input for Gaussian Processes and Relevance Vector Machines - Application to Multiple-Step Ahead Time-Series Forecasting. Technical Report, IMM, Danish Technical University. [10] Rasmussen, C. E. (1996) Evaluation of Gaussian Processes and other Methods for Non-Linear Regression. PhD thesis, Dept. of Computer Science, University of Toronto. [11] Williams, C. K. I. & Rasmussen, C. E. (1996) Gaussian Processes for Regression. Advances in Neural Information Processing Systems 8, MIT Press. [12] Williams, C. K. I. (2002) Gaussian Processes. To appear in The Handbook of Brain Theory and Neural Networks, Second edition, MIT Press.
2002
Maximum Likelihood and the Information Bottleneck

Noam Slonim    Yair Weiss
School of Computer Science & Engineering, Hebrew University, Jerusalem 91904, Israel
{noamm,yweiss}@cs.huji.ac.il

Abstract

The information bottleneck (IB) method is an information-theoretic formulation for clustering problems. Given a joint distribution p(x,y), this method constructs a new variable T that defines partitions over the values of X that are informative about Y. Maximum likelihood (ML) of mixture models is a standard statistical approach to clustering problems. In this paper, we ask: how are the two methods related? We define a simple mapping between the IB problem and the ML problem for the multinomial mixture model. We show that under this mapping the problems are strongly related. In fact, for uniform input distribution over X or for large sample size, the problems are mathematically equivalent. Specifically, in these cases, every fixed point of the IB-functional defines a fixed point of the (log) likelihood and vice versa. Moreover, the values of the functionals at the fixed points are equal under simple transformations. As a result, in these cases, every algorithm that solves one of the problems induces a solution for the other.

1 Introduction

Unsupervised clustering is a central paradigm in data analysis. Given a set of objects X, one would like to find a partition T which optimizes some score function. Tishby et al. [1] proposed a principled information-theoretic approach to this problem. In this approach, given the joint distribution p(x,y), one looks for a compact representation of X which preserves as much information as possible about Y (see [2] for a detailed discussion). The mutual information I(X;Y) between the random variables X and Y is given by [3]

  I(X;Y) = Σ_{x,y} p(x,y) log [ p(x,y) / (p(x) p(y)) ].
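Since mutual information is the currency of everything that follows, it is worth having the double sum above in executable form. A small numpy helper (a sketch; the example distributions are made up for illustration):

```python
import numpy as np

def mutual_information(pxy):
    # I(X;Y) = sum_{x,y} p(x,y) log[ p(x,y) / (p(x) p(y)) ], in nats
    px = pxy.sum(axis=1, keepdims=True)   # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = pxy > 0                        # 0 log 0 = 0 convention
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

# independent variables: the joint factorizes, so I(X;Y) = 0
p_ind = np.outer([0.3, 0.7], [0.5, 0.5])

# perfectly dependent fair coin: I(X;Y) = H(X) = log 2
p_dep = np.array([[0.5, 0.0],
                  [0.0, 0.5]])
```

For `p_ind` the helper returns 0, and for `p_dep` it returns log 2, matching the two standard sanity checks for the definition.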
In [1] it is argued that both the compactness of the representation and the preserved relevant information are naturally measured by mutual information, hence the above principle can be formulated as a trade-off between these quantities. Specifically, Tishby et al. [1] suggested to introduce a compressed representation T of X by defining a mapping q(t|x). The compactness of the representation is then determined by I(T;X), while the quality of the clusters T is measured by the fraction of information they capture about Y, I(T;Y)/I(X;Y). The IB problem can be stated as finding a (stochastic) mapping q(t|x) such that the IB-functional

  F = I(T;X) − β I(T;Y)

is minimized, where β is a positive Lagrange multiplier that determines the trade-off between compression and precision. It was shown in [1] that this problem has an exact optimal (formal) solution without any assumption about the origin of the joint distribution p(x,y).

The standard statistical approach to clustering is mixture modeling. We assume the measurements y for each x come from one of |T| possible statistical sources, each with its own parameters θ_t (e.g. the mean and covariance in Gaussian mixtures). Clustering corresponds to first finding the maximum likelihood estimates of the parameters and then using these parameters to calculate the posterior probability that the measurements at x were generated by each source. These posterior probabilities define a "soft" clustering of X values.

While both approaches try to solve the same problem, the viewpoints are quite different. In the information-theoretic approach no assumption is made regarding how the data was generated, but we assume that the joint distribution p(x,y) is known exactly. In the maximum-likelihood approach we assume a specific generative model for the data and assume we have samples n(x,y), not the true probability. In spite of these conceptual differences we show that, under a proper choice of the generative model, these two problems are strongly related.
Specifically, we use the multinomial mixture model (a.k.a. the one-sided [4] or the asymmetric clustering model [5]), and provide a simple "mapping" between the concepts of one problem and those of the other. Using this mapping we show that in general, searching for a solution of one problem induces a search in the solution space of the other. Furthermore, for uniform input distribution p(x) or for large sample sizes, we show that the problems are mathematically equivalent. Hence, in these cases, any algorithm which solves one problem induces a solution for the other.

2 Short review of the IB method

In the IB framework, one is given as input a joint distribution p(x,y). Given this distribution, a compressed representation T of X is introduced through the stochastic mapping q(t|x). The goal is to find q(t|x) such that the IB-functional F = I(T;X) − β I(T;Y) is minimized for a given value of β. The joint distribution over X, Y and T is defined through the IB Markovian independence relation, T ↔ X ↔ Y. Specifically, every choice of q(t|x) defines a specific joint probability q(x,y,t) = p(x,y) q(t|x). Therefore, the distributions q(t) and q(y|t) that are involved in calculating the IB-functional are given by

  q(t) = Σ_{x,y} q(x,y,t) = Σ_x p(x) q(t|x),
  q(y|t) = (1/q(t)) Σ_x q(x,y,t) = (1/q(t)) Σ_x p(x,y) q(t|x).   (1)

In principle every choice of q(t|x) is possible, but as shown in [1], if q(t) and q(y|t) are given, the choice that minimizes F is defined through

  q(t|x) = [ q(t) / Z(x,β) ] exp( −β D_KL[ p(y|x) || q(y|t) ] ),   (2)

where Z(x,β) is the normalization (partition) function and D_KL[p || q] = Σ_y p(y) log [ p(y)/q(y) ] is the Kullback-Leibler divergence. Iterating over this equation and the IB-step defined in Eq. (1) defines an iterative algorithm that is guaranteed to converge to a (local) fixed point of F [1].

3 Short review of ML for mixture models

In a multinomial mixture model, we assume that Y takes on discrete values and sample it from a multinomial distribution θ(y | t(x)), where t(x) denotes x's label.
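One pass of this iterative algorithm, Eq. (1) followed by Eq. (2), can be sketched in a few lines of numpy (a toy illustration with an arbitrary strictly positive random joint; this is not the authors' implementation):

```python
import numpy as np

def ib_iteration(pxy, q_t_given_x, beta):
    # Eq. (1): marginals of the compressed variable from the current q(t|x)
    px = pxy.sum(axis=1)                   # p(x)
    qt = px @ q_t_given_x                  # q(t) = sum_x p(x) q(t|x)
    qy_t = (pxy.T @ q_t_given_x) / qt      # q(y|t), one column per cluster t
    # Eq. (2): q(t|x) proportional to q(t) * exp(-beta * KL[p(y|x) || q(y|t)])
    py_x = pxy / px[:, None]
    kl = (py_x[:, :, None] * np.log(py_x[:, :, None] / qy_t[None, :, :])).sum(axis=1)
    new_q = qt[None, :] * np.exp(-beta * kl)
    return new_q / new_q.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
pxy = rng.random((6, 4)) + 0.1             # strictly positive toy joint p(x,y)
pxy /= pxy.sum()
q = rng.random((6, 2))                     # random initial q(t|x), |T| = 2
q /= q.sum(axis=1, keepdims=True)
for _ in range(50):
    q = ib_iteration(pxy, q, beta=5.0)
```

Each call performs exactly one IB-step plus one update of Eq. (2); iterating it drives the IB-functional to a local fixed point, as stated above.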
In the one-sided clustering model [4][5] we further assume that there can be multiple observations y corresponding to a single x, but that they are all sampled from the same multinomial distribution. This model can be described through the following generative process:

- For each x, choose a unique label t(x) by sampling from π(t).
- For i = 1, …, N:
  – choose x_i by sampling from p(x);
  – choose y_i by sampling from θ(y | t(x_i)) and increase n(x_i, y_i) by one.

Let t denote the random vector that defines the (typically hidden) labels, or topics, for all x ∈ X. The complete likelihood is given by

  p(y, t | π, θ) = Π_x π(t(x)) Π_{i: x_i = x} θ(y_i | t(x))   (3)
                 = Π_x π(t(x)) Π_y θ(y | t(x))^{n(x,y)},   (4)

where n(x,y) is a count matrix. The (true) likelihood is defined through summing over all the possible choices of t,

  L(n(x,y); π, θ) = Σ_t p(y, t | π, θ).   (5)

Given n, the goal of ML estimation is to find an assignment for the parameters π, θ and p(x) such that the likelihood is (at least locally) maximized. Since it is easy to show that the ML estimate for p(x) is just the empirical counts n(x)/N (where N = Σ_{x,y} n(x,y)), we further focus only on estimating π and θ. A standard algorithm for this purpose is the EM algorithm [6]. Informally, in the E-step we replace the missing value of t(x) by its posterior distribution, which we denote by q_x(t). In the M-step we use that distribution to re-estimate π and θ. Using standard derivations it is easy to verify that in our context the E-step is defined through

  q_x(t) = (1/Z_1) π(t) Π_y θ(y|t)^{n(x,y)}   (6)
         = (1/Z_2) π(t) exp( n(x) Σ_y n(y|x) log θ(y|t) )   (7)
         = (1/Z_3) π(t) exp( −n(x) D_KL[ n(y|x) || θ(y|t) ] ),   (8)

where Z_1, Z_2, Z_3 are normalization factors and n(y|x) = n(x,y)/n(x). The M-step is simply given by

  π(t) ∝ Σ_x q_x(t),   θ(y|t) ∝ Σ_x n(x,y) q_x(t).   (9)

Iterating over these EM steps is guaranteed to converge to a local fixed point of the likelihood.
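These E- and M-steps are compact enough to write out directly. The sketch below (hypothetical variable names and random toy counts, not the authors' code) implements Eqs. (6) and (9) in log space and tracks the log of the likelihood in Eq. (5), which EM is guaranteed not to decrease:

```python
import numpy as np

def e_step(n, pi, theta):
    # Eq. (6): q_x(t) proportional to pi(t) * prod_y theta(y|t)^n(x,y), in log space
    log_q = np.log(pi)[None, :] + n @ np.log(theta)
    log_q -= log_q.max(axis=1, keepdims=True)        # for numerical stability
    q = np.exp(log_q)
    return q / q.sum(axis=1, keepdims=True)

def m_step(n, q):
    # Eq. (9): pi(t) ∝ sum_x q_x(t), theta(y|t) ∝ sum_x n(x,y) q_x(t)
    pi = q.sum(axis=0) / q.shape[0]
    theta = n.T @ q
    return pi, theta / theta.sum(axis=0, keepdims=True)

def log_likelihood(n, pi, theta):
    # log of Eq. (5): log-sum-exp over the hidden label of each x
    a = np.log(pi)[None, :] + n @ np.log(theta)
    amax = a.max(axis=1, keepdims=True)
    return float((amax[:, 0] + np.log(np.exp(a - amax).sum(axis=1))).sum())

rng = np.random.default_rng(1)
n = rng.integers(1, 10, size=(20, 6)).astype(float)  # toy count matrix n(x,y)
pi = np.full(3, 1.0 / 3.0)                           # |T| = 3 sources
theta = rng.random((6, 3)) + 0.5
theta /= theta.sum(axis=0, keepdims=True)

history = [log_likelihood(n, pi, theta)]
for _ in range(25):
    q = e_step(n, pi, theta)
    pi, theta = m_step(n, q)
    history.append(log_likelihood(n, pi, theta))
```

The monotonically non-decreasing `history` is exactly the fixed-point convergence property cited above.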
Moreover, every fixed point of the likelihood defines a fixed point of this algorithm. An alternative derivation [7] is to define the free energy functional

  F[q, π, θ] = −Σ_x Σ_t q_x(t) [ log π(t) + Σ_y n(x,y) log θ(y|t) ]   (10)
             + Σ_x Σ_t q_x(t) log q_x(t).   (11)

The E-step then involves minimizing F with respect to q, while the M-step minimizes it with respect to π and θ. Since this functional is bounded (under mild conditions), the EM algorithm will converge to a local fixed point of F, which corresponds to a fixed point of the likelihood. At these fixed points, F becomes identical to −log L(n(x,y); π, θ).

4 The ML ↔ IB mapping

As already mentioned, the IB problem and the ML problem stem from different motivations and involve different "settings". Hence, it is not entirely clear what the purpose of "mapping" between these problems is. Here, we define this mapping to achieve two goals. The first is theoretically motivated: using the mapping we show some mathematical equivalence between both problems. The second is practically motivated: we show that algorithms designed for one problem are (in some cases) suitable for solving the other.

A natural mapping would be to identify each distribution with its corresponding one. However, this direct mapping is problematic. Assume that we are mapping from ML to IB. If we directly map q_x(t), π(t), θ(y|t) to q(t|x), q(t), q(y|t), respectively, there is obviously no guarantee that the IB Markovian independence relation will hold once we complete the mapping. Specifically, using this relation to extract q(t) through Eq. (1) will in general result in a different prior over T than simply defining q(t) = π(t). However, we notice that once we have defined q(t|x) and p(x,y), the other distributions can be extracted by performing the IB-step defined in Eq. (1). Moreover, as already shown in [1], performing this step can only improve (decrease) the corresponding IB-functional. A similar phenomenon is present once we map from IB to ML.
Although in principle there are no "consistency" problems in mapping directly, we know that once we have defined q_x(t) and n(x,y), we can extract π and θ by a simple M-step. This step, by definition, will only improve the likelihood, which is our goal in this setting. The only remaining issue is to define a corresponding component in the ML setting for the trade-off parameter β. As we will show in the next section, the natural choice for this purpose is the sample size, N = Σ_{x,y} n(x,y). Therefore, to summarize, we define the ML ↔ IB mapping by

  q_x(t) ↔ q(t|x),   n(x,y)/N ↔ p(x,y),   N/r ↔ β,   (12)

where r is a positive (scaling) constant, and the mapping is completed by performing an IB-step or an M-step, according to the mapping direction. Notice that under this mapping, every search in the solution space of the IB problem induces a search in the solution space of the ML problem, and vice versa (see Figure 2).

Observation 4.1 When X is uniformly distributed (i.e., n(x) or p(x) is constant), the ML ↔ IB mapping is equivalent to a direct mapping of each distribution to its corresponding one.

This observation is a direct result of the fact that if X is uniformly distributed, then the IB-step defined in Eq. (1) and the M-step defined in Eq. (9) are mathematically equivalent.

Observation 4.2 When X is uniformly distributed, the EM algorithm is equivalent to the IB iterative optimization algorithm under the ML ↔ IB mapping with r = |X|.

Again, this observation is a direct result of the equivalence of the IB-step and the M-step for a uniform prior over X. Additionally, we notice that in this case n(x) = N/|X| = N/r = β, hence Eq. (6) and Eq. (2) are also equivalent. It is important to emphasize, though, that this equivalence holds only for the specific choice β = n(x). While the IB iterative algorithm (and problem) are clearly meaningful for any value of β, there is no such freedom (for good or worse) in the ML setting, and the exponential factor in EM must be n(x).
5 Comparing ML and IB

Claim 5.1 When X is uniformly distributed and r = |X|, all the fixed points of the likelihood L are mapped to all the fixed points of the IB-functional F_IB with β = n(x). Moreover, at the fixed points, −(1/r) log L = F_IB + C, with C constant.

Corollary 5.2 When X is uniformly distributed, every algorithm which finds a fixed point of L induces a fixed point of F_IB with β = n(x), and vice versa. When the algorithm finds several fixed points, the solution that maximizes L is mapped to the one that minimizes F_IB.

Proof: We prove the direction from ML to IB; the opposite direction is similar. We assume that we are given observations n(x,y) where n(x) is constant, and parameters π, θ that define a fixed point of the likelihood L. As a result, this is also a fixed point of the EM algorithm (where q_x(t) is defined through an E-step). Using Observation 4.2 it follows that this fixed point is mapped to a fixed point of F_IB with β = n(x), as required. Since at the fixed point −log L = F, it is enough to show the relationship between F and F_IB. Rewriting F from Eq. (10) we get

  F[q, π, θ] = Σ_{x,t} q_x(t) log [ q_x(t) / π(t) ] − Σ_{t,y} log θ(y|t) Σ_x n(x,y) q_x(t).   (13)

Using the ML ↔ IB mapping and Observation 4.1 we get

  F = Σ_{x,t} q(t|x) log [ q(t|x) / q(t) ] − rβ Σ_{t,y} log q(y|t) Σ_x p(x,y) q(t|x).   (14)

Multiplying both sides by p(x) = 1/|X| = 1/r and using the IB Markovian independence relation, we find that

  (1/r) F = Σ_{x,t} p(x) q(t|x) log [ q(t|x) / q(t) ] − β Σ_{t,y} q(t) q(y|t) log q(y|t).   (15)

Subtracting the (constant) β H(Y) = −β Σ_{t,y} q(t) q(y|t) log q(y) from both sides gives

  (1/r) F − β H(Y) = I(T;X) − β I(T;Y) = F_IB,   (16)

as required. We emphasize again that this equivalence is for the specific value β = n(x).

Corollary 5.3 When X is uniformly distributed and r = |X|, every algorithm decreases F if and only if it decreases F_IB with β = n(x).

This corollary is a direct result of the above proof, which showed the equivalence of the free energy of the model and the IB-functional (up to linear transformations).
The previous claims dealt with the special case of a uniform prior over X. The following claims provide similar results for the general case, when N (or β) is large enough.

Claim 5.4 For N → ∞ (or β → ∞), all the fixed points of L are mapped to all the fixed points of F_IB, and vice versa. Moreover, at the fixed points, −(1/r) log L = F_IB + C.

Corollary 5.5 When N → ∞, every algorithm which finds a fixed point of L induces a fixed point of F_IB with β → ∞, and vice versa. When the algorithm finds several different fixed points, the solution that maximizes L is mapped to the solution that minimizes F_IB.

A similar result was recently obtained independently in [8] for the special case of "hard" clustering. It is also important to keep in mind that in many clustering applications a uniform prior over X is "forced" during pre-processing to avoid undesirable bias. In particular, this was done in several previous applications of the IB method (see [2] for details).

Figure 1: Progress of F_IB and F for small and large values of β (while running iIB, plotting F_IB against (1/r)F − βH(Y)) and for small and large values of N (while running EM, plotting F against r(F_IB + βH(Y))).

Proof: Again, we prove only the direction from ML to IB, as the opposite direction is similar. We are given n(x,y) with N = Σ_{x,y} n(x,y) → ∞, and parameters π, θ that define a fixed point of L. Using the E-step in Eq. (6) we extract q_x(t), ending up with a fixed point of the EM algorithm. We notice that N → ∞ implies n(x) → ∞. Therefore, the mapping q_x(t) becomes deterministic:

  q_x(t) = 1 if t = argmin_{t'} D_KL[ n(y|x) || θ(y|t') ], and 0 otherwise.   (17)

Performing the ML ↔ IB mapping (including the IB-step), it is easy to verify that we get q(y|t) = θ(y|t) (but q(t) ≠ π(t) if the prior over X is not uniform). After completing the mapping we try to update q(t|x) through Eq. (2).
Since now β → ∞, it follows that q(t|x) will remain deterministic. Specifically,

  q(t|x) = 1 if t = argmin_{t'} D_KL[ p(y|x) || q(y|t') ], and 0 otherwise,   (18)

which is equal to its previous value. Therefore, we are at a fixed point of the IB iterative algorithm, and thereby at a fixed point of the IB-functional F_IB, as required. To show that −(1/r) log L = F_IB + C, we notice again that at the fixed point F = −log L. From Eq. (13), since the entropy term vanishes for deterministic q_x(t), we see that

  F = −Σ_{t,y} log θ(y|t) Σ_x n(x,y) q_x(t).   (19)

Using the ML ↔ IB mapping and similar algebra as above, we find that

  lim (1/r) F = lim [ −β I(T;Y) + β H(Y) ] = lim [ F_IB + β H(Y) ],   (20)

where the limits are taken as β → ∞ (the I(T;X) term is bounded, hence negligible relative to β).

Corollary 5.6 When N → ∞, every algorithm decreases F if and only if it decreases F_IB with β → ∞.

How large must N (or β) be? We address this question through numeric simulations. Roughly speaking, the value of N for which the above claims (approximately) hold is related to the "amount of uniformity" in n(x). Specifically, a crucial step in the above proof assumed that each n(x) is large enough that q_x(t) becomes deterministic. Clearly, when n(x) is less uniform, achieving this situation requires larger N values.

6 Simulations

We performed several different simulations using different IB and ML algorithms. Due to lack of space, only one example is reported below.

Figure 2: In general, ML (for mixture models) and IB operate in different solution spaces: IB in the "real" world, where T ↔ X ↔ Y and one minimizes D_KL[q(x,y,t) || Q(x,y,t)], and ML in the "ideal" world, where X ↔ T ↔ Y and one minimizes D_KL[p(x,y) || L(n(x,y); π, θ)]. Nonetheless, a sequence of probabilities that is obtained through some optimization routine (e.g., EM) in the "ML space" can be mapped to a sequence of probabilities in the "IB space", and vice versa. The main result of this paper is that under some conditions these two sequences are completely equivalent.

In this example we used a subset of the 20-Newsgroups corpus [9], consisting of documents randomly chosen from several different discussion groups.
Denoting the documents by X and the words by Y, after pre-processing [10] we are left with |X| documents over a vocabulary of |Y| words, with a total of N word occurrences. Since our main goal was to check the differences between IB and ML for different values of N (or β), we further produced another dataset. In this data we randomly chose only a small fraction of the word occurrences of every document x, ending up with a much smaller sample size N. For both datasets we clustered the documents into |T| clusters, using both EM and the iterative IB (iIB) algorithm (where we took p(x,y) = n(x,y)/N, β = N/r, r = |X|). For each algorithm we used the ML ↔ IB mapping to calculate F and F_IB during the process (e.g., for iIB, after each iteration we mapped from IB to ML, including the M-step, and calculated F). We repeated this procedure for many different initializations, for each dataset. In these runs we found that usually both algorithms improved both functionals monotonically. Comparing the functionals during the process, we see that for the smaller sample size the differences are indeed more evident (Figure 1). Comparing the final values of the functionals (after enough iterations to typically yield convergence), we found that in some runs iIB converged to a smaller value of F than EM, and in other runs EM converged to a smaller value of F_IB. Thus, occasionally, iIB finds a better ML solution or EM finds a better IB solution. This phenomenon was much more common for the large sample size case.

7 Discussion

While we have shown that the ML and IB approaches are equivalent under certain conditions, it is important to keep in mind the different assumptions both approaches make regarding the joint distribution over X, Y and T. The mixture model (1) assumes that Y is independent of X given T, and (2) assumes that p(y|x) is one of a small number (|T|) of possible conditional distributions. For this reason, the marginal probability over X and Y induced by the model is usually different from the empirical distribution p̂(x,y) = n(x,y)/N.
Indeed, an alternative view of ML estimation is as minimizing D_KL[ p̂(x,y) || L(n(x,y); π, θ) ], the divergence between the empirical distribution and the model. On the other hand, in the IB framework, q(x,y,t) is defined through the IB Markovian independence relation, T ↔ X ↔ Y. Therefore, the solution space is the family of distributions for which this relation holds and whose marginal distribution over X and Y is consistent with the input. Interestingly, it is possible to give an alternative formulation of the IB problem which also involves KL minimization [11]. In this formulation the IB problem is related to minimizing D_KL[ q(x,y,t) || Q(x,y,t) ], where Q(x,y,t) denotes the family of distributions for which the mixture model assumption holds, X ↔ T ↔ Y.¹ In this sense, we may say that while solving the IB problem one tries to minimize the KL with respect to the "ideal" world, in which T separates X from Y, whereas while solving the ML problem one assumes an "ideal" world and tries to minimize the KL with respect to the given marginal distribution p̂(x,y). Our theoretical analysis shows that under the ML ↔ IB mapping these two procedures are in some cases equivalent (see Figure 2).

Once we are able to map between ML and IB, it should be interesting to try to adopt additional concepts from one approach to the other. In the following we provide two such examples. In the IB framework, for large enough β, the quality of a given solution is measured through I(T;Y)/I(X;Y) [1]. This measure provides a theoretical upper bound, which can be used for purposes of model selection and more. Using the ML ↔ IB mapping, we can now adopt this measure for the ML estimation problem (for large enough N). In EM, the exponential factor n(x) in general depends on x. However, its analogous component in the IB framework, β, obviously does not. Nonetheless, in principle it is possible to reformulate the IB problem while defining β = β(x) (without changing the form of the optimal solution). We leave this issue for future research.
We have shown that for the multinomial mixture model, ML and IB are equivalent in some cases. It is worth noting that, in principle, by choosing a different generative model one may find further equivalences. Additionally, the IB method was recently extended to the multivariate case, where a new family of IB-like variational problems was presented and solved [11]. A natural question is to look for further generative models that can be mapped to these multivariate IB problems, and we are working in this direction.

Acknowledgments

Insightful discussions with Nir Friedman, Naftali Tishby and Gal Elidan are greatly appreciated.

References

[1] N. Tishby, F. Pereira, and W. Bialek. The Information Bottleneck method. In Proc. 37th Allerton Conference on Communication, Control and Computing, 1999.
[2] N. Slonim. The Information Bottleneck: theory and applications. Ph.D. thesis, The Hebrew University, 2002.
[3] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, New York, 1991.
[4] T. Hofmann, J. Puzicha, and M. I. Jordan. Learning from dyadic data. In Proc. of NIPS-11, 1998.
[5] J. Puzicha, T. Hofmann, and J. M. Buhmann. Histogram clustering for unsupervised segmentation and image retrieval. Pattern Recognition Letters 20(9):899-909, 1999.
[6] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39:1-38, 1977.
[7] R. M. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan (editor), Learning in Graphical Models, pp. 355-368, 1998.
[8] L. Hermes, T. Zöller, and J. M. Buhmann. Parametric distributional clustering for image segmentation. In Proc. of the European Conference on Computer Vision (ECCV), 2002.
[9] K. Lang. Learning to filter netnews. In Proc. of the 12th Int. Conf. on Machine Learning, 1995.
[10] N. Slonim, N. Friedman, and N. Tishby.
Unsupervised document classification using sequential information maximization. In Proc. of SIGIR-25, 2002.
[11] N. Friedman, O. Mosenzon, N. Slonim, and N. Tishby. Multivariate Information Bottleneck. In Proc. of UAI-17, 2001.

¹ The KL with respect to a family of distributions is defined as the minimum over all the members of that family. Therefore, here, both arguments of the KL change during the optimization, and the distributions involved in the minimization are over all three random variables.
The Stability of Kernel Principal Components Analysis and its Relation to the Process Eigenspectrum

John Shawe-Taylor
Royal Holloway, University of London
john@cs.rhul.ac.uk

Christopher K. I. Williams
School of Informatics, University of Edinburgh
c.k.i.williams@ed.ac.uk

Abstract

In this paper we analyze the relationships between the eigenvalues of the m × m Gram matrix K for a kernel k(·,·) corresponding to a sample x_1, …, x_m drawn from a density p(x) and the eigenvalues of the corresponding continuous eigenproblem. We bound the differences between the two spectra and provide a performance bound on kernel PCA.

1 Introduction

Over recent years there has been a considerable amount of interest in kernel methods for supervised learning (e.g. Support Vector Machines and Gaussian Process prediction) and for unsupervised learning (e.g. kernel PCA, Schölkopf et al. (1998)). In this paper we study the stability of the subspace of feature space extracted by kernel PCA with respect to the sample of size m, and relate this to the feature space that would be extracted in the infinite sample-size limit. This analysis essentially "lifts" into a (potentially infinite dimensional) feature space an analysis which can also be carried out for PCA, comparing the k-dimensional eigenspace extracted from a sample covariance matrix with the k-dimensional eigenspace extracted from the population covariance matrix, and comparing the residuals from the k-dimensional compression for the m-sample and for the population. Earlier work by Shawe-Taylor et al. (2002) discussed the concentration of spectral properties of Gram matrices and of the residuals of fixed projections. However, these results gave deviation bounds on the sampling variability of the eigenvalues of the Gram matrix, but did not address the relationship of sample and population eigenvalues, or the estimation problem of the residual of PCA on new data. The structure of the remainder of the paper is as follows.
In section 2 we provide background on the continuous kernel eigenproblem, and on the relationship between the eigenvalues of certain matrices and the expected residuals when projecting into spaces of dimension k. Section 3 provides inequality relationships between the process eigenvalues and the expectation of the Gram matrix eigenvalues. Section 4 presents some concentration results and uses these to develop an approximate chain of inequalities. In section 5 we obtain a performance bound on kernel PCA, relating the performance on the training sample to the expected performance with respect to p(x).

2 Background

2.1 The kernel eigenproblem

For a given kernel function k(·,·) the m × m Gram matrix K has entries k(x_i, x_j), i, j = 1, …, m, where {x_i : i = 1, …, m} is a given dataset. For Mercer kernels K is symmetric positive semi-definite. We denote the eigenvalues of the Gram matrix as λ_1 ≥ λ_2 ≥ … ≥ λ_m ≥ 0 and write its eigendecomposition as K = ZΛZ', where Λ is a diagonal matrix of the eigenvalues and Z' denotes the transpose of the matrix Z. The eigenvalues are also referred to as the spectrum of the Gram matrix.

We now describe the relationship between the eigenvalues of the Gram matrix and those of the underlying process. For a given kernel function and density p(x) on a space X, we can also write down the eigenfunction problem

  ∫_X k(x,y) p(x) φ_i(x) dx = λ_i φ_i(y).   (1)

Note that the eigenfunctions are orthonormal with respect to p(x), i.e. ∫_X φ_i(x) p(x) φ_j(x) dx = δ_ij. Let the eigenvalues be ordered so that λ_1 ≥ λ_2 ≥ …. This continuous eigenproblem can be approximated in the following way. Let {x_i : i = 1, …, m} be a sample drawn according to p(x). Then, as pointed out in Williams and Seeger (2000), we can approximate the integral with weight function p(x) by an average over the sample points, and then plug in y = x_j for j = 1, …, m to obtain the matrix eigenproblem. Thus we see that μ_i := (1/m) λ_i, with λ_i here the i-th eigenvalue of the Gram matrix, is an obvious estimator for the i-th eigenvalue of the continuous problem.
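This estimator takes only a few lines to compute. The sketch below uses the Gaussian kernel exp(−b(x−y)²) with samples from N(0, 1/4), the setting for which analytic process eigenvalues are available; since k(x,x) = 1, the estimates μ_i are nonnegative and must sum to trace(K)/m = 1:

```python
import numpy as np

rng = np.random.default_rng(1)
m, b = 500, 3.0
x = rng.normal(0.0, 0.5, m)                      # p(x) = N(0, 1/4), i.e. std 0.5
K = np.exp(-b * (x[:, None] - x[None, :]) ** 2)  # Gram matrix for k(x,y) = exp(-b(x-y)^2)
lam = np.linalg.eigvalsh(K)[::-1].clip(min=0.0)  # Gram eigenvalues, descending
mu = lam / m                                     # mu_i = lambda_i / m
```

Plotting `mu` against the analytic process eigenvalues reproduces the qualitative behaviour described next: close agreement for small i, underestimation for larger i.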
The theory of the numerical solution of eigenvalue problems (Baker 1977, Theorem 3.4) shows that for a fixed k, μ_k will converge to λ_k in the limit as m → ∞. For the case that X is one dimensional, p(x) is Gaussian and k(x,y) = exp(−b(x−y)²), there are analytic results for the eigenvalues and eigenfunctions of equation (1), as given in section 4 of Zhu et al. (1998). A plot in Williams and Seeger (2000) for m = 500 with b = 3 and p(x) ~ N(0, 1/4) shows good agreement between μ_i and λ_i for small i, but shows that for larger i the matrix eigenvalues underestimate the process eigenvalues. One of the by-products of this paper will be bounds on the degree of underestimation for this estimation problem in a fully general setting. Koltchinskii and Giné (2000) discuss a number of results, including rates of convergence of the μ-spectrum to the λ-spectrum. The measure they use compares the whole spectrum rather than individual eigenvalues or subsets of eigenvalues. They also do not deal with the estimation problem for PCA residuals.

2.2 Projections, residuals and eigenvalues

The approach adopted in the proofs of the next section is to relate the eigenvalues to the sums of squares of residuals. Let x be a random variable in d dimensions, and let X be a d × m matrix containing m sample vectors x_1, …, x_m as its columns. Consider the m × m matrix M = X'X with eigendecomposition M = ZΛZ'. Then, taking X̂ = Z√Λ, we obtain a finite dimensional version of Mercer's theorem. To set the scene, we now present a short description of the residuals viewpoint. The starting point is the singular value decomposition X = UΣZ', where U and Z are orthonormal matrices and Σ is a diagonal matrix containing the singular values (in descending order). We can now reconstruct the eigenvalue decomposition of M: M = X'X = ZΣU'UΣZ' = ZΛZ', where Λ = Σ². But equally we can construct the d × d matrix N = XX' = UΣZ'ZΣU' = UΛU', with the same (non-zero) eigenvalues as M.
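The equality of the two spectra, which the argument above relies on, is quick to verify numerically (a toy check with an arbitrary random matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 4, 9
X = rng.standard_normal((d, m))          # d x m sample matrix, columns are samples
M = X.T @ X                              # m x m "Gram-style" matrix
N = X @ X.T                              # d x d matrix (m times the sample correlation)
eig_M = np.linalg.eigvalsh(M)[::-1]      # descending eigenvalues
eig_N = np.linalg.eigvalsh(N)[::-1]
```

For generic data, the d eigenvalues of N coincide with the d largest eigenvalues of M, and the remaining m − d eigenvalues of M are zero, exactly as the SVD argument shows.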
We have made a slight abuse of notation by using $\hat{\Lambda}$ to represent two matrices of potentially different dimensions, but the larger is simply an extension of the smaller with 0's. Note that $N = m C_X$, where $C_X$ is the sample correlation matrix.

Let $V$ be a linear space spanned by $k$ linearly independent vectors. Let $P_V(x)$ ($P_V^{\perp}(x)$) be the projection of $x$ onto $V$ (the space perpendicular to $V$), so that $\|x\|^2 = \|P_V(x)\|^2 + \|P_V^{\perp}(x)\|^2$. Using the Courant-Fischer minimax theorem it can be proved (Shawe-Taylor et al., 2002, equation 4) that

$$\sum_{i=k+1}^{m} \hat{\lambda}_i(M) = \sum_{j=1}^{m} \|x_j\|^2 - \sum_{i=1}^{k} \hat{\lambda}_i(M) = \min_{\dim(V)=k} \sum_{j=1}^{m} \|P_V^{\perp}(x_j)\|^2. \qquad (2)$$

Hence the subspace spanned by the first $k$ eigenvectors is characterised as that for which the sum of the squares of the residuals is minimal. We can also obtain similar results for the population case, e.g. $\sum_{i=1}^{k} \lambda_i = \max_{\dim(V)=k} \mathbb{E}\left[\|P_V(x)\|^2\right]$.

2.3 Residuals in feature space

Frequently, we consider all of the above as occurring in a kernel defined feature space, so that wherever we have written a vector $x$ we should have put $\psi(x)$, where $\psi$ is the corresponding feature map $\psi : x \in \mathcal{X} \mapsto \psi(x) \in F$ to a feature space $F$. Hence, the matrix $M$ has entries $M_{ij} = \langle \psi(x_i), \psi(x_j) \rangle$. The kernel function computes the composition of the inner product with the feature maps, $k(x, z) = \langle \psi(x), \psi(z) \rangle = \psi(x)'\psi(z)$, which can in many cases be computed without explicitly evaluating the mapping $\psi$. We would also like to evaluate the projections into eigenspaces without explicitly computing the feature mapping $\psi$. This can be done as follows. Let $u_i$ be the $i$-th singular vector in the feature space, that is, the $i$-th eigenvector of the matrix $N$, with the corresponding singular value being $\sigma_i = \sqrt{\hat{\lambda}_i}$ and the corresponding eigenvector of $M$ being $z_i$. The projection of an input $x$ onto $u_i$ is given by

$$\psi(x)'u_i = (\psi(x)'U)_i = (\psi(x)'XZ\Sigma^{-1})_i = (\mathbf{k}'Z\Sigma^{-1})_i,$$

where we have used the fact that $X = U\Sigma Z'$ and $\mathbf{k}_j = \psi(x)'\psi(x_j) = k(x, x_j)$.
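Equation (2) can be checked directly in a small explicit example: projecting the sample onto the top-$k$ PCA subspace leaves a residual whose total squared norm equals the sum of the trailing eigenvalues of $M$ (the sizes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, k = 4, 20, 2
X = rng.normal(size=(d, m))                # columns are sample vectors

lam_hat = np.sort(np.linalg.eigvalsh(X.T @ X))[::-1]   # eigenvalues of M

# Residuals after projecting onto the span of the top-k left singular vectors.
U, _, _ = np.linalg.svd(X)
Uk = U[:, :k]
resid = X - Uk @ (Uk.T @ X)                # components orthogonal to the subspace
resid_sq = np.sum(resid**2)
```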
Our final background observation concerns the kernel operator and its eigenspaces. The operator in question is

$$K(f)(x) = \int_{\mathcal{X}} k(x, z) f(z) p(z)\, dz.$$

Provided the operator is positive semi-definite, by Mercer's theorem we can decompose $k(x, z)$ as a sum of eigenfunctions,

$$k(x, z) = \sum_{i=1}^{\infty} \lambda_i \phi_i(x) \phi_i(z) = \langle \psi(x), \psi(z) \rangle,$$

where the functions $(\phi_i(x))_{i=1}^{\infty}$ form a complete orthonormal basis with respect to the inner product $\langle f, g \rangle_p = \int_{\mathcal{X}} f(x) g(x) p(x)\, dx$, and $\psi(x)$ is the feature space mapping $\psi : x \mapsto (\psi_i(x))_{i=1}^{\infty} = (\sqrt{\lambda_i}\, \phi_i(x))_{i=1}^{\infty} \in F$. Note that $\phi_i(x)$ has norm 1 and satisfies $\lambda_i \phi_i(x) = \int_{\mathcal{X}} k(x, z) \phi_i(z) p(z)\, dz$ (equation 1), so that

$$\lambda_i = \int_{\mathcal{X}^2} k(y, z) \phi_i(y) \phi_i(z) p(z) p(y)\, dy\, dz. \qquad (3)$$

If we let $\phi(x) = (\phi_i(x))_{i=1}^{\infty} \in F$, we can define the unit vector $u_i \in F$ corresponding to $\lambda_i$ by $u_i = \int_{\mathcal{X}} \phi_i(x) \phi(x) p(x)\, dx$. For a general function $f(x)$ we can similarly define the vector $\mathbf{f} = \int_{\mathcal{X}} f(x) \phi(x) p(x)\, dx$. Now the expected square of the norm of the projection $P_{\mathbf{f}}(\psi(x))$ onto the vector $\mathbf{f}$ (assumed to be of norm 1) of an input $\psi(x)$ drawn according to $p(x)$ is given by

$$\mathbb{E}\left[\|P_{\mathbf{f}}(\psi(x))\|^2\right] = \int_{\mathcal{X}} \|P_{\mathbf{f}}(\psi(x))\|^2 p(x)\, dx = \int_{\mathcal{X}} (\mathbf{f}'\psi(x))^2 p(x)\, dx$$

$$= \int_{\mathcal{X}^3} f(y) f(z) \sum_{j=1}^{\infty} \sqrt{\lambda_j}\, \phi_j(y) \phi_j(x)\, p(y)\, dy\, \sum_{\ell=1}^{\infty} \sqrt{\lambda_\ell}\, \phi_\ell(z) \phi_\ell(x)\, p(z)\, dz\, p(x)\, dx$$

$$= \int_{\mathcal{X}^2} f(y) f(z) \sum_{j,\ell=1}^{\infty} \sqrt{\lambda_j}\, \phi_j(y)\, p(y)\, \sqrt{\lambda_\ell}\, \phi_\ell(z)\, p(z) \left(\int_{\mathcal{X}} \phi_j(x) \phi_\ell(x) p(x)\, dx\right) dy\, dz$$

$$= \int_{\mathcal{X}^2} f(y) f(z) \sum_{j=1}^{\infty} \lambda_j \phi_j(y) \phi_j(z)\, p(y) p(z)\, dy\, dz = \int_{\mathcal{X}^2} f(y) f(z)\, k(y, z)\, p(y) p(z)\, dy\, dz.$$

Since all vectors $\mathbf{f}$ in the subspace spanned by the image of the input space in $F$ can be expressed in this fashion, it follows using (3) that the finite case characterisation of eigenvalues and eigenvectors is replaced by an expectation,

$$\lambda_k = \max_{\dim(V)=k}\, \min_{0 \neq v \in V} \mathbb{E}\left[\|P_v(\psi(x))\|^2\right], \qquad (4)$$

where $V$ is a linear subspace of the feature space $F$.
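A finite analogue of the Mercer decomposition makes the construction above concrete: on a discrete input space the eigenfunction problem becomes a weighted matrix eigenproblem, and the kernel is recovered as $\sum_i \lambda_i \phi_i(x)\phi_i(z)$. The sketch below (points, probabilities, and kernel width are arbitrary illustrative choices) verifies this:

```python
import numpy as np

# Discrete analogue: integrals become p-weighted sums, and
# sum_x k(x, y) p(x) phi_i(x) = lam_i phi_i(y) is a matrix eigenproblem.
rng = np.random.default_rng(3)
n = 6
pts = rng.normal(size=n)
p = rng.dirichlet(np.ones(n))                    # probabilities p(x)
K = np.exp(-(pts[:, None] - pts[None, :])**2)    # Gaussian kernel matrix

# Symmetrized eigenproblem: P^{1/2} K P^{1/2} has the same eigenvalues.
sp = np.sqrt(p)
lam, V = np.linalg.eigh(sp[:, None] * K * sp[None, :])
lam, V = lam[::-1], V[:, ::-1]                   # descending order
Phi = V / sp[:, None]                            # Phi[x, i] = phi_i(x)

# Mercer: k(x, z) = sum_i lam_i phi_i(x) phi_i(z)
K_rebuilt = (Phi * lam) @ Phi.T
```

The columns of `Phi` are orthonormal with respect to the weighted inner product $\langle f, g\rangle_p$, matching the orthonormality condition stated after equation (1).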
Similarly,

$$\sum_{i=1}^{k} \lambda_i = \max_{\dim(V)=k} \mathbb{E}\left[\|P_V(\psi(x))\|^2\right] = \mathbb{E}\left[\|\psi(x)\|^2\right] - \min_{\dim(V)=k} \mathbb{E}\left[\|P_V^{\perp}(\psi(x))\|^2\right], \qquad (5)$$

where $P_V(\psi(x))$ ($P_V^{\perp}(\psi(x))$) is the projection of $\psi(x)$ into the subspace $V$ (the projection of $\psi(x)$ into the space orthogonal to $V$).

2.4 Plan of campaign

We are now in a position to motivate the main results of the paper. We consider the general case of a kernel defined feature space with input space $\mathcal{X}$ and probability density $p(x)$. We fix a sample size $m$ and a draw of $m$ examples $S = (x_1, x_2, \ldots, x_m)$ according to $p$. Further we fix a feature dimension $k$. Let $\hat{V}_k$ be the space spanned by the first $k$ eigenvectors of the sample kernel matrix $K$ with corresponding eigenvalues $\hat{\lambda}_1, \hat{\lambda}_2, \ldots, \hat{\lambda}_k$, while $V_k$ is the space spanned by the first $k$ process eigenvectors with corresponding eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_k$. Similarly, let $\hat{\mathbb{E}}[f(x)]$ denote expectation with respect to the sample, $\hat{\mathbb{E}}[f(x)] = \frac{1}{m}\sum_{i=1}^{m} f(x_i)$, while as before $\mathbb{E}[\cdot]$ denotes expectation with respect to $p$. We are interested in the relationships between the following quantities:

(i) $\hat{\mathbb{E}}\left[\|P_{\hat{V}_k}(\psi(x))\|^2\right] = \frac{1}{m}\sum_{i=1}^{k} \hat{\lambda}_i = \sum_{i=1}^{k} \mu_i$,

(ii) $\mathbb{E}\left[\|P_{V_k}(\psi(x))\|^2\right] = \sum_{i=1}^{k} \lambda_i$,

(iii) $\mathbb{E}\left[\|P_{\hat{V}_k}(\psi(x))\|^2\right]$, and

(iv) $\hat{\mathbb{E}}\left[\|P_{V_k}(\psi(x))\|^2\right]$.

Bounding the difference between the first and second will relate the process eigenvalues to the sample eigenvalues, while the difference between the first and third will bound the expected performance of the space identified by kernel PCA when used on new data. Our first two observations follow simply from equation (5):

$$\hat{\mathbb{E}}\left[\|P_{\hat{V}_k}(\psi(x))\|^2\right] = \frac{1}{m}\sum_{i=1}^{k} \hat{\lambda}_i \geq \hat{\mathbb{E}}\left[\|P_{V_k}(\psi(x))\|^2\right], \qquad (6)$$

and

$$\mathbb{E}\left[\|P_{V_k}(\psi(x))\|^2\right] = \sum_{i=1}^{k} \lambda_i \geq \mathbb{E}\left[\|P_{\hat{V}_k}(\psi(x))\|^2\right]. \qquad (7)$$

Our strategy will be to show that the right hand side of inequality (6) and the left hand side of inequality (7) are close in value, making the two inequalities approximately a chain of inequalities. We then bound the difference between the first and last entries in the chain.
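Both inequalities can be observed directly in a small explicit (linear-kernel) feature space, where the sample and population quantities are exactly computable; treating a finite point set as the population is an assumption made purely for this illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
support = rng.normal(size=(50, 3))            # "population": 50 fixed points
m = 10
S = support[rng.choice(50, size=m)]           # sample of m points

def top_k_subspace(Y, k=2):
    """Orthonormal basis of the top-k eigenvectors of (1/n) Y'Y."""
    w, Q = np.linalg.eigh(Y.T @ Y / len(Y))
    return Q[:, np.argsort(w)[::-1][:k]]

def mean_proj_sq(Y, B):
    """Average squared norm of the projection of rows of Y onto span(B)."""
    return np.mean(np.sum((Y @ B)**2, axis=1))

V_hat = top_k_subspace(S)                     # sample PCA subspace (V-hat_k)
V_pop = top_k_subspace(support)               # population subspace (V_k)

lhs6, rhs6 = mean_proj_sq(S, V_hat), mean_proj_sq(S, V_pop)              # (6)
lhs7, rhs7 = mean_proj_sq(support, V_pop), mean_proj_sq(support, V_hat)  # (7)
```

The inequalities hold because the top-$k$ eigenvector subspace maximizes the mean squared projection for the distribution it was computed from.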
3 Averaging over Samples and Population Eigenvalues

The sample correlation matrix is $C_X = \frac{1}{m}XX'$ with eigenvalues $\mu_1 \geq \mu_2 \geq \cdots \geq \mu_d$. In the notation of section 2, $\mu_i = (1/m)\hat{\lambda}_i$. The corresponding population correlation matrix has eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_d$ and eigenvectors $u_1, \ldots, u_d$. Again by the observations above these are the process eigenvalues. Let $\mathbb{E}_m[\cdot]$ denote averages over random samples of size $m$. The following proposition describes how $\mathbb{E}_m[\mu_1]$ is related to $\lambda_1$ and how $\mathbb{E}_m[\mu_d]$ is related to $\lambda_d$. It requires no assumption of Gaussianity.

Proposition 1 (Anderson, 1963, pp. 145-146) $\mathbb{E}_m[\mu_1] \geq \lambda_1$ and $\mathbb{E}_m[\mu_d] \leq \lambda_d$.

Proof: By the results of the previous section we have

$$\mu_1 = \max_{c \neq 0} \hat{\mathbb{E}}\left[\|P_c(x)\|^2\right] \geq \hat{\mathbb{E}}\left[\|P_{u_1}(x)\|^2\right].$$

We now apply the expectation operator $\mathbb{E}_m$ to both sides. On the RHS we get $\mathbb{E}_m\hat{\mathbb{E}}\left[\|P_{u_1}(x)\|^2\right] = \mathbb{E}\left[\|P_{u_1}(x)\|^2\right] = \lambda_1$ by equation (5), which completes the proof. Correspondingly, $\mu_d$ is characterized by $\mu_d = \min_{c \neq 0} \hat{\mathbb{E}}\left[\|P_c(x)\|^2\right]$ (minor components analysis). □

Interpreting this result, we see that $\mathbb{E}_m[\mu_1]$ overestimates $\lambda_1$, while $\mathbb{E}_m[\mu_d]$ underestimates $\lambda_d$. Proposition 1 can be generalized to give the following result, where we have also allowed for a kernel defined feature space of dimension $N_F \leq \infty$.

Proposition 2 Using the above notation, for any $k$, $1 \leq k \leq m$, $\mathbb{E}_m\left[\sum_{i=1}^{k} \mu_i\right] \geq \sum_{i=1}^{k} \lambda_i$ and $\mathbb{E}_m\left[\sum_{i=k+1}^{m} \mu_i\right] \leq \sum_{i=k+1}^{N_F} \lambda_i$.

Proof: Let $V_k$ be the space spanned by the first $k$ process eigenvectors. Then from the derivations above we have

$$\sum_{i=1}^{k} \mu_i = \max_{\dim(V)=k} \hat{\mathbb{E}}\left[\|P_V(\psi(x))\|^2\right] \geq \hat{\mathbb{E}}\left[\|P_{V_k}(\psi(x))\|^2\right].$$

Again, applying the expectation operator $\mathbb{E}_m$ to both sides of this equation and taking equation (5) into account, the first inequality follows. To prove the second we turn max into min, $P$ into $P^{\perp}$, and reverse the inequality. Again taking expectations of both sides proves the second part. □

Applying the results obtained in this section, it follows that $\mathbb{E}_m[\mu_1]$ will overestimate $\lambda_1$, and the cumulative sum $\sum_{i=1}^{k} \mathbb{E}_m[\mu_i]$ will overestimate $\sum_{i=1}^{k} \lambda_i$.
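Proposition 2 can be checked by Monte Carlo. Treating a fixed finite point set as the population (so its eigenvalues are exact), the average over many small samples of the top-$k$ eigenvalue sum should come out above the population value; sizes and scales below are arbitrary:

```python
import numpy as np

# Monte Carlo check: E_m[mu_1 + mu_2] >= lambda_1 + lambda_2 when samples
# are drawn from the empirical distribution of 200 fixed points.
rng = np.random.default_rng(5)
pop = rng.normal(size=(200, 3)) * np.array([2.0, 1.0, 1.0])
lam = np.sort(np.linalg.eigvalsh(pop.T @ pop / len(pop)))[::-1]

m, k, trials = 4, 2, 3000
tot = 0.0
for _ in range(trials):
    S = pop[rng.choice(len(pop), size=m)]
    mu = np.sort(np.linalg.eigvalsh(S.T @ S / m))[::-1]
    tot += mu[:k].sum()
avg_top_k = tot / trials        # estimates E_m[mu_1 + mu_2]
```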
At the other end, clearly for $N_F \geq k > m$, $\mu_k = 0$ is an underestimate of $\lambda_k$.

4 Concentration of eigenvalues

We now make use of results from Shawe-Taylor et al. (2002) concerning the concentration of the eigenvalue spectrum of the Gram matrix. We have

Theorem 3 Let $k(x, z)$ be a positive semi-definite kernel function on a space $\mathcal{X}$, and let $p$ be a probability density function on $\mathcal{X}$. Fix natural numbers $m$ and $1 \leq k < m$ and let $S = (x_1, \ldots, x_m) \in \mathcal{X}^m$ be a sample of $m$ points drawn according to $p$. Then for all $t > 0$,

$$P\left\{\left|\frac{1}{m}\hat{\lambda}^{\leq k}(S) - \mathbb{E}_m\left[\frac{1}{m}\hat{\lambda}^{\leq k}(S)\right]\right| \geq t\right\} \leq 2\exp\left(\frac{-2t^2 m}{R^4}\right),$$

where $\hat{\lambda}^{\leq k}(S)$ is the sum of the largest $k$ eigenvalues of the matrix $K(S)$ with entries $K(S)_{ij} = k(x_i, x_j)$, and $R^2 = \max_{x \in \mathcal{X}} k(x, x)$. This follows by a similar derivation to Theorem 5 in Shawe-Taylor et al. (2002).

Our next result concerns the concentration of the residuals with respect to a fixed subspace. For a subspace $V$ and training set $S$, we introduce the notation

$$\hat{F}_V(S) = \hat{\mathbb{E}}\left[\|P_V^{\perp}(\psi(x))\|^2\right].$$

Theorem 4 Let $p$ be a probability density function on $\mathcal{X}$. Fix natural numbers $m$ and a subspace $V$ and let $S = (x_1, \ldots, x_m) \in \mathcal{X}^m$ be a sample of $m$ points drawn according to the probability density function $p$. Then for all $t > 0$,

$$P\left\{\left|\hat{F}_V(S) - \mathbb{E}_m\left[\hat{F}_V(S)\right]\right| \geq t\right\} \leq 2\exp\left(\frac{-2t^2 m}{R^4}\right).$$

This is Theorem 6 in Shawe-Taylor et al. (2002).

The concentration results of this section are very tight. In the notation of the earlier sections they show that with high probability

$$\frac{1}{m}\hat{\lambda}^{\leq k}(S) \approx \mathbb{E}_m\left[\frac{1}{m}\hat{\lambda}^{\leq k}(S)\right] \qquad (8)$$

and

$$\sum_{i=1}^{k} \lambda_i \approx \hat{\mathbb{E}}\left[\|P_{V_k}(\psi(x))\|^2\right], \qquad (9)$$

where we have used Theorem 3 to obtain the first approximate equality and Theorem 4 with $V = V_k$ to obtain the second approximate equality. This gives the sought relationship to create an approximate chain of inequalities

$$\hat{\mathbb{E}}\left[\|P_{\hat{V}_k}(\psi(x))\|^2\right] \gtrsim \hat{\mathbb{E}}\left[\|P_{V_k}(\psi(x))\|^2\right] \approx \sum_{i=1}^{k} \lambda_i \geq \mathbb{E}\left[\|P_{\hat{V}_k}(\psi(x))\|^2\right]. \qquad (10)$$

This approximate chain of inequalities could also have been obtained using Proposition 2. It remains to bound the difference between the first and last entries in this chain.
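The tightness asserted by these concentration results is visible in a quick simulation: repeating the draw of $S$ and recomputing $\frac{1}{m}\hat{\lambda}^{\leq k}(S)$ for an RBF kernel gives values with a very small relative spread (the kernel, density, and sizes below are arbitrary illustrative choices):

```python
import numpy as np

# Spread of (1/m) * (sum of top-k Gram eigenvalues) across repeated samples.
rng = np.random.default_rng(6)

def top_k_gram_sum(X, k=3):
    sq = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    K = np.exp(-sq)                          # RBF kernel, so R^2 = 1
    return np.sort(np.linalg.eigvalsh(K))[::-1][:k].sum() / len(X)

m, trials = 200, 30
vals = np.array([top_k_gram_sum(rng.normal(size=(m, 2))) for _ in range(trials)])
rel_spread = vals.std() / vals.mean()
```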
This, together with the concentration results of this section, will deliver the required bounds on the differences between empirical and process eigenvalues, as well as providing a performance bound on kernel PCA.

5 Learning a projection matrix

The key observation that enables the analysis bounding the difference between $\hat{\mathbb{E}}\left[\|P_{\hat{V}_k}(\psi(x))\|^2\right]$ and $\mathbb{E}\left[\|P_{\hat{V}_k}(\psi(x))\|^2\right]$ is that we can view the projection norm $\|P_{\hat{V}_k}(\psi(x))\|^2$ as a linear function of pairs of features from the feature space $F$.

Proposition 5 The projection norm $\|P_{\hat{V}_k}(\psi(x))\|^2$ is a linear function $\hat{f}$ in a feature space $\hat{F}$ for which the kernel function is given by $\hat{k}(x, z) = k(x, z)^2$. Furthermore the 2-norm of the function $\hat{f}$ is $\sqrt{k}$.

Proof: Let $X = U\Sigma Z'$ be the singular value decomposition of the sample matrix $X$ in the feature space. The projection norm is then given by

$$\hat{f}(x) = \|P_{\hat{V}_k}(\psi(x))\|^2 = \psi(x)'U_k U_k'\psi(x),$$

where $U_k$ is the matrix containing the first $k$ columns of $U$. Hence we can write

$$\|P_{\hat{V}_k}(\psi(x))\|^2 = \sum_{ij=1}^{N_F} \alpha_{ij}\, \psi(x)_i \psi(x)_j = \sum_{ij=1}^{N_F} \alpha_{ij}\, \hat{\psi}(x)_{ij},$$

where $\hat{\psi}$ is the projection mapping into the feature space $\hat{F}$ consisting of all pairs of $F$ features, and $\alpha_{ij} = (U_k U_k')_{ij}$. The standard polynomial construction gives

$$\hat{k}(x, z) = k(x, z)^2 = \left(\sum_{i=1}^{N_F} \psi(x)_i \psi(z)_i\right)^2 = \sum_{i,j=1}^{N_F} (\psi(x)_i \psi(x)_j)(\psi(z)_i \psi(z)_j) = \langle \hat{\psi}(x), \hat{\psi}(z) \rangle.$$

It remains to show that the norm of the linear function is $\sqrt{k}$. The norm satisfies (note that $\|\cdot\|_F$ denotes the Frobenius norm and $u_i$ the columns of $U$)

$$\|\hat{f}\|^2 = \sum_{ij=1}^{N_F} \alpha_{ij}^2 = \|U_k U_k'\|_F^2 = \left\langle \sum_{i=1}^{k} u_i u_i', \sum_{j=1}^{k} u_j u_j' \right\rangle_F = \sum_{i,j=1}^{k} (u_i'u_j)^2 = k,$$

as required. □

We are now in a position to apply a learning theory bound where we consider a regression problem for which the target output is the square of the norm of the sample point $\|\psi(x)\|^2$. We restrict the linear function in the space $\hat{F}$ to have norm $\sqrt{k}$. The loss function is then the shortfall between the output of $\hat{f}$ and the squared norm.
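Proposition 5 can be verified numerically in an explicit feature space (the dimensions below are arbitrary): the squared projection norm is linear in the pair features $\psi(x)_i\psi(x)_j$, and the coefficient matrix $U_k U_k'$ has Frobenius norm exactly $\sqrt{k}$:

```python
import numpy as np

rng = np.random.default_rng(7)
NF, m, k = 5, 12, 3
X = rng.normal(size=(NF, m))              # columns are feature vectors psi(x_i)

U, _, _ = np.linalg.svd(X)
Uk = U[:, :k]
A = Uk @ Uk.T                              # alpha_{ij} = (U_k U_k')_{ij}

x = rng.normal(size=NF)                    # a new feature vector psi(x)
proj_norm_sq = np.sum((Uk.T @ x)**2)       # ||P_{V-hat_k} psi(x)||^2
linear_form = np.sum(A * np.outer(x, x))   # sum_ij alpha_ij psi_i psi_j
frob = np.linalg.norm(A)                   # Frobenius norm of the coefficients
```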
Using Rademacher complexity theory we can obtain the following theorems:

Theorem 6 If we perform PCA in the feature space defined by a kernel $k(x, z)$, then with probability greater than $1 - \delta$, for all $1 \leq k \leq m$, if we project new data onto the space $\hat{V}_k$, the expected squared residual is bounded by

$$\lambda^{>k} \leq \mathbb{E}\left[\|P_{\hat{V}_k}^{\perp}(\psi(x))\|^2\right] \leq \min_{1 \leq \ell \leq k}\left[\frac{1}{m}\hat{\lambda}^{>\ell}(S) + \frac{1 + \sqrt{\ell}}{\sqrt{m}}\sqrt{\frac{2}{m}\sum_{i=1}^{m} k(x_i, x_i)^2}\right] + R^2\sqrt{\frac{18}{m}\ln\left(\frac{2m}{\delta}\right)},$$

where the support of the distribution is in a ball of radius $R$ in the feature space and $\lambda_i$ and $\hat{\lambda}_i$ are the process and empirical eigenvalues respectively.

Theorem 7 If we perform PCA in the feature space defined by a kernel $k(x, z)$, then with probability greater than $1 - \delta$, for all $1 \leq k \leq m$, if we project new data onto the space $\hat{V}_k$, the sum of the largest $k$ process eigenvalues is bounded by

$$\lambda^{\leq k} \geq \mathbb{E}\left[\|P_{\hat{V}_k}(\psi(x))\|^2\right] \geq \max_{1 \leq \ell \leq k}\left[\frac{1}{m}\hat{\lambda}^{\leq \ell}(S) - \frac{1 + \sqrt{\ell}}{\sqrt{m}}\sqrt{\frac{2}{m}\sum_{i=1}^{m} k(x_i, x_i)^2}\right] - R^2\sqrt{\frac{19}{m}\ln\left(\frac{2(m+1)}{\delta}\right)},$$

where the support of the distribution is in a ball of radius $R$ in the feature space and $\lambda_i$ and $\hat{\lambda}_i$ are the process and empirical eigenvalues respectively.

The proofs of these results are given in Shawe-Taylor et al. (2003). Theorem 6 implies that if $k \ll m$ the expected residual $\mathbb{E}\left[\|P_{\hat{V}_k}^{\perp}(\psi(x))\|^2\right]$ closely matches the average sample residual $\hat{\mathbb{E}}\left[\|P_{\hat{V}_k}^{\perp}(\psi(x))\|^2\right] = (1/m)\sum_{i=k+1}^{m}\hat{\lambda}_i$, thus providing a bound for kernel PCA on new data. Theorem 7 implies a good fit between the partial sums of the largest $k$ empirical and process eigenvalues when $\sqrt{k/m}$ is small.

References

Anderson, T. W. (1963). Asymptotic theory for principal component analysis. Annals of Mathematical Statistics, 34(1):122-148.

Baker, C. T. H. (1977). The Numerical Treatment of Integral Equations. Clarendon Press, Oxford.

Koltchinskii, V. and Giné, E. (2000). Random matrix approximation of spectra of integral operators. Bernoulli, 6(1):113-167.

Schölkopf, B., Smola, A., and Müller, K.-R. (1998). Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319.
Shawe-Taylor, J., Cristianini, N., and Kandola, J. (2002). On the concentration of spectral properties. In Dietterich, T. G., Becker, S., and Ghahramani, Z., editors, Advances in Neural Information Processing Systems 14. MIT Press.

Shawe-Taylor, J., Williams, C. K. I., Cristianini, N., and Kandola, J. (2003). On the eigenspectrum of the Gram matrix and the generalisation error of kernel PCA. Technical Report NC2-TR-2003-143, Department of Computer Science, Royal Holloway, University of London. Available from http://www.neurocolt.com/archive.html.

Williams, C. K. I. and Seeger, M. (2000). The effect of the input density distribution on kernel-based classifiers. In Langley, P., editor, Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000). Morgan Kaufmann.

Zhu, H., Williams, C. K. I., Rohwer, R. J., and Morciniec, M. (1998). Gaussian regression and optimal finite dimensional linear models. In Bishop, C. M., editor, Neural Networks and Machine Learning. Springer-Verlag, Berlin.
2002
Adaptive Quantization and Density Estimation in Silicon David Hsu Seth Bridges Miguel Figueroa Chris Diorio Department of Computer Science and Engineering University of Washington 114 Sieg Hall, Box 352350 Seattle, WA 98195-2350 USA {hsud, seth, miguel, diorio}@cs.washington.edu Abstract We present the bump mixture model, a statistical model for analog data where the probabilistic semantics, inference, and learning rules derive from low-level transistor behavior. The bump mixture model relies on translinear circuits to perform probabilistic inference, and floating-gate devices to perform adaptation. This system is low power, asynchronous, and fully parallel, and supports various on-chip learning algorithms. In addition, the mixture model can perform several tasks such as probability estimation, vector quantization, classification, and clustering. We tested a fabricated system on clustering, quantization, and classification of handwritten digits and show performance comparable to the E-M algorithm on mixtures of Gaussians. 1 Introduction Many system-on-a-chip applications, such as data compression and signal processing, use online adaptation to improve or tune performance. These applications can benefit from the low-power compact design that analog VLSI learning systems can offer. Analog VLSI learning systems can benefit immensely from flexible learning algorithms that take advantage of silicon device physics for compact layout, and that are capable of a variety of learning tasks. One learning paradigm that encompasses a wide variety of learning tasks is density estimation, learning the probability distribution over the input data. A silicon density estimator can provide a basic template for VLSI systems for feature extraction, classification, adaptive vector quantization, and more. In this paper, we describe the bump mixture model, a statistical model that describes the probability distribution function of analog variables using low-level transistor equations. 
We intend the bump mixture model to be the silicon version of the mixture of Gaussians [1], one of the most widely used statistical methods for modeling the probability distribution of a collection of data. Mixtures of Gaussians appear in many contexts from radial basis functions [1] to hidden Markov models [2]. In the bump mixture model, probability computations derive from translinear circuits [3] and learning derives from floating-gate device equations [4]. The bump mixture model can perform different functions such as quantization, probability estimation, and classification. In addition this VLSI mixture model can implement multiple learning algorithms using different peripheral circuitry. Because the equations for system operation and learning derive from natural transistor behavior, we can build large bump mixture models with millions of parameters on a single chip. We have fabricated a bump mixture model, and tested it on clustering, classification, and vector quantization of handwritten digits. The results show that the fabricated system performs comparably to mixtures of Gaussians trained with the E-M algorithm [1].

Our work builds upon several trends of research in the VLSI community. The results in this paper complement recent work on probability propagation in analog VLSI [5-7]. These previous systems, intended for decoding applications in communication systems, model special forms of probability distributions over discrete variables, and do not incorporate learning. In contrast, the bump mixture model performs inference and learning on probability distributions over continuous variables. The bump mixture model significantly extends previous results on floating-gate circuits [4]. Our system is a fully realized floating-gate learning algorithm that can be used for vector quantization, probability estimation, clustering, and classification. Finally, the mixture model's architecture is similar to many previous VLSI vector quantizers [8, 9].
We can view the bump mixture model as a VLSI vector quantizer with well-defined probabilistic semantics. Computations such as probability estimation and maximum-likelihood classification have a natural statistical interpretation under the mixture model. In addition, because we rely on floating-gate devices, the mixture model does not require a refresh mechanism, unlike previous learning VLSI quantizers.

2 The adaptive bump circuit

The adaptive bump circuit [4], depicted in Fig.1(a-b), forms the basis of the bump mixture model. This circuit is slightly different from previous versions reported in the literature. Nevertheless, the high level functionality remains the same; the adaptive bump circuit computes the similarity between a stored variable and an input, and adapts to increase the similarity between the stored variable and input. Fig.1(a) shows the computation portion of the circuit. The bump circuit takes as input a differential voltage signal (+Vin, −Vin) around a DC bias, and computes the similarity between $V_{in}$ and a stored value, µ. We represent the stored memory µ as a voltage:

$$\mu = \frac{V_{w-} - V_{w+}}{2} \qquad (1)$$

where $V_{w+}$ and $V_{w-}$ are the gate-offset voltages stored on capacitors C1 and C2. Because C1 and C2 isolate the gates of transistors M1 and M2 respectively, these transistors are floating-gate devices. Consequently, the stored voltages $V_{w+}$ and $V_{w-}$ are nonvolatile. We can express the floating-gate voltages $V_{fg1}$ and $V_{fg2}$ as $V_{fg1} = V_{in} + V_{w+}$ and $V_{fg2} = V_{w-} - V_{in}$, and the output of the bump circuit as [10]:

$$I_{out} = \frac{I_b}{\cosh^2\left(\kappa (V_{fg1} - V_{fg2})/(8SU_t)\right)} = \frac{I_b}{\cosh^2\left(\kappa (V_{in} - \mu)/(4SU_t)\right)} \qquad (2)$$

where $I_b$ is the bias current, $\kappa$ is the gate-coupling coefficient, $U_t$ is the thermal voltage, and $S$ depends on the transistor sizes. Fig.1(c) shows $I_{out}$ for three different stored values of µ. As the data show, different µ's shift the location of the peak response of the circuit. Fig.1(b) shows the circuit that implements learning in the adaptive bump circuit.
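The sech²-shaped ("bump") transfer function can be sketched in a few lines. The width constant `w` below lumps together the device constants ($\kappa$, $S$, $U_t$), and the numeric values are illustrative assumptions, not measured device parameters:

```python
import numpy as np

# Gaussian-like bump similarity: peak current when v_in equals the stored
# mean mu; `w` is a stand-in for the device constants (kappa, S, U_t).
def bump_current(v_in, mu, i_b=8e-9, w=0.15):
    """Output current (amps) of the bump circuit model."""
    return i_b / np.cosh((v_in - mu) / w)**2

peak = bump_current(0.1, 0.1)    # maximal response at v_in == mu
tail = bump_current(0.4, 0.1)    # response falls off away from mu
```

Shifting `mu` shifts the location of the peak response, which is exactly the behavior shown for the three programmed memories in the measured data.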
We implement learning through Fowler-Nordheim tunneling [11] on tunneling junctions M5-M6 and hot electron injection [12] on the floating-gate transistors M3-M4. Transistors M3 and M5 control injection and tunneling on M1's floating-gate. Transistors M4 and M6 control injection and tunneling on M2's floating-gate. We activate tunneling and injection by a high $V_{tun}$ and low $V_{inj}$ respectively. In the adaptive bump circuit, both processes increase the similarity between $V_{in}$ and µ. In addition, the magnitude of the update does not depend on the sign of $(V_{in} - \mu)$ because the differential input provides common-mode rejection to the input differential pair.

The similarity function, as seen in Fig.1(c), has a Gaussian-like shape. Consequently, we can equate the output current of the bump circuit with the probability of the input under a distribution parameterized by mean µ:

$$P(V_{in} \mid \mu) = I_{out} \qquad (3)$$

In addition, increasing the similarity between $V_{in}$ and µ is equivalent to increasing $P(V_{in} \mid \mu)$. Consequently, the adaptive bump circuit adapts to maximize the likelihood of the present input under the circuit's probability distribution.

3 The bump mixture model

We now describe the computations and learning rule implemented by the bump mixture model. A mixture model is a general class of statistical models that approximates the probability of an analog input as the weighted sum of the probability of the input under several simple distributions. The bump mixture model comprises a set of Gaussian-like probability density functions, each parameterized by a mean vector, $\boldsymbol{\mu}_i$. Denoting the $j$th dimension of the mean of the $i$th density as $\mu_{ij}$, we express the probability of an input vector $\mathbf{x}$ as:

Figure 1. (a-b) The adaptive bump circuit.
(a) The original bump circuit augmented by capacitors C1 and C2, and cascode transistors (driven by Vcasc). (b) The adaptation subcircuit. M3 and M4 control injection on the floating-gates and M5 and M6 control tunneling. (c) Measured output current of a bump circuit for three programmed memories.

$$P(\mathbf{x}) = \frac{1}{N}\sum_{i=1}^{N} P(\mathbf{x} \mid i) = \frac{1}{N}\sum_{i=1}^{N} \prod_{j} P(x_j \mid \mu_{ij}) \qquad (4)$$

where $N$ is the number of densities in the model and $i$ denotes the $i$th density. $P(\mathbf{x}|i)$ is the product of one-dimensional densities $P(x_j|\mu_{ij})$ that depend on the $j$th dimension of the $i$th mean, $\mu_{ij}$. We derive each one-dimensional probability distribution from the output current of a single bump circuit. The bump mixture model makes two assumptions: (1) the component densities are equally likely, and (2) within each component density, the input dimensions are independent and have equal variance. Despite these restrictions, this mixture model can, in principle, approximate any probability density function [1].

The bump mixture model adapts all $\boldsymbol{\mu}_i$ to maximize the likelihood of the training data. Learning in the bump mixture model is based on the E-M algorithm, the standard algorithm for training Gaussian mixture models. The E-M algorithm comprises two steps. The E-step computes the conditional probability of each density given the input, $P(i|\mathbf{x})$. The M-step updates the parameters of each distribution to increase the likelihood of the data, using $P(i|\mathbf{x})$ to scale the magnitude of each parameter update. In the online setting, the learning rule is:

$$\Delta\mu_{ij} = \eta\, \frac{\partial \log P(\mathbf{x} \mid \mu)}{\partial \mu_{ij}} = \eta\, \frac{P(\mathbf{x} \mid i)}{\sum_k P(\mathbf{x} \mid k)}\, \frac{\partial \log P(x_j \mid \mu_{ij})}{\partial \mu_{ij}} \qquad (5)$$

where η is a learning rate and $k$ denotes component densities.
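Equation (4) is a uniform mixture of factorized densities. A software sketch, using a normalized sech² density as a stand-in for the bump circuit's output (the width `w` is an illustrative assumption), is:

```python
import numpy as np

# P(x) = (1/N) sum_i prod_j P(x_j | mu_ij), eq. (4), with sech^2 components.
def component_density(x, mu, w=0.15):
    # (1/(2w)) * sech^2((x - mu)/w) integrates to 1 over the real line.
    return (1.0 / (2.0 * w)) / np.cosh((x - mu) / w)**2

def mixture(x, means):
    """Return (P(x), per-component joint densities P(x|i))."""
    p_joint = np.prod(component_density(x[None, :], means), axis=1)
    return p_joint.mean(), p_joint

means = np.array([[0.0, 0.0], [0.3, -0.2]])
p_x, p_joint = mixture(np.array([0.05, -0.05]), means)
```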
Because the adaptive bump circuit already adapts to increase the likelihood of the present input, we approximate E-M by modulating injection and tunneling in the adaptive bump circuit by the conditional probability:

$$\Delta\mu_{ij} = \eta\, P(i \mid \mathbf{x})\, f(x_j - \mu_{ij}) \qquad (6)$$

where $f()$ is the parameter update implemented by the bump circuit. We can modulate the learning update in (6) with other competitive factors instead of the conditional probability to implement a variety of learning rules such as online K-means.

4 Silicon implementation

We now describe a VLSI system that implements the silicon mixture model. The high level organization of the system, detailed in Fig.2, is similar to VLSI vector quantization systems. The heart of the mixture model is a matrix of adaptive bump circuits where the $i$th row of bump circuits corresponds to the $i$th component density. In addition, the periphery of the matrix comprises a set of inhibitory circuits for performing probability estimation, inference, quantization, and generating feedback for learning. We send each dimension of an input $\mathbf{x}$ down a single column. Unity-gain inverting amplifiers (not pictured) at the boundary of the matrix convert each single-ended voltage input into a differential signal. Each bump circuit computes a current that represents $(P(x_j|\mu_{ij}))^{\sigma}$, where σ is the common variance of the one-dimensional densities. The mixture model computes $P(\mathbf{x}|i)$ along the $i$th row and inhibitory circuits perform inference, estimation, or quantization. We utilize translinear devices [3] to perform all of these computations. Translinear devices, such as the subthreshold MOSFET and bipolar transistor, exhibit an exponential relationship between the gate-voltage and source current. This property allows us to establish a power-law relationship between currents and probabilities (i.e. a linear relationship between gate voltages and log-probabilities).
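The modulated update of eq. (6) can be sketched as an online soft-competitive rule. Taking $f()$ to be the identity is an assumption made here for illustration (the real $f$ is the circuit's injection/tunneling update), as is the sech² similarity:

```python
import numpy as np

# Online update: each mean moves toward the input, scaled by P(i|x), eq. (6).
def responsibilities(x, means, w=0.15):
    p = np.prod(1.0 / np.cosh((x[None, :] - means) / w)**2, axis=1)
    return p / p.sum()                               # P(i|x)

def online_step(x, means, eta=0.1):
    r = responsibilities(x, means)
    return means + eta * r[:, None] * (x[None, :] - means)

means = np.array([[0.0, 0.0], [0.5, 0.5]])
x = np.array([0.45, 0.55])
new_means = online_step(x, means)                    # closest mean moves most
```

Replacing the soft responsibilities with a hard winner-take-all indicator turns this into the online K-means rule mentioned in the text.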
We compute the multiplication of the probabilities in each row of Fig.2 as addition in the log domain using the circuit in Fig.3(a). This circuit first converts each bump circuit's current into a voltage using a diode (e.g. M1). M2's capacitive divider computes $V_{avg}$ as the average of the scalar log probabilities, $\log P(x_j|\mu_{ij})$:

$$V_{avg} = \frac{\sigma}{N} \sum_{j} \log P(x_j \mid \mu_{ij}) \qquad (7)$$

where σ is the variance, $N$ is the number of input dimensions, and voltages are in units of $\kappa/U_t$ ($U_t$ is the thermal voltage and $\kappa$ is the transistor-gate coupling coefficient). Transistors M2-M5 mirror $V_{avg}$ to the gate of M5. We define the drain voltage of M5 as $\log P(\mathbf{x}|i)$ (up to an additive constant) and compute:

$$\log P(\mathbf{x} \mid i) = \frac{C_1 + C_2}{C_1} V_{avg} + k = \frac{C_1 + C_2}{C_1} \frac{\sigma}{N} \sum_{j} \log P(x_j \mid \mu_{ij}) + k \qquad (8)$$

where $k$ is a constant dependent on $V_g$ (the control gate voltage on M5), and $C_1$ and $C_2$ are capacitances. From eq. 8 we can derive the variance as:

$$\sigma = \frac{N C_1}{C_1 + C_2} \qquad (9)$$

The system computes different output functions and feedback signals for learning by operating on the log probabilities of eq. 8. Fig.3(b) demonstrates a circuit that computes $P(i|\mathbf{x})$ for each distribution. The circuit is a k-input differential pair where the bias transistor M0 normalizes the currents representing the probabilities $P(\mathbf{x}|i)$ at the $i$th leg. Fig.3(c) demonstrates a circuit that computes $P(\mathbf{x})$. The $i$th transistor exponentiates $\log P(\mathbf{x}|i)$, and a single wire sums the currents. We can also apply other inhibitory circuits to the log probabilities, such as winner-take-all circuits (WTA) [13] and resistive networks [14]. In our fabricated chip, we implemented probability estimation, conditional probability computation, and WTA. The WTA outputs the index of the most likely component distribution for the present input, and can be used to implement vector quantization and to produce feedback for an online K-means learning rule.
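The row and periphery computations can be mirrored in software: sum the scalar log probabilities along a row (eq. 8, dropping the additive constant), then normalize across rows for $P(i|\mathbf{x})$ and average exponentials for $P(\mathbf{x})$:

```python
import numpy as np

# log_p[i, j] = log P(x_j | mu_ij); rows are component densities.
def infer(log_p):
    """Return (P(i|x), P(x)) from scalar log probabilities."""
    log_joint = log_p.sum(axis=1)                    # log P(x|i), as in eq. 8
    shifted = np.exp(log_joint - log_joint.max())    # numerically stable
    p_cond = shifted / shifted.sum()                 # P(i|x), Fig. 3(b)
    p_x = np.exp(log_joint).mean()                   # P(x), Fig. 3(c), 1/N priors
    return p_cond, p_x

log_p = np.log(np.array([[0.8, 0.6], [0.2, 0.3]]))
p_cond, p_x = infer(log_p)
```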
At each synapse, the system combines a feedback signal, such as the conditional probability $P(i|\mathbf{x})$, computed at the matrix periphery, with the adaptive bump circuit to implement learning. We trigger adaptation at each bump circuit by a rate-coded spike signal generated from the inhibitory circuit's current outputs. We generate this spike train with a current-to-spike converter based on Lazzaro's low-powered spiking neuron [15]. This rate-coded signal toggles $V_{tun}$ and $V_{inj}$ at each bump circuit. Consequently, adaptation is proportional to the frequency of the spike train, which is in turn a linear function of the inhibitory feedback signal. The alternative to the rate code would be to transform the inhibitory circuit's output directly into analog $V_{tun}$ and $V_{inj}$ signals. Because injection and tunneling are highly nonlinear functions of $V_{inj}$ and $V_{tun}$ respectively, implementing updates that are linear in the inhibitory feedback signal is quite difficult using this approach.

Figure 2. Bump mixture model architecture. The system comprises a matrix of adaptive bump circuits where each row computes the probability $P(\mathbf{x}|\boldsymbol{\mu}_i)$. Inhibitory circuits transform the output of each row into system outputs. Spike generators also transform inhibitory circuit outputs into rate-coded feedback for learning.

5 Experimental Results and Conclusions

We fabricated an 8 x 8 mixture model (8 probability distribution functions with 8 dimensions each) in a TSMC 0.35µm CMOS process available through MOSIS, and tested the chip on synthetic data and a handwritten digits dataset. In our tests, we found that due to a design error, one of the input dimensions coupled to the other inputs. Consequently, we held that input fixed throughout the tests, effectively reducing the input to 7 dimensions.
In addition, we found that the learning rule in eq. 6 produced poor performance because the variance of the bump distributions was too large. Consequently, in our learning experiments, we used the hard winner-take-all circuit to control adaptation, resulting in a K-means learning rule. We trained the chip to perform different tasks on handwritten digits from the MNIST dataset [16]. To prepare the data, we first perform PCA to reduce the 784-pixel images to seven-dimensional vectors, and then sent the data on-chip.

We first tested the circuit on clustering handwritten digits. We trained the chip on 1000 examples of each of the digits 1-8. Fig.4(a) shows reconstructions of the eight means before and after training. We compute each reconstruction by multiplying the means by the seven principal eigenvectors of the dataset. The data show that the means diverge to associate with different digits. The chip learns to associate most digits with a single probability distribution. The lone exception is digit 5, which doesn't clearly associate with one distribution. We speculate that the reason is that 3's, 5's, and 8's are very similar in our training data's seven-dimensional representation. Gaussian mixture models trained with the E-M algorithm also demonstrate similar results, recovering only seven out of the eight digits.

We next evaluated the same learned means on vector quantization of a set of test digits (4400 examples of each digit). We compare the chip's learned means with means learned by the batch E-M algorithm on mixtures of Gaussians (with σ=0.01), a mismatch E-M algorithm that models chip nonidealities, and a non-adaptive baseline quantizer. The purpose of the mismatch E-M algorithm was to assess the effect of nonuniform injection and tunneling strengths in floating-gate transistors. Because tunneling and injection magnitudes can vary by a large amount on different floating-gate transistors, the adaptive bump circuits can learn a mean that is somewhat off-center.
We measured the offset of each bump circuit when adapting to a constant input and constructed the mismatch E-M algorithm by altering the learned means during the M-step by the measured offset. We constructed the baseline quantizer by selecting, at random, an example of each digit for the quantizer codebook. For each quantizer, we computed the reconstruction error on the digit's seven-dimensional representation when we represent each test digit by the closest mean.

Figure 3. (a) Circuit for computing log P(x|i). (b) Circuit for computing P(i|x). The current through the ith leg represents P(i|x). (c) Circuit for computing P(x).

The results in Fig.4(b) show that for most of the digits the chip's learned means perform as well as the E-M algorithm, and better than the baseline quantizer in all cases. The one digit where the chip's performance is far from the E-M algorithm is the digit "1". Upon examination of the E-M algorithm's results, we found that it associated two means with the digit "1", where the chip allocated two means for the digit "3". Over all the digits, the E-M algorithm exhibited a quantization error of 9.98, mismatch E-M gives a quantization error of 10.9, the chip's error was 11.6, and the baseline quantizer's error was 15.97. The data show that mismatch is a significant factor in the difference between the bump mixture model's performance and the E-M algorithm's performance in quantization tasks.

Finally, we use the mixture model to classify handwritten digits. If we train a separate mixture model for each class of data, we can classify an input by comparing the probabilities of the input under each model. In our experiment, we train two separate mixture models: one on examples of the digit 7, and the other on examples of the digit 9.
We then apply both mixtures to a set of unseen examples of digits 7 and 9, and record the probability score of each unseen example under each mixture model. We plot the resulting data in Fig. 4(c); each axis represents the probability under a different class. The data show that the model probabilities provide a good metric for classification: assigning each test example to the class model that outputs the highest probability results in an accuracy of 87% on 2000 unseen digits. Additional software experiments show that mixtures of Gaussians (σ = 0.01) trained by the batch E-M algorithm achieve an accuracy of 92.39% on this task. Our test results show that the bump mixture model's performance on several learning tasks is comparable to standard mixtures of Gaussians trained by E-M. These experiments give further evidence that floating-gate circuits can be used to build effective learning systems even though their learning rules derive from silicon physics instead of statistical methods. The bump mixture model also represents a basic building block that we can use to build more complex silicon probability models over analog variables.

Figure 4. (a) Reconstruction of chip means before and after training with handwritten digits. (b) Comparison of average quantization error on unseen handwritten digits, for the chip's learned means and mixture models trained by standard algorithms. (c) Plot of probability of unseen examples of 7's and 9's under two bump mixture models trained solely on each digit.

This work can be extended in several ways. We can build distributions that have parameterized covariances in addition to means.
In addition, we can build more complex, adaptive probability distributions in silicon by combining the bump mixture model with silicon probability models over discrete variables [5-7] and spike-based floating-gate learning circuits [4].

Acknowledgments

This work was supported by NSF under grants BES 9720353 and ECS 9733425, and by Packard Foundation and Sloan Fellowships.

References

[1] C. M. Bishop, Neural Networks for Pattern Recognition. Oxford, UK: Clarendon Press, 1995.
[2] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, pp. 257-286, 1989.
[3] B. A. Minch, "Analysis, Synthesis, and Implementation of Networks of Multiple-Input Translinear Elements," California Institute of Technology, 1997.
[4] C. Diorio, D. Hsu, and M. Figueroa, "Adaptive CMOS: from biological inspiration to systems-on-a-chip," Proceedings of the IEEE, vol. 90, pp. 345-357, 2002.
[5] T. Gabara, J. Hagenauer, M. Moerz, and R. Yan, "An analog 0.25 µm BiCMOS tail-biting MAP decoder," IEEE International Solid-State Circuits Conference (ISSCC), 2000.
[6] J. Dai, S. Little, C. Winstead, and J. K. Woo, "Analog MAP decoder for (8,4) Hamming code in subthreshold CMOS," Advanced Research in VLSI (ARVLSI), 2001.
[7] M. Helfenstein, H.-A. Loeliger, F. Lustenberger, and F. Tarkoy, "Probability propagation and decoding in analog VLSI," IEEE Transactions on Information Theory, vol. 47, pp. 837-843, 2001.
[8] W. C. Fang, B. J. Sheu, O. Chen, and J. Choi, "A VLSI neural processor for image data compression using self-organization neural networks," IEEE Transactions on Neural Networks, vol. 3, pp. 506-518, 1992.
[9] J. Lubkin and G. Cauwenberghs, "A learning parallel analog-to-digital vector quantizer," Journal of Circuits, Systems, and Computers, vol. 8, pp. 604-614, 1998.
[10] T. Delbruck, "Bump circuits for computing similarity and dissimilarity of analog voltages," California Institute of Technology, CNS Memo 26, 1993.
[11] M. Lenzlinger and E. H. Snow, "Fowler-Nordheim tunneling into thermally grown SiO2," Journal of Applied Physics, vol. 40, pp. 278-283, 1969.
[12] E. Takeda, C. Yang, and A. Miura-Hamada, Hot Carrier Effects in MOS Devices. San Diego, CA: Academic Press, 1995.
[13] J. Lazzaro, S. Ryckebusch, M. Mahowald, and C. A. Mead, "Winner-take-all networks of O(n) complexity," in Advances in Neural Information Processing Systems, vol. 1, D. Touretzky, Ed.: MIT Press, 1989, pp. 703-711.
[14] K. Boahen and A. Andreou, "A contrast sensitive silicon retina with reciprocal synapses," in Advances in Neural Information Processing Systems 4, J. Moody, S. Hanson, and R. Lippmann, Eds.: MIT Press, 1992, pp. 764-772.
[15] J. Lazzaro, "Low-power silicon spiking neurons and axons," IEEE International Symposium on Circuits and Systems, 1992.
[16] Y. LeCun, "The MNIST database of handwritten digits," http://yann.lecun.com/exdb/mnist.
2002
198
2,211
Improving Transfer Rates in Brain Computer Interfacing: A Case Study Peter Meinicke, Matthias Kaper, Florian Hoppe, Manfred Heumann and Helge Ritter University of Bielefeld Bielefeld, Germany {pmeinick, mkaper, fhoppe, helge} @techfak.uni-bielefeld.de Abstract In this paper we present results of a study on brain computer interfacing. We adopted an approach of Farwell & Donchin [4], which we tried to improve in several respects. The main objective was to improve the transfer rates based on offline analysis of EEG data, but within a more realistic setup, closer to an online realization than in the original studies. This objective was pursued along two different tracks: on the one hand we used state-of-the-art machine learning techniques for signal classification, and on the other hand we augmented the data space by using more electrodes for the interface. For the classification task we utilized SVMs and, motivated by recent findings on the learning of discriminative densities, we accumulated the values of the classification function in order to combine several classifications, which finally led to significantly improved rates as compared with the techniques applied in the original work. In combination with the data-space augmentation, we achieved competitive transfer rates with an average of 50.5 bits/min and a maximum of 84.7 bits/min. 1 Introduction Some neurological diseases result in the so-called locked-in syndrome. People suffering from this syndrome have lost control over their muscles and are therefore unable to communicate; their brain signals must instead be used for communication. Besides the clinical application, developing such a brain-computer interface (BCI) is in itself an exciting goal, as indicated by growing research interest in this field. Several EEG-based techniques have been proposed for the realization of BCIs (see [6, 12] for an overview).
There are at least four distinguishable basic approaches, each with its own advantages and shortcomings: 1. In the first approach, participants are trained to control their EEG frequency pattern for binary decisions. Whether the power in specific frequency bands (the µ and β rhythms) is heightened or not results in upward or downward cursor movements. A further version extended this basic approach to 2D movements. Transfer rates of 20-25 bits/min were reported [12]. 2. Imaginations of movements, resulting in the "Bereitschaftspotential" over sensorimotor cortex areas, are used to transmit information in the device of Pfurtscheller et al. [8], which is in use by a tetraplegic patient. Blankertz et al. [2] applied sophisticated methods for data analysis to this approach and reached fast transfer rates of 23 bits/min when classifying brain signals preceding overt muscle activity. 3. The thought translation device by Birbaumer et al. [5, 1] is based on slow cortical potentials, i.e. large shifts in the EEG signal. They trained people in a biofeedback scenario to control this component. It is rather slow (<6 bits/min) and requires intensively trained participants, but is in practical use. 4. Farwell & Donchin [4, 3, 10] developed a BCI system utilizing specific positive deflections (P300) in EEG signals accompanying rare events (as discussed in detail below). It is moderately fast (up to 12 bits/min) and needs no practice on the part of the participant, but requires visual attention.

Figure 1: Stimulus matrix with one column highlighted.

For BCIs, it is very desirable to have fast transfer rates. In our own studies, we therefore tried to accelerate the fourth approach by using state-of-the-art machine learning techniques and fusing data from different electrodes for data analysis. For that purpose we utilized the basic setup of Farwell & Donchin (referred to as F&D) [4], who used the well-studied P300 component to create a BCI system. They presented a 6×6 matrix (see Fig.
1), filled with letters and digits, and highlighted all rows and columns sequentially in random order. People were instructed to focus on one symbol in the matrix and mentally count its highlightings. From EEG research it is known that counting a rare specific event (an oddball stimulus) in a series of background stimuli evokes a P300 for the oddball stimulus. Hence, highlighting the attended symbol in the 6×6 matrix should result in a P300, a characteristic positive deflection with a latency of around 300 ms in the EEG signal. It is therefore possible to infer the selected symbol by detecting the P300 in EEG signals. Under suitable circumstances, most brains expose a P300; thus, no training of the participants is necessary. For identification of the correct column and row associated with a P300, Farwell & Donchin used the model-based techniques Area and Peak picking (both described in section 2) to detect the P300. In addition, as a data-driven approach, they used Stepwise Discriminant Analysis (SWDA). Using SWDA in a later study [3] resulted in transfer rates between 4.8 and 7.8 symbols per minute at an accuracy of 80%, with a temporal distance of 125 ms between two highlightings. In the work reported here we improved several aspects of the F&D approach by utilizing very recent machine learning techniques and a larger number of EEG electrodes. First of all, we could increase the transfer rate by using Support Vector Machines (SVMs) [11] for classification. Inspired by a recent approach to the learning of discriminative densities [7], we utilized the values of the SVM classification function as a measure of confidence, which we accumulate over several classifications in order to speed up the transfer rate. In addition, we enhanced classification rates by augmenting the data space: while Farwell & Donchin employed only data from a single electrode for classification, we used the data from 10 electrodes simultaneously.
2 Methods In the following we describe the techniques used for acquisition, preprocessing and analysis of the EEG data. Data acquisition. All results of this paper stem from offline analyses of data acquired during EEG experiments. The experimental setup was the following: participants were seated in front of a computer screen presenting the matrix (see Fig. 1) and user instructions. EEG data were recorded with 10 Ag/AgCl electrodes at positions of the extended international 10-20 system (Fz, Cz, Pz, C3, C4, P3, P4, Oz, OL, OR¹), sampled at 200 Hz and low-pass filtered at 30 Hz. The participants had to perform a certain number of trials. For the duration of a trial, they were instructed to focus their attention on a target symbol specified by the program, to mentally count the highlightings of the target symbol, and to avoid any body movement (especially eye movements and blinks). Each trial is subdivided into a certain number of subtrials. During each subtrial, 12 stimuli are presented, i.e. the 6 rows and the 6 columns are highlighted in random order. For the different BCI setups, the time between stimulus onsets, the interstimulus interval (ISI), was either 150, 300 or 500 ms, while a highlighting always lasts 150 ms. To each stimulus corresponds an epoch, a time frame of 600 ms after stimulus onset.² During this interval a P300 should be evoked if the stimulus contains the target symbol. There is no pause between subtrials, but between trials. During the pause, the participants had time to focus on the next target symbol before they initiated the next trial. The target symbol was chosen randomly from the available set of symbols and was presented by the program in order to create a data set of labelled EEG signals for the subsequent offline analysis. Data preprocessing. To compensate for slow drifts of the DC potential, in a first step the linear trend of the raw data in each electrode over the duration of a trial was eliminated.
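This first preprocessing step can be sketched as follows for one electrode over one trial (the helper name is illustrative):

```python
import numpy as np

def remove_linear_trend(signal):
    """Subtract the least-squares linear fit from one electrode's trial data,
    compensating for slow DC drifts."""
    t = np.arange(len(signal))
    slope, intercept = np.polyfit(t, signal, 1)
    return signal - (slope * t + intercept)
```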
In a second step, the data was normalized to zero mean and unit standard deviation. This was done separately for each electrode, taking the data of all trials into account. Classification of epochs. Test and training sets were created by choosing the data belonging to one symbol as the test set and the data of the other symbols as the training set, in a cross-validation scheme. The task of classifying a subtrial for the identification of a target symbol has to be distinguished from the classification of a single epoch for detection of a signal correlated with oddball stimuli, which we briefly refer to as a "P300 component" in a simplified manner in the following. When using a subtrial to select a symbol, two P300 components have to be detected within epochs: one corresponding to a row stimulus, another to a column stimulus. The detection algorithm works on the data of an epoch and has to compute a score which reflects the presence of a P300 within that epoch. Therefore, 12 epochs have to be evaluated for the selection of one target symbol. For the P300 detection, we utilized two model-based methods which had been proposed by F&D, and one completely data-driven method based on Support Vector Machines (SVMs) [11]. For training of the classifiers, we built up sets of epochs containing an equal number of positive and negative examples, i.e. epochs with and without a P300 component.

¹OL denotes the position halfway between O1 and T5, and OR the position between O2 and T6, respectively.
²With an ISI shorter than 450 ms, there is a time overlap of consecutive epochs.

Figure 2: Trials, subtrials and epochs in the course of time (left). Model-based methods for analysis (right): Area calculates the surface in the P300 window; Peak picking calculates differences between peaks.

The first model-based method uses as its score, as shown in Fig.
2, the area in the P300 window ("Area method"), while the second model-based method uses the difference between the lowest point before, and the highest point within, the P300 window ("Peak picking method"). Hyperparameters of the model-based methods were the boundaries of the P300 window; they were selected, with regard to the average of epochs containing the P300, by taking the boundaries of the largest area. For the completely data-driven approach, SVMs were optimized to distinguish between the two classes (with/without P300) implied by the training set. As compared with many traditional classifiers, such as the SWDA method used by F&D, SVMs can realize Bayes-consistent classifiers under very general conditions without requiring any specific assumptions about the underlying data distributions and decision boundaries. Thereby convergence to the Bayes optimum can be achieved by a suitable choice of hyperparameters. When using SVMs, it is not clear what measure to take as the score of an epoch. The problem is that the SVM has first of all been designed to assign binary class labels to its input without any measure of confidence in the resulting decision. However, a recent approach to the learning of discriminative densities [7] suggests an interpretation of the usual discrimination function for SVMs with positive kernels in terms of scaled density differences. This finding provides us with a well-motivated score for an epoch: with $\mathbf{x}$ as the data vector of an epoch and $y^{(i)} \in \{-1, +1\}$ as the class label of the $i$-th training example, which is positive/negative for epochs with/without target stimulus, the SVM score is computed as

$$ s(\mathbf{x}) = \sum_i y^{(i)} \alpha_i \, k(\mathbf{x}, \mathbf{x}^{(i)}) + b \qquad (1) $$

where $k(\cdot, \mathbf{x}^{(i)})$ in our case is a Gaussian kernel function with bandwidth $\sigma$ (selected, like the weight $C$ for the soft-margin penalties, by $n$-fold cross-validation) evaluated at the $i$-th data example. The mixing weights $\alpha_i$ were estimated by quadratic optimization for an SVM objective with linear soft-margin penalties, where we used the SMO algorithm [9].
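Evaluating this score amounts to a kernel expansion over the training examples; a sketch (variable names are ours, and in practice the weights and bias would come from the SMO-trained SVM):

```python
import numpy as np

def svm_score(x, support_vectors, labels, alphas, b, bandwidth):
    """Unthresholded SVM decision value, used as a confidence score:
    sum_i y_i * alpha_i * k(x, x_i) + b with a Gaussian kernel."""
    sq = ((support_vectors - x) ** 2).sum(axis=1)
    k = np.exp(-sq / (2 * bandwidth ** 2))
    return float((labels * alphas * k).sum() + b)
```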
Combination of subtrials. Because EEG data possess a very poor signal-to-noise ratio (SNR), identification of the target symbol from a single subtrial is usually not reliable enough to achieve a reasonable classification rate. Therefore, several subtrials have to be combined for classification, slowing down the transfer rate. Thus, an important goal is to decrease the number of subtrials which have to be combined for a satisfactory classification rate. An important constraint for the development of the specific offline-analysis programs was to realize a testing scheme as close as possible to a corresponding online evaluation. Therefore, we tested a method for certain combinations of subtrials in the following way: different series of successive subtrials were taken out of a test set and the corresponding single classifications were combined as explained below. Thereby, the test series contained only subtrials belonging to identical symbols, and these were combined in their original temporal order.³ In contrast, Farwell & Donchin randomly chose samples from a test set built from subtrials taken from different trials and belonging to different symbols. With this procedure, they broke up the time course of the recorded data and did not distinguish between different symbols, i.e. different positions in the matrix on the screen. Based on the data of the subtrials, one has to choose a row and a column in order to identify the target symbol, i.e. to classify a trial. Therefore, in a first step, the single scores⁴ $s(\mathbf{x}_i^{(j)})$ of the epochs $\mathbf{x}_i^{(j)}$ corresponding to the stimulus associated with the $i$-th row of the $j$-th subtrial were summed up to the total score $S_i = \sum_j s(\mathbf{x}_i^{(j)})$. Then, the target row was chosen as $\arg\max_i S_i$ with $i \in \{1, \dots, 6\}$. Equivalent steps were performed to choose the target column. Based on these decisions, the target symbol was finally selected in accordance with the presented matrix.
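The score accumulation and symbol selection described above can be sketched as follows (the function name and the 6×6 symbol layout used in the test are illustrative):

```python
import numpy as np

def select_symbol(row_scores, col_scores, matrix):
    """row_scores and col_scores: (n_subtrials, 6) arrays of classifier
    scores, one per row/column stimulus in each subtrial. Scores are summed
    over subtrials; the row and column with the largest totals select the
    symbol from the 6x6 matrix."""
    row = int(row_scores.sum(axis=0).argmax())
    col = int(col_scores.sum(axis=0).argmax())
    return matrix[row][col]
```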
3 Experimental Results Before going into details, we outline our investigations into improving the usability of the F&D BCI. First, the different methods were compared on the data of the Pz electrode, which was originally used by Farwell & Donchin. Second, further single electrodes were taken as input sources. This revealed information about interesting scalp positions for recording a P300 and, on the other hand, indicated which channels may contain a useful signal. Third, the SVM classification rate with respect to epochs was improved by enlarging the data space: the input vector for the classifier was extended by combining data from the same epoch but from different electrodes. These tests indicated that the best classification rates could be achieved using as detection method an SVM with all ten electrodes as input sources. Since the results of the first three steps were established on the data of one initial experiment with only one participant, we evaluated the generality of these techniques by testing different subjects and BCI parameters. Finally, the BCI performance in terms of attainable communication rates is estimated from these analyses. Method comparison using the Pz electrode as input source. All four methods were applied to the data of one initial experiment with an ISI of 500 ms and 3 subtrials per trial. Figure 3 presents the classification rates for up to 10 subtrials. The SVM method achieved the best performance; its epoch classification rate was 76.3% (SD=1.0) in a 10-fold cross-validation with about 380 subtrial samples in the training sets and about 40 in the test sets. From each subtrial in the training set, 4 epochs (2 with, 2 without a P300) were taken as training samples, whereas all 12 epochs of the subtrials in the test set were classified. For each training set, hyperparameters were selected by another 3-fold cross-validation on this set.
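The inner hyperparameter selection can be sketched generically; the `fit` and `score` callables stand in for SVM training and accuracy evaluation, and this skeleton is an illustrative assumption, not the authors' code:

```python
import numpy as np

def nested_cv_select(train_X, train_y, candidates, fit, score, n_folds=3, seed=0):
    """Pick the hyperparameter candidate with the best mean n-fold
    cross-validation score on the training set only."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(train_X))
    folds = np.array_split(idx, n_folds)
    best, best_score = None, -np.inf
    for c in candidates:
        scores = []
        for k in range(n_folds):
            val = folds[k]
            trn = np.concatenate([folds[j] for j in range(n_folds) if j != k])
            model = fit(train_X[trn], train_y[trn], c)
            scores.append(score(model, train_X[val], train_y[val]))
        if np.mean(scores) > best_score:
            best, best_score = c, np.mean(scores)
    return best
```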
³For a higher number of subtrial combinations, subtrials from different trials had to be combined. However, a real-world application of this BCI does not require such combinations with respect to the finally achieved transfer rates reported in section 3.
⁴The method index is omitted in the following.

Figure 3: (left) Method comparison on the Pz electrode: the three techniques were applied to the data of the initial experiment. (right) Classification rates for different numbers of electrodes.

Figure 4: Electrode comparison on the data of the initial experiment (classification rate over time for electrodes Fz, Cz, Pz, C3, C4, P3, P4, OL, OR, OZ; Peak picking vs. SVM).

Different electrodes as input source. The method comparison tests were repeated for each electrode. The results of the Peak picking and SVM methods are shown in Figure 4. The SVM is able to extract useful information from all ten electrodes, whereas the Peak picking performance varies across scalp positions. In particular, the electrodes over the visual cortex areas OZ, OR and OL are useless for the model-based techniques, as the same characteristics are revealed by tests with the Area method. Higher-dimensional data space. While Farwell & Donchin used only one electrode for data analysis, we extended the data space by using larger numbers of electrodes. We calculated classification rates for Pz alone, three, seven, and ten electrodes. A signal correlated with oddball stimuli was classified at rates of 76.8%, 76.8%, 90.9%, and 94.5%, respectively, for the corresponding data spaces of 120, 360, 840, and 1200 dimensions. These rates were calculated with 850 positive and 850 negative epoch samples and a 3-fold cross-validation. The classified signal might be more than solely the traditional P300 component.
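The data-space augmentation is a simple concatenation of per-electrode epoch windows into one feature vector; a sketch (the function name is ours; 120 samples per electrode correspond to 600 ms at 200 Hz):

```python
import numpy as np

def epoch_vector(eeg, onset, n_samples=120):
    """Concatenate the post-stimulus window from all electrodes into one
    feature vector. eeg: (n_electrodes, n_total_samples) array.
    With 10 electrodes and 120 samples each, this yields 1200 dimensions."""
    window = eeg[:, onset:onset + n_samples]   # (n_electrodes, n_samples)
    return window.reshape(-1)
```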
Applying the data-space augmentation to the classification used to infer symbols in the matrix results in the classification rates depicted in Figure 3 (right) for an ISI of 500 ms. Using ten electrodes simultaneously, combined in one data vector, outperforms the lower-dimensional data spaces.

Figure 5: Mean classification rates (left) and transfer rates (right) for different ISIs. Error bars range from best to worst results. Note that a subtrial takes a specific amount of time; therefore, the time-dependent transfer rates decrease with the number of subtrials.

Reducing the ISI and using more participants. The improved classification rates encouraged further experiments. To accelerate the system, we reduced the ISI to 300 ms and 150 ms. Additionally, to generalize the results, we recruited four participants. Mean, best and worst classification rates are presented in Figure 5, as well as average and best transfer rates. The latter were calculated according to

$$ B = \frac{1}{T}\left[\log_2 N + P \log_2 P + (1-P) \log_2 \frac{1-P}{N-1}\right] $$

where $N$ is the number of choices (36 here), $P$ the probability of correct classification, and $T$ the time required for a classification. Using an ISI of 300 ms results in slower transfer rates than using an ISI of 150 ms. The latter ISI results on average in classifying a symbol after 5.4 s with an accuracy of 80% (disregarding delays between trials). The poorest performer needs 9 s to reach this criterion; the best performer achieves an accuracy of 95.2% already after 3.6 s. The transfer rates, with a maximum of 84.7 bits/min and an average of 50.5 bits/min, outperform the EEG-based BCI systems we know of. 4 Conclusion With the application of the data-driven SVM method to the classification of single-channel EEG signals, we could improve transfer rates as compared with model-based techniques. Furthermore, by increasing the number of EEG channels, even higher classification and transfer rates could be achieved.
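The transfer-rate calculation described above (the standard Wolpaw information-transfer-rate formula) can be computed as follows; the function names are ours:

```python
import numpy as np

def bits_per_selection(n_choices, p):
    """Information transferred by one selection among n_choices options,
    made correctly with probability p."""
    if p >= 1.0:
        return float(np.log2(n_choices))
    return float(np.log2(n_choices)
                 + p * np.log2(p)
                 + (1 - p) * np.log2((1 - p) / (n_choices - 1)))

def bits_per_minute(n_choices, p, seconds_per_selection):
    """Transfer rate in bits/min for one selection every T seconds."""
    return bits_per_selection(n_choices, p) * 60.0 / seconds_per_selection
```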
Accumulating the value of the classification function as a measure of confidence proved practical for handling series of classifications in order to identify a symbol. This resulted in high transfer rates with a maximum of 84.7 bits/min. 5 Acknowledgements We thank Thorsten Twellmann for supplying the SVM algorithms and the Department of Cognitive Psychology at the University of Bielefeld for providing the experimental environment. This work was supported by Grant Ne 366/4-1 and the project SFB 360 from the German Research Council (Deutsche Forschungsgemeinschaft). References [1] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler, J. Perelmouter, E. Taub, and H. Flor. A spelling device for the paralysed. Nature, 398:297–298, 1999. [2] B. Blankertz, G. Curio, and K.-R. Müller. Classifying single trial EEG: Towards brain computer interfacing. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press. [3] E. Donchin, K.M. Spencer, and R. Wijeshinghe. The mental prosthesis: Assessing the speed of a P300-based brain-computer interface. IEEE Transactions on Rehabilitation Engineering, 8(2):174–179, 2000. [4] L.A. Farwell and E. Donchin. Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. Electroencephalography and Clinical Neurophysiology, 70(S2):510–523, 1988. [5] A. Kübler, B. Kotchoubey, T. Hinterberger, N. Ghanayim, J. Perelmouter, M. Schauer, C. Fritsch, E. Taub, and N. Birbaumer. The thought translation device: a neurophysiological approach to communication in total motor paralysis. Experimental Brain Research, 124:223–232, 1999. [6] A. Kübler, B. Kotchoubey, J. Kaiser, J.R. Wolpaw, and N. Birbaumer. Brain-computer communication: Unlocking the locked in. Psychological Bulletin, 127(3):358–375, 2001. [7] P. Meinicke, T. Twellmann, and H. Ritter. Maximum contrast classifiers. In Proc. of the Int. Conf.
on Artificial Neural Networks, Berlin, 2002. Springer. In press. [8] G. Pfurtscheller, C. Neuper, C. Guger, B. Obermaier, M. Pregenzer, H. Ramoser, and A. Schlögl. Current trends in Graz brain-computer interface (BCI) research. IEEE Transactions on Rehabilitation Engineering, pages 216–219, 2000. [9] J. Platt. Fast training of support vector machines using sequential minimal optimization. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods — Support Vector Learning, pages 185–208, Cambridge, MA, 1999. MIT Press. [10] J.B. Polikoff, H.T. Bunnell, and W.J. Borkowski. Toward a P300-based computer interface. In RESNA '95 Annual Conference, RESNAPRESS, Arlington, VA, pages 178–180, 1995. [11] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, New York, 1995. [12] J.R. Wolpaw, N. Birbaumer, D.J. McFarland, G. Pfurtscheller, and T.M. Vaughan. Brain-computer interfaces for communication and control. Clinical Neurophysiology, 113:767–791, 2002.
2002
199
2,212
Nonparametric Representation of Policies and Value Functions: A Trajectory-Based Approach Christopher G. Atkeson Robotics Institute and HCII Carnegie Mellon University Pittsburgh, PA 15213, USA cga@cmu.edu Jun Morimoto ATR Human Information Science Laboratories, Dept. 3 Keihanna Science City Kyoto 619-0288, Japan xmorimo@atr.co.jp Abstract A longstanding goal of reinforcement learning is to develop nonparametric representations of policies and value functions that support rapid learning without suffering from interference or the curse of dimensionality. We have developed a trajectory-based approach, in which policies and value functions are represented nonparametrically along trajectories. These trajectories, policies, and value functions are updated as the value function becomes more accurate or as a model of the task is updated. We have applied this approach to periodic tasks such as hopping and walking, which required handling discount factors and discontinuities in the task dynamics, and using function approximation to represent value functions at discontinuities. We also describe extensions of the approach to make the policies more robust to modeling error and sensor noise. 1 Introduction The widespread application of reinforcement learning is hindered by excessive cost in terms of one or more of representational resources, computation time, or amount of training data. The goal of our research program is to minimize these costs. We reduce the amount of training data needed by learning models, and using a DYNA-like approach to do mental practice in addition to actually attempting a task [1, 2]. This paper addresses concerns about computation time and representational resources. We reduce the computation time required by using more powerful updates that update first and second derivatives of value functions and first derivatives of policies, in addition to updating value function and policy values at particular points [3, 4, 5]. 
We reduce the representational resources needed by representing value functions and policies along carefully chosen trajectories. This non-parametric representation is well suited to the task of representing and updating value functions, providing additional representational power as needed and avoiding interference. This paper explores how the approach can be extended to periodic tasks such as hopping and walking. Previous work has explored how to apply an early version of this approach to tasks with an explicit goal state [3, 6] and how to simultaneously learn a model and use this approach to compute a policy and value function [6].

*Also affiliated with the ATR Human Information Science Laboratories, Dept. 3.

Handling periodic tasks required accommodating discount factors and discontinuities in the task dynamics, and using function approximation to represent value functions at discontinuities. 2 What is the approach? Represent value functions and policies along trajectories. Our first key idea for creating a more global policy is to coordinate many trajectories, similar to using the method of characteristics to solve a partial differential equation. A more global value function is created by combining value functions for the trajectories. As long as the value functions are consistent between trajectories and cover the appropriate space, the global value function created will be correct. This representation supports accurate updating, since any updates must occur along densely represented optimized trajectories, and an adaptive-resolution representation that allocates resources to where optimal trajectories tend to go. Segment trajectories at discontinuities. A second key idea is to segment the trajectories at discontinuities of the system dynamics, to reduce the amount of discontinuity in the value function within each segment, so that our extrapolation operations are correct more often.
We assume smooth dynamics and criteria, so that first and second derivatives exist. Unfortunately, in periodic tasks such as hopping or walking the dynamics changes discontinuously as feet touch and leave the ground. The locations in state space at which this happens can be localized to lower-dimensional surfaces that separate regions of smooth dynamics. For periodic tasks we apply our approach along trajectory segments which end whenever a dynamics (or criterion) discontinuity is reached. We also search for value function discontinuities not collocated with dynamics or criterion discontinuities. We can use all the trajectory segments that start at the discontinuity and continue through the next region to provide estimates of the value function at the other side of the discontinuity. Use function approximation to represent the value function at discontinuities. We use locally weighted regression (LWR) to construct value functions at discontinuities [7]. Update first and second derivatives of the value function, as well as first derivatives of the policy (control gains for a linear controller), along the trajectory. We can think of this as updating the first few terms of local Taylor series models of the global value and policy functions. This non-parametric representation is well suited to the task of representing and updating value functions, providing additional representational power as needed and avoiding interference. We will derive the update rules. Because we are interested in periodic tasks, we must introduce a discount factor into Bellman's equation, so that value functions remain finite. Consider a system with dynamics $\mathbf{x}_{t+1} = f(\mathbf{x}_t, \mathbf{u}_t)$ and a one-step cost function $L(\mathbf{x}_t, \mathbf{u}_t)$, where $\mathbf{x}$ is the state of the system and $\mathbf{u}$ is a vector of actions or controls. The subscript $t$ serves as a time index, but will be dropped in the equations that follow in cases where all time indices are the same or are equal to $t$.
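A sketch of locally weighted regression in this role: fit a Gaussian-weighted linear model around each query state and evaluate it there (the helper name and bandwidth are illustrative assumptions):

```python
import numpy as np

def lwr_predict(query, X, y, bandwidth=1.0):
    """Locally weighted linear regression: weight training points by a
    Gaussian kernel around the query, solve weighted least squares for an
    affine model, and evaluate it at the query."""
    d = ((X - query) ** 2).sum(axis=1)
    w = np.exp(-d / (2 * bandwidth ** 2))
    A = np.hstack([X, np.ones((len(X), 1))])   # affine features [x, 1]
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return float(np.append(query, 1.0) @ beta)
```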
A goal of reinforcement learning and optimal control is to find a policy that minimizes the total cost, which is the sum of the costs for each time step. One approach to doing this is to construct an optimal value function, V(x). The value of this value function at a state x is the sum of all future costs, given that the system started in state x and followed the optimal policy (chose optimal actions at each time step as a function of the state). A local planner or controller can choose globally optimal actions if it knows the future cost of each action. This cost is simply the sum of the cost of taking the action right now and the discounted future cost of the state that the action leads to, which is given by the value function. Thus, the optimal action is given by:

u(x) = argmin_u [ L(x, u) + γ V(f(x, u)) ]

where γ is the discount factor.

Figure 1: Example trajectories where the value function and policy are explicitly represented for a regulator task at goal state G (left), a task with a point goal state G (middle), and a periodic task (right).

Suppose at a point (x_0, u_0) we have 1) a local second-order Taylor series approximation of the optimal value function:

V(x) ≈ V_0 + V_x Δx + (1/2) Δxᵀ V_xx Δx,  where Δx = x − x_0,

2) a local second-order Taylor series approximation of the dynamics, which can be learned using local models of the plant (f_x and f_u correspond to the usual A and B of the linear plant model used in linear quadratic regulator (LQR) design):

f(x, u) ≈ f_0 + f_x Δx + f_u Δu + (1/2) Δxᵀ f_xx Δx + Δxᵀ f_xu Δu + (1/2) Δuᵀ f_uu Δu,  where Δu = u − u_0,

and 3) a local second-order Taylor series approximation of the one-step cost, which is often known analytically for human-specified criteria (L_xx and L_uu correspond to the usual Q and R of LQR design):

L(x, u) ≈ L_0 + L_x Δx + L_u Δu + (1/2) Δxᵀ L_xx Δx + Δxᵀ L_xu Δu + (1/2) Δuᵀ L_uu Δu.

Given a trajectory, one can integrate the value function and its first and second spatial derivatives backwards in time to compute an improved value function and policy.
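To make this action-selection rule concrete, here is a minimal sketch that evaluates the immediate cost plus discounted future value over a grid of candidate actions. The toy dynamics, cost, and quadratic value function are hypothetical stand-ins, not the paper's systems.

```python
import numpy as np

gamma = 0.95  # discount factor

def f(x, u, dt=0.1):
    """Toy one-step dynamics: position integrates velocity, velocity integrates u."""
    return np.array([x[0] + dt * x[1], x[1] + dt * u])

def L(x, u):
    """Toy one-step cost: quadratic penalty on state and action."""
    return x[0] ** 2 + 0.1 * x[1] ** 2 + 0.01 * u ** 2

def V(x):
    """Stand-in quadratic value function (as if produced by a backward sweep)."""
    return 2.0 * x[0] ** 2 + 0.5 * x[1] ** 2 + 1.5 * x[0] * x[1]

def optimal_action(x, candidates):
    """u(x) = argmin_u [ L(x, u) + gamma * V(f(x, u)) ]"""
    costs = [L(x, u) + gamma * V(f(x, u)) for u in candidates]
    return candidates[int(np.argmin(costs))]

x = np.array([1.0, 0.0])  # displaced from the origin, at rest
u = optimal_action(x, np.linspace(-5.0, 5.0, 101))
# With the cross term in V, the lookahead accelerates back toward the origin (u < 0).
```

The one-step lookahead needs only a local model and a value function, which is exactly what the trajectory-based representation stores at each point.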
The backward sweep takes the following form (in discrete time, where primes denote derivatives of the value function at the next state along the trajectory):

Q_x = L_x + γ V′_x f_x,   Q_u = L_u + γ V′_x f_u   (1)
Q_xx = L_xx + γ f_xᵀ V′_xx f_x + γ V′_x f_xx   (2)
Q_uu = L_uu + γ f_uᵀ V′_xx f_u + γ V′_x f_uu   (3)
Q_ux = L_ux + γ f_uᵀ V′_xx f_x + γ V′_x f_ux,   Δu = −Q_uu⁻¹ Q_u,   K = −Q_uu⁻¹ Q_ux   (4)
V_x = Q_x − Q_u Q_uu⁻¹ Q_ux,   V_xx = Q_xx − Q_xu Q_uu⁻¹ Q_ux   (5)

After the backward sweep, forward integration can be used to update the trajectory itself, applying the updated actions u_k^{new} = u_k + Δu_k through the dynamics x_{k+1} = f(x_k, u_k^{new}). Figure 1 shows our approach applied to several types of problems. On the left we see that a task that requires steady-state control about a goal point (a regulator task) can be solved with a single trivial trajectory that starts and ends at the goal and provides a value function and constant linear policy u(x) = K(x − x_G) in the vicinity of the goal.

Figure 2: The optimal hopper controller with a range of penalties on u usage.

The middle figure of Figure 1 shows the trajectories used to compute the value function for a swing-up problem [3]. In this problem the goal requires regulation about the state where the pendulum is inverted and in an unstable equilibrium.
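In the linear-quadratic special case (the second derivatives of the dynamics vanish), the backward sweep reduces to a discounted Riccati-style recursion. The sketch below iterates that recursion to a fixed point; the system matrices are illustrative, not the paper's robot models.

```python
import numpy as np

gamma = 0.99                              # discount factor
A = np.array([[1.0, 0.1], [0.0, 1.0]])    # f_x: discrete double integrator
B = np.array([[0.0], [0.1]])              # f_u
Q = np.eye(2)                             # L_xx
R = np.array([[0.1]])                     # L_uu

Vxx = np.zeros((2, 2))                    # value function curvature at the sweep's end
for _ in range(500):                      # repeat the backward step to convergence
    Qxx = Q + gamma * A.T @ Vxx @ A       # second derivatives of the Q-function
    Qux = gamma * B.T @ Vxx @ A
    Quu = R + gamma * B.T @ Vxx @ B
    K = -np.linalg.solve(Quu, Qux)        # local linear policy: u = K x
    Vxx = Qxx + Qux.T @ K                 # improved value function curvature

# The resulting constant linear policy should stabilize the closed-loop system.
closed_loop = np.max(np.abs(np.linalg.eigvals(A + B @ K)))
```

Along a real trajectory the same updates run once per time step, backwards from the end, with the extra terms coming from the second derivatives of the nonlinear dynamics.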
However, the nonlinearities of the problem limit the region of applicability of a linear policy, and non-trivial trajectories have to be created to cover a larger region. In this case the region where the value function is less than a target value is filled with trajectories. The neighboring trajectories have consistent value functions, and thus the globally optimal value function and policy are found in the explored region [3]. The right figure of Figure 1 shows the trajectories used to compute the value function for a periodic problem, control of vertical hopping in a hopping robot. In this problem there is no goal state, but a desired hopping height is specified. This problem has been extensively studied in the robotics literature [8] from the point of view of how to manually design a nonlinear controller with a large stability region. We note that optimal control provides a methodology to design nonlinear controllers with large stability regions and also good performance in terms of explicitly specified criteria. We describe later how to also make these controller designs more robust. In this figure the vertical axis corresponds to the height of the hopper, and the horizontal axis is vertical velocity. The robot moves around the origin in a counterclockwise direction. In the top two quadrants the robot is in the air, and in the bottom two quadrants the robot is on the ground. Thus, the horizontal axis is a discontinuity of the robot dynamics, and trajectory segments end and often begin at the discontinuity. We see that while the robot is in the air it cannot change how much energy it has (how high it goes or how fast it is going when it hits the ground), as the trajectories end with the same pattern they began with. When the robot is on the ground it thrusts with its leg to "focus" the trajectories, so the set of touchdown positions is mapped to a smaller set of takeoff positions.
This funneling effect is characteristic of controllers for periodic tasks, and how fast the funnel becomes narrow is controlled by the size of the penalty on u usage (Figure 2).

2.1 How are trajectory start points chosen?

In our approach trajectories are refined towards optimality given their fixed starting points. However, an initial trajectory must first be created. For regulator tasks, the trajectory is trivial and simply starts and ends at the known goal point. For tasks with a point goal, trajectories can be extended backwards away from the goal [3]. For periodic tasks, crude trajectories must be created using some other approach before this approach can refine them. We have used several methods to provide initial trajectories. Manually designed controllers sometimes work. In learning from demonstration a teacher provides initial trajectories [6]. In policy optimization (aka "policy search") a parameterized policy is optimized [9]. Once a set of initial task trajectories is available, the following four methods are used to generate trajectories in new parts of state space. We use all of these methods simultaneously, and locally optimize each of the trajectories produced. The best trajectory of the set is then stored and the other trajectories are discarded. 1) Use the global policy generated by policy optimization, if available. 2) Use the local policy from the nearest point with the same type of dynamics. 3) Use the local value function estimate (and derivatives) from the nearest point with the same type of dynamics. 4) Use the policy from the nearest trajectory, where the nearest trajectory is selected at the beginning of the forward sweep and kept the same throughout the sweep. Note that methods 2 and 3 can change which stored trajectories they take points from on each time step, while method 4 uses a policy from a single neighboring trajectory.
3 Control of a walking robot

As another example we will describe the search for a policy for walking of a simple planar biped robot that walks along a bar. The simulated robot has two legs and a torque motor between the legs. Instead of revolute or telescoping knees, the robot can grab the bar with its foot as its leg swings past it. This is a model of a robot that walks along the trusses of a large structure such as a bridge, much as a monkey brachiates with its arms. This simple model has also been used in studies of robot passive dynamic walking [10]. This arrangement means the robot has a five-dimensional state space: left leg angle θ_L, right leg angle θ_R, left leg angular velocity θ̇_L, right leg angular velocity θ̇_R, and stance foot location. A simple policy is used to determine when to grab the bar (at the end of a step when the swing foot passes the bar going downwards). The variable to be controlled is the torque at the hip. The criterion we used is quite complex. We are a long way from specifying an abstract or vague criterion such as "cover a fixed distance with minimum fuel or battery usage" or "maximize the amount of your genes in future gene pools" and successfully finding an optimal or reasonable policy. At this stage we need to include several "shaping" terms in the criterion, that reward keeping the hips at the right altitude with minimal vertical velocity, keeping the leg amplitude within reason, maintaining a symmetric gait, and maintaining the desired hip forward velocity:

L = w_1 (h − h_d)² + w_2 ḣ² + w_3 (a_L + a_R) + w_4 s + w_5 (ẋ_hip − ẋ_d)²   (6)

where the w_i are weighting factors and h is the hip altitude. The leg length is 1 meter (hence the constant 1 in the hip-altitude term). The desired hip forward velocity ẋ_d is fixed. a_L (respectively a_R) provides a measure of how far the left (right) leg has gone past its limits in the forward or backward direction. s is the product of the leg angles if the legs are both forward or both rearward, and zero otherwise. x_hip is the hip location.
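To illustrate how such a shaping criterion is assembled, here is a hypothetical weighted-sum cost in the spirit of the shaping terms listed above; the term definitions, limits, and weights are stand-ins, not the values used for the robot.

```python
def shaping_cost(hip_height, hip_vel_z, hip_vel_x, theta_l, theta_r,
                 h_desired=0.95, v_desired=0.4, amp_limit=0.5,
                 w=(1.0, 1.0, 1.0, 1.0, 1.0)):
    """Hypothetical shaping criterion: a weighted sum of penalty terms."""
    over = lambda th: max(0.0, abs(th) - amp_limit)  # how far a leg is past its limit
    both_same_side = theta_l * theta_r if theta_l * theta_r > 0.0 else 0.0
    terms = (
        (hip_height - h_desired) ** 2,   # keep the hips at the right altitude
        hip_vel_z ** 2,                  # minimal vertical hip velocity
        over(theta_l) + over(theta_r),   # keep leg amplitude within reason
        both_same_side,                  # penalize an asymmetric, legs-together posture
        (hip_vel_x - v_desired) ** 2,    # maintain the desired forward velocity
    )
    return sum(wi * ti for wi, ti in zip(w, terms))

good = shaping_cost(0.95, 0.0, 0.4, 0.2, -0.2)  # symmetric gait at the desired speed
bad = shaping_cost(0.80, 0.3, 0.0, 0.7, 0.6)    # slouched, stalled, legs on one side
```

A nominal gait state incurs no shaping penalty, while a degraded posture is penalized by every term at once, which is what steers the optimization toward reasonable gaits.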
The integration and control time steps are 1 millisecond each. The dynamics of this walker are simulated using a commercial package, SDFAST. Initial trajectories were generated by optimizing the coefficients of a linear policy. When the left leg was in stance:

u = c_0 + c_1 θ_L + c_2 θ̇_L + c_3 φ + c_4 φ̇ + c_5 h + c_6 ḣ + c_7 ẋ_hip   (7)

where φ is the angle between the legs and the c_i are the optimized coefficients. When the right leg was in stance the same policy was used with the appropriate signs negated.

3.1 Results

The trajectory-based approach was able to find a cheaper and more robust policy than the parametric policy-optimization approach. This is not surprising given the flexible and expandable representational capacity of an adaptive non-parametric representation, but it does provide some indication that our update algorithms can usefully harness the additional representational power. Cost: For example, after training the parametric policy, we measured the undiscounted cost over 1 second (roughly one step of each leg) starting in a state along the lowest-cost cyclic trajectory. The cost for the optimized parametric policy was 4316. The corresponding cost for the trajectory-based approach starting from the same state was 3502. Robustness: We did a simple assessment of robustness by adding offsets to the same starting state until the optimized linear policy failed. The offsets were in terms of the stance leg angle and the angle between the legs, and the corresponding angular velocities. We measured the maximum offset the linearized optimized parametric policy could handle in each of these directions, and did a similar test for the trajectory-based approach. In each direction the maximum offset the trajectory-based approach was able to handle was equal to or greater than that of the parametric policy-based approach, extending the range most in two of the four directions.
This is not surprising, since the trajectory-based controller uses the parametric policy as one of the ways to initially generate candidate trajectories for optimization. In cases where the trajectory-based approach is not able to generate an appropriate trajectory, the system will generate a series of trajectories with start points moving from regions it knows how to handle towards the desired start point. Thus, we have not yet discovered situations that are physically possible to recover from that the trajectory-based approach cannot handle, if it is allowed as much computation time as it needs. Interference: To demonstrate interference in the parametric policy approach, we optimized its performance from a distribution of starting states. These states were the original state, and states with positive offsets. The new cost for the original starting position was 14,747, compared to 4316 before retraining. The trajectory approach has the same cost as before, 3502.

4 Robustness to modeling error and imperfect sensing

So far we have addressed robustness in terms of the range of initial states that can be handled. Another form of robustness is robustness to modeling error (changes in masses, friction, and other model parameters) and imperfect sensing, so that the controller does not know exactly what state the robot is in. Since simulations are used to optimize policies, it is relatively easy to include simulations with different model parameters and sensor noise in the training and optimize for a robust parametric controller in policy shaping. How does the trajectory-based approach achieve comparable robustness? We have developed two approaches: a probabilistic approach, which maintains distributional information about unknown states and parameters, and a game-based or minimax approach. The probabilistic approach supports actions by the controller to actively minimize uncertainty as well as achieve goals, which is known as dual control.
The game-based approach does not reduce uncertainty with experience, and is somewhat paranoid, assuming the world is populated by evil spirits which choose the worst possible disturbance at each time step for the controller. This results in robust, but often overly conservative, policies. In the probabilistic case, the state is augmented with any unknown parameters, such as masses of parts or friction coefficients, and the covariance of all the original elements of the state as well as the added parameters. An extended Kalman filter is constructed as the new dynamics equation, predicting the new estimates of the means and covariances given the control signals to the system. The one-step cost function is restated in terms of the augmented state. The value function is now a function of the augmented state, including covariances of the original state vector elements. These covariances interact with the curvature of the value function, causing additional cost in areas of the value function that have high curvature or second derivatives. Thus the system is rewarded when it moves to areas of the value function that are planar, where uncertainty has no effect on the expected cost. The system is also rewarded when it learns, which reduces the covariances of the estimates, so the system may choose actions that move away from a goal but reduce uncertainty. This probabilistic approach does dramatically increase the dimensionality of the state vector and thus the value function, but in the context of only a quadratic cost on dimensionality this is not as fatal as it would seem. A less expensive approach is to use a game-based uncertainty model with minimax optimization. In this case, we assume an opponent can pick a disturbance to maximally increase our cost. This is closely related to robust nonlinear controller design techniques based on the idea of H∞ control [11, 12] and risk-sensitive control [13, 14].
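The minimax idea can be sketched as a one-step lookahead played against a worst-case disturbance. Everything below (the scalar dynamics, the costs, the disturbance budget, and the value function) is an illustrative stand-in, not the paper's formulation.

```python
import numpy as np

gamma = 0.9
lam = 5.0   # penalty on disturbance magnitude: the opponent's "budget"

def f(x, u, w):
    """Toy scalar dynamics with an additive disturbance input w."""
    return 0.9 * x + u + w

def L(x, u, w):
    """One-step cost; note the negative sign on the disturbance term."""
    return x ** 2 + 0.1 * u ** 2 - lam * w ** 2

def V(x):
    """Stand-in quadratic value function."""
    return 2.0 * x ** 2

def minimax_action(x, us, ws):
    """u(x) = argmin_u max_w [ L(x, u, w) + gamma * V(f(x, u, w)) ]"""
    def worst_case(u):
        return max(L(x, u, w) + gamma * V(f(x, u, w)) for w in ws)
    return min(us, key=worst_case)

u = minimax_action(1.0, np.linspace(-2, 2, 81), np.linspace(-0.5, 0.5, 41))
# Even against the worst-case disturbance, the controller pushes toward the origin.
```

The negative disturbance cost must outweigh the value-function curvature (here lam is large enough) or the inner maximization over w would be unbounded, mirroring the conditions required in minimax control design.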
We augment the dynamics equation with a disturbance term: x_{k+1} = f(x_k, u_k, w_k), where w is a vector of disturbance inputs. To limit the size of the disturbances, we include the disturbance magnitude in a modified one-step cost function with a negative sign. The opponent who controls the disturbance wants to increase our cost, so this new term gives an incentive to the opponent to choose the worst direction for the disturbance, and a disturbance magnitude that gives the highest ratio of increased cost to disturbance size: L′(x, u, w) = L(x, u) − λ‖w‖². Initially, λ is set to globally approximate the uncertainty of the model. Ultimately, λ should vary with the local confidence in the model. Highly practiced movements or portions of movements should have high λ, and new movements should have lower λ. The optimal action is now given by Isaacs' equation: u(x) = argmin_u max_w [ L′(x, u, w) + γ V(f(x, u, w)) ]. How we solve Isaacs' equation and an application of this method are described in the companion paper [15].

5 How to cover a volume of state space

In tasks with a goal or point attractor, [3] showed that certain key trajectories can be grown backwards from the goal in order to approximate the value function. In the case of a sparse use of trajectories to cover a space, the cost of the approach is dominated by the costs of updating second-derivative matrices, and thus the cost of the trajectory-based approach increases quadratically as the dimensionality increases. However, for periodic tasks the approach of growing trajectories backwards from the goal cannot be used, as there is no goal point or set. In this case the trajectories that form the optimal cycle can be used as key trajectories, with each point along them supplying a local linear policy and local quadratic value function. These key trajectories can be computed using any optimization method, and then the corresponding policy and value function estimates along the trajectory computed using the update rules given here.
It is important to point out that optimal trajectories need only be placed densely enough to separate regions which have different local optima. The trajectories used in the representation usually follow local valleys of the value function. Also, we have found that natural behavior often lies entirely on a low-dimensional manifold embedded in a high-dimensional space. Using these trajectories and creating new trajectories as task demands require, we expect to be able to handle a range of natural tasks.

6 Contributions

In order to accommodate periodic tasks, this paper has discussed how to incorporate discount factors into the trajectory-based approach, how to handle discontinuities in the dynamics (and equivalently, criteria and constraints), and how to find key trajectories for a sparse trajectory-based approach. The trajectory-based approach requires less design skill from humans, since it does not need a "good" policy parameterization, and it produces cheaper and more robust policies which do not suffer from interference.

References

[1] Richard S. Sutton. Integrated architectures for learning, planning and reacting based on approximating dynamic programming. In Proceedings of the 7th International Conference on Machine Learning, 1990.
[2] C. Atkeson and J. Santamaria. A comparison of direct and model-based reinforcement learning. In International Conference on Robotics and Automation, 1997.
[3] Christopher G. Atkeson. Using local trajectory optimizers to speed up global optimization in dynamic programming. In Jack D. Cowan, Gerald Tesauro, and Joshua Alspector, editors, Advances in Neural Information Processing Systems, volume 6, pages 663–670. Morgan Kaufmann Publishers, Inc., 1994.
[4] P. Dyer and S. R. McReynolds. The Computation and Theory of Optimal Control. Academic Press, New York, NY, 1970.
[5] D. H. Jacobson and D. Q. Mayne. Differential Dynamic Programming. Elsevier, New York, NY, 1970.
[6] Christopher G. Atkeson and Stefan Schaal. Robot learning from demonstration. In Proc.
14th International Conference on Machine Learning, pages 12–20. Morgan Kaufmann, 1997.
[7] C. G. Atkeson, A. W. Moore, and S. Schaal. Locally weighted learning. Artificial Intelligence Review, 11:11–73, 1997.
[8] W. Schwind and D. Koditschek. Control of forward velocity for a simplified planar hopping robot. In International Conference on Robotics and Automation, volume 1, pages 691–696, 1995.
[9] J. Andrew Bagnell and Jeff Schneider. Autonomous helicopter control using reinforcement learning policy search methods. In International Conference on Robotics and Automation, 2001.
[10] M. Garcia, A. Chatterjee, and A. Ruina. Efficiency, speed, and scaling of two-dimensional passive-dynamic walking. Dynamics and Stability of Systems, 15(2):75–99, 2000.
[11] K. Zhou, J. C. Doyle, and K. Glover. Robust and Optimal Control. Prentice Hall, New Jersey, 1996.
[12] J. Morimoto and K. Doya. Robust reinforcement learning. In Todd K. Leen, Thomas G. Dietterich, and Volker Tresp, editors, Advances in Neural Information Processing Systems 13, pages 1061–1067. MIT Press, Cambridge, MA, 2001.
[13] R. Neuneier and O. Mihatsch. Risk sensitive reinforcement learning. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11, pages 1031–1037. MIT Press, Cambridge, MA, 1998.
[14] S. P. Coraluppi and S. I. Marcus. Risk-sensitive and minimax control of discrete-time finite-state Markov decision processes. Automatica, 35:301–309, 1999.
[15] J. Morimoto and C. Atkeson. Minimax differential dynamic programming: An application to robust biped walking. In Advances in Neural Information Processing Systems 15. MIT Press, Cambridge, MA, 2002.
Intrinsic Dimension Estimation Using Packing Numbers

Balázs Kégl
Department of Computer Science and Operations Research
University of Montreal
CP 6128 succ. Centre-Ville, Montréal, Canada H3C 3J7
kegl@iro.umontreal.ca

Abstract

We propose a new algorithm to estimate the intrinsic dimension of data sets. The method is based on geometric properties of the data and requires neither parametric assumptions on the data generating model nor input parameters to set. The method is compared to a similar, widely-used algorithm from the same family of geometric techniques. Experiments show that our method is more robust in terms of the data generating distribution and more reliable in the presence of noise.

1 Introduction

High-dimensional data sets have several unfortunate properties that make them hard to analyze. The phenomenon that the computational and statistical efficiency of statistical techniques degrades rapidly with the dimension is often referred to as the "curse of dimensionality". One particular characteristic of high-dimensional spaces is that, as the volumes of constant-diameter neighborhoods become large, exponentially many points are needed for reliable density estimation. Another important problem is that as the data dimension grows, sophisticated data structures constructed to speed up nearest neighbor searches rapidly become inefficient. Fortunately, most meaningful, real-life data do not uniformly fill the spaces in which they are represented. Rather, the data distributions are observed to concentrate on nonlinear manifolds of low intrinsic dimension. Several methods have been developed to find low-dimensional representations of high-dimensional data, including Principal Component Analysis (PCA), Self-Organizing Maps (SOM) [1], Multidimensional Scaling (MDS) [2], and, more recently, Local Linear Embedding (LLE) [3] and the ISOMAP algorithm [4].
Although most of these algorithms require that the intrinsic dimension of the manifold be explicitly set, there has been little effort devoted to designing and analyzing techniques that estimate the intrinsic dimension of data in this context. There are two principal areas where a good estimate of the intrinsic dimension can be useful. First, as mentioned before, the estimate can be used to set input parameters of dimension reduction algorithms. Certain methods (e.g., LLE and the ISOMAP algorithm) also require a scale parameter that determines the size of the local neighborhoods used in the algorithms. In this case, it is useful if the dimension estimate is provided as a function of the scale (see Figure 1 for an intuitive example where the intrinsic dimension of the data depends on the resolution). Nearest neighbor searching algorithms can also profit from a good dimension estimate. The complexity of search data structures (e.g., kd-trees and R-trees) increases exponentially with the dimension, and these methods become inefficient if the dimension is more than about 20. Nevertheless, it was shown by Chávez et al. [5] that the complexity increases with the intrinsic dimension of the data rather than with the dimension of the embedding space.

Figure 1: Intrinsic dimension D at different resolutions. (a) At very small scale the data looks zero-dimensional (D ≃ 0). (b) If the scale is comparable to the noise level, the intrinsic dimension seems larger than expected (D ≃ 2). (c) The "right" scale in terms of noise and curvature (D ≃ 1). (d) At very large scale the global dimension dominates (D ≃ 2).

In this paper we present a novel method for intrinsic dimension estimation. The estimate is based on geometric properties of the data, and requires no parameters to set. Experimental results on both artificial and real data show that the algorithm is able to capture the scale dependence of the intrinsic dimension.
The main advantage of the method over existing techniques is its robustness in terms of the generating distribution. The paper is organized as follows. In Section 2 we introduce the field of intrinsic dimension estimation, and give a short overview of existing approaches. The proposed algorithm is described in Section 3. Experimental results are given in Section 4.

2 Intrinsic dimension estimation

Informally, the intrinsic dimension of a random vector X is usually defined as the number of "independent" parameters needed to represent X. Although in practice this informal notion seems to have a well-defined meaning, formally it is ambiguous due to the existence of space-filling curves. So, instead of this informal notion, we turn to the classical concept of topological dimension, and define the intrinsic dimension of X as the topological dimension of the support of the distribution of X. For the definition, we need to introduce some notions. Given a topological space X, a covering of a subset S is a collection C of open subsets in X whose union contains S. A refinement of a covering C of S is another covering C′ such that each set in C′ is contained in some set in C. The following definition is based on the observation that a d-dimensional set can be covered by open balls such that each point belongs to at most (d + 1) open balls.

Definition 1 A subset S of a topological space X has topological dimension D_top (also known as Lebesgue covering dimension) if every covering C of S has a refinement C′ in which every point of S belongs to at most (D_top + 1) sets in C′, and D_top is the smallest such integer.

The main technical difficulty with the topological dimension is that it is computationally difficult to estimate on a finite sample. Hence, practical methods use various other definitions of the intrinsic dimension. It is common to categorize intrinsic dimension estimating methods into two classes: projection techniques and geometric approaches.
Projection techniques explicitly construct a mapping, and usually measure the dimension by using some variant of principal component analysis. Indeed, given a set S_n = {X_1,...,X_n}, X_i ∈ X, i = 1,...,n of data points drawn independently from the distribution of X, probably the most obvious way to estimate the intrinsic dimension is by looking at the eigenstructure of the covariance matrix C of S_n. In this approach, D̂_pca is defined as the number of eigenvalues of C that are larger than a given threshold. The first disadvantage of the technique is the requirement of a threshold parameter that determines which eigenvalues to discard. In addition, if the manifold is highly nonlinear, D̂_pca will characterize the global (intrinsic) dimension of the data rather than the local dimension of the manifold. D̂_pca will always overestimate D_top; the difference depends on the level of nonlinearity of the manifold. Finally, D̂_pca can only be used if the covariance matrix of S_n can be calculated (e.g., when X = R^d). Although in Section 4 we will only consider Euclidean data sets, there are certain applications where only a distance metric d : X × X → R⁺ ∪ {0} and the matrix of pairwise distances D = [d_ij] = [d(x_i, x_j)] are given. Bruske and Sommer [6] present an approach to circumvent the second problem. Instead of doing PCA on the original data, they first cluster the data, then construct an optimally topology preserving map (OPTM) on the cluster centers, and finally carry out PCA locally on the OPTM nodes. The advantages of the method are that it works well on non-linear data, and that it can produce dimension estimates at different resolutions. At the same time, the threshold parameter must still be set as in PCA; moreover, other parameters, such as the number of OPTM nodes, must also be decided by the user. The technique is similar in spirit to the way the dimension parameter of LLE is set in [3].
The algorithm runs in O(n²d) time (where n is the number of points and d is the embedding dimension), which is slightly worse than the O(nd·D̂_pca) complexity of the fast PCA algorithm of Roweis [7] when computing D̂_pca. Another general scheme in the family of projection techniques is to turn the dimensionality reduction algorithm from an embedding technique into a probabilistic, generative model [8], and optimize the dimension as any other parameter by using cross-validation in a maximum likelihood setting. The main disadvantage of this approach is that the dimension estimate depends on the generative model and the particular algorithm, so if the model does not fit the data or if the algorithm does not work well on the particular problem, the estimate can be invalid. The second basic approach to intrinsic dimension estimation is based on geometric properties of the data rather than on projection techniques. Methods from this family usually require neither any explicit assumption on the underlying data model, nor input parameters to set. Most of the geometric methods use the correlation dimension from the family of fractal dimensions, due to the computational simplicity of its estimation. The formal definition is based on the observation that in a D-dimensional set the number of pairs of points closer to each other than r is proportional to r^D.

Definition 2 Given a finite set S_n = {x_1,...,x_n} of a metric space X, let

C_n(r) = [2 / (n(n−1))] ∑_{i=1}^{n} ∑_{j=i+1}^{n} I{‖x_i − x_j‖ < r}

where I_A is the indicator function of the event A. For a countable set S = {x_1, x_2, ...} ⊂ X, the correlation integral is defined as C(r) = lim_{n→∞} C_n(r). If the limit exists, the correlation dimension of S is defined as

D_corr = lim_{r→0} [log C(r) / log r].

For a finite sample, the zero limit cannot be achieved, so the estimation procedure usually consists of plotting log C(r) versus log r and measuring the slope ∂log C(r)/∂log r of the linear part of the curve [9, 10, 11].
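The correlation integral and the slope-based estimate can be sketched directly; here the slope is taken between a fixed pair of radii rather than fitted over a linear region, and the sample of points on a unit circle is an illustrative data set with intrinsic dimension 1.

```python
import numpy as np

def correlation_integral(X, r):
    """C_n(r): the fraction of point pairs closer to each other than r."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    close_pairs = (np.sum(D < r) - n) // 2   # off-diagonal pairs, each counted once
    return 2.0 * close_pairs / (n * (n - 1))

def corr_dimension(X, r1, r2):
    """Slope of log C(r) vs. log r between two radii r1 < r2."""
    return ((np.log(correlation_integral(X, r2))
             - np.log(correlation_integral(X, r1)))
            / (np.log(r2) - np.log(r1)))

rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0 * np.pi, 1000)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)  # noise-free unit circle in the plane
d_hat = corr_dimension(circle, 0.05, 0.2)          # should be close to 1
```

On a uniformly sampled circle the pair counts grow linearly with r at small scales, so the two-radius slope recovers the intrinsic dimension of 1 up to sampling noise.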
To formalize this intuitive procedure, we present the following definition.

Definition 3 The scale-dependent correlation dimension of a finite set S_n = {x_1,...,x_n} is

D̂_corr(r_1, r_2) = [log C(r_2) − log C(r_1)] / [log r_2 − log r_1].

It is known that D_corr ≤ D_top and that D_corr approximates D_top well if the data distribution on the manifold is nearly uniform. However, using a non-uniform distribution on the same manifold, the correlation dimension can severely underestimate the topological dimension. To overcome this problem, we turn to the capacity dimension, which is another member of the fractal dimension family. For the formal definition, we need to introduce some more concepts. Given a metric space X with distance metric d(·,·), the r-covering number N(r) of a set S ⊂ X is the minimum number of open balls B(x_0, r) = {x ∈ X | d(x_0, x) < r} whose union is a covering of S. The following definition is based on the observation that the covering number N(r) of a D-dimensional set is proportional to r^{−D}.

Definition 4 The capacity dimension of a subset S of a metric space X is

D_cap = −lim_{r→0} [log N(r) / log r].

The principal advantage of D_cap over D_corr is that D_cap does not depend on the data distribution on the manifold. Moreover, if both D_cap and D_top exist (which is certainly the case in machine learning applications), it is known that the two dimensions agree. In spite of that, D_cap is usually discarded in practical approaches due to the high computational cost of its estimation. The main contribution of this paper is an efficient intrinsic dimension estimating method that is based on the capacity dimension. Experiments on both synthetic and real data confirm that our method is much more robust in terms of the data distribution than methods based on the correlation dimension.

3 Algorithm

Finding the covering number even of a finite set of data points is computationally difficult. To tackle this problem, we first redefine D_cap by using packing numbers rather than covering numbers.
Given a metric space $\mathcal{X}$ with distance metric $d(\cdot,\cdot)$, a set $V \subset \mathcal{X}$ is said to be $r$-separated if $d(x, y) \ge r$ for all distinct $x, y \in V$. The $r$-packing number $M(r)$ of a set $S \subset \mathcal{X}$ is defined as the maximum cardinality of an $r$-separated subset of $S$. The following proposition follows from the basic inequality between packing and covering numbers, $N(r) \le M(r) \le N(r/2)$.

Proposition 1
$$D_{\mathrm{cap}} = -\lim_{r \to 0} \frac{\log M(r)}{\log r}.$$

For a finite sample, the zero limit cannot be achieved, so, as with the correlation dimension, we need to redefine the capacity dimension in a scale-dependent manner.

Definition 5 The scale-dependent capacity dimension of a finite set $S_n = \{x_1, \ldots, x_n\}$ is
$$\hat{D}_{\mathrm{cap}}(r_1, r_2) = -\frac{\log M(r_2) - \log M(r_1)}{\log r_2 - \log r_1}.$$

Finding $M(r)$ for a data set $S_n = \{x_1, \ldots, x_n\}$ is equivalent to finding the cardinality of a maximum independent vertex set $MI(G_r)$ of the graph $G_r(V, E)$ with vertex set $V = S_n$ and edge set $E = \{(x_i, x_j) \mid d(x_i, x_j) < r\}$. This problem is known to be NP-hard. There are results showing that for a general graph, even approximating $MI(G)$ within a factor of $n^{1-\varepsilon}$, for any $\varepsilon > 0$, is NP-hard [12]. On the positive side, it has been shown that for geometric graphs such as $G_r$, $MI(G)$ can be approximated arbitrarily well by polynomial-time algorithms [13]. However, approximation algorithms of this kind scale exponentially with the data dimension, both in the quality of the approximation and in the running time,¹ so they are of little practical use for $d > 2$. Hence, instead of using one of these algorithms, we apply the following greedy approximation technique. Given a data set $S_n$, we start with an empty set of centers $C$, and in one pass over $S_n$ we add to $C$ every data point that is at distance at least $r$ from all centers already in $C$ (lines 4 to 10 in Figure 2). The estimate $\hat{M}(r)$ is the cardinality of $C$ after every point in $S_n$ has been visited.
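The greedy pass just described can be sketched directly (an illustrative implementation, not the authors' code; the ten-point line used to exercise it is arbitrary):

```python
import numpy as np

def greedy_packing_count(X, r):
    """Greedy one-pass estimate of the r-packing number M(r): keep a point
    as a new center iff it lies at distance >= r from every center so far."""
    centers = []
    for x in X:
        if all(np.linalg.norm(x - c) >= r for c in centers):
            centers.append(x)
    return len(centers)

# Ten points spaced one unit apart on a line: at r = 0.5 every point is kept,
# at r = 1.5 scanning in order keeps every other point.
X = np.arange(10, dtype=float).reshape(-1, 1)
print(greedy_packing_count(X, 0.5), greedy_packing_count(X, 1.5))
```

Note that the count depends on the visiting order, which is precisely the source of variance the full algorithm averages away with random permutations.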
The procedure is designed to produce an $r$-packing, but it certainly underestimates the packing number of the manifold: first, because we are using a finite sample, and second, because in general $\hat{M}(r) < M(r)$. Nevertheless, we can still obtain a good estimate of $\hat{D}_{\mathrm{cap}}$ by using $\hat{M}(r)$ in place of $M(r)$ in Definition 5. To see why, observe that it is enough to estimate $M(r)$ with a constant multiplicative bias independent of $r$. Although we have no formal proof that the bias of $\hat{M}(r)$ does not change with $r$, the simple greedy procedure described above seems to work well in practice.

Even though the bias of $\hat{M}(r)$ does not affect the estimation of $\hat{D}_{\mathrm{cap}}$ as long as it does not change with $r$, the variance of $\hat{M}(r)$ can distort the dimension estimate. The main source of this variance is the dependence of $\hat{M}(r)$ on the order in which the data points are visited. To eliminate this variance, we repeat the procedure several times on random permutations of the data, and compute the estimate $\hat{D}_{\mathrm{pack}}$ by using the average of the logarithms of the packing numbers. The number of repetitions depends on $r_1$, $r_2$, and a preset parameter that determines the accuracy of the final estimate (set to 99% in all experiments). The complete algorithm is given formally in Figure 2. The running time of the algorithm is $O(nM(r)d)$, where $r = \min(r_1, r_2)$. At small scales, where $M(r)$ is comparable with $n$, this is $O(n^2 d)$. On the other hand, since the variance of the estimate also tends to be smaller at smaller scales, the algorithm iterates fewer times for the same accuracy.

4 Experiments

The two main objectives of the four experiments described here are to demonstrate the ability of the method to capture the scale-dependent behavior of the intrinsic dimension, and to underline its robustness with respect to the data-generating distribution. In all experiments, the estimate $\hat{D}_{\mathrm{pack}}$ is compared to the correlation dimension estimate $\hat{D}_{\mathrm{corr}}$.
Both dimensions are measured on consecutive pairs of a sequence $r_1, \ldots, r_m$ of resolutions, and the estimate is plotted halfway between the two parameters (i.e., $\hat{D}(r_i, r_{i+1})$ is plotted at $(r_i + r_{i+1})/2$). In the first three experiments the manifold is either known or can be approximated easily. In these experiments we use a two-sided multivariate power distribution with density
$$p(x) = I_{\{x \in [-1,1]^d\}} \left(\frac{p}{2}\right)^d \prod_{i=1}^{d} \left(1 - |x^{(i)}|\right)^{p-1} \quad (1)$$
with different exponents $p$ to generate uniform ($p = 1$) and non-uniform data sets on the manifold.

¹Typically, the computation of an independent vertex set of $G$ of size at least $(1 - \frac{1}{k})^d MI(G)$ requires $O(n^{kd})$ time.

PACKINGDIMENSION($S_n, r_1, r_2, \varepsilon$)
 1  for $\ell \leftarrow 1$ to $\infty$ do
 2      Permute $S_n$ randomly
 3      for $k \leftarrow 1$ to 2 do
 4          $C \leftarrow \emptyset$
 5          for $i \leftarrow 1$ to $n$ do
 6              for $j \leftarrow 1$ to $|C|$ do
 7                  if $d(S_n[i], C[j]) < r_k$ then
 8                      $j \leftarrow n + 1$
 9              if $j < n + 1$ then
10                  $C \leftarrow C \cup \{S_n[i]\}$
11          $\hat{L}_k[\ell] = \log |C|$
12      $\hat{D}_{\mathrm{pack}} = -\frac{\mu(\hat{L}_2) - \mu(\hat{L}_1)}{\log r_2 - \log r_1}$
13      if $\ell > 10$ and $\frac{1.65\sqrt{\sigma^2(\hat{L}_1) + \sigma^2(\hat{L}_2)}}{\sqrt{\ell}\,(\log r_2 - \log r_1)} < \hat{D}_{\mathrm{pack}} (1 - \varepsilon)/2$ then
14          return $\hat{D}_{\mathrm{pack}}$

Figure 2: The algorithm returns the packing dimension estimate $\hat{D}_{\mathrm{pack}}(r_1, r_2)$ of a data set $S_n$ with $\varepsilon$ accuracy nine times out of ten.

The first synthetic data set is that of Figure 1. We generated 5000 points on a spiral-shaped manifold with a small uniform perpendicular noise. The curves in Figure 3(a) reflect the scale-dependency observed in Figure 1. As the distribution becomes uneven, $\hat{D}_{\mathrm{corr}}$ severely underestimates $\hat{D}_{\mathrm{top}}$ while $\hat{D}_{\mathrm{pack}}$ remains stable.
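The non-uniform samples used in these experiments can be drawn from density (1) per coordinate by inverse-transform sampling: since $P(|x| \le a) = 1 - (1-a)^p$, one can set $|x| = 1 - u^{1/p}$ with $u$ uniform and a random sign. A small sketch (our own illustration of the sampler, not the authors' code):

```python
import numpy as np

def two_sided_power(rng, n, d, p):
    """Draw n points in [-1,1]^d with per-coordinate density (p/2)(1-|x|)^(p-1).
    Inverse transform: P(|x| <= a) = 1 - (1-a)^p, hence |x| = 1 - u**(1/p)."""
    u = rng.uniform(size=(n, d))
    sign = rng.choice([-1.0, 1.0], size=(n, d))
    return sign * (1.0 - u ** (1.0 / p))

rng = np.random.default_rng(1)
X_uniform = two_sided_power(rng, 5000, 2, p=1)  # p = 1: uniform on the square
X_peaked = two_sided_power(rng, 5000, 2, p=3)   # p = 3: mass concentrated near 0
# E|x| = 1/(p+1) per coordinate: 0.5 for p = 1, 0.25 for p = 3.
print(round(np.abs(X_uniform).mean(), 2), round(np.abs(X_peaked).mean(), 2))
```

Increasing $p$ concentrates the samples near the origin without changing the support, which is exactly the kind of distribution change the experiments use to probe the estimators.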
Figure 3: Intrinsic dimension of (a) a spiral-shaped manifold and (b) hypercubes of different dimensions. The curves reflect the scale-dependency observed in Figure 1. The more uneven the distribution, the more $\hat{D}_{\mathrm{corr}}$ underestimates $\hat{D}_{\mathrm{top}}$, while $\hat{D}_{\mathrm{pack}}$ remains relatively stable.

The second set of experiments was designed to test how well the methods estimate the dimension of 5000 data points generated in hypercubes of dimensions two to six (Figure 3(b)). In general, both $\hat{D}_{\mathrm{corr}}$ and $\hat{D}_{\mathrm{pack}}$ underestimate $\hat{D}_{\mathrm{top}}$. The negative bias grows with the dimension, probably because data sets of equal cardinality become sparser in a higher-dimensional space. To compensate for this bias on a general data set, Camastra and Vinciarelli [10] propose to correct the estimate by the bias observed on a uniformly generated data set of the same cardinality. Our experiment shows that, in the case of $\hat{D}_{\mathrm{corr}}$, this calibration procedure can fail if the distribution is highly non-uniform. On the other hand, the technique seems more reliable for $\hat{D}_{\mathrm{pack}}$ due to its relative stability.

We also tested the methods on two sets of image data. Both sets contained 64×64 images with 256 gray levels. The images were normalized so that the distance between a black image and a white image is 1. The first set is a sequence of 481 snapshots of a hand turning a cup from the CMU database² (Figure 4(a)).
The sequence of images sweeps out a curve in a 4096-dimensional space, so its informal intrinsic dimension is one. Figure 5(a) shows that at a small scale, both methods find a local dimension between 1 and 2. At a slightly larger scale the intrinsic dimension increases, indicating a relatively high curvature of the image-sequence curve. To test the distribution dependence of the estimates, we constructed a polygonal curve by connecting consecutive points of the sequence, and resampled 481 points using the power distribution (1) with $p = 2, 3$. We also constructed a highly uniform, lattice-like data set by drawing approximately equidistant consecutive points from the polygonal curve. Our results in Figure 5(a) confirm again that $\hat{D}_{\mathrm{corr}}$ varies extensively with the generating distribution on the manifold, while $\hat{D}_{\mathrm{pack}}$ remains remarkably stable.

Figure 4: The real data sets. (a) Sequence of snapshots of a hand turning a cup. (b) Faces database from ISOMAP [4].

The final experiment was conducted on the "faces" database from the ISOMAP paper [4] (Figure 4(b)). The data set contained 698 images of faces generated by using three free parameters: vertical and horizontal orientation, and light direction. Figure 5(b) indicates that both estimates are reasonably close to the informal intrinsic dimension.

Figure 5: The intrinsic dimension of the image data sets: (a) the turning-cup sequence (original and lattice-like resamplings); (b) the ISOMAP faces.

We found in all experiments that at a very small scale $\hat{D}_{\mathrm{corr}}$ tends to be higher than $\hat{D}_{\mathrm{pack}}$, while $\hat{D}_{\mathrm{pack}}$ tends to be more stable as the scale grows.

²http://vasc.ri.cmu.edu/idb/html/motion/hand/index.html
Hence, if the data contains very little noise and is generated uniformly on the manifold, $\hat{D}_{\mathrm{corr}}$ seems to be closer to the "real" intrinsic dimension. On the other hand, if the data contains noise (in which case, at a very small scale, we are estimating the dimension of the noise rather than the dimension of the manifold), or the distribution on the manifold is non-uniform, $\hat{D}_{\mathrm{pack}}$ seems more reliable than $\hat{D}_{\mathrm{corr}}$.

5 Conclusion

We have presented a new algorithm to estimate the intrinsic dimension of data sets. The method estimates the packing dimension of the data and requires neither parametric assumptions on the data-generating model nor input parameters to set. The method is compared to a widely used technique based on the correlation dimension. Experiments show that our method is more robust with respect to the data-generating distribution and more reliable in the presence of noise.

References

[1] T. Kohonen, The Self-Organizing Map, Springer-Verlag, 2nd edition, 1997.
[2] T. F. Cox and M. A. Cox, Multidimensional Scaling, Chapman & Hall, 1994.
[3] S. Roweis and L. K. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, pp. 2323–2326, 2000.
[4] J. B. Tenenbaum, V. de Silva, and J. C. Langford, "A global geometric framework for nonlinear dimensionality reduction," Science, vol. 290, pp. 2319–2323, 2000.
[5] E. Chávez, G. Navarro, R. Baeza-Yates, and J. Marroquín, "Searching in metric spaces," ACM Computing Surveys, to appear, 2001.
[6] J. Bruske and G. Sommer, "Intrinsic dimensionality estimation with optimally topology preserving maps," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 5, pp. 572–575, 1998.
[7] S. Roweis, "EM algorithms for PCA and SPCA," in Advances in Neural Information Processing Systems, vol. 10, pp. 626–632, The MIT Press, 1998.
[8] C. M. Bishop, M. Svensén, and C. K. I. Williams, "GTM: The generative topographic mapping," Neural Computation, vol. 10, no. 1, pp. 215–235, 1998.
[9] P. Grassberger and I. Procaccia, "Measuring the strangeness of strange attractors," Physica, vol. D9, pp. 189–208, 1983.
[10] F. Camastra and A. Vinciarelli, "Estimating intrinsic dimension of data with a fractal-based approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002, to appear.
[11] A. Belussi and C. Faloutsos, "Spatial join selectivity estimation using fractal concepts," ACM Transactions on Information Systems, vol. 16, no. 2, pp. 161–201, 1998.
[12] J. Hastad, "Clique is hard to approximate within $n^{1-\varepsilon}$," in Proceedings of the 37th Annual Symposium on Foundations of Computer Science FOCS'96, 1996, pp. 627–636.
[13] T. Erlebach, K. Jansen, and E. Seidel, "Polynomial-time approximation schemes for geometric graphs," in Proceedings of the 12th ACM-SIAM Symposium on Discrete Algorithms SODA'01, 2001, pp. 671–679.
Discriminative Learning for Label Sequences via Boosting

Yasemin Altun, Thomas Hofmann and Mark Johnson*
Department of Computer Science
*Department of Cognitive and Linguistic Sciences
Brown University, Providence, RI 02912
{altun,th}@cs.brown.edu, Mark_Johnson@brown.edu

Abstract

This paper investigates a boosting approach to discriminative learning of label sequences based on a sequence rank loss function. The proposed method combines many of the advantages of boosting schemes with the efficiency of dynamic programming methods, and is attractive both conceptually and computationally. In addition, we also discuss alternative approaches based on the Hamming loss for label sequences. The sequence boosting algorithm offers an interesting alternative to methods based on HMMs and the more recently proposed Conditional Random Fields. Application areas for the presented technique range from natural language processing and information extraction to computational biology. We include experiments on named entity recognition and part-of-speech tagging which demonstrate the validity and competitiveness of our approach.

1 Introduction

The problem of annotating or segmenting observation sequences arises in many applications across a variety of scientific disciplines, most prominently in natural language processing, speech recognition, and computational biology. Well-known applications include part-of-speech (POS) tagging, named entity classification, information extraction, text segmentation and phoneme classification in text and speech processing [7], as well as problems like protein homology detection, secondary structure prediction and gene classification in computational biology [3]. Up to now, the predominant formalism for modeling and predicting label sequences has been based on Hidden Markov Models (HMMs) and variations thereof.
Yet, despite their success, generative probabilistic models, of which HMMs are a special case, have two major shortcomings, which this paper is not the first to point out. First, generative probabilistic models are typically trained using maximum likelihood estimation (MLE) for a joint sampling model of observation and label sequences. As has been emphasized frequently, MLE based on the joint probability model is inherently non-discriminative and thus may lead to suboptimal prediction accuracy. Secondly, efficient inference and learning in this setting often requires making questionable conditional independence assumptions. More precisely, in the case of HMMs, it is assumed that the Markov blanket of the hidden label variable at time step $t$ consists of the previous and next labels as well as the $t$-th observation. This implies that all dependencies on past and future observations are mediated through neighboring labels.

In this paper, we investigate the use of discriminative learning methods for learning label sequences. This line of research continues previous approaches for learning conditional models, namely Conditional Random Fields (CRFs) [6], and discriminative re-ranking [1, 2]. CRFs have two main advantages compared to HMMs: they are trained discriminatively by maximizing a conditional (or pseudo-) likelihood criterion, and they are more flexible in modeling additional dependencies such as direct dependencies of the $t$-th label on past or future observations. However, we strongly believe there are two further lines of research that are worth pursuing and may offer additional benefits or improvements. First of all, and this is the main emphasis of this paper, an exponential loss function such as the one used in boosting algorithms [9, 4] may be preferable to the logarithmic loss function used in CRFs.
In particular, we will present a boosting algorithm that has the additional advantage of performing implicit feature selection, typically resulting in very sparse models. This is important for model regularization as well as for reasons of efficiency in high-dimensional feature spaces. Secondly, we will also discuss the use of loss functions that explicitly minimize the zero-one loss on labels, i.e. the Hamming loss, as an alternative to loss functions based on ranking or predicting entire label sequences.

2 Additive Models and Exponential Families

Formally, learning label sequences is a generalization of the standard supervised classification problem. The goal is to learn a discriminant function for sequences, i.e. a mapping from observation sequences $X = (x_1, x_2, \ldots, x_t, \ldots)$ to label sequences $Y = (y_1, y_2, \ldots, y_t, \ldots)$. We assume the availability of a training set of labeled sequences $\mathcal{X} \equiv \{(X^i, Y^i) : i = 1, \ldots, n\}$ to learn this mapping from data. In this paper, we focus on discriminant functions that can be written as additive models. The models under consideration take the following general form:
$$F_\theta(X, Y) = \sum_t F_\theta(X, Y; t), \quad \text{with} \quad F_\theta(X, Y; t) = \sum_k \theta_k f_k(X, Y; t) \quad (1)$$
Here $f_k$ denotes a (discrete) feature in the language of maximum entropy modeling, or a weak learner in the language of boosting. In the context of label sequences, $f_k$ will typically be either of the form $f_k^{(1)}(x_{t+s}, y_t)$ (with $s \in \{-1, 0, 1\}$) or $f_k^{(2)}(y_{t-1}, y_t)$. The first type of feature models dependencies between the observation sequence $X$ and the $t$-th label in the sequence, while the second type models inter-label dependencies between neighboring label variables. For ease of presentation, we will assume that all features are binary, i.e. each weak learner corresponds to an indicator function. A typical way of defining a set of weak learners is as follows:
$$f_k^{(1)}(x_{t+s}, y_t) = \delta(y_t, \bar{y}(k))\, \chi_k(x_{t+s}) \quad (2)$$
$$f_k^{(2)}(y_{t-1}, y_t) = \delta(y_t, \bar{y}(k))\, \delta(y_{t-1}, \tilde{y}(k)) \quad (3)$$
where $\delta$ denotes the Kronecker delta and $\chi_k$ is a binary feature function that extracts a feature from an observation pattern; $\bar{y}(k)$ and $\tilde{y}(k)$ refer to the label values for which the weak learner becomes "active". There is a natural way to associate a conditional probability distribution over label sequences $Y$ with an additive model $F_\theta$ by defining an exponential family for every fixed observation sequence $X$:
$$P_\theta(Y|X) \equiv \frac{\exp\left[F_\theta(X, Y)\right]}{Z_\theta(X)}, \quad Z_\theta(X) \equiv \sum_Y \exp\left[F_\theta(X, Y)\right]. \quad (4)$$
This distribution is in exponential normal form, and the parameters $\theta$ are also called natural or canonical parameters. By performing the sum over the sequence index $t$, we can see that the corresponding sufficient statistics are given by $S_k(X, Y) \equiv \sum_t f_k(X, Y; t)$. These sufficient statistics simply count the number of times the feature $f_k$ has been "active" along the labeled sequence $(X, Y)$.

3 Logarithmic Loss and Conditional Random Fields

In CRFs, the log-loss of the model with parameters $\theta$ w.r.t. a set of sequences $\mathcal{X}$ is defined as the negative sum of the log-probabilities of the training label sequences, each conditioned on its observation sequence:
$$\mathcal{H}^{\log}(\theta; \mathcal{X}) \equiv -\sum_i \log P_\theta(Y^i|X^i). \quad (5)$$
Although [6] has proposed a modification of improved iterative scaling for parameter estimation in CRFs, gradient-based methods such as conjugate gradient descent have often been found to be more efficient for minimizing the convex loss function in Eq. (5) (cf. [8]). The gradient can be readily computed as
$$\nabla_\theta \mathcal{H}^{\log} = \sum_i \left( \mathbf{E}\left[S(X, Y) \mid X = X^i\right] - S(X^i, Y^i) \right), \quad (6)$$
where expectations are taken w.r.t. $P_\theta(Y|X)$. The stationarity equations then simply state that, uniformly averaged over the training data, the observed sufficient statistics should match their conditional expectations. Computationally, the evaluation of $S(X^i, Y^i)$ is straightforward counting, while the sum over all sequences $Y$ required to compute $\mathbf{E}[S(X, Y)|X = X^i]$ can be performed using dynamic programming, since the dependency structure between labels is a simple chain.
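For short chains over a small label alphabet, the exponential family of Eq. (4) and the counting sufficient statistics $S_k$ can be illustrated by brute-force enumeration. This is a toy sketch with two hypothetical features of our own choosing (real implementations use dynamic programming, not enumeration):

```python
import itertools
import math

# Toy instance: observations are characters, labels are {0, 1}.
X_obs = "abab"
LABELS = (0, 1)

def features(X, Y, t):
    # Two illustrative binary features in the spirit of Eqs. (2)-(3):
    # f1 fires when the observation is 'a' and the label is 1;
    # f2 fires when consecutive labels are equal.
    f1 = 1.0 if X[t] == "a" and Y[t] == 1 else 0.0
    f2 = 1.0 if t > 0 and Y[t - 1] == Y[t] else 0.0
    return (f1, f2)

def suff_stats(X, Y):
    # S_k(X, Y) = sum_t f_k(X, Y; t): how often each feature fires along (X, Y)
    return tuple(sum(fs) for fs in zip(*(features(X, Y, t) for t in range(len(X)))))

def conditional(X, theta):
    # P_theta(Y|X) = exp[sum_k theta_k S_k(X, Y)] / Z_theta(X), by enumeration
    scores = {Y: math.exp(sum(th * s for th, s in zip(theta, suff_stats(X, Y))))
              for Y in itertools.product(LABELS, repeat=len(X))}
    Z = sum(scores.values())
    return {Y: v / Z for Y, v in scores.items()}

P = conditional(X_obs, theta=(1.0, 0.5))
print(max(P, key=P.get))   # most probable label sequence under this theta
```

With both weights positive, the highest-scoring sequence is the one that fires both features as often as possible; the normalizer $Z_\theta(X)$ is what the forward-backward recursion computes without enumerating all $|\mathcal{Y}|^T$ sequences.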
4 Ranking Loss Functions for Label Sequences

As an alternative to logarithmic loss functions, we propose to minimize an upper bound on the ranking loss [9] adapted to label sequences. The ranking loss of a discriminant function $F_\theta$ w.r.t. a set of training sequences is defined as
$$\mathcal{H}^{\mathrm{rnk}}(\theta; \mathcal{X}) = \sum_i \sum_{Y \ne Y^i} \vartheta\!\left(F_\theta(X^i, Y) - F_\theta(X^i, Y^i)\right), \quad \vartheta(x) \equiv \begin{cases} 1 & \text{if } x \ge 0 \\ 0 & \text{otherwise} \end{cases} \quad (7)$$
which is simply the number of label sequences ranked higher than or equal to the true label sequence, summed over all training sequences. It is straightforward to see (by a term-by-term comparison) that an upper bound on the rank loss is given by the following exponential loss function:
$$\mathcal{H}^{\exp}(\theta; \mathcal{X}) \equiv \sum_i \sum_{Y \ne Y^i} \exp\left[F_\theta(X^i, Y) - F_\theta(X^i, Y^i)\right] = \sum_i \left[\frac{1}{P_\theta(Y^i|X^i)} - 1\right]. \quad (8)$$
Interestingly, this simply leads to a loss function that uses the inverse conditional probability of the true label sequence, if we define this probability via the exponential form in Eq. (4). Notice that, compared to [1], we include all sequences and not just a top-$N$ list generated by some external mechanism. As we will show shortly, an explicit summation is possible because a dynamic programming formulation is available to compute sums over all sequences efficiently. In order to derive gradient equations for the exponential loss, we can simply make use of the elementary facts
$$\nabla_\theta\left(-\log P(\theta)\right) = -\frac{\nabla_\theta P(\theta)}{P(\theta)}, \quad \text{and} \quad \nabla_\theta \frac{1}{P(\theta)} = -\frac{\nabla_\theta P(\theta)}{P(\theta)^2} = \frac{1}{P(\theta)}\, \nabla_\theta\left(-\log P(\theta)\right). \quad (9)$$
Then it is easy to see that
$$\nabla_\theta \mathcal{H}^{\exp} = \sum_i \frac{1}{P_\theta(Y^i|X^i)} \left( \mathbf{E}\left[S(X, Y) \mid X = X^i\right] - S(X^i, Y^i) \right). \quad (10)$$
The only difference between Eq. (6) and Eq. (10) is the non-uniform weighting of the different sequences by their inverse probability, which puts more emphasis on training label sequences that receive a small overall (conditional) probability.

5 Boosting Algorithm for Label Sequences

As an alternative to a simple gradient method, we now turn to the derivation of a boosting algorithm, following the boosting formulation presented in [9]. Let us introduce a relative weight (or distribution) $D(i, Y)$ for each label sequence $Y$ w.r.t.
a training instance $(X^i, Y^i)$, with $\sum_i \sum_Y D(i, Y) = 1$:
$$D(i, Y) \equiv \frac{\exp\left[F_\theta(X^i, Y) - F_\theta(X^i, Y^i)\right]}{\sum_j \sum_{Y' \ne Y^j} \exp\left[F_\theta(X^j, Y') - F_\theta(X^j, Y^j)\right]} \quad \text{for } Y \ne Y^i \quad (11)$$
$$= D(i)\, \frac{P_\theta(Y|X^i)}{1 - P_\theta(Y^i|X^i)}, \quad \text{where} \quad D(i) = \frac{P_\theta(Y^i|X^i)^{-1} - 1}{\sum_j \left[P_\theta(Y^j|X^j)^{-1} - 1\right]}. \quad (12)$$
In addition, we define $D(i, Y^i) = 0$. Eq. (12) shows how we can split $D(i, Y)$ into a relative weight for each training instance, given by $D(i)$, and a relative weight for each sequence, given by the re-normalized conditional probability $P_\theta(Y|X^i)$. Notice that $D(i) \to 0$ as we approach the perfect prediction case of $P_\theta(Y^i|X^i) \to 1$.

We define a boosting algorithm which in each round aims at minimizing the partition function (or weight normalization constant) $Z_k$ w.r.t. a weak learner $f_k$ and a corresponding optimal parameter increment $\Delta\theta_k$:
$$Z_k(\Delta\theta_k) \equiv \sum_i D(i) \sum_{Y \ne Y^i} \frac{P_\theta(Y|X^i)}{1 - P_\theta(Y^i|X^i)} \exp\left[\Delta\theta_k \left(S_k(X^i, Y) - S_k(X^i, Y^i)\right)\right] \quad (13)$$
$$= \sum_b \left( \sum_i D(i)\, P_\theta(b|X^i; k) \right) \exp\left[b\, \Delta\theta_k\right], \quad (14)$$
where $P_\theta(b|X^i; k) = \sum_{Y \in \bar{Y}(b; X^i)} P_\theta(Y|X^i)/(1 - P_\theta(Y^i|X^i))$ and $\bar{Y}(b; X^i) \equiv \{Y : Y \ne Y^i \wedge (S_k(X^i, Y) - S_k(X^i, Y^i)) = b\}$. This minimization problem is only tractable if the number of features is small, since a dynamic programming run with accumulators [6] for every feature seems to be required in order to compute the probabilities $P_\theta(b|X^i; k)$, i.e. the probability of the $k$-th feature being active exactly $b$ times, conditioned on the observation sequence $X^i$. In cases where this is intractable (and we assume this will be the case in most applications), one can instead minimize an upper bound on every $Z_k$. The general idea is to exploit the convexity of the exponential function and to use the bound
$$e^x \le \frac{x_{\max} - x}{x_{\max} - x_{\min}}\, e^{x_{\min}} + \frac{x - x_{\min}}{x_{\max} - x_{\min}}\, e^{x_{\max}}, \quad (15)$$
which is valid for every $x \in [x_{\min}, x_{\max}]$.
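Applying the convexity bound (15) inside $Z_k$ collapses it to a two-term expression of the form $r\,e^{\Delta\theta\,u_{\min}} + (1-r)\,e^{\Delta\theta\,u_{\max}}$, which has a closed-form minimizer whenever $u_{\min} < 0 < u_{\max}$ and $0 < r < 1$. A small numeric sanity check (the specific values of $r$, $u_{\min}$, $u_{\max}$ are illustrative, not from the paper):

```python
import math

def bound(dt, r, umin, umax):
    """Two-term upper bound on Z_k obtained from the convexity inequality:
    r * exp(dt * umin) + (1 - r) * exp(dt * umax)."""
    return r * math.exp(dt * umin) + (1.0 - r) * math.exp(dt * umax)

def analytic_step(r, umin, umax):
    """Closed-form minimizer of the bound; requires umin < 0 < umax, 0 < r < 1.
    Setting the derivative to zero gives dt = log(-r*umin / ((1-r)*umax)) / (umax - umin)."""
    return math.log(-r * umin / ((1.0 - r) * umax)) / (umax - umin)

r, umin, umax = 0.7, -2.0, 3.0
dt = analytic_step(r, umin, umax)
# The closed form should beat every nearby step size on a fine grid.
grid = [dt + 0.01 * k for k in range(-50, 51)]
assert all(bound(dt, r, umin, umax) <= bound(g, r, umin, umax) + 1e-12 for g in grid)
print(round(dt, 4))
```

Since the bound is convex in the step size, this single logarithm replaces a line search, at the cost of the more conservative steps noted in the text.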
We introduce the shorthand notation $u_{ik}(Y) \equiv S_k(X^i, Y) - S_k(X^i, Y^i)$, $u_{ik}^{\max} = \max_{Y \ne Y^i} u_{ik}(Y)$, $u_k^{\max} = \max_i u_{ik}^{\max}$, $u_{ik}^{\min} = \min_{Y \ne Y^i} u_{ik}(Y)$, $u_k^{\min} = \min_i u_{ik}^{\min}$, and $\pi_i(Y) \equiv P_\theta(Y|X^i)/(1 - P_\theta(Y^i|X^i))$, which allows us to rewrite
$$Z_k(\Delta\theta_k) = \sum_i D(i) \sum_{Y \ne Y^i} \pi_i(Y) \exp\left[\Delta\theta_k\, u_{ik}(Y)\right] \quad (16)$$
$$\le \sum_i D(i) \sum_{Y \ne Y^i} \pi_i(Y) \left[ \frac{u_{ik}^{\max} - u_{ik}(Y)}{u_{ik}^{\max} - u_{ik}^{\min}}\, e^{\Delta\theta_k u_{ik}^{\min}} + \frac{u_{ik}(Y) - u_{ik}^{\min}}{u_{ik}^{\max} - u_{ik}^{\min}}\, e^{\Delta\theta_k u_{ik}^{\max}} \right]$$
$$= \sum_i D(i) \left( r_{ik}\, e^{\Delta\theta_k u_{ik}^{\min}} + (1 - r_{ik})\, e^{\Delta\theta_k u_{ik}^{\max}} \right), \quad \text{where} \quad (17)$$
$$r_{ik} \equiv \sum_{Y \ne Y^i} \pi_i(Y)\, \frac{u_{ik}^{\max} - u_{ik}(Y)}{u_{ik}^{\max} - u_{ik}^{\min}}. \quad (18)$$
By taking the second derivative w.r.t. $\Delta\theta_k$, it is easy to verify that this is a convex function of $\Delta\theta_k$, which can be minimized with a simple line search. If one is willing to accept a looser bound, one can instead work with the interval $[u_k^{\min}, u_k^{\max}]$, which is the union of the intervals $[u_{ik}^{\min}, u_{ik}^{\max}]$ over the training sequences $i$, and obtain the upper bound
$$Z_k(\Delta\theta_k) \le r_k\, e^{\Delta\theta_k u_k^{\min}} + (1 - r_k)\, e^{\Delta\theta_k u_k^{\max}}, \quad (19)$$
$$r_k \equiv \sum_i D(i) \sum_{Y \ne Y^i} \pi_i(Y)\, \frac{u_k^{\max} - u_{ik}(Y)}{u_k^{\max} - u_k^{\min}}, \quad (20)$$
which can be minimized analytically:
$$\Delta\theta_k = \frac{1}{u_k^{\max} - u_k^{\min}} \log\left( \frac{-r_k\, u_k^{\min}}{(1 - r_k)\, u_k^{\max}} \right), \quad (21)$$
but this will in general lead to more conservative step sizes.

The final boosting procedure picks, at every round, the feature for which the upper bound on $Z_k$ is minimal, and then performs the update $\theta_k \leftarrow \theta_k + \Delta\theta_k$. Of course, one might also use more elaborate techniques to find the optimal $\Delta\theta_k$ once $f_k$ has been selected, since the upper bound approximation may underestimate the optimal step size. It is important to see that the quantities involved ($r_{ik}$ and $r_k$, respectively) are simple expectations of sufficient statistics that can be computed for all features simultaneously with a single dynamic programming run per sequence.

6 Hamming Loss for Label Sequences

In many applications one is primarily interested in the label-by-label loss, or Hamming loss [9]. Here we investigate how to train models by minimizing an upper bound on the Hamming loss.
The following logarithmic loss aims at maximizing the log-probability of each individual label and is given by
$$\mathcal{F}^{\log}(\theta; \mathcal{X}) \equiv -\sum_i \sum_t \log P_\theta(y_t^i|X^i) = -\sum_i \sum_t \log \sum_{Y: y_t = y_t^i} P_\theta(Y|X^i). \quad (22)$$
Again focusing on gradient descent methods, the gradient is given by
$$\nabla_\theta \mathcal{F}^{\log} = \sum_i \sum_t \left( \mathbf{E}\left[S(X, Y) \mid X = X^i\right] - \mathbf{E}\left[S(X, Y) \mid X = X^i, y_t = y_t^i\right] \right). \quad (23)$$
As can be seen, the expected sufficient statistics are now compared not to their empirical values, but to their expected values conditioned on a given label value $y_t^i$ (and not on the entire sequence $Y^i$). In order to evaluate these expectations, one can perform dynamic programming using the algorithm described in [5], which has (independently of our work) focused on the use of Hamming loss functions in the context of CRFs. This algorithm has the complexity of the forward-backward algorithm scaled by a constant.

Similar to the log-loss case, one can define an exponential loss function that corresponds to a margin-like quantity at every single label. We propose minimizing the following loss function:
$$\mathcal{F}^{\exp}(\theta; \mathcal{X}) \equiv \sum_i \sum_t \left( \frac{\sum_Y \exp\left[F_\theta(X^i, Y)\right]}{\sum_{Y: y_t = y_t^i} \exp\left[F_\theta(X^i, Y)\right]} - 1 \right) = \sum_i \sum_t \left[ \frac{1}{P_\theta(y_t^i|X^i)} - 1 \right]. \quad (24, 25)$$
As a motivation, we point out that for sequences of length 1 this reduces to the standard multi-class exponential loss. Effectively, in this model the prediction of a label $y_t$ will mimic the probabilistic marginalization, i.e. $y_t^* = \arg\max_y F_\theta(X^i, y; t)$, where $F_\theta(X^i, y; t) = \log \sum_{Y: y_t = y} \exp\left[F_\theta(X^i, Y)\right]$. Similar to the log-loss case, the gradient is given by
$$\nabla_\theta \mathcal{F}^{\exp} = \sum_i \sum_t \frac{1}{P_\theta(y_t^i|X^i)} \left( \mathbf{E}\left[S(X, Y) \mid X = X^i\right] - \mathbf{E}\left[S(X, Y) \mid X = X^i, y_t = y_t^i\right] \right). \quad (26)$$
Again, we see the same differences between the log-loss and the exponential loss, but this time for individual labels. Labels for which the marginal probability $P_\theta(y_t^i|X^i)$ is small are accentuated in the exponential loss. The computational complexity of computing $\nabla_\theta \mathcal{F}^{\exp}$ and $\nabla_\theta \mathcal{F}^{\log}$ is practically the same.
We have not been able to derive a boosting formulation for this loss function, mainly because it cannot be written as a sum of exponential terms. We have thus resorted to conjugate gradient descent methods for minimizing $\mathcal{F}^{\exp}$ in our experiments.

7 Experimental Results

7.1 Named Entity Recognition

Named Entity Recognition (NER), a subtask of Information Extraction, is the task of finding the phrases that contain person, location and organization names, times and quantities. Each word is tagged with the type of the name as well as its position in the name phrase (i.e. whether it is the first item of the phrase or not) in order to represent the boundary information. We used a Spanish corpus which was provided for the Special Session of CoNLL-2002 on NER. The data is a collection of news wire articles and is tagged for person names, organizations, locations and miscellaneous names. We used simple binary features to ask questions about the word being tagged, as well as about the previous tag (i.e. HMM features). An example feature would be: Is the current word 'Clinton' and the tag 'Person-Beginning'? We also used features to ask detailed questions (i.e. spelling features) about the current word (e.g.: Is the current word capitalized and the tag 'Location-Intermediate'?) and about the neighboring words. These questions cannot be asked (in a principled way) in a generative HMM model.

We ran experiments comparing the different loss functions, optimized with the conjugate gradient method and with the boosting algorithm. We designed three sets of features: HMM features (= S1), S1 plus detailed features of the current word (= S2), and S2 plus detailed features of the neighboring words (= S3). The results summarized in Table 1 demonstrate the competitiveness of the proposed loss functions with respect to $\mathcal{H}^{\log}$. We observe that with different sets of features, the ordering of the performance of the loss functions changes.
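The binary word and spelling features described above are simple indicator functions on a (word, tag) pair. A toy sketch of two such weak learners, using the example questions from the text (the function names are our own, not from the paper):

```python
def f_word(word, tag):
    # "Is the current word 'Clinton' and the tag 'Person-Beginning'?"
    return 1 if word == "Clinton" and tag == "Person-Beginning" else 0

def f_spelling(word, tag):
    # "Is the current word capitalized and the tag 'Location-Intermediate'?"
    # Spelling features fire on word shape, not on word identity.
    return 1 if word[:1].isupper() and tag == "Location-Intermediate" else 0

print(f_word("Clinton", "Person-Beginning"),
      f_spelling("York", "Location-Intermediate"),
      f_spelling("york", "Location-Intermediate"))
```

Each such question becomes one candidate weak learner $f_k$; boosting then selects a sparse subset of them, while conjugate gradient training weights them all.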
Boosting performs worse than the conjugate gradient method when only HMM features are used, since there is not much information in the features other than the identity of the word to be labeled. Consequently, the boosting algorithm needs to include almost all weak learners in the ensemble and cannot exploit feature sparseness. When there are more detailed features, the boosting algorithm is competitive with the conjugate gradient method, but has the advantage of generating sparser models. The conjugate gradient method uses all of the available features, whereas boosting uses only about 10% of the features.

Feature Set | Objective | log  | exp  | boost
S1          | H         | 6.60 | 6.95 | 8.05
            | F         | 6.73 | 7.33 |
S2          | H         | 6.72 | 7.03 | 6.93
            | F         | 6.67 | 7.49 |
S3          | H         | 6.15 | 5.84 | 6.77
            | F         | 5.90 | 5.10 |

Table 1: Test error of the Spanish corpus for named entity recognition.

7.2 Part of Speech Tagging

We used the Penn TreeBank corpus for the part-of-speech tagging experiments. The features were similar to the feature sets S1 and S2 described above in the context of NER. Table 2 summarizes the experimental results obtained on this task. It can be seen that the test errors obtained by the different loss functions lie within a relatively small range. Qualitatively, the behavior of the different optimization methods is comparable to the NER experiments.

Feature Set | Objective | log  | exp  | boost
S1          | H         | 4.69 | 5.04 | 10.58
            | F         | 4.88 | 4.96 |
S2          | H         | 4.37 | 4.74 | 5.09
            | F         | 4.71 | 4.90 |

Table 2: Test error of the Penn TreeBank corpus for POS tagging.

7.3 General Comments

Even with the tighter bound in the boosting formulation, the same features are selected many times, because of the conservative estimate of the step size for parameter updates. We expect to speed up the convergence of the boosting algorithm by using a more sophisticated line search mechanism to compute the optimal step length, a conjecture that will be addressed in future work.
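A line search of the kind mentioned above can be sketched for a single weak learner: minimize the exponential loss along that one coordinate by a one-dimensional search. This is a toy illustration under assumed margins, not the paper's algorithm:

```python
import math

def exp_loss_along(alpha, margins, h_vals):
    """Exponential loss sum_i exp(-(margin_i + alpha * h_i)) as a function
    of the step length alpha for one weak learner with outputs h_i."""
    return sum(math.exp(-(m + alpha * h)) for m, h in zip(margins, h_vals))

def golden_section_min(f, lo, hi, tol=1e-8):
    """Minimize a unimodal function f on [lo, hi] by golden-section search."""
    g = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2

# Hypothetical current margins and weak-learner outputs on three examples.
margins = [0.5, -0.2, 1.0]
h_vals = [1.0, 1.0, -1.0]
best_alpha = golden_section_min(
    lambda a: exp_loss_along(a, margins, h_vals), 0.0, 5.0)
```

The exponential loss is convex in the step length, so the search finds the exact minimizer, where the weighted correlation of the weak learner with the exponential weights vanishes; a conservative closed-form bound would instead take a (possibly much) smaller step.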
Although we did not use real-valued features in our experiments, we observed that including real-valued features in a conjugate gradient formulation is a challenge, whereas it is very natural to have such features in a boosting algorithm. We noticed in our experiments that defining a distribution over the training instances using the inverse conditional probability creates problems in the boosting formulation for data sets that are highly unbalanced in terms of the length of the training sequences. To overcome this problem, we divided the sentences into pieces such that the variation in the length of the sentences is small. The conjugate gradient optimization, on the other hand, did not appear to suffer from this problem.

8 Conclusion and Future Work

This paper makes two contributions to the problem of learning label sequences. First, we have presented an efficient algorithm for discriminative learning of label sequences that combines boosting with dynamic programming. The algorithm compares favorably with the best previous approach, Conditional Random Fields, and offers additional benefits such as model sparseness. Secondly, we have discussed the use of methods that optimize a label-by-label loss and have shown that these methods bear promise for further improving classification accuracy. Our future work will investigate the performance (in both accuracy and computational expense) of the different loss functions under different conditions (e.g., noise level, size of the feature set).

Acknowledgments

This work was sponsored by an NSF-ITR grant, award number IIS-0085940.

References

[1] M. Collins. Discriminative reranking for natural language parsing. In Proceedings 17th International Conference on Machine Learning, pages 175–182. Morgan Kaufmann, San Francisco, CA, 2000.
[2] M. Collins. Ranking algorithms for named-entity extraction: Boosting and the voted perceptron.
In Proceedings 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 489–496, 2002.
[3] R. Durbin, S. Eddy, A. Krogh, and G. Mitchison. Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. Cambridge University Press, 1998.
[4] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Statistics, 28:337–374, 2000.
[5] S. Kakade, Y. W. Teh, and S. Roweis. An alternative objective function for Markovian fields. In Proceedings 19th International Conference on Machine Learning, 2002.
[6] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings 18th International Conference on Machine Learning, pages 282–289. Morgan Kaufmann, San Francisco, CA, 2001.
[7] C. Manning and H. Schütze. Foundations of Statistical Natural Language Processing. MIT Press, 1999.
[8] T. Minka. Algorithms for maximum-likelihood logistic regression. Technical report, CMU, Department of Statistics, TR 758, 2001.
[9] R. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):297–336, 1999.
Reinforcement Learning to Play an Optimal Nash Equilibrium in Team Markov Games

Xiaofeng Wang, ECE Department, Carnegie Mellon University, Pittsburgh, PA 15213, xiaofeng@andrew.cmu.edu
Tuomas Sandholm, CS Department, Carnegie Mellon University, Pittsburgh, PA 15213, sandholm@cs.cmu.edu

Abstract

Multiagent learning is a key problem in AI. In the presence of multiple Nash equilibria, even agents with non-conflicting interests may not be able to learn an optimal coordination policy. The problem is exacerbated if the agents do not know the game and independently receive noisy payoffs. So, multiagent reinforcement learning involves two interrelated problems: identifying the game and learning to play. In this paper, we present optimal adaptive learning, the first algorithm that converges to an optimal Nash equilibrium with probability 1 in any team Markov game. We provide a convergence proof, and show that the algorithm's parameters are easy to set to meet the convergence conditions.

1 Introduction

Multiagent learning is a key problem in AI. For a decade, computer scientists have worked on extending reinforcement learning (RL) to multiagent settings [11, 15, 5, 17]. Markov games (aka stochastic games) [16] have emerged as the prevalent model of multiagent RL. An approach called Nash-Q [9, 6, 8] has been proposed for learning the game structure and the agents' strategies (to a fixed point called Nash equilibrium, where no agent can improve its expected payoff by deviating to a different strategy). Nash-Q converges if a unique Nash equilibrium exists, but generally there are multiple Nash equilibria. Even team Markov games (where the agents have common interests) can have multiple Nash equilibria, only some of which are optimal (that is, maximize the sum of the agents' discounted payoffs). Therefore, learning in this setting is highly nontrivial. A straightforward solution to this problem is to enforce a convention (social law).
Boutilier proposed a tie-breaking scheme where agents choose individual actions in lexicographic order [1]. However, there are many settings where the designer is unable or unwilling to impose a convention. In these cases, agents need to learn to coordinate. Claus and Boutilier introduced fictitious play, an equilibrium selection technique from game theory, to RL. Their algorithm, joint action learner (JAL) [2], guarantees convergence to a Nash equilibrium in a team stage game. However, this equilibrium may not be optimal. The same problem prevails in other equilibrium-selection approaches in game theory such as adaptive play [18] and the evolutionary model proposed in [7]. In RL, the agents usually do not know the environmental model (game) up front and receive noisy payoffs. In this case, even the lexicographic approaches may not work because agents receive noisy payoffs independently and thus may never perceive a tie. Another significant problem in previous research is how a nonstationary exploration policy (required by RL) affects the convergence of equilibrium selection approaches, which have been studied under the assumption that agents either always take the best-response actions or make mistakes at a constant rate. In RL, learning to play an optimal Nash equilibrium in team Markov games has been posed as one of the important open problems [9]. While there have been heuristic approaches to this problem, no existing algorithm has been proposed that is guaranteed to converge to an optimal Nash equilibrium in this setting. In this paper, we present optimal adaptive learning (OAL), the first algorithm that converges to an optimal Nash equilibrium with probability 1 in any team Markov game (Section 3). We prove its convergence, and show that OAL's parameters are easy to set to meet the convergence conditions (Section 4).

2 The setting

2.1 MDPs and reinforcement learning (RL)

In a Markov decision problem, there is one agent in the environment.
A fully observable Markov decision problem (MDP) is a tuple $(S, A, R, P)$ where $S$ is a finite state space; $A$ is the space of actions the agent can take; $R: S \times A \to \mathbb{R}$ is a payoff function ($R(s,a)$ is the expected payoff for taking action $a$ in state $s$); and $P: S \times A \times S \to [0,1]$ is a transition function ($P(s,a,s')$ is the probability of ending in state $s'$, given that action $a$ is taken in state $s$). An agent's deterministic policy (aka strategy) is a mapping $\pi: S \to A$ from states to actions. We denote by $\pi(s)$ the action that policy $\pi$ prescribes in state $s$. The objective is to find a policy $\pi$ that maximizes $E[\sum_{t=0}^{\infty} \gamma^t r_t \mid \pi]$, where $r_t$ is the payoff at time $t$, and $\gamma \in [0,1)$ is a discount factor. There exists a deterministic optimal policy $\pi^*$ [12]. The Q-function for this policy, $Q^*$, is defined by the set of equations $Q^*(s,a) = R(s,a) + \gamma \sum_{s'} P(s,a,s') \max_{a'} Q^*(s',a')$. At any state $s$, the optimal policy chooses $\arg\max_a Q^*(s,a)$ [10]. Reinforcement learning can be viewed as a sampling method for estimating $Q^*$ when the payoff function $R$ and/or the transition function $P$ are unknown. $Q^*(s,a)$ can be approximated by a function $Q_t(s,a)$ calculated from the agent's experience up to time $t$. The model-based approach uses samples to generate models $\hat{R}$ of $R$ and $\hat{P}$ of $P$, and then iteratively computes $Q_t(s,a) = \hat{R}_t(s,a) + \gamma \sum_{s'} \hat{P}_t(s,a,s') \max_{a'} Q_{t-1}(s',a')$. Based on $Q_t$, a learning policy assigns probabilities to actions at each state. If the learning policy has the "Greedy in the Limit with Infinite Exploration" (GLIE) property, then $Q_t$ will converge to $Q^*$ (with either a model-based or a model-free approach) and the agent will converge in behavior to an optimal policy [14]. Under GLIE, every state-action pair is visited infinitely often, and in the limit the action selection is greedy with respect to the Q-function w.p.1. One common GLIE policy is Boltzmann exploration [14].

2.2 Multiagent RL in team Markov games when the game is unknown

A natural extension of an MDP to multiagent environments is a Markov game (aka
stochastic game) [16]. In this paper we focus on team Markov games, which are Markov games where each agent receives the same expected payoff (in the presence of noise, different agents may still receive different payoffs at a particular moment). In other words, there are no conflicts between the agents, but learning the game structure and learning to coordinate are nevertheless highly nontrivial.

Definition 1 A team Markov game (aka identical-interest stochastic game) $G$ is a tuple $(N, S, A, R, P)$, where $N$ is a set of $n$ agents; $S$ is a finite state space; $A = \times_{k \in N} A_k$ is the joint action space of the $n$ agents; $R: S \times A \to \mathbb{R}$ is the common expected payoff function; and $P: S \times A \times S \to [0,1]$ is a transition function.

The objective of the $n$ agents is to find a deterministic joint policy (aka joint strategy, aka strategy profile) $\pi = \langle \pi_k \rangle_{k \in N}$ (where $\pi: S \to A$ and $\pi_k: S \to A_k$) so as to maximize the expected sum of their discounted payoffs. The Q-function, $Q^\pi(s,a)$, is the expected sum of discounted payoffs given that the agents play joint action $a$ in state $s$ and follow joint policy $\pi$ thereafter. The optimal Q-function $Q^*(s,a)$ is the Q-function for (each) optimal policy $\pi^*$. So, $Q^*$ captures the game structure. The agents generally do not know $Q^*$ in advance. Sometimes, they know neither the payoff structure nor the transition probabilities. A joint policy $\langle \pi_k \rangle_{k \in N}$ is a Nash equilibrium if each individual policy is a best response to the others. That is, for all $k \in N$, $s \in S$, and any individual policy $\pi'_k$, $Q^*(s, \langle \pi_k(s), \pi_{-k}(s) \rangle) \geq Q^*(s, \langle \pi'_k(s), \pi_{-k}(s) \rangle)$, where $\pi_{-k}$ is the joint policy of all agents except agent $k$. (Likewise, throughout the paper, we use $-k$ to denote all agents but $k$, e.g., $a_{-k}$ for their joint action and $A_{-k}$ for their joint action set.) A Nash equilibrium is strict if the inequality above is strict. An optimal Nash equilibrium $\pi^*$ is a Nash equilibrium that gives the agents the maximal expected sum of discounted payoffs.
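The model-based computation described in Section 2.1 can be sketched end-to-end: estimate $\hat{R}$ and $\hat{P}$ from sampled transitions by running averages, then iterate the Q-update. The toy MDP and all numbers below are illustrative, not the paper's algorithm box:

```python
def update_model(model, s, a, r, s2):
    """Running-average estimates of R(s,a) and P(s,a,.) from one sample.
    `model` maps (s, a) to [count, R_hat, P_hat-dict]."""
    n, R_hat, P_hat = model.setdefault((s, a), [0, 0.0, {}])
    n += 1
    R_hat += (r - R_hat) / n
    for s3 in set(P_hat) | {s2}:
        hit = 1.0 if s3 == s2 else 0.0
        P_hat[s3] = P_hat.get(s3, 0.0) + (hit - P_hat.get(s3, 0.0)) / n
    model[(s, a)] = [n, R_hat, P_hat]

def q_from_model(model, states, gamma, sweeps=300):
    """Iterate Q(s,a) = R_hat(s,a) + gamma * sum_s' P_hat(s,a,s') max_a' Q(s',a')."""
    Q = {s: {a: 0.0 for (s_, a) in model if s_ == s} for s in states}
    for _ in range(sweeps):
        for (s, a), (n, R_hat, P_hat) in model.items():
            Q[s][a] = R_hat + gamma * sum(
                p * max(Q[s2].values()) for s2, p in P_hat.items())
    return Q

model = {}
# Deterministic toy MDP: in state 0, 'stay' pays 1 and stays, 'go' pays 0
# and moves to state 1; in state 1, 'stay' pays 2 and stays.
for s, a, r, s2 in [(0, 'stay', 1.0, 0), (0, 'go', 0.0, 1), (1, 'stay', 2.0, 1)]:
    update_model(model, s, a, r, s2)
Q = q_from_model(model, states=[0, 1], gamma=0.9)
```

With $\gamma = 0.9$ the fixed point gives $Q(1,\text{stay}) = 2/(1-\gamma) = 20$, $Q(0,\text{go}) = 0.9 \cdot 20 = 18$, and $Q(0,\text{stay}) = 1 + 0.9 \cdot 18 = 17.2$, so the greedy policy leaves state 0.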
In team games, each optimal Nash equilibrium is an optimal joint policy (and there are no other optimal joint policies). A joint action $a$ is optimal in state $s$ if $Q^*(s,a) \geq Q^*(s,a')$ for all $a' \in A$. If we treat $Q^*(s,a)$ as the payoff of joint action $a$ in state $s$, we obtain a team game in matrix form. We call such a game a state game for $s$. An optimal joint action in $s$ is an optimal Nash equilibrium of that state game. Thus, the task of optimal coordination in a team Markov game boils down to having all the agents play an optimal Nash equilibrium in state games. However, a coordination problem arises if there are multiple Nash equilibria.

        (a0,a0) (a0,a1) (a0,a2) (a1,a0) (a1,a1) (a1,a2) (a2,a0) (a2,a1) (a2,a2)
  a0      10     -20     -20     -20     -20       5     -20       5     -20
  a1     -20     -20       5     -20      10     -20       5     -20     -20
  a2     -20       5     -20       5     -20     -20     -20     -20      10

Table 1: A three-player coordination game (rows: player 1's action; columns: the joint action of players 2 and 3).

The 3-player game in Table 1 has three optimal Nash equilibria and six sub-optimal Nash equilibria. In this game, no existing equilibrium selection algorithm (e.g., fictitious play [3]) is guaranteed to learn to play an optimal Nash equilibrium. Furthermore, if the payoffs are only expectations over each agent's noisy payoffs and are unknown to the agents before playing, even identification of these sub-optimal Nash equilibria during learning is nontrivial.

3 Optimal adaptive learning (OAL) algorithm

We first consider the case where agents know the game before playing. This enables the learning agents to construct a virtual game (VG) for each state $s$ of the team Markov game, eliminating all the strictly suboptimal Nash equilibria in that state. Let $R_{VG}(s,a)$ be the payoff that the agents receive from the VG in state $s$ for a joint action $a$. We let $R_{VG}(s,a) = 1$ if $a \in \arg\max_{a'} Q^*(s,a')$ and $R_{VG}(s,a) = 0$ otherwise. For example, the VG for the game in Table 1 gives payoff 1 for each optimal Nash equilibrium ($\langle a_0,a_0,a_0 \rangle$, $\langle a_1,a_1,a_1 \rangle$, and $\langle a_2,a_2,a_2 \rangle$), and payoff 0 to every other joint action. The VG in this example is weakly acyclic.
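The equilibrium structure of Table 1, and the virtual game built from it, can be checked by enumeration. The payoff rule below is our reading of the table (10 when all three agents match, 5 when all three differ, -20 otherwise):

```python
from itertools import product

def payoff(a):
    """Common payoff of the 3-player coordination game: 10 if all agents
    pick the same action, 5 if they pick three distinct actions, -20 else."""
    k = len(set(a))
    return 10 if k == 1 else (5 if k == 3 else -20)

def is_nash(a):
    """Pure Nash equilibrium: no single agent gains by a unilateral deviation."""
    return all(payoff(a) >= payoff(a[:k] + (d,) + a[k + 1:])
               for k in range(3) for d in range(3))

joint_actions = list(product(range(3), repeat=3))
equilibria = [a for a in joint_actions if is_nash(a)]
optimal = [a for a in equilibria if payoff(a) == 10]

# Virtual game: payoff 1 exactly on the payoff-maximizing joint actions.
best = max(map(payoff, joint_actions))
vg = {a: (1 if payoff(a) == best else 0) for a in joint_actions}
```

The enumeration confirms the text: nine pure Nash equilibria in total, of which three (the matching joint actions) are optimal; the six all-distinct joint actions are the sub-optimal equilibria with payoff 5, and the virtual game keeps only the three optimal ones.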
Definition 2 (Weakly acyclic game [18]) Let $\Gamma$ be an n-player game in matrix form. The best-response graph of $\Gamma$ takes each joint action $a \in A$ as a vertex and connects two vertices $a$ and $a'$ with a directed edge $a \to a'$ if and only if 1) $a \neq a'$; 2) there exists exactly one agent $k$ such that $a'_k$ is a best response to $a_{-k}$ and $a'_{-k} = a_{-k}$. We say the game $\Gamma$ is weakly acyclic if in its best-response graph, from any initial vertex $a$, there exists a directed path to some vertex $a^*$ from which there is no outgoing edge.

To tackle the equilibrium selection problem for weakly acyclic games, Young [18] proposed a learning algorithm called adaptive play (AP), which works as follows. Let $a(t) \in A$ be the joint action played at time $t$ in an n-player game in matrix form. Fix integers $K$ and $M$ such that $1 \leq K \leq M$. When $t \leq M$, each agent $k$ randomly chooses its actions. Starting from $t = M + 1$, each agent looks back at the $M$ most recent plays $H(t) = (a(t-M), a(t-M+1), \ldots, a(t-1))$ and randomly (without replacement) selects $K$ samples from $H(t)$. Let $C_t(a_{-k})$ be the number of times that a reduced joint action $a_{-k} \in A_{-k}$ (a joint action without agent $k$'s individual action) appears in the samples at time $t$. Let $R_k(a)$ be agent $k$'s payoff given that joint action $a$ has been played. Agent $k$ calculates its expected payoff w.r.t. its individual action $a_k$ as $EP_t(a_k) = \sum_{a_{-k} \in A_{-k}} R_k(\langle a_k, a_{-k} \rangle) \, C_t(a_{-k})/K$, and then randomly chooses an action from the set of best responses $BR^k_t = \{a_k \mid a_k = \arg\max_{a'_k} EP_t(a'_k)\}$. Young showed that AP in a weakly acyclic game converges to a strict Nash equilibrium w.p.1. Thus, AP on the VG for the game in Table 1 leads to an equilibrium with payoff 1, which is actually an optimal Nash equilibrium for the original game.

(Footnote: Throughout the paper, every Nash equilibrium that we discuss is also a subgame perfect Nash equilibrium. This refinement of Nash equilibrium was first introduced in [13] for different games.)
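Adaptive play's sampling step can be sketched for one agent in a 2-player game. The payoff function and history below are hypothetical, chosen only to illustrate the sample-then-best-respond rule:

```python
import random

def adaptive_play_response(history, payoff_k, my_actions, K, rng):
    """One AP step for agent k in a 2-player game: sample K of the M most
    recent plays without replacement, estimate the expected payoff of each
    own action against the empirical distribution of the opponent's sampled
    actions, and return a uniformly chosen best response."""
    sample = rng.sample(history, K)
    counts = {}
    for (_, a_other) in sample:          # keep only the opponent's part
        counts[a_other] = counts.get(a_other, 0) + 1

    def expected(a_k):
        return sum(c * payoff_k(a_k, a_o) for a_o, c in counts.items()) / K

    best = max(expected(a) for a in my_actions)
    return rng.choice([a for a in my_actions if expected(a) == best])

# Toy coordination game: payoff 1 if the two actions match, else 0.
rng = random.Random(0)
history = [(0, 1), (1, 1), (0, 1), (1, 1), (1, 1)]   # M = 5 recent plays
a = adaptive_play_response(history, lambda x, y: 1.0 if x == y else 0.0,
                           my_actions=[0, 1], K=3, rng=rng)
```

Here the opponent played action 1 in every remembered play, so every sample yields the same empirical distribution and the returned best response is 1.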
Unfortunately, this does not extend to all VGs because not all VGs are weakly acyclic: in a VG without any strict Nash equilibrium, AP may not converge to a strategy profile with payoff 1. In order to address more general settings, we now modify the notion of weakly acyclic game and adaptive play to accommodate weak optimal Nash equilibria.

Definition 3 (Weakly acyclic game w.r.t. a biased set (WAGB)) Let $D$ be a set containing some of the Nash equilibria of a game $\Gamma$ (and no other joint actions). Game $\Gamma$ is a WAGB if, from any initial vertex $a$, there exists a directed path to either a Nash equilibrium inside $D$ or a strict Nash equilibrium.

We can convert any VG to a WAGB by setting the biased set $D$ to include all joint actions that give payoff 1 (and no other joint actions). To solve such a game, we introduce a new learning algorithm for equilibrium selection. It enables each agent to deterministically select a best-response action once any Nash equilibrium in the biased set is attained (even if there exist several best responses when the Nash equilibrium is not strict). This is different from AP, where players randomize their action selection when there are multiple best-response actions. We call our approach biased adaptive play (BAP). BAP works as follows. Let $D$ be the biased set composed of some Nash equilibria of a game in matrix form. Let $N(t)$ be the set of $K$ samples drawn at time $t$, without replacement, from among the most recent $M$ joint actions. If (1) there exists a joint action $a' \in D$ such that every sampled play $a \in N(t)$ agrees with $a'$ on the other agents' actions, and (2) there exists at least one joint action $a$ such that $a \in N(t)$ and $a \in D$, then agent $k$ chooses its best-response action $a_k = a_k(t')$, where $t' = \max\{t'' \mid a(t'') \in N(t) \wedge a(t'') \in D\}$. That is, $a_k$ is contained in the most recent play of a Nash equilibrium inside $D$. On the other hand, if the two conditions above are not met, then agent $k$ chooses its best-response action in the same way as in AP.
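BAP's two trigger conditions can be sketched as a predicate over the sampled plays. This is a sketch of the conditions as described in the text, not the authors' code:

```python
def bap_conditions_met(sample, biased_set, k):
    """BAP's trigger for agent k: (1) some joint action a' in the biased
    set D agrees with every sampled play on the other agents' actions, and
    (2) at least one sampled play is itself in D. If both hold, BAP picks
    the biased (deterministic) best response; otherwise it falls back to AP."""
    def others(a):
        return tuple(x for i, x in enumerate(a) if i != k)

    for a_prime in biased_set:
        if all(others(a) == others(a_prime) for a in sample):
            if any(a in biased_set for a in sample):
                return True
    return False

D = {(0, 0, 0), (1, 1, 1)}
# All samples agree with (1,1,1) on agents other than 0, and (1,1,1) itself
# was sampled, so both conditions hold:
met = bap_conditions_met([(0, 1, 1), (1, 1, 1)], D, k=0)
# The other agents' plays disagree across samples, so BAP falls back to AP:
not_met = bap_conditions_met([(0, 0, 1), (1, 1, 1)], D, k=0)
```

The deterministic branch is what lets the agents lock onto a weak equilibrium in $D$ instead of randomizing among ties as AP would.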
As we will show, BAP (even with GLIE exploration) on a WAGB converges w.p.1 to either a Nash equilibrium in $D$ or a strict Nash equilibrium. So far we have tackled learning of coordination in team Markov games where the game structure is known. Our real interest is in learning when the game is unknown. In multiagent reinforcement learning, $Q^*$ is asymptotically approximated by $Q_t$. Let $VG_t(s)$ be the virtual game w.r.t. $Q_t(s,\cdot)$. Our question is how to construct $VG_t$ so as to assure $VG_t \to VG^*$ w.p.1. Our method of achieving this makes use of the notion of $\epsilon$-optimality.

Definition 4 Let $\epsilon$ be a positive constant. A joint action $a$ is $\epsilon$-optimal at state $s$ and time $t$ if $Q_t(s,a) + \epsilon \geq Q_t(s,a')$ for all $a' \in A$. We denote the set of $\epsilon$-optimal joint actions at state $s$ and time $t$ by $A^t_\epsilon(s)$.

The idea is to use a decreasing bound $\epsilon_t$ to estimate $A^t_{\epsilon_t}(s)$ at state $s$ and time $t$. All the joint actions belonging to this set are treated as optimal Nash equilibria in the virtual game $VG_t$, which gives the agents payoff 1 for them. If $\epsilon_t$ converges to zero at a rate slower than that at which $Q_t$ converges to $Q^*$, then $VG_t \to VG^*$ w.p.1. We make $\epsilon_t$ proportional to a function $\omega(m_t) \in [0,1]$ which decreases slowly and monotonically to zero with $m_t$, where $m_t$ is the smallest number of times that any state-action pair has been sampled so far. Now we are ready to present the entire optimal adaptive learning (OAL) algorithm. As we will show thereafter, we craft $\omega(\cdot)$ carefully using an understanding of the convergence rate of the model-based RL algorithm that is used to learn the game structure.

Optimal adaptive learning algorithm (for agent $k$)

1. Initialization: $t := 0$. For all $s \in S$ and $a \in A$, initialize the counts $n(s,a)$, the payoff model $\hat{R}(s,a)$, the transition model $\hat{P}(s,a,\cdot)$, and the Q-values $Q(s,a)$; initialize $\epsilon_0$.

2. Learning of the coordination policy. If $t \leq M$, randomly select an action; otherwise:
(a) Update the virtual game at state $s$: $VG_t(s,a) := 1$ if $a \in A^t_{\epsilon_t}(s)$ and $VG_t(s,a) := 0$ otherwise.
(b) According to GLIE exploration, with the exploitation probability do:
  i. Randomly select (without replacement) $K$ records from the $M$ most recent observations of the others' joint actions played at state $s$.
  ii. Calculate the expected payoff of each individual action $a_k$ over the virtual game $VG_t(s,\cdot)$ at the current state $s$, as in AP, and construct the best-response set $BR_t(s)$.
  iii. If conditions (1) and (2) of BAP are met, choose a best-response action with respect to the biased set $D$. Otherwise, randomly select a best-response action from $BR_t(s)$.
Otherwise, randomly select an action to explore.

3. Off-policy learning of the game structure.
(a) Observe the state transition $s \to s'$ and payoff $r$ under the joint action $a$. Do:
  i. $n(s,a) := n(s,a) + 1$.
  ii. $\hat{R}(s,a) := \hat{R}(s,a) + \frac{1}{n(s,a)}\left(r - \hat{R}(s,a)\right)$.
  iii. $\hat{P}(s,a,s') := \hat{P}(s,a,s') + \frac{1}{n(s,a)}\left(1 - \hat{P}(s,a,s')\right)$.
  iv. For all $s'' \in S$ with $s'' \neq s'$: $\hat{P}(s,a,s'') := \left(1 - \frac{1}{n(s,a)}\right)\hat{P}(s,a,s'')$.
(b) $Q(s,a) := \hat{R}(s,a) + \gamma \sum_{s'} \hat{P}(s,a,s') \max_{a'} Q(s',a')$.
(c) $t := t + 1$; $m_t := \min_{s,a} n(s,a)$.
(d) If $\epsilon_t \geq \epsilon(m_t)$ (see Section 4.2 for the construction of $\epsilon(m_t)$):
  i. $\epsilon_t := \epsilon(m_t)$.
  ii. Update $Q(s,a)$ for all $(s,a)$ using (b).
  iii. Update the set of estimated optimal joint actions at each state accordingly.

Here, $n_t(s,a)$ is the number of times a joint action $a$ has been played in state $s$ by time $t$; $c$ is a positive constant (any value works); and $C_t(a_{-k})$ is the number of times that a joint action $a_{-k} \in A_{-k}$ appears in agent $k$'s $K$ samples (at time $t$) from the most recent $M$ joint actions taken in state $s$.

4 Proof of convergence of OAL

In this section, we prove that OAL converges to an optimal Nash equilibrium. Throughout, we make the common RL assumptions: payoffs are bounded, and the numbers of states and actions are finite. The proof is organized as follows.
In Section 4.1 we show that OAL agents learn optimal coordination if the game is known. Specifically, we show that BAP on a WAGB with known game structure converges to a Nash equilibrium under GLIE exploration. Then in Section 4.2 we show that OAL agents will learn the game structure. Specifically, any virtual game can be converted to a WAGB, which will be learned surely. Finally, these two tracks merge in Section 4.3, which shows that OAL agents will learn the game structure and optimal coordination. Due to limited space, we omit most proofs. They can be found at: www.cs.cmu.edu/~sandholm/oal.ps.

4.1 Learning to coordinate in a known game

In this section, we first model our biased adaptive play (BAP) algorithm with best-response action selection as a stationary Markov chain. In the second half of this section we then model BAP with GLIE exploration as a nonstationary Markov chain.

4.1.1 BAP as a stationary Markov chain

Consider BAP with randomly selected initial $M$ plays. We take the initial history $h = (a(1), \ldots, a(M))$ as the initial state of the Markov chain. The definition of the other states is inductive: a successor of state $h$ is any state $h'$ obtained by deleting the left-most element of $h$ and appending a new right-most element. The only exception is that all the states $h = (a, a, \ldots, a)$ with $a$ being either a member of the biased set $D$ or a strict Nash equilibrium are grouped into a unique terminal state $h^*$. Any state directing to such an $h$ is treated as directly connected to $h^*$. Let $P$ be the state transition matrix of the above Markov chain. Let $h'$ be a successor of $h$, and let $a = \langle a_1, \ldots, a_n \rangle$ ($n$ players) be the new element that was appended to the right of $h$ to get $h'$. Let $P_{hh'}$ be the transition probability from $h$ to $h'$. Now, $P_{hh'} > 0$ if and only if for each agent $k$, there exists a sample of size $K$ in $h$ to which $a_k$ is $k$'s best response according to the action-selection rule of BAP.
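The history chain's state space can be sketched directly: states are length-$M$ tuples of joint actions, successors drop the oldest play and append the newest, and the clustered terminal state collects the constant histories whose repeated joint action is in $D$ or is a strict equilibrium. A toy illustration (the concrete actions are hypothetical):

```python
def successor(h, a):
    """Successor of history state h: delete the left-most element and
    append the newly played joint action a on the right."""
    return h[1:] + (a,)

def in_terminal(h, biased_set, strict_ne):
    """True when h belongs to the clustered terminal state h*: one joint
    action repeated M times, lying in the biased set D or strict-NE set."""
    return len(set(h)) == 1 and (h[0] in biased_set or h[0] in strict_ne)

h = ((0, 0), (1, 1), (1, 1))          # M = 3 most recent joint actions
h2 = successor(h, (1, 1))             # one more play of (1, 1)
done = in_terminal(h2, biased_set={(1, 1)}, strict_ne=set())
```

One more play of the equilibrium joint action pushes the last non-equilibrium play out of the window, after which the history sits in the absorbing terminal state.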
Because agent $k$ chooses such a sample with a probability independent of time $t$, the Markov chain is stationary. Finally, due to our clustering of multiple states into a terminal state $h^*$, for any state $h$ connected to $h^*$, we have $P_{hh^*} = \sum_{h' \in h^*} P_{hh'}$. In the above model, once the system reaches the terminal state, each agent's best response is to repeat its most recent action. This is straightforward if, in the actual terminal state $h = (a, \ldots, a)$ (which is one of the states that were clustered to form the terminal state), $a$ is a strict Nash equilibrium. If $a$ is only a weak Nash equilibrium (in this case, $a \in D$), BAP biases each agent to choose its most recent action because conditions (1) and (2) of BAP are satisfied. Therefore, the terminal state $h^*$ is an absorbing state of the finite Markov chain. On the other hand, the above analysis shows that $h^*$ essentially is composed of multiple absorbing states. Therefore, if the agents come into $h^*$, they will be stuck in a particular state in $h^*$ forever instead of cycling around multiple states in $h^*$.

Theorem 1 Let $G$ be a weakly acyclic game w.r.t. a biased set $D$. Let $L(a)$ be the length of the shortest directed path in the best-response graph of $G$ from a joint action $a$ to either an absorbing vertex or a vertex in $D$, and let $L(G) = \max_a L(a)$. If $K \leq M/(L(G)+2)$, then, w.p.1, biased adaptive play in $G$ converges to either a strict Nash equilibrium or a Nash equilibrium in $D$.

Theorem 1 says that the stationary Markov chain for BAP on a WAGB (given $K \leq M/(L(G)+2)$) has a unique stationary distribution in which only the terminal state appears.

4.1.2 BAP with GLIE exploration as a nonstationary Markov chain

Without knowing the game structure, the learners need to use exploration to estimate their payoffs. In this section we show that such exploration does not hurt the convergence of BAP. We show this by first modeling BAP with GLIE exploration as a non-stationary Markov chain.
With GLIE exploration, at every time step $t$, each joint action occurs with positive probability. This means that the system transitions from the state it is in to any of the successor states with positive probability. On the other hand, the agents' action selection becomes increasingly greedy over time. In the limit, with probability one, the transition probabilities converge to those of BAP with no exploration. Therefore, we can model the learning process with a sequence of transition matrices $\{P_t\}_{t=1}^{\infty}$ such that $\lim_{t \to \infty} P_t = P$, where $P$ is the transition matrix of the stationary Markov chain describing BAP without exploration.

(Footnote: Akin to how we modeled BAP as a stationary Markov chain above, Young modeled adaptive play (AP) as a stationary Markov chain [18]. There are two differences. First, unlike AP's, BAP's action selection is biased. Second, in Young's model it is possible to have several absorbing states, while in our model at most one absorbing state exists (for any team game, our model has exactly one absorbing state). This is because we cluster all the absorbing states into one. This allows us to prove our main convergence theorem.)

Our objective here is to show that on a WAGB, BAP with GLIE exploration will converge to the ("clustered") terminal state. For that, we use the following lemma (which is a combination of Theorems V.4.4 and V.4.5 from [4]).

Lemma 2 Let $P$ be the finite transition matrix of a stationary Markov chain with a unique stationary distribution $\mu$. Let $\{P_t\}_{t=1}^{\infty}$ be a sequence of finite transition matrices. Let $v$ be a probability vector and denote $v(t_0,t) = v \prod_{\tau=t_0}^{t} P_\tau$. If $\lim_{t \to \infty} P_t = P$, then $\lim_{t \to \infty} v(t_0,t) = \mu$ for all $t_0 \geq 0$.

Using this lemma and Theorem 1, we can prove the following theorem.

Theorem 3 (BAP with GLIE) On a WAGB $G$, w.p.1, BAP with GLIE exploration (and $K \leq M/(L(G)+2)$) converges to either a strict Nash equilibrium or a Nash equilibrium in $D$.
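Lemma 2 can be illustrated numerically: multiply a probability vector through a sequence of transition matrices $P_t$ that converges to a fixed $P$, and the product approaches $P$'s stationary distribution. The toy 2-state chain and perturbation schedule below are illustrative choices, not from the paper:

```python
def mat_vec(v, P):
    """Row-vector times matrix: (v P)_j = sum_i v_i P[i][j]."""
    return [sum(v[i] * P[i][j] for i in range(len(v)))
            for j in range(len(P[0]))]

def P_t(t):
    """A sequence of 2-state transition matrices converging (as 1/t) to
    P = [[0.5, 0.5], [0.25, 0.75]], whose stationary distribution is
    mu = (1/3, 2/3) since mu P = mu."""
    e = 0.2 / t
    return [[0.5 - e, 0.5 + e], [0.25 + e, 0.75 - e]]

v = [1.0, 0.0]                     # arbitrary initial distribution
for t in range(1, 2001):
    v = mat_vec(v, P_t(t))
```

Despite the time-varying perturbations, the running product forgets the initial vector and settles near $\mu = (1/3, 2/3)$, which is the property the convergence proof leans on: GLIE's vanishing exploration leaves the stationary distribution of the unperturbed BAP chain in charge.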
4.2 Learning the virtual game

So far, we have shown that if the game structure is known in a WAGB, then BAP will converge to the terminal state. To prove optimal convergence of the OAL algorithm, we need to further demonstrate that 1) every virtual game is a WAGB, and 2) in OAL, the "temporary" virtual game $VG_t$ will converge to the "correct" virtual game $VG^*$ w.p.1. The first of these two issues is handled by the following lemma:

Lemma 4 The virtual game VG of any n-player team state game is a weakly acyclic game w.r.t. a biased set that contains all the optimal Nash equilibria, and no other joint actions. (By the definition of a virtual game, there are no strict Nash equilibria other than optimal ones.) The length of the shortest best-response path satisfies $L(VG) \leq n$.

Lemma 4 implies that BAP in a known virtual game with GLIE exploration will converge to an optimal Nash equilibrium. This is because (by Theorem 3) BAP on a WAGB will converge to either a Nash equilibrium in a biased set $D$ or a strict Nash equilibrium, and (by Lemma 4) any virtual game is a WAGB with all such Nash equilibria being optimal. The following two lemmas are the last link of our proof chain. They show that OAL will cause the agents to obtain the correct virtual game almost surely.

Lemma 5 In any team Markov game, (part 3 of) OAL assures that as $t \to \infty$, $\max_{s \in S, a \in A} |Q_t(s,a) - Q^*(s,a)| \leq c\sqrt{\log\log t / t}$ for some constant $c > 0$ w.p.1.

Using Lemma 5, the following lemma is easy to prove.

Lemma 6 Consider any team Markov game. Let $E_t$ be the event that for all $t' \geq t$, $VG_{t'} = VG^*$ in the OAL algorithm in a given state. If 1) $\omega(m)$ decreases monotonically to zero ($\lim_{m \to \infty} \omega(m) = 0$), and 2) $\lim_{t \to \infty} \sqrt{\log\log t / t}\,/\,\omega(m_t) = 0$, then $\lim_{t \to \infty} \Pr(E_t) = 1$.

Lemma 6 states that if the criterion for including a joint action among the $\epsilon$-optimal joint actions in OAL is not made strict too quickly (quicker than the iterated logarithm), then the agents will identify all optimal joint actions with probability one.
In this case, they set up the correct virtual game. It is easy to make OAL satisfy this condition; e.g., any function $\omega(m)$ that decreases to zero sufficiently slowly (in the sense of conditions 1 and 2 of Lemma 6) will do.

4.3 Main convergence theorem

Now we are ready to prove that OAL converges to an optimal Nash equilibrium in any team Markov game, even when the game structure is unknown. The idea is to show that the OAL agents learn the game structure (VGs) and the optimal coordination policy (over these VGs). OAL tackles these two learning problems simultaneously: specifically, it interleaves BAP (with GLIE exploration) with learning of the game structure. However, the convergence proof does not make use of this fact. Instead, the proof proceeds by showing that the VGs are learned first, and coordination second (the learning algorithm does not even itself know when the switch occurs, but it does occur w.p.1).

Theorem 7 (Optimal convergence) In any team Markov game among $n$ agents, if (1) $K \leq M/(n+2)$, and (2) $\omega(m_t)$ satisfies Lemma 6, then the OAL algorithm converges to an optimal Nash equilibrium w.p.1.

Proof. According to [1], a team Markov game can be decomposed into a sequence of state games. The optimal equilibria of these state games form the optimal policy $\pi^*$ for the game. By the definition of GLIE exploration, each state in the finite state space will be visited infinitely often w.p.1. Thus, it is sufficient to prove only that the OAL algorithm converges to the optimal policy over the individual state games w.p.1. Let $E_t$ be the event that $VG_{t'} = VG^*$ at that state for all $t' \geq t$. Let $\epsilon_1$ be any positive constant. If Condition (2) of the theorem is satisfied, by Lemma 6 there exists a time $T(\epsilon_1)$ such that $\Pr(E_t) \geq 1 - \epsilon_1$ if $t \geq T(\epsilon_1)$. If $E_t$ occurs and Condition (1) of the theorem is satisfied, then by Theorem 3, OAL will converge to either a strict Nash equilibrium or a Nash equilibrium in the biased set w.p.1.
Furthermore, by Lemma 4, we know that the biased set contains all of the optimal Nash equilibria (and nothing else), and there are no strict Nash equilibria outside the biased set. Therefore, if E_t occurs, then OAL converges to an optimal Nash equilibrium w.p.1. Let δ₂ be any positive constant, and let A_t be the event that the agents play an optimal joint action at a given state for all t′ ≥ t. With this notation, we can reword the previous sentence: there exists a time T₂(δ₂, t) such that if t′ ≥ T₂(δ₂, t), then Pr(A_{t′} | E_t) > 1 − δ₂. Put together, there exists a time T(δ₁, δ₂) such that if t ≥ T(δ₁, δ₂), then Pr(A_t) ≥ Pr(A_t | E_t) Pr(E_t) > (1 − δ₂)(1 − δ₁) > 1 − δ₁ − δ₂. Because δ₁ and δ₂ are only used in the proof (they are not parameters of the OAL algorithm), we can choose them to be arbitrarily small. Therefore, OAL converges to an optimal Nash equilibrium w.p.1. ∎

5 Conclusions and future research

With multiple Nash equilibria, multiagent RL becomes difficult even when agents do not have conflicting interests. In this paper, we present OAL, the first algorithm that converges to an optimal Nash equilibrium with probability 1 in any team Markov game. In future work, we consider extending the algorithm to some general-sum Markov games.

Acknowledgments

Wang is supported by NSF grant IIS-0118767, the DARPA OASIS program, and the PASIS project at CMU. Sandholm is supported by NSF CAREER Award IRI-9703122, and NSF grants IIS-9800994, ITR IIS-0081246, and ITR IIS-0121678.

References

[1] C. Boutilier. Planning, learning and coordination in multi-agent decision processes. In TARK, 1996.
[2] C. Claus and C. Boutilier. The dynamics of reinforcement learning in cooperative multi-agent systems. In AAAI, 1998.
[3] D. Fudenberg and D. K. Levine. The Theory of Learning in Games. MIT Press, 1998.
[4] D. L. Isaacson and R. W. Madsen. Markov Chains: Theory and Applications. John Wiley and Sons, Inc., 1976.
[5] G. Weiß. Learning to coordinate actions in multi-agent systems. In IJCAI, 1993.
[6] J. Hu and M. P. Wellman. Multiagent reinforcement learning: theoretical framework and an algorithm. In ICML, 1998.
[7] M. Kandori, G. J. Mailath, and R. Rob. Learning, mutation, and long run equilibria in games. Econometrica, 61(1):29–56, 1993.
[8] M. Littman. Friend-or-foe Q-learning in general-sum games. In ICML, 2001.
[9] M. L. Littman. Value-function reinforcement learning in Markov games. Journal of Cognitive Systems Research, 2:55–66, 2000.
[10] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley, 1994.
[11] M. Tan. Multi-agent reinforcement learning: independent vs. cooperative agents. In ICML, 1993.
[12] R. A. Howard. Dynamic Programming and Markov Processes. MIT Press, 1960.
[13] R. Selten. Spieltheoretische Behandlung eines Oligopolmodells mit Nachfrageträgheit. Zeitschrift für die gesamte Staatswissenschaft, 12:301–324, 1965.
[14] S. Singh, T. Jaakkola, M. L. Littman, and C. Szepesvári. Convergence results for single-step on-policy reinforcement-learning algorithms. Machine Learning, 2000.
[15] S. Sen, M. Sekaran, and J. Hale. Learning to coordinate without sharing information. In AAAI, 1994.
[16] F. Thuijsman. Optimality and Equilibrium in Stochastic Games. Centrum voor Wiskunde en Informatica, 1992.
[17] T. Sandholm and R. Crites. Learning in the iterated prisoner's dilemma. Biosystems, 37:147–166, 1995.
[18] H. Young. The evolution of conventions. Econometrica, 61(1):57–84, 1993.

Theorem 3 requires k ≤ m/(L(VG) + 2). If Condition (1) of our main theorem is satisfied (k ≤ m/(n + 2)), then by Lemma 4 we have L(VG) ≤ n, so k ≤ m/(L(VG) + 2) does hold.
2002
201
2,216
Using Manifold Structure for Partially Labelled Classification Mikhail Belkin University of Chicago Department of Mathematics misha@math.uchicago.edu Partha Niyogi University of Chicago Depts of Computer Science and Statistics niyogi@cs.uchicago.edu Abstract We consider the general problem of utilizing both labeled and unlabeled data to improve classification accuracy. Under the assumption that the data lie on a submanifold in a high dimensional space, we develop an algorithmic framework to classify a partially labeled data set in a principled manner. The central idea of our approach is that classification functions are naturally defined only on the submanifold in question rather than the total ambient space. Using the Laplace Beltrami operator one produces a basis for a Hilbert space of square integrable functions on the submanifold. To recover such a basis, only unlabeled examples are required. Once a basis is obtained, training can be performed using the labeled data set. Our algorithm models the manifold using the adjacency graph for the data and approximates the Laplace Beltrami operator by the graph Laplacian. Practical applications to image and text classification are considered. 1 Introduction In many practical applications of data classification and data mining, one finds a wealth of easily available unlabeled examples, while collecting labeled examples can be costly and time-consuming. Standard examples include object recognition in images, speech recognition, classifying news articles by topic. In recent times, genetics has also provided enormous amounts of readily accessible data. However, classification of this data involves experimentation and can be very resource intensive. Consequently it is of interest to develop algorithms that are able to utilize both labeled and unlabeled data for classification and other purposes. 
Although the area of partially labeled classification is fairly new, a considerable amount of work has been done in that field since the early 90's; see [2, 4, 7]. In this paper we address the problem of classifying a partially labeled set by developing the ideas proposed in [1] for data representation. In particular, we exploit the intrinsic structure of the data to improve classification with unlabeled examples, under the assumption that the data resides on a low-dimensional manifold within a high-dimensional representation space. In some cases it seems to be a reasonable assumption that the data lies on or close to a manifold. For example, a handwritten digit 0 can be fairly accurately represented as an ellipse, which is completely determined by the coordinates of its foci and the sum of the distances from the foci to any point. Thus the space of ellipses is a five-dimensional manifold. An actual handwritten 0 would require more parameters, but perhaps not more than 15 or 20. On the other hand, the dimensionality of the ambient representation space is the number of pixels, which is typically far higher. For other types of data the question of the manifold structure seems significantly more involved. While there has been recent work on using manifold structure for data representation ([6, 8]), the only other application to classification problems that we are aware of was in [7], where the authors use a random walk on the data adjacency graph for partially labeled classification.

2 Why Manifold Structure is Useful for Partially Supervised Learning

To provide a motivation for using a manifold structure, consider a simple synthetic example shown in Figure 1. The two classes consist of two parts of the curve shown in the first panel (row 1). We are given a few labeled points and 500 unlabeled points, shown in panels 2 and 3 respectively. The goal is to establish the identity of the point labeled with a question mark.
By observing the picture in panel 2 (row 1) we see that we cannot confidently classify "?" by using the labeled examples alone. On the other hand, the problem seems much more feasible given the unlabeled data shown in panel 3. Since there is an underlying manifold, it seems clear at the outset that the (geodesic) distances along the curve are more meaningful than Euclidean distances in the plane. Therefore, rather than building classifiers defined on the plane (ℝ²), it seems preferable to have classifiers defined on the curve itself. Even though the data has an underlying manifold, the problem is still not quite trivial, since the two different parts of the curve come confusingly close to each other. There are many possible potential representations of the manifold, and the one provided by the curve itself is unsatisfactory. Ideally, we would like to have a representation of the data which captures the fact that it is a closed curve. More specifically, we would like an embedding of the curve where the coordinates vary as slowly as possible when one traverses the curve. Such an ideal representation is shown in panel 4 (first panel of the second row). Note that both represent the same underlying manifold structure but with different coordinate functions. It turns out (panel 6) that by taking a two-dimensional representation of the data with Laplacian Eigenmaps [1], we get very close to the desired embedding. Panel 5 shows the locations of labeled points in the new representation space. We see that "?" now falls squarely in the middle of "+" signs and can easily be identified as a "+". This artificial example illustrates that recovering the manifold and developing classifiers on the manifold itself might give us an advantage in classification problems. To recover the manifold, all we need is unlabeled data. The labeled data is then used to develop a classifier defined on this manifold. However, we need a model for the manifold to utilize this structure.
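The contrast between geodesic and ambient Euclidean distances can be made concrete with a small sketch, assuming NumPy/SciPy (our illustration, not the authors' code): build a symmetric nearest-neighbor graph with Euclidean edge lengths, then take shortest-path distances on the graph as approximate geodesics.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

def geodesic_distances(X, n_neighbors=2):
    """Approximate geodesics: connect each point to its n nearest
    neighbors (symmetrized), weight edges by Euclidean distance,
    and return all-pairs shortest-path distances on the graph."""
    D = cdist(X, X)
    k = X.shape[0]
    G = np.zeros((k, k))  # zero entries denote "no edge" for csgraph
    for i in range(k):
        for j in np.argsort(D[i])[1:n_neighbors + 1]:
            G[i, j] = G[j, i] = D[i, j]
    return shortest_path(G, directed=False)
```

On an L-shaped point set, for instance, the graph geodesic between the two endpoints exceeds their straight-line distance, which is exactly the effect the curve example above relies on.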
The model used here is that of a weighted graph whose vertices are data points. Two data points are connected with an edge if and only if the points are sufficiently close. To each edge we can associate a distance between the corresponding points. The "geodesic distance" between two vertices is the length of the shortest path between them on the adjacency graph.

Figure 1: Top row: Panel 1. Two classes on a plane curve. Panel 2. Labeled examples. "?" is a point to be classified. Panel 3. 500 random unlabeled examples. Bottom row: Panel 4. Ideal representation of the curve. Panel 5. Positions of labeled points and "?" after applying eigenfunctions of the Laplacian. Panel 6. Positions of all examples.

Once we set up an approximation to the manifold, we need a method to exploit the structure of the model to build a classifier. One possible simple approach would be to use the "geodesic nearest neighbors". However, while simple and well-motivated, this method is potentially unstable. A related, more sophisticated method based on a random walk on the adjacency graph is proposed in [7]. We also note the approach taken in [2], which uses mincuts of certain graphs for partially labeled classification. Our approach is based on the Laplace-Beltrami operator defined on Riemannian manifolds (see [5]). The eigenfunctions of the Laplace-Beltrami operator provide a natural basis for functions on the manifold, and the desired classification function can be expressed in such a basis. The Laplace-Beltrami operator can be estimated using unlabeled examples alone, and the classification function is then approximated using the labeled data. In the next two sections we describe our algorithm and the theoretical underpinnings in some detail.

3 Description of the Algorithm

Given k points x₁, . . .
, x_k ∈ ℝ^N, we assume that the first s < k points have labels c_i, where c_i ∈ {−1, 1}, and the rest are unlabeled. The goal is to label the unlabeled points. We also introduce a straightforward extension of the algorithm for the case of more than two classes.

Step 1 [Constructing the adjacency graph with n nearest neighbors]. Nodes i and j, corresponding to the points x_i and x_j, are connected by an edge if i is among the n nearest neighbors of j or j is among the n nearest neighbors of i. The distance can be the standard Euclidean distance in ℝ^N or some other appropriately defined distance. We take W_ij = 1 if points x_i and x_j are connected and W_ij = 0 otherwise. For a discussion about the appropriate choice of weights, and connections to the heat kernel, see [1].

Step 2 [Eigenfunctions]. Compute the p eigenvectors e₁, . . . , e_p corresponding to the p smallest eigenvalues of the eigenvector problem Le = λe, where L = D − W is the graph Laplacian for the adjacency graph. Here W is the adjacency matrix defined above and D is a diagonal matrix of the same size as W satisfying D_ii = Σ_j W_ij. The Laplacian is a symmetric, positive semidefinite matrix which can be thought of as an operator on functions defined on the vertices of the graph.

Step 3 [Building the classifier]. To approximate the class we minimize the error function

Err(a) = Σ_{i=1}^{s} ( c_i − Σ_{j=1}^{p} a_j e_j(i) )²

where p is the number of eigenfunctions we wish to employ, the sum is taken over all labeled points, and the minimization is considered over the space of coefficients a = (a₁, . . . , a_p)ᵀ. The solution is given by

a = (E_labᵀ E_lab)⁻¹ E_labᵀ c

where c = (c₁, . . . , c_s)ᵀ and E_lab is an s × p matrix whose (i, j) entry is e_j(i). For the case of several classes, we build a one-against-all classifier for each individual class.

Step 4 [Classifying unlabeled points]. If x_i, i > s, is an unlabeled point, we put c_i = 1 if Σ_{j=1}^{p} a_j e_j(i) ≥ 0, and c_i = −1 otherwise. This, of course, is just applying a linear classifier constructed in Step 3.
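Steps 1–4 can be condensed into a short NumPy sketch. This is our own illustration of the binary algorithm as described (the names are ours, and the dense linear algebra is only practical for small k):

```python
import numpy as np
from scipy.spatial.distance import cdist

def laplacian_classify(X, c_labeled, n_neighbors=8, p=2):
    """X: (k, N) points, the first s = len(c_labeled) of which carry
    labels in {-1, +1}. Returns a label in {-1, +1} for every point."""
    k, s = X.shape[0], len(c_labeled)
    # Step 1: symmetric n-nearest-neighbor adjacency, 0/1 weights.
    D = cdist(X, X)
    W = np.zeros((k, k))
    for i in range(k):
        for j in np.argsort(D[i])[1:n_neighbors + 1]:
            W[i, j] = W[j, i] = 1.0
    # Step 2: p eigenvectors of L = D - W with smallest eigenvalues.
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    E = vecs[:, :p]                      # columns e_1, ..., e_p
    # Step 3: least-squares fit of the labels in the eigenbasis.
    a, *_ = np.linalg.lstsq(E[:s], np.asarray(c_labeled, float), rcond=None)
    # Step 4: the sign of the fitted function labels every point.
    return np.where(E @ a >= 0, 1, -1)
```

With two well-separated clusters and one label per cluster, the two smallest-eigenvalue eigenvectors are constant on each connected component of the graph, so every unlabeled point inherits the label of its component.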
If there are several classes, one-against-all classifiers compete, using Σ_{j=1}^{p} a_j e_j(i) as a confidence measure.

4 Theoretical Interpretation

Let M ⊂ ℝ^k be an n-dimensional compact Riemannian manifold isometrically embedded in ℝ^k for some k. Intuitively, M can be thought of as an n-dimensional "surface" in ℝ^k. The Riemannian structure on M induces a volume form that allows us to integrate functions defined on M. The square integrable functions form a Hilbert space L²(M). The Laplace-Beltrami operator Δ_M (or just Δ) acts on twice differentiable functions on M. There are three important points that are relevant to our discussion here.

The Laplacian provides a basis on L²(M): It can be shown (e.g., [5]) that Δ is a self-adjoint positive semidefinite operator and that its eigenfunctions form a basis for the Hilbert space L²(M). The spectrum of Δ is discrete (provided M is compact), with the smallest eigenvalue 0 corresponding to the constant eigenfunction. Therefore any f ∈ L²(M) can be written as f(x) = Σ_{i=0}^{∞} a_i e_i(x), where the e_i are eigenfunctions, Δe_i = λ_i e_i.

The simplest nontrivial example is the circle S¹, where Δ_{S¹} f(φ) = −d²f/dφ². Therefore the eigenfunctions are given by −d²e/dφ² = λ e(φ), where e(φ) is a 2π-periodic function. It is easy to see that all eigenfunctions of Δ are of the form e(φ) = sin(nφ) or e(φ) = cos(nφ), with eigenvalues {1², 2², . . .}. Therefore, we see that any 2π-periodic L² function f has a convergent Fourier series expansion given by f(φ) = Σ_{n=0}^{∞} a_n sin(nφ) + b_n cos(nφ). In general, for any manifold M, the eigenfunctions of the Laplace-Beltrami operator provide a natural basis for L²(M). However, Δ provides more than just a basis; it also yields a measure of smoothness for functions on the manifold.

The Laplacian as a smoothness functional: A simple measure of the degree of smoothness for a function f on the unit circle S¹ is the "smoothness functional" S(f) = ∫ |f′(φ)|² dφ.
If S(f) is close to zero, we think of f as being "smooth". Naturally, constant functions are the most "smooth". Integration by parts yields

S(f) = ∫_{S¹} f′(φ)² dφ = ∫_{S¹} f Δf dφ = ⟨Δf, f⟩_{L²(S¹)}.

In general, if f : M → ℝ, then

S(f) ≝ ∫_M |∇f|² dμ = ∫_M f Δf dμ = ⟨Δf, f⟩_{L²(M)}

where ∇f is the gradient vector field of f. If the manifold is ℝⁿ then ∇f = Σ_{i=1}^{n} (∂f/∂x_i) ∂/∂x_i. In general, for an n-manifold, the expression in a local coordinate chart involves the coefficients of the metric tensor. Therefore the smoothness of a unit-norm eigenfunction e_i of Δ is controlled by the corresponding eigenvalue λ_i, since S(e_i) = ⟨Δe_i, e_i⟩_{L²(M)} = λ_i. For an arbitrary f = Σ_i α_i e_i, we can write S(f) as S(f) = Σ_i α_i² λ_i. A Reproducing Kernel Hilbert Space can be constructed from S.

λ₁ = 0 is the smallest eigenvalue, for which the corresponding eigenfunction is the constant function e₁. It can also be shown that if M is compact and connected there are no other eigenfunctions with eigenvalue 0. Therefore approximating a function f(x) ≈ Σ_{i=1}^{p} a_i e_i(x) in terms of the first p eigenfunctions of Δ is a way of controlling the smoothness of the approximation. The optimal approximation is obtained by minimizing the L² norm of the error:

a = argmin_{a = (a₁, . . . , a_p)} ∫_M ( f(x) − Σ_{i=1}^{p} a_i e_i(x) )² dμ.

This approximation is given by a projection in L² onto the span of the first p eigenfunctions, a_i = ∫_M e_i(x) f(x) dμ = ⟨e_i, f⟩_{L²(M)}. In practice we only know the values of f at a finite number of points x₁, . . . , x_n and therefore have to solve a discrete version of this problem,

a = argmin_{a = (a₁, . . . , a_p)} Σ_{i=1}^{n} ( f(x_i) − Σ_{j=1}^{p} a_j e_j(x_i) )².

The solution to this standard least squares problem is given by a = (E Eᵀ)⁻¹ E yᵀ, where E_ij = e_i(x_j) and y = (f(x₁), . . . , f(x_n)).

Connection with the Graph Laplacian: As we are approximating a manifold with a graph, we need a suitable measure of smoothness for functions defined on the graph.
It turns out that many of the concepts in the previous section have parallels in graph theory (e.g., see [3]). Let G = (V, E) be a weighted graph on n vertices. We assume that the vertices are numbered and use the notation i ~ j for adjacent vertices i and j. The graph Laplacian of G is defined as L = D − W, where W is the weight matrix and D is a diagonal matrix, D_ii = Σ_j W_ji. L can be thought of as an operator on functions defined on vertices of the graph. It is not hard to see that L is a self-adjoint positive semidefinite operator. By the (finite-dimensional) spectral theorem, any function on G can be decomposed as a sum of eigenfunctions of L.

If we think of G as a model for the manifold M, it is reasonable to assume that a function on G is smooth if it does not change too much between nearby points. If f = (f₁, . . . , f_n) is a function on G, then we can formalize that intuition by defining the smoothness functional S_G(f) = Σ_{i~j} W_ij (f_i − f_j)². It is not hard to show that

S_G(f) = f L fᵀ = ⟨f, Lf⟩_G = Σ_{i=1}^{n} λ_i ⟨f, e_i⟩²_G

which is the discrete analogue of the integration by parts from the previous section. The inner product here is the usual Euclidean inner product on the vector space with coordinates indexed by the vertices of G, and the e_i are normalized eigenvectors of L, L e_i = λ_i e_i, ‖e_i‖ = 1. All eigenvalues are non-negative, and the eigenfunctions corresponding to the smaller eigenvalues can be thought of as "more smooth". The smallest eigenvalue λ₁ = 0 corresponds to the constant eigenvector e₁.

5 Experimental Results

5.1 Handwritten Digit Recognition

We apply our techniques to the problem of optical character recognition. We use the popular MNIST dataset, which contains 28x28 grayscale images of handwritten digits.¹ We use the 60000-image training set for our experiments. For all experiments we use 8 nearest neighbors to compute the adjacency matrix.
The adjacency matrices are very sparse, which makes solving eigenvector problems possible even for matrices as large as 60000 by 60000. For a particular trial, we fix the number of labeled examples we wish to use. A random subset of the 60000 images is used with labels to form the labeled set L. The rest of the images are used without labels to form the unlabeled data U. The classification results (for U) are averaged over 20 different random draws for L. Shown in Fig. 2 is a summary plot of classification accuracy on the unlabeled set, comparing the nearest-neighbors baseline with our algorithm, where the number of retained eigenvectors is taken to be 20% of the number of labeled points. The improvements over the baseline are significant, sometimes exceeding 70%, depending on the number of labeled and unlabeled examples. With only 100 labeled examples (and 59900 unlabeled examples), the Laplacian classifier does nearly as well as the nearest neighbor classifier with 5000 labeled examples. Similarly, with 500/59500 labeled/unlabeled examples, it does slightly better than the nearest neighbor baseline using 20000 labeled examples. By comparing the results for the total 60000-point data set, and the 10000 and 1000 subsets, we see that adding unlabeled data consistently improves classification accuracy. When almost all of the data is labeled, the performance of our classifier is close to that of k-NN. This is not particularly surprising, as our method uses the nearest neighbor information.

¹We use the first 100 principal components of the set of all images to represent each image as a 100-dimensional vector.
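At this scale, the eigenproblem is tractable only because the Laplacian is sparse. A hedged sketch of how one might compute the p smallest eigenpairs with SciPy's sparse eigensolver in shift-invert mode (our illustration, not the authors' code):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def smallest_laplacian_eigenpairs(W, p):
    """W: sparse symmetric adjacency matrix (0/1 weights).
    Returns the p smallest eigenvalues/eigenvectors of L = D - W."""
    W = sp.csr_matrix(W)
    degrees = np.asarray(W.sum(axis=1)).ravel()
    L = sp.diags(degrees) - W
    # Shift-invert around a small negative sigma targets the bottom of
    # the spectrum without factorizing the singular Laplacian itself.
    vals, vecs = eigsh(L, k=p, sigma=-1e-5, which='LM')
    order = np.argsort(vals)
    return vals[order], vecs[:, order]
```

On any connected graph the smallest eigenvalue is 0 with a constant eigenvector, which gives a quick correctness check for the solver setup.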
Figure 2: MNIST data set. Percentage error rates for different numbers of labeled and unlabeled points (Laplacian classifier on 60,000, 10,000, and 1,000 total points) compared to the best k-NN baseline (k = 1, 3, 5).

5.2 Text Classification

The second application is text classification using the popular 20 Newsgroups data set. This data set contains approximately 1000 postings from each of 20 different newsgroups. Given an article, the problem is to determine to which newsgroup it was posted. We tokenize the articles using the software package Rainbow written by Andrew McCallum. We use a "stop-list" of the 500 most common words to be excluded and also exclude headers, which among other things contain the correct identification of the newsgroup. Each document is then represented by the counts of the most frequent 6000 words, normalized to sum to 1. Documents with 0 total count are removed, thus leaving us with 19935 vectors in a 6000-dimensional space. We follow the same procedure as with the MNIST digit data above. A random subset of a fixed size is taken with labels to form L. The rest of the dataset is considered to be U. We average the results over 20 random splits². As with the digits, we take the number of nearest neighbors for the algorithm to be 8. In Fig. 3 we summarize the results by taking 19935, 2000 and 600 total points respectively and calculating the error rate for different numbers of labeled points. The number of eigenvectors used is always 20% of the number of labeled points. We see that having more unlabeled points improves the classification error in most cases, although when there are very few labeled points, the differences are small.

References

[1] M. Belkin, P.
Niyogi. Laplacian Eigenmaps for Dimensionality Reduction and Data Representation. Technical Report TR-2002-01, Department of Computer Science, The University of Chicago, 2002.

²In the case of 2000 eigenvectors we take just 10 random splits since the computations are rather time-consuming.

Figure 3: 20 Newsgroups data set. Error rates for different numbers of labeled and unlabeled points (Laplacian classifier on 19,935, 2,000, and 600 total points) compared to the best k-NN baseline (k = 1, 3, 5).

[2] A. Blum and S. Chawla. Learning from Labeled and Unlabeled Data using Graph Mincuts. ICML, 2001.
[3] F. R. K. Chung. Spectral Graph Theory. Regional Conference Series in Mathematics, number 92, 1997.
[4] K. Nigam, A. K. McCallum, S. Thrun, and T. Mitchell. Text Classification from Labeled and Unlabeled Data. Machine Learning 39(2/3), 2000.
[5] S. Rosenberg. The Laplacian on a Riemannian Manifold. Cambridge University Press, 1997.
[6] S. T. Roweis and L. K. Saul. Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science, vol 290, 22 December 2000.
[7] M. Szummer and T. Jaakkola. Partially labeled classification with Markov random walks. Neural Information Processing Systems (NIPS) 2001, vol 14.
[8] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A Global Geometric Framework for Nonlinear Dimensionality Reduction. Science, vol 290, 22 December 2000.
2002
202
2,217
Recovering Intrinsic Images from a Single Image Marshall F Tappen William T Freeman Edward H Adelson MIT Artificial Intelligence Laboratory Cambridge, MA 02139 mtappen@ai.mit.edu, wtf@ai.mit.edu, adelson@ai.mit.edu Abstract We present an algorithm that uses multiple cues to recover shading and reflectance intrinsic images from a single image. Using both color information and a classifier trained to recognize gray-scale patterns, each image derivative is classified as being caused by shading or a change in the surface’s reflectance. Generalized Belief Propagation is then used to propagate information from areas where the correct classification is clear to areas where it is ambiguous. We also show results on real images. 1 Introduction Every image is the product of the characteristics of a scene. Two of the most important characteristics of the scene are its shading and reflectance. The shading of a scene is the interaction of the surfaces in the scene and the illumination. The reflectance of the scene describes how each point reflects light. The ability to find the reflectance of each point in the scene and how it is shaded is important because interpreting an image requires the ability to decide how these two factors affect the image. For example, the geometry of an object in the scene cannot be recovered without being able to isolate the shading of every point. Likewise, segmentation would be simpler given the reflectance of each point in the scene. In this work, we present a system which finds the shading and reflectance of each point in a scene by decomposing an input image into two images, one containing the shading of each point in the scene and another image containing the reflectance of each point. These two images are types of a representation known as intrinsic images [1] because each image contains one intrinsic characteristic of the scene. 
Most prior algorithms for finding shading and reflectance images can be broadly classified as generative or discriminative approaches. The generative approaches create possible surfaces and reflectance patterns that explain the image, then use a model to choose the most likely surface. Previous generative approaches include modeling worlds of painted polyhedra [11] or constructing surfaces from patches taken out of a training set [3]. In contrast, discriminative approaches attempt to differentiate between changes in the image caused by shading and those caused by a reflectance change. Early algorithms, such as Retinex [8], were based on simple assumptions, such as the assumption that the gradients along reflectance changes have much larger magnitudes than those caused by shading. That assumption does not hold for many real images, so recent algorithms have used more complex statistics to separate shading and reflectance. Bell and Freeman [2] trained a classifier to use local image information to classify steerable pyramid coefficients as being due to shading or reflectance. Using steerable pyramid coefficients allowed the algorithm to classify edges at multiple orientations and scales. However, the steerable pyramid decomposition has a low-frequency residual component that cannot be classified. Without classifying the low-frequency residual, only band-pass filtered copies of the shading and reflectance images can be recovered. In addition, low-frequency coefficients may not have a natural classification. In a different direction, Weiss [13] proposed using multiple images where the reflectance is constant, but the illumination changes. This approach was able to create full frequency images, but required multiple input images of a fixed scene. In this work, we present a system which uses multiple cues to recover full-frequency shading and reflectance intrinsic images from a single image. 
Our approach is discriminative, using both a classifier based on color information in the image and a classifier trained to recognize local image patterns to distinguish derivatives caused by reflectance changes from derivatives caused by shading. We also address the problem of ambiguous local evidence by using a Markov Random Field to propagate the classifications of those areas where the evidence is clear into ambiguous areas of the image. 2 Separating Shading and Reflectance Our algorithm decomposes an image into shading and reflectance images by classifying each image derivative as being caused by shading or a reflectance change. We assume that the input image, I(x, y), can be expressed as the product of the shading image, S(x, y), and the reflectance image, R(x, y). Considering the images in the log domain, the derivatives of the input image are the sum of the derivatives of the shading and reflectance images. It is unlikely that significant shading boundaries and reflectance edges occur at the same point, thus we make the simplifying assumption that every image derivative is either caused by shading or reflectance. This reduces the problem of specifying the shading and reflectance derivatives to that of binary classification of the image’s x and y derivatives. Labelling each x and y derivative produces estimates of the derivatives of the shading and reflectance images. Each derivative represents a set of linear constraints on the image and using both derivative images results in an over-constrained system. We recover each intrinsic image from its derivatives by using the method introduced by Weiss in [13] to find the pseudo-inverse of the over-constrained system of derivatives. 
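This pseudo-inverse recovery has a convenient closed form in the Fourier domain. The sketch below is our own NumPy illustration of that idea, not the authors' code; the forward-difference filters, circular boundary handling, and the choice to pin the unconstrained mean to zero are our assumptions.

```python
import numpy as np

def recover_from_derivatives(Fx, Fy):
    """Least-squares recovery of an image from estimated x/y derivative
    images, computed with FFTs (division in the frequency domain)."""
    h, w = Fx.shape
    # Forward-difference kernels, expressed as circular convolutions.
    fx = np.zeros((h, w)); fx[0, 0] = -1.0; fx[0, 1] = 1.0
    fy = np.zeros((h, w)); fy[0, 0] = -1.0; fy[1, 0] = 1.0
    FX, FY = np.fft.fft2(fx), np.fft.fft2(fy)
    num = np.conj(FX) * np.fft.fft2(Fx) + np.conj(FY) * np.fft.fft2(Fy)
    den = np.abs(FX) ** 2 + np.abs(FY) ** 2
    num[0, 0], den[0, 0] = 0.0, 1.0  # the mean is unconstrained; pin to 0
    return np.real(np.fft.ifft2(num / den))
```

Feeding it the circular derivatives of any image returns that image up to its mean, which is exactly the ambiguity left by a derivative-only representation.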
If f_x and f_y are the filters used to compute the x and y derivatives, and F_x and F_y are the estimated derivatives of the shading image, then the shading image S(x, y) is:

S(x, y) = g ⋆ [(f_x(−x, −y) ⋆ F_x) + (f_y(−x, −y) ⋆ F_y)]   (1)

where ⋆ is convolution, f(−x, −y) is a reversed copy of f(x, y), and g is the solution of

g ⋆ [(f_x(−x, −y) ⋆ f_x(x, y)) + (f_y(−x, −y) ⋆ f_y(x, y))] = δ   (2)

The reflectance image is found in the same fashion. One nice property of this technique is that the computation can be done using the FFT, making it more computationally efficient.

3 Classifying Derivatives

With an architecture for recovering intrinsic images, the next step is to create the classifiers to separate the underlying processes in the image. Our system uses two classifiers: one which uses color information to separate shading and reflectance derivatives, and a second classifier that uses local image patterns to classify each derivative.

Figure 1: Example computed using only color information to classify derivatives. To facilitate printing, the intrinsic images have been computed from a gray-scale version of the image. The color information is used solely for classifying derivatives in the gray-scale copy of the image. (Panels: Original Image, Shape Image, Reflectance Image.)

3.1 Using Color Information

Our system takes advantage of the property that changes in color between pixels indicate a reflectance change [10]. When surfaces are diffuse, any changes in a color image due to shading should affect all three color channels proportionally. Assume two adjacent pixels in the image have values c₁ and c₂, where c₁ and c₂ are RGB triplets. If the change between the two pixels is caused by shading, then only the intensity of the color changes and c₂ = αc₁ for some scalar α. If c₂ ≠ αc₁, the chromaticity of the colors has changed and the color change must have been caused by a reflectance change. A chromaticity change in the image indicates that the reflectance must have changed at that point.
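A minimal sketch of this chromaticity test (our illustration; the function name and threshold value are ours): normalize the two RGB triplets and compare the cosine of the angle between them, since c₂ = αc₁ leaves the cosine at 1.

```python
import numpy as np

def is_reflectance_change(c1, c2, threshold=0.999):
    """Classify the derivative between two adjacent RGB pixels:
    a drop in the cosine between the normalized color vectors
    indicates a chromaticity (hence reflectance) change."""
    c1 = np.asarray(c1, float)
    c2 = np.asarray(c2, float)
    cos_angle = np.dot(c1, c2) / (np.linalg.norm(c1) * np.linalg.norm(c2))
    return cos_angle < threshold
```

A pure intensity change keeps the cosine at 1 and is classified as shading; any hue shift drops it below the threshold.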
To find chromaticity changes, we treat each RGB triplet as a vector and normalize them to create ĉ₁ and ĉ₂. We then use the angle between ĉ₁ and ĉ₂ to find reflectance changes. When the change is caused by shading, (ĉ₁ · ĉ₂) equals 1. If (ĉ₁ · ĉ₂) is below a threshold, then the derivative associated with the two colors is classified as a reflectance derivative. Using only the color information, this approach is similar to that used in [6]. The primary difference is that our system classifies the vertical and horizontal derivatives independently. Figure 1 shows an example of the results produced by the algorithm. The classifier marked all of the reflectance areas correctly and the text is cleanly removed from the bottle. This example also demonstrates the high quality reconstructions that can be obtained by classifying derivatives.

3.2 Using Gray-Scale Information

While color information is useful, it is not sufficient to properly decompose images. A change in color intensity could be caused by either shading or a reflectance change. Using only local color information, color intensity changes cannot be classified properly. Fortunately, shading patterns have a unique appearance which can be discriminated from most common reflectance patterns. This allows us to use the local gray-scale image pattern surrounding a derivative to classify it. The basic feature of the gray-scale classifier is the absolute value of the response of a linear filter. We refer to a feature computed in this manner as a non-linear filter. The output of a non-linear filter, F, given an input patch I_p, is

F = |I_p ⋆ w|   (3)

where ⋆ is convolution and w is a linear filter. The filter w is the same size as the image patch I_p, and we only consider the response at the center of I_p. This makes the feature a function from a patch of image data to a scalar response. The feature could also be viewed as the absolute value of the dot product of I_p and w.
We use the responses of linear filters as the basis for our feature, in part because they have been used successfully for characterizing [9] and synthesizing [7] images of textured surfaces.

[Figure 2: Example images from the training set. The first two are examples of reflectance changes and the last three are examples of shading.]

[Figure 3: Results obtained using the gray-scale classifier: (a) original image, (b) shading image, (c) reflectance image.]

The non-linear filters are used to classify derivatives with a classifier similar to that used by Tieu and Viola in [12]. This classifier uses the AdaBoost algorithm [4] to combine a set of weak classifiers into a single strong classifier. Each weak classifier is a threshold test on the output of one non-linear filter. At each iteration of the AdaBoost algorithm, a new weak classifier is chosen by selecting a non-linear filter and a threshold; the combination is chosen greedily as the one that performs best on the re-weighted training set. The linear filter in each non-linear filter is drawn from a set of oriented first- and second-derivative-of-Gaussian filters. The training set consists of a mix of images of rendered fractal surfaces and images of shaded ellipses placed randomly in the image. Examples of reflectance changes were created using images of random lines and of random ellipses painted onto the image. Samples from the training set are shown in Figure 2. In the training set, the illumination always comes from the right side of the image; when evaluating test images, the classifier assumes that the test image is also lit from the right. Figure 3 shows the results of our system using only the gray-scale classifier. The results can be evaluated by thinking of the shading image as how the scene should appear if it were made entirely of gray plastic. The reflectance image should appear very flat, with the three-dimensional depth cues placed in the shading image.
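The greedy weak-learner selection inside one AdaBoost round can be sketched as below. This is a generic sketch of the technique the text describes (threshold tests on filter responses, chosen to minimize weighted error), not the authors' implementation; all names are hypothetical.

```python
import numpy as np

def best_weak_classifier(responses, labels, weights):
    """One round of greedy weak-learner selection (a sketch).

    responses: (n_filters, n_samples) non-linear filter outputs
    labels:    (n_samples,) in {-1, +1}
    weights:   (n_samples,) current AdaBoost sample weights

    Returns ((filter_index, threshold, polarity), weighted_error) for
    the threshold test polarity * sign(response > threshold) that
    minimizes the weighted error on the re-weighted training set.
    """
    best, best_err = (None, None, None), np.inf
    for i, r in enumerate(responses):
        for t in r:  # candidate thresholds at observed responses
            for polarity in (+1, -1):
                pred = np.where(r > t, polarity, -polarity)
                err = np.sum(weights[pred != labels])
                if err < best_err:
                    best_err, best = err, (i, t, polarity)
    return best, best_err

# Toy data: one filter whose response separates the classes cleanly.
resp = np.array([[0.1, 0.2, 0.9, 0.8]])
labels = np.array([-1, -1, 1, 1])
w = np.full(4, 0.25)
(i, t, pol), err = best_weak_classifier(resp, labels, w)
print(i, t, pol, err)  # 0 0.2 1 0.0
```

In full AdaBoost, the sample weights `w` would be re-weighted after each round to emphasize the examples this weak classifier gets wrong.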
Our system performs well on the image shown in Figure 3. The shading image has a very uniform appearance, with almost all of the effects of the reflectance changes placed in the reflectance image. The examples shown were computed without taking the log of the input image before processing it. The input images are uncalibrated, and ordinary photographic tone-scale is very similar to a log transformation. Errors from not taking the log of the input image first would cause one intrinsic image to modulate the local brightness of the other; however, this does not occur in the results.

[Figure 4: An example where propagation is needed. The smile from the pillow image in (a) has been enlarged in (b). Figures (c) and (d) contain an example of shading and a reflectance change, respectively. Locally, the center of the mouth in (b) is as similar to the shading example in (c) as it is to the example reflectance change in (d).]

[Figure 5: The pillow from Figure 4: (a) original image, (b) shading image, (c) reflectance image. This result is found by combining the local evidence from the color and gray-scale classifiers, then using Generalized Belief Propagation to propagate the local evidence.]

4 Propagating Evidence

While the classifier works well, there are still areas in the image where the local information is ambiguous. An example is shown in Figure 4. When compared to the example shading and reflectance changes in Figures 4(c) and 4(d), the center of the mouth in Figure 4(b) is equally well classified with either label. However, the corners of the mouth can be classified as a reflectance change with little ambiguity. Since the derivatives in the corners of the mouth and in the center all lie on the same image contour, they should have the same classification. A mechanism is therefore needed to propagate information from the corners of the mouth, where the classification is clear, into areas where the local evidence is ambiguous.
This will allow areas where the classification is clear to disambiguate those areas where it is not. To propagate evidence, we treat each derivative as a node in a Markov Random Field (MRF) with two possible states, indicating whether the derivative is caused by shading or by a reflectance change. Setting the compatibility functions between nodes correctly will force nodes along the same contour to have the same classification.

4.1 Model for the Potential Functions

Each node in the MRF corresponds to the classification of a derivative. We constrain the compatibility function for two neighboring nodes, xi and xj, to be of the form

ψ(xi, xj) = [ β      1 − β ]
            [ 1 − β  β     ]    (4)

with 0 ≤ β ≤ 1. The term β controls how strongly the two nodes influence each other. Since derivatives along an image contour should have the same classification, β should be close to 1 when two neighboring derivatives lie along a contour, and should be 0.5 when no contour is present. Since β depends on the image at each point, we express it as β(Ixy), where Ixy is the local image information. To ensure that β(Ixy) lies between 0 and 1, it is modelled as β(Ixy) = g(z(Ixy)), where g(·) is the logistic function and z(Ixy) has a large response along image contours.

4.2 Learning the Potential Functions

The function z(Ixy) is based on two local image features: the magnitude of the image gradient and the difference in orientation between the gradient and the orientation of the graph edge. These features reflect our heuristic that derivatives along an image contour should have the same classification. The difference in orientation between a horizontal graph edge and the image contour, φ̂, is found from the orientation of the image gradient, φ. Assuming that −π/2 ≤ φ ≤ π/2, the angle between a horizontal edge and the image gradient is φ̂ = |φ|. For vertical edges, φ̂ = |φ| − π/2. To find the values of z(·), we maximize the probability of a set of training examples over the parameters of z(·).
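The compatibility function ψ described here is a 2 × 2 matrix determined by a single number β. A minimal sketch:

```python
import numpy as np

def compatibility(beta):
    """Pairwise compatibility matrix psi(x_i, x_j), as in equation (4).

    beta close to 1 rewards neighboring derivatives for sharing a
    label (both shading, or both reflectance); beta = 0.5 makes the
    edge carry no information about whether the labels should agree.
    """
    assert 0.0 <= beta <= 1.0
    return np.array([[beta, 1.0 - beta],
                     [1.0 - beta, beta]])

print(compatibility(0.5))  # uninformative edge
print(compatibility(0.9))  # strong preference for matching labels
```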
The examples are taken from the same set used to train the gray-scale classifier. The probability of the training samples is

P = (1/Z) ∏_{(i,j)} ψ(xi, xj)    (5)

where (i, j) ranges over the indices of neighboring nodes in the MRF and Z is a normalization constant. Note that each ψ(·) is a function of z(Ixy). The function z(·) relating the image features to ψ(·) is chosen to be linear and is found by maximizing Equation 5 over a set of training images similar to those used to train the local classifier. To simplify the training process, we approximate the true probability in Equation 5 by assuming that Z is constant. Doing so leads to the following value of z(·):

z(φ̂, |∇I|) = −1.2 × φ̂ + 1.62 × |∇I| + 2.3    (6)

where |∇I| is the magnitude of the image gradient and both φ̂ and |∇I| have been normalized to lie between 0 and 1. These measures break down in areas with a weak gradient, so we set β(Ixy) to 0.5 for regions of the image with a gradient magnitude less than 0.05. Combined with the values learned for z(·), this effectively limits β to the range 0.5 ≤ β ≤ 1. Larger values of z(·) correspond to a belief that the derivatives connected by the edge should have the same label, while negative values signify that they should have different labels.

[Figure 6: Example generated by combining color and gray-scale information, along with propagation: (a) original image, (b) shading image, (c) reflectance image.]

The values in Equation 6 accord with our expectations: two derivatives are constrained to have the same label when they lie along an edge in the image whose orientation is similar to that of the MRF edge connecting the two nodes.

4.3 Inferring the Correct Labelling

Once the compatibility functions have been learned, the label of each derivative can be inferred.
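The learned edge strength can be computed directly from the coefficients reported in equation (6), with the gradient-magnitude floor described in the text. The function name is ours; the constants are the paper's.

```python
import math

def beta_from_features(phi_hat, grad_mag, grad_floor=0.05):
    """Edge strength beta(I_xy) from the learned linear z of equation (6).

    phi_hat and grad_mag are assumed already normalized to [0, 1].
    Below the gradient-magnitude floor the local measures break down,
    so beta is pinned to the neutral value 0.5, as in the text.
    """
    if grad_mag < grad_floor:
        return 0.5
    z = -1.2 * phi_hat + 1.62 * grad_mag + 2.3
    return 1.0 / (1.0 + math.exp(-z))  # logistic function g(z)

# A strong gradient aligned with the MRF edge gives beta near 1;
# a weak gradient falls back to the neutral value 0.5.
print(beta_from_features(phi_hat=0.0, grad_mag=1.0))
print(beta_from_features(phi_hat=0.0, grad_mag=0.01))  # 0.5
```

Note that with these coefficients z stays positive over the admissible feature range, which is why β effectively lives in [0.5, 1], as the text observes.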
The local evidence for each node in the MRF is obtained from the results of the color classifier and the gray-scale classifier by assuming that the two are statistically independent. The color information is necessary because propagation cannot help in areas where the gray-scale classifier misses an edge altogether. In Figure 5, the cheek patches on the pillow, which are pink in the color image, are missed by the gray-scale classifier but caught by the color classifier. For the results shown, we used the AdaBoost classifier to classify the gray-scale images and the method suggested by Friedman et al. [5] to obtain the probability of the labels. We used the Generalized Belief Propagation algorithm [14] to infer the best label of each node in the MRF, because ordinary Belief Propagation performed poorly in areas with both weak local evidence and strong compatibility constraints. The results of using color, gray-scale information, and propagation can be seen in Figure 5. The ripples on the pillow are correctly identified as being caused by shading, while the face is correctly identified as having been painted on. In a second example, shown in Figure 6, the algorithm correctly identifies the change in reflectance between the sweatshirt and the jersey, and correctly identifies the folds in the clothing as being caused by shading. There are some small shading artifacts in the reflectance image, especially around the sleeves of the sweatshirt, presumably caused by particular shapes not present in the training set. All of the examples were computed using ten non-linear filters as input to the AdaBoost gray-scale classifier.

5 Discussion

We have presented a system that is able to use multiple cues to produce shading and reflectance intrinsic images from a single image. The method produces satisfying results on real images.
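Combining the two classifiers under the stated independence assumption amounts to naive-Bayes-style evidence fusion. The sketch below assumes a uniform prior over the two labels, which the paper does not specify; it illustrates the independence combination, not the authors' exact formula.

```python
def fuse_evidence(p_shading_color, p_shading_gray):
    """Combine two classifiers' P(shading) estimates, assuming the
    classifiers are statistically independent given the true label
    and that the label prior is uniform (an assumption of this sketch).
    """
    shading = p_shading_color * p_shading_gray
    reflectance = (1.0 - p_shading_color) * (1.0 - p_shading_gray)
    return shading / (shading + reflectance)

# A confident color cue survives an uninformative gray-scale cue,
# and two agreeing cues sharpen the combined estimate.
print(fuse_evidence(0.95, 0.5))  # 0.95
print(fuse_evidence(0.8, 0.8))   # > 0.9
```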
The most computationally intensive steps in recovering the shading and reflectance images are computing the local evidence, which takes about six minutes on a 700 MHz Pentium for a 256 × 256 image, and running the Generalized Belief Propagation algorithm. Belief propagation was run on both the x and y derivative images and took around six minutes for 200 iterations on each image. The pseudo-inverse step took under 5 seconds. The primary limitation of this method lies in the classifiers: for each type of surface, the classifiers must incorporate knowledge about the structure of the surface and how it appears when illuminated. The present classifiers operate at a single spatial scale; however, the MRF framework allows the integration of information from multiple scales.

Acknowledgments

Portions of this work were completed while W.T.F. was a Senior Research Scientist and M.F.T. was a summer intern at Mitsubishi Electric Research Labs. This work was supported by an NDSEG fellowship to M.F.T., by NIH Grant EY11005-04 to E.H.A., by a grant from NTT to E.H.A., and by a contract with Unilever Research.

References

[1] H. G. Barrow and J. M. Tenenbaum. Recovering intrinsic scene characteristics from images. In Computer Vision Systems, pages 3–26. Academic Press, 1978.
[2] M. Bell and W. T. Freeman. Learning local evidence for shading and reflection. In Proceedings International Conference on Computer Vision, 2001.
[3] W. T. Freeman, E. C. Pasztor, and O. T. Carmichael. Learning low-level vision. International Journal of Computer Vision, 40(1):25–47, 2000.
[4] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
[5] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: A statistical view of boosting. The Annals of Statistics, 28(2):337–407, 2000.
[6] B. V. Funt, M. S. Drew, and M. Brockington. Recovering shading from color images.
In G. Sandini, editor, ECCV-92: Second European Conference on Computer Vision, pages 124–132. Springer-Verlag, May 1992.
[7] D. Heeger and J. Bergen. Pyramid-based texture analysis/synthesis. In Computer Graphics Proceedings, SIGGRAPH 95, pages 229–238, August 1995.
[8] E. H. Land and J. J. McCann. Lightness and retinex theory. Journal of the Optical Society of America, 61:1–11, 1971.
[9] T. Leung and J. Malik. Recognizing surfaces using three-dimensional textons. In IEEE International Conference on Computer Vision, 1999.
[10] J. M. Rubin and W. A. Richards. Color vision and image intensities: When are changes material? Biological Cybernetics, 45:215–226, 1982.
[11] P. Sinha and E. H. Adelson. Recovering reflectance in a world of painted polyhedra. In Fourth International Conference on Computer Vision, pages 156–163. IEEE, 1993.
[12] K. Tieu and P. Viola. Boosting image retrieval. In Proceedings IEEE Computer Vision and Pattern Recognition, volume 1, pages 228–235, 2000.
[13] Y. Weiss. Deriving intrinsic images from image sequences. In Proceedings International Conference on Computer Vision, Vancouver, Canada, 2001. IEEE.
[14] J. Yedidia, W. T. Freeman, and Y. Weiss. Generalized belief propagation. In Advances in Neural Information Processing Systems 13, pages 689–695, 2001.
A Note on the Representational Incompatibility of Function Approximation and Factored Dynamics Eric Allender Computer Science Department Rutgers University allender@cs.rutgers.edu Sanjeev Arora Computer Science Department Princeton University arora@cs.princeton.edu Michael Kearns Department of Computer and Information Science University of Pennsylvania mkearns@cis.upenn.edu Cristopher Moore Department of Computer Science University of New Mexico moore@santafe.edu Alexander Russell Department of Computer Science and Engineering University of Connecticut acr@cse.uconn.edu Abstract We establish a new hardness result that shows that the difficulty of planning in factored Markov decision processes is representational rather than just computational. More precisely, we give a fixed family of factored MDPs with linear rewards whose optimal policies and value functions simply cannot be represented succinctly in any standard parametric form. Previous hardness results indicated that computing good policies from the MDP parameters was difficult, but left open the possibility of succinct function approximation for any fixed factored MDP. Our result applies even to policies which yield a polynomially poor approximation to the optimal value, and highlights interesting connections with the complexity class of Arthur-Merlin games. 1 Introduction While a number of different representational approaches to large Markov decision processes (MDPs) have been proposed and studied over recent years, relatively little is known about the relationships between them. For example, in function approximation, a parametric form is proposed for the value functions of policies. Presumably, for any assumed parametric form (for instance, linear value functions), rather strong constraints on the underlying stochastic dynamics and rewards may be required to meet the assumption. However, a precise characterization of such constraints seems elusive. 
Similarly, there has been recent interest in making parametric assumptions on the dynamics and rewards directly, as in the recent work on factored MDPs. Here it is known that the problem of computing an optimal policy from the MDP parameters is intractable (see [7] and the references therein), but exactly what the representational constraints on such policies are has remained largely unexplored. In this note, we give a new intractability result for planning in factored MDPs that exposes a noteworthy conceptual point missing from previous hardness results. Prior intractability results for planning in factored MDPs established that the problem of computing optimal policies from MDP parameters is hard, but left open the possibility that for any fixed factored MDP, there might exist a compact, parametric representation of its optimal policy. This would be roughly analogous to standard NP-complete problems such as graph coloring — any 3-colorable graph has a "compact" description of its 3-coloring, but it is hard to compute it from the graph. Here we dismiss even this possibility. Under a standard and widely believed complexity-theoretic assumption (that is even weaker than the assumption that NP does not have polynomial size Boolean circuits), we prove that a specific family of factored MDPs does not even possess "succinct" policies. By this we mean something extremely general — namely, that for each MDP in the family, it cannot have an optimal policy represented by an arbitrary Boolean circuit whose size is bounded by a polynomial in the size of the MDP description. Since such circuits can represent essentially any standard parametric functional form, we are showing that there exists no "reasonable" representation of good policies in factored MDPs, even if we ignore the problem of how to compute them from the MDP description. This result holds even if we ask only for policies whose expected return approximates the optimal within a polynomial factor.
(With a slightly stronger complexity-theoretic assumption, it follows that obtaining an approximation even within an exponential factor is impossible.) Thus, while previous results established that there was at least a computational barrier to going from factored MDP parameters to good policies, here we show that the barrier is actually representational, a considerably worse situation. The result highlights the fact that even when making strong and reasonable assumptions about one representational aspect of MDPs (such as value functions or dynamics), there is no reason in general for this to lead to any nontrivial restrictions on the others. The construction in our result is ultimately rather simple, and relies on powerful results developed in complexity theory over the last decade. In particular, we exploit striking results on the complexity class associated with computational protocols known as Arthur-Merlin games. We note that recent and independent work by Liberatore [5] establishes results similar to ours. The primary differences between our work and Liberatore's are that our results prove intractability of approximation and rely on different proof techniques.

2 DBN-Markov Decision Processes

A Markov decision process is a tuple (S, A, {P_a}, R) where S is a set of states, A is a set of actions, {P_a : a ∈ A} is a family of probability distributions on S, one for each action a, and R is a reward function on S. We will denote by P_a(s′|s) the probability that action a in state s results in state s′. When started in a state s_0, and provided with a sequence of actions a_0, a_1, ..., the MDP traverses a sequence of states s_0, s_1, ..., where each s_{t+1} is a random sample from the distribution P_{a_t}(·|s_t). Such a state sequence is called a path. The γ-discounted return associated with such a path is Σ_t γ^t R(s_t). A policy π is a mapping from states to actions. When the action sequence is generated according to a policy π, we denote by s_0, s_1, ... the state sequence produced as above.
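The definitions above can be made concrete with a small simulator. This is a toy illustration of the MDP and discounted-return definitions only; the dictionary encoding and the two-state example are our own, not part of the paper's construction.

```python
import random

def discounted_return(mdp, policy, s0, gamma, horizon, rng):
    """Sample a path under a policy and return its gamma-discounted return.

    mdp: dict with 'P' mapping (state, action) -> list of
         (next_state, prob) pairs, and 'R' mapping state -> reward.
    """
    total, s = 0.0, s0
    for t in range(horizon):
        total += (gamma ** t) * mdp['R'][s]      # accumulate gamma^t R(s_t)
        a = policy(s)                            # policy: state -> action
        states, probs = zip(*mdp['P'][(s, a)])
        s = rng.choices(states, weights=probs, k=1)[0]
    return total

# A deterministic two-state toy MDP: action 'stay' keeps the state.
mdp = {'P': {('good', 'stay'): [('good', 1.0)],
             ('bad', 'stay'): [('bad', 1.0)]},
       'R': {'good': 1.0, 'bad': 0.0}}
val = discounted_return(mdp, lambda s: 'stay', 'good', 0.5, 10,
                        random.Random(0))
print(val)  # partial geometric series: 2 - 0.5**9 = 1.998046875
```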
A policy π★ is optimal if, for every policy π and every start state, the expected γ-discounted return under π★ is at least that under π. We consider MDPs where the transition law is represented as a dynamic Bayes net, or DBN-MDPs. Namely, if the state space is {0, 1}^n, then the transition law is represented by a two-layer Bayes net. The first layer contains variables representing the n state variables at any given time t, along with the action chosen at time t. The second layer contains n variables representing the state variables at time t + 1. All directed edges in the Bayes net go from variables in the first layer to variables in the second layer; for our result, it suffices to consider Bayes nets in which the indegree of every second-layer node is bounded by some constant. Each second-layer node has a conditional probability table (CPT) describing its conditional distribution for every possible setting of its parents in the Bayes net. Thus the stochastic dynamics of the DBN-MDP are entirely described by the Bayes net in the standard way; the next-state distribution for any state is given by simply fixing the first-layer nodes to the settings given by the state. Any given action choice then yields the next-state distribution according to standard Bayes net semantics. We shall assume throughout that the rewards are a linear function of the state.

3 Arthur-Merlin Games

The complexity class AM is a probabilistic extension of the familiar class NP, and is typically described in terms of Arthur-Merlin games (see [2]). An Arthur-Merlin game for a language L is played by two players (Turing machines): V (the Verifier, often referred to as Arthur in the literature), who is equipped with a random coin and only modest (polynomial-time bounded) computing power; and P (the Prover, often referred to as Merlin), who is computationally unbounded. Both are supplied with the same input x of length n bits. For instance, x might be some standard encoding of an undirected graph G, and P might be interested in proving to V that G is 3-colorable. Thus, P seeks to prove that x ∈ L; V is skeptical but willing to listen.
At each step of the conversation, V flips a fair coin, perhaps several times, and reports the resulting bits to P; this is interpreted as a "question" or "challenge" to P. In the graph-coloring example, it might be reasonable to interpret the random bits generated by V as identifying a random edge in G, with the challenge to P being to identify the colors of the nodes on each end of this edge (which had better be different, and consistent with any previous responses of P, if V is to be convinced). P then responds with some number of bits, and the protocol proceeds to the next round. After poly(n) steps, V decides, based upon the conversation, whether to accept that x ∈ L or to reject. We say that the language L is in the class AM[poly] if there is a (polynomial-time) algorithm for V such that: when x ∈ L, there is always a strategy for P to generate the responses to the random challenges that causes V to accept; and when x ∉ L, regardless of how P responds to the random challenges, V rejects with probability at least 2/3, where the probability is taken over the random challenges. In other words, we ask that there be a polynomial-time algorithm V such that if x ∈ L, there is always some response to the random challenge sequence that will convince V of this fact; but if x ∉ L, then every way of responding to the random challenge sequence has an overwhelming probability of being "caught" by V. What is the power of the class AM[poly]? From the definition, it should be clear that every language in NP has an (easy) AM[poly] protocol in which P, the prover, ignores the random challenges and simply presents V with the standard NP witness to x ∈ L (e.g., a specific 3-coloring of the graph G). More surprisingly, every language in the class PSPACE (the class of all languages that can be recognized in deterministic polynomial space, conjectured to be much larger than NP) also has an AM[poly] protocol, a beautiful and important result due to [6, 9]. (For definitions of classes such as P, NP, and PSPACE, see [8, 4].)
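The 3-coloring example can be sketched as a single toy round: Arthur's coins name a random edge, Merlin answers with the endpoint colors, and Arthur accepts the round iff they differ. This toy omits the commitment machinery a real protocol needs (here a cheating Merlin cannot adapt his coloring to the challenge only because we fix it in advance); it is purely an illustration of the challenge-response structure.

```python
import random

def arthur_round(graph_edges, merlin_coloring, rng):
    """One round of the toy 3-coloring Arthur-Merlin game.

    Arthur's random challenge is an edge; Merlin replies with the
    colors of its endpoints; Arthur accepts the round iff they differ.
    """
    u, v = rng.choice(graph_edges)                   # Arthur's challenge
    cu, cv = merlin_coloring[u], merlin_coloring[v]  # Merlin's reply
    return cu != cv

# Triangle graph: 3-colorable, so an honest Merlin wins every round,
# while a Merlin committed to an improper coloring is eventually caught.
edges = [(0, 1), (1, 2), (0, 2)]
honest = {0: 'r', 1: 'g', 2: 'b'}
cheat = {0: 'r', 1: 'r', 2: 'b'}  # edge (0, 1) is monochromatic
rng = random.Random(0)
print(all(arthur_round(edges, honest, rng) for _ in range(100)))  # True
print(all(arthur_round(edges, cheat, rng) for _ in range(100)))   # False, w.h.p.
```

Each round catches the cheating coloring with probability 1/3, so repetition drives the soundness error down exponentially, mirroring the amplification implicit in the class definition.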
If a language has an Arthur-Merlin game in which Arthur asks only a constant number of questions, we say that it is in AM. NP corresponds to Arthur-Merlin games in which Arthur says nothing, and thus clearly NP ⊆ AM. Restricting the number of questions seems to put severe limitations on the power of Arthur-Merlin games: though AM[poly] = PSPACE, it is generally believed that NP ⊆ AM ⊊ PSPACE.

4 DBN-MDPs Requiring Large Policies

In this section, we outline our construction proving that factored MDPs may not have any succinct representation for (even approximately) optimal policies, and conclude this note with a formal statement of the result. Let us begin by drawing a high-level analogy with the MDP setting. Let L be a language in PSPACE, and let V and P be the Turing machines for the AM protocol for L. Since V is simply a Turing machine, it has some internal configuration (sufficient to completely describe the tape contents, read/write head position, abstract computational state, and so on) at any given moment in the protocol with P. Since we assume P is all-powerful (computationally unbounded), we can assume that P has complete knowledge of this internal state of V at all times. The protocol at round t can thus be viewed as follows: V is in some configuration s_t; a random bit sequence r_t (the challenge) is generated; based on s_t and r_t, P computes some response or action a_t; and based on r_t and a_t, V enters its next configuration s_{t+1}. From this description, several observations can be made: V's internal configuration constitutes state in the Markovian sense — combined with the action a_t, it entirely determines the next-state distribution. The dynamics are probabilistic due to the influence of the random bit sequence r_t. We can thus view P as implementing a policy in the MDP determined by (the internal configuration of) V — P's actions, together with the stochastic challenges, determine the evolution of the state. Informally, we might imagine defining the total return to P to be 1 if P causes V to accept, and 0 if V rejects.
The MDP so defined is not arbitrarily complex — in particular, the transition dynamics are defined by the polynomial-time Turing machine V. At a high level, then, if every MDP so defined by a language in AM[poly] had an "efficient" policy, then something remarkable would occur: the arbitrary power allowed to P in the definition of the class would have been unnecessary. We shall see that this would have extraordinary and rather implausible complexity-theoretic implications. For the moment, let us simply sketch the refinements to this line of thought that will allow us to make the connection to factored MDPs: we will show that the MDPs defined above can actually be represented by DBN-MDPs with only constant indegree and a linear reward function. As suggested, this will allow us to assert rather strong negative results about even the existence of efficient policies, even when we ask for rather weak approximation to the optimal return. We now turn to the problem of planning in a DBN-MDP. Typically, one might like to have a "general-purpose" planning procedure — a procedure that takes as input a description of a DBN-MDP M, and returns a description of the optimal policy π★. This is what is typically meant by the term planning, and we note that it demands a certain kind of uniformity — a single planning algorithm that can efficiently compute a succinct representation of the optimal policy for any DBN-MDP. Note that the existence of such a planning algorithm would certainly imply that every DBN-MDP has a succinct representation of its optimal policy — but the converse does not hold. It could be that the difficulty of planning in DBN-MDPs arises from the demand of uniformity — that is, that every DBN-MDP possesses a succinct optimal policy, but the problem of computing it from the MDP parameters is intractable.
This would be analogous to problems in NP — for example, every 3-colorable graph obviously has a succinct description of a 3-coloring, but it is difficult to compute one from the graph. As mentioned in the introduction, it has been known for some time that planning in this uniform sense is computationally intractable. Here we establish the stronger and conceptually important result that it is not the uniformity giving rise to the difficulty, but rather that there simply exist DBN-MDPs in which the optimal policy does not possess a succinct representation in any natural parameterization. We will present a specific family of DBN-MDPs M_n, whose states are described by poly(n) binary components, and show that, under a standard complexity-theoretic assumption, the corresponding family of optimal policies cannot be represented by arbitrary Boolean circuits of size polynomial in n. We note that such circuits constitute a universal representation of efficiently computable functions, and all of the standard parametric forms in wide use in AI and statistics can be computed by such circuits.

We now provide the details of the construction. Let L be any language in PSPACE, and let V be a polynomial-time Turing machine running in time T(n) on inputs of length n, implementing the algorithm of "Arthur" in the AM protocol for L. Let m be the maximum number of bits needed to write down a complete configuration of V that may arise during computation on an input of length n (so m = O(T(n)), since no computation taking time T(n) can consume more than T(n) space). Each state of our DBN-MDP will have m components, each corresponding to one bit of the encoding of a configuration. No states will have rewards, except for the accepting states, which have reward 1. (Without loss of generality, we may assume that V never enters an accepting state other than at time T(n).) Note that we can encode configurations so that there is one bit position (say, the first bit of the state vector) that records whether the current state of V is accepting or not.
Thus the reward function is obviously linear (it is simply a constant times the first component). There are two actions, 0 and 1. Each action advances the simulation of the AM game by one time step. There are three types of steps:

1. Steps where P is choosing a bit to send to V; action b corresponds to choosing to send the bit b to V.
2. Steps where V is flipping a coin; each action yields probability 1/2 of having the coin come up "heads".
3. Steps where V is doing deterministic computation; each action moves the computation ahead one step.

It is straightforward to encode this as a DBN-MDP. Note that each bit of the next-move relation of a Turing machine depends on only a constant number of bits of the preceding configuration (i.e., on the bits encoding the contents of the neighboring cells, the bits encoding the presence or absence of the input head in one of those cells, and the bits encoding the finite-state information of the Turing machine). Thus the DBN-MDP describing V on inputs of length n has constant indegree; each bit is connected to the bits on which it depends. Note that a path in this MDP corresponding to an accepting computation of V on an input of length n has total reward 1, while a rejecting path has reward 0. A routine calculation shows that the expected reward of the optimal policy is equal to the fraction of coin-flip sequences that cause V to accept when communicating with an optimal P. That is, Prob[V accepts against an optimal P] = optimal expected reward. With the construction above, we can now state our result:

Theorem 1. If PSPACE is not contained in P/POLY, then there is a family of DBN-MDPs M_n, n ≥ 1, such that for any two polynomials p and q, there exist infinitely many n such that no circuit of size p(n) can compute a policy having expected reward greater than 1/q(n) times the optimum.
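The identity relating the optimal expected reward to the acceptance probability against an optimal prover can be illustrated by expectimax over a small abstract protocol tree: prover steps maximize over actions, coin steps average uniformly, and leaves carry reward 1 (accept) or 0 (reject). The tuple encoding below is a hypothetical illustration, not the paper's DBN encoding of configurations.

```python
def game_value(node):
    """Optimal expected reward of the protocol-as-MDP, by expectimax.

    A node is ('prover', children) where the policy picks the best
    action, ('coin', children) where a fair coin picks uniformly, or
    ('leaf', reward) with reward 1 for accept and 0 for reject. The
    value at the root is Prob[V accepts] against an optimal prover.
    """
    kind, payload = node
    if kind == 'leaf':
        return payload
    values = [game_value(child) for child in payload]
    if kind == 'prover':
        return max(values)                # Merlin maximizes
    return sum(values) / len(values)      # Arthur's fair coin averages

# Arthur flips one coin; for each outcome Merlin has a response that
# leads to acceptance, so the optimal expected reward is 1.0.
tree = ('coin', [
    ('prover', [('leaf', 1), ('leaf', 0)]),
    ('prover', [('leaf', 0), ('leaf', 1)]),
])
print(game_value(tree))  # 1.0
```

A succinct circuit computing near-optimal actions in this game would let a small device stand in for the unbounded prover, which is the lever the proof below pulls on.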
Before giving the formal proof, we remark that the assumption that PSPACE is not contained in P/POLY is standard and widely believed; informally, it asserts that not everything that can be computed in polynomial space can be computed by a non-uniform family of small circuits.

Proof. Let L be any language in PSPACE that is not in P/POLY, and let M_n be as described above. Suppose, contrary to the statement of the Theorem, that for all large enough n there is indeed a circuit C_n of size p(n) computing a policy for M_n whose return is within a factor 1/q(n) of optimal. We now consider the probabilistic circuit D_n that operates as follows. D_n takes a string x as input and estimates the expected return of the policy given by C_n (which is the same as the probability that the prover associated with C_n is able to convince V that x ∈ L). Specifically, D_n builds the state corresponding to the start state of the protocol on input x, and then repeats the following procedure polynomially many times: given state s, if s encodes a configuration in which it is P's turn, use C_n to compute the message sent by P and set s to the new state of the AM protocol; otherwise, if s encodes a configuration in which it is V's turn, flip a coin at random and set s to the new state of the AM protocol; repeat until an accept or reject state is encountered. If any of these repetitions results in an accept, D_n accepts; otherwise D_n rejects. Note now that if x ∈ L, then the probability that D_n rejects is exponentially small, since in this case we are guaranteed that each iteration will accept with probability at least 1/q(n). On the other hand, if x ∉ L, then D_n accepts with probability bounded well below 1, since each iteration accepts with probability at most the soundness error of the protocol. As D_n has polynomial size and a probabilistic circuit can be simulated by a deterministic one of essentially the same size, it follows that L is in P/POLY, a contradiction.
It is worth mentioning that, by the worst-case-to-average-case reduction of [1], if PSPACE is not in P/POLY then we can select such a language so that the circuit will perform badly on a non-negligible fraction of inputs. That is, not only is it hard to find an optimal policy, it will be the case that every policy that can be expressed as a polynomial size circuit will perform very badly on very many inputs. Finally, we remark that by coupling the above construction with the approximate lower bound protocol of [3], one can prove (under a stronger assumption) that there are no succinct policies for the DBN-MDPs which even approximate the optimum return to within an exponential factor. Theorem 2. If PSPACE is not contained in AM, then there is a family of DBN-MDPs, one for each input length n, such that for any polynomial p there exist infinitely many n such that no circuit of size p(n) can compute a policy having expected reward greater than an exponentially small fraction of the optimum. References [1] L. Babai, L. Fortnow, N. Nisan, and A. Wigderson. BPP has subexponential time simulations unless EXPTIME has publishable proofs. Computational Complexity, 3:307–318, 1993. [2] L. Babai and S. Moran. Arthur-Merlin games: a randomized proof system, and a hierarchy of complexity classes. Journal of Computer and System Sciences, 36(2):254–276, 1988. [3] S. Goldwasser and M. Sipser. Private coins versus public coins in interactive proof systems. Advances in Computing Research, 5:73–90, 1989. [4] D. Johnson. A catalog of complexity classes. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, volume A. The MIT Press, 1990. [5] P. Liberatore. The size of MDP factored policies. In Proceedings of AAAI 2002. AAAI Press, 2002. [6] C. Lund, L. Fortnow, H. Karloff, and N. Nisan. Algebraic methods for interactive proof systems. Journal of the ACM, 39(4):859–868, 1992. [7] M. Mundhenk, J. Goldsmith, C. Lusena, and E. Allender. Complexity of finite-horizon Markov decision process problems.
Journal of the ACM, 47(4):681–720, 2000. [8] C. Papadimitriou. Computational Complexity. Addison-Wesley, 1994. [9] A. Shamir. IP = PSPACE. Journal of the ACM, 39(4):869–877, 1992.
2002
A Prototype for Automatic Recognition of Spontaneous Facial Actions M.S. Bartlett, G. Littlewort, B. Braathen, T.J. Sejnowski, and J.R. Movellan Institute for Neural Computation and Department of Biology, University of California, San Diego, and Howard Hughes Medical Institute at the Salk Institute Email: {marni, gwen, bjorn, terry, javier}@inc.ucsd.edu Abstract We present ongoing work on a project for automatic recognition of spontaneous facial actions. Spontaneous facial expressions differ substantially from posed expressions, similar to how continuous, spontaneous speech differs from isolated words produced on command. Previous methods for automatic facial expression recognition assumed images were collected in controlled environments in which the subjects deliberately faced the camera. Since people often nod or turn their heads, automatic recognition of spontaneous facial behavior requires methods for handling out-of-image-plane head rotations. Here we explore an approach based on 3-D warping of images into canonical views. We evaluated the performance of the approach as a front-end for a spontaneous expression recognition system using support vector machines and hidden Markov models. This system employed general purpose learning mechanisms that can be applied to recognition of any facial movement. The system was tested for recognition of a set of facial actions defined by the Facial Action Coding System (FACS). We showed that 3D tracking and warping, followed by machine learning techniques applied directly to the warped images, is a viable and promising technology for automatic facial expression recognition. One exciting aspect of the approach presented here is that information about movement dynamics emerged out of filters which were derived from the statistics of images. 1 Introduction Much of the early work on computer vision applied to facial expressions focused on recognizing a few prototypical expressions of emotion produced on command (e.g., "smile").
These examples were collected under controlled imaging conditions with subjects deliberately facing the camera. Extending these systems to spontaneous facial behavior is a critical step forward for applications of this technology. Spontaneous facial expressions differ substantially from posed expressions, similar to how continuous, spontaneous speech differs from isolated words produced on command. Spontaneous facial expressions are mediated by a distinct neural pathway from posed expressions. The pyramidal motor system, originating in the cortical motor strip, drives voluntary facial actions, whereas involuntary, emotional facial expressions appear to originate in a subcortical motor circuit involving the basal ganglia, limbic system, and the cingulate motor area (e.g. [15]). Psychophysical work has shown that spontaneous facial expressions differ from posed expressions in a number of ways [6]. Subjects often contract different facial muscles when asked to pose an emotion such as fear versus when they are actually experiencing fear. (See Figure 1b.) In addition, the dynamics are different. Spontaneous expressions have a fast and smooth onset, with apex coordination, in which muscle contractions in different parts of the face peak at the same time. In posed expressions, the onset tends to be slow and jerky, and the muscle contractions typically do not peak simultaneously. Spontaneous facial expressions often contain much information beyond what is conveyed by basic emotion categories, such as happy, sad, or surprised. Faces convey signs of cognitive state such as interest, boredom, and confusion, conversational signals, and blends of two or more emotions. Instead of classifying expressions into a few basic emotion categories, the work presented here attempts to measure the full range of facial behavior by recognizing facial animation units that comprise facial expressions. The system is based on the Facial Action Coding System (FACS) [7]. 
FACS [7] is the leading method for measuring facial movement in behavioral science. It is a human judgment system that is presently performed without aid from computer vision. In FACS, human coders decompose facial expressions into action units (AUs) that roughly correspond to independent muscle movements in the face (see Figure 1). Ekman and Friesen described 46 independent facial movements, or "facial actions" (Figure 1). These facial actions are analogous to phonemes for facial expression. Over 7000 distinct combinations of such movements have been observed in spontaneous behavior. [Figure 1: The Facial Action Coding System decomposes facial expressions into component actions. The three individual brow-region actions (AU1, inner brow raiser, central frontalis; AU2, outer brow raiser, lateral frontalis; AU4, brow lowerer: corrugator, depressor supercilii, depressor glabellae) and selected combinations (1+2, 1+4, 1+2+4) are illustrated. When subjects pose fear they often perform 1+2 (top right), whereas spontaneous fear reliably elicits 1+2+4 (bottom right) [6].] Advantages of FACS include: (1) Objectivity. It does not apply interpretive labels to expressions but rather a description of physical changes in the face. This enables studies of new relationships between facial movement and internal state, such as the facial signals of stress or fatigue. (2) Comprehensiveness. FACS codes all independent motions of the face observed by behavioral psychologists over 20 years of study. (3) Robust link with ground truth. There is over 20 years of behavioral data on the relationships between FACS movement parameters and underlying emotional or cognitive states. Automated facial action coding would be effective for human-computer interaction tools and low bandwidth facial animation coding, and would have a tremendous impact on behavioral science by making objective measurement more accessible.
A number of groups have begun to analyze facial expressions into elementary movements. For example, Essa and Pentland [8] and Yacoob and Davis [16] proposed methods to analyze expressions into elementary movements using an animation-style coding system inspired by FACS. Eric Petajan's group has also worked for many years on methods for automatic coding of facial expressions in the style of MPEG4 [5], which codes movement of a set of facial feature points. While coding standards like MPEG4 are useful for animating facial avatars, they are of limited use for behavioral research since, for example, MPEG4 does not encode some behaviorally relevant facial movements such as the muscle that circles the eye (the orbicularis oculi, which differentiates spontaneous from posed smiles [6]). It also does not encode the wrinkles and bulges that are critical for distinguishing some facial muscle activations that are difficult to differentiate using motion alone yet can have different behavioral implications (e.g., see Figure 1b). One other group has focused on automatic FACS recognition as a tool for behavioral research, led by Jeff Cohn and Takeo Kanade, who present an alternative approach based on traditional computer vision techniques, including edge detection and optic flow. A comparative analysis of our approaches is available in [1, 4, 10]. 2 Factorizing rigid head motion from nonrigid facial deformations The most difficult technical challenge that came with spontaneous behavior was the presence of out-of-plane rotations, due to the fact that people often nod or turn their head as they communicate with others. Our approach to expression recognition is based on statistical methods applied directly to filter bank image representations. While in principle such methods may be able to learn the invariances underlying out-of-plane rotations, the amount of data needed to learn such invariances is likely to be impractical.
Instead, we addressed this issue by means of deformable 3D face models. We fit 3D face models to the image plane, texture those models using the original image frame, then rotate the model to frontal views, warp it to a canonical face geometry, and then render the model back into the image plane (see Figures 2, 3, 4). This allowed us to factor out image variation due to rigid head rotations from variation due to nonrigid face deformations. The rigid transformations were encoded by the rotation and translation parameters of the 3D model. These parameters are retained for analysis of the relation of rigid head dynamics to emotional and cognitive state. Since our goal was to explore the use of 3D models to handle out-of-plane rotations for expression recognition, we first tested the system using hand-labeling to give the positions of 8 facial landmarks. However, the approach can be generalized in a straightforward and principled manner to work with automatic 3D trackers, which we are presently developing [9]. Although human labeling can be highly precise, the labels employed here had substantial error due to inattention when the face moved. Mean deviation between two labelers was 4 pixels ± 8.7. Hence it may be realistic to suppose that a fully automatic head pose tracker may achieve at least this level of accuracy. [Figure 2: Head pose estimation. a. First, camera parameters and face geometry are jointly estimated using an iterative least squares technique. b. Next, head pose is estimated in each frame using stochastic particle filtering. Each particle is a head model at a particular orientation and scale.] When landmark positions in the image plane are known, the problem of 3D pose estimation is relatively easy to solve. We begin with a canonical wire-mesh face model and adapt it to the face of a particular individual by using 30 image frames in which 8 facial features have been labeled by hand.
Using an iterative least squares triangulation technique, we jointly estimate camera parameters and the 3D coordinates of these 8 features. A scattered data interpolation technique is then used to modify the canonical 3D face model so that it fits the 8 feature positions [14]. Once camera parameters and 3D face geometry are known, we use a stochastic particle filtering approach [11] to estimate the most likely rotation and translation parameters of the 3D face model in each video frame (see [2]). 3 Action unit recognition Database of spontaneous facial expressions. We employed a dataset of spontaneous facial expressions from freely behaving individuals. The dataset consisted of 300 Gigabytes of 640 x 480 color images, 8 bits per pixel, 60 fields per second, 2:1 interlaced. The video sequences contained out-of-plane head rotation up to 75 degrees. There were 17 subjects: 3 Asian, 3 African American, and 11 Caucasian. Three subjects wore glasses. The facial behaviors in one minute of video per subject were scored frame by frame by two teams of experts on the FACS system, one led by Mark Frank at Rutgers, and another led by Jeffrey Cohn at U. Pittsburgh. While the database we used was rather large for current digital video storage standards, in practice the number of spontaneous examples of each action unit in the database was relatively small. Hence, we prototyped the system on the three actions which had the most examples: blinks (AU 45 in the FACS system), for which we used 168 examples provided by 10 subjects; brow raises (AU 1+2), for which we had 48 total examples provided by 12 subjects; and brow lower (AU 4), for which we had 14 total examples provided by 12 subjects. Negative examples for each category consisted of randomly selected sequences matched by subject and sequence length. These three facial actions have relevance to applications such as monitoring of alertness, anxiety, and confusion.
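The stochastic particle filtering step described above can be sketched as a minimal sequential importance resampling (SIR) filter in the spirit of [11]. The version below (our own illustration) tracks a single hypothetical pose parameter, say head yaw, from noisy per-frame measurements; the random-walk dynamics and scalar observation model are placeholders for the system's actual 3D-model projection.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=100, dyn_sd=0.05, obs_sd=0.1):
    particles = rng.normal(0.0, 1.0, n_particles)   # initial pose hypotheses
    estimates = []
    for z in observations:
        # 1. Propagate particles through the dynamics model.
        particles = particles + rng.normal(0.0, dyn_sd, n_particles)
        # 2. Weight each particle by the observation likelihood.
        w = np.exp(-0.5 * ((z - particles) / obs_sd) ** 2)
        w /= w.sum()
        # 3. The posterior mean is the pose estimate for this frame.
        estimates.append(float(np.sum(w * particles)))
        # 4. Resample in proportion to weight to avoid degeneracy.
        particles = rng.choice(particles, size=n_particles, p=w)
    return estimates

true_pose = np.linspace(0.0, 0.5, 30)               # a slowly turning head
observations = true_pose + rng.normal(0.0, 0.1, 30)
estimates = particle_filter(observations)
```

Each particle here plays the role of a pose hypothesis, just as each particle in the system is a head model at a particular orientation and scale.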
The system presented here employs general purpose learning mechanisms that can be applied to recognition of any facial action once sufficient training data is available. There is no need to develop special purpose feature measures to recognize additional facial actions. [Figure 3: Flow diagram of the recognition system. First, head pose is estimated, and images are warped to frontal views and canonical face geometry. The warped images are then passed through a bank of Gabor filters. SVM's are then trained to classify facial actions from the Gabor representation in individual video frames. The output trajectories of the SVM's for full video sequences are then channeled to hidden Markov models.] Recognition system. An overview of the recognition system is illustrated in Figure 3. Head pose was estimated in the video sequences using a particle filter with 100 particles. Face images were then warped onto a face model with canonical face geometry, rotated to frontal, and then projected back into the image plane. This alignment was used to define and crop a subregion of the face image containing the eyes and brows. The vertical position of the eyes was 0.67 of the window height. There were 105 pixels between the eyes and 120 pixels from eyes to mouth. Pixel brightnesses were linearly rescaled to [0,255]. Soft histogram equalization was then performed on the image gray-levels by applying a logistic filter with parameters chosen to match the mean and variance of the gray-levels in the neutral frame [13]. The resulting images were then convolved with a bank of Gabor kernels at 5 spatial frequencies and 8 orientations. Output magnitudes were normalized to unit length and then downsampled by a factor of 4. The Gabor representations were then channeled to a bank of support vector machines (SVM's). Nonlinear SVM's were trained to recognize facial actions in individual video frames.
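The Gabor filter-bank stage can be sketched as follows: convolve the warped, equalized face window with complex Gabor kernels at several spatial frequencies and orientations, keep the output magnitudes, normalize the feature vector to unit length, and downsample by a factor of 4. The kernel size and parameter values below are our own illustrative choices, not the system's.

```python
import numpy as np

def gabor_kernel(freq, theta, sigma=4.0, size=15):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * freq * xr)  # complex carrier

def gabor_magnitudes(img, freqs=(0.1, 0.2), n_orient=4, downsample=4):
    feats = []
    for f in freqs:
        for k in range(n_orient):
            kern = gabor_kernel(f, np.pi * k / n_orient)
            # Same-size (circular) convolution via the FFT.
            out = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kern, img.shape))
            feats.append(np.abs(out)[::downsample, ::downsample].ravel())
    v = np.concatenate(feats)
    return v / np.linalg.norm(v)                      # unit-length output

img = np.random.default_rng(1).random((60, 60))
vec = gabor_magnitudes(img)
```

Taking magnitudes discards the Gabor phase, which gives the representation some tolerance to small misalignments before it reaches the SVM bank.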
The training samples for the SVM's were the action peaks as identified by the FACS experts, and negative examples were randomly selected frames matched by subject. Generalization to novel subjects was tested using leave-one-out cross-validation. The SVM output was the margin (distance along the normal to the class partition). Trajectories of SVM outputs for the full video sequence of test subjects were then channeled to hidden Markov models (HMM's). The HMM's were trained to classify facial actions without using information about which frame contained the action peak. Generalization to novel subjects was again tested using leave-one-out cross-validation. [Figure 4: User interface for the FACS recognition system. The face on the bottom right is an original frame from the dataset. Top right: estimate of head pose. Center image: warped to frontal view and canonical geometry. The curve shows the output of the blink detector for the video sequence. This frame is in the relaxation phase of a blink.] 4 Results Classifying individual frames with SVM's. SVM's were first trained to discriminate images containing the peak of blink sequences from randomly selected images containing no blinks. A nonlinear SVM applied to the Gabor representations obtained 95.9% correct for discriminating blinks from non-blinks for the peak frames. The nonlinear kernel was a radial function of the Euclidean distance between inputs, with a constant width parameter. Recovering FACS dynamics. Figure 5a shows the time course of SVM outputs for complete sequences of blinks. Although the SVM was not trained to measure the amount of eye opening, it is an emergent property. In all time courses shown, the SVM outputs are test outputs (the SVM was not trained on the subject shown). Figure 5b shows the SVM trajectory when tested on a sequence with multiple peaks. The SVM outputs provide information about FACS dynamics that was previously unavailable by human coding due to time constraints.
Current coding methods provide only the beginning and end of the action, along with the location and magnitude of the action unit peak. This information about dynamics may be useful for future behavioral studies. [Figure 5: a. Blink trajectories of SVM outputs for four different subjects. A star indicates the location of the AU peak as coded by the human FACS expert. b. SVM output trajectory for a blink with multiple peaks (flutter). c. Brow raise trajectories of SVM outputs for one subject. Letters A-D indicate the intensity of the AU as coded by the human FACS expert, and are placed at the peak frame.] HMM's were trained to classify action units from the trajectories of SVM outputs. HMM's addressed the case in which the frame containing the action unit peak is unknown. Two hidden Markov models, one for blinks and one for random sequences matched by subject and length, were trained and tested using leave-one-out cross-validation. A mixture of Gaussians model was employed. Test sequences were assigned to the category for which the probability of the sequence given the model was greatest. The number of states was varied from 1 to 10, and the number of Gaussian mixtures was varied from 1 to 7. Best performance of 98.2% correct was obtained using 6 states and 7 Gaussians. Brow movement discrimination. The goal was to discriminate three action units localized around the eyebrows. Since this is a 3-category task and SVM's are originally designed for binary classification tasks, we trained a different SVM on each possible binary decision task: Brow Raise (AU 1+2) versus matched random sequences, Brow Lower (AU 4) versus another set of matched random sequences, and Brow Raise versus Brow Lower. The output of these three SVM's was then fed to an HMM for classification. The input to the HMM consisted of three values, the outputs of each of the three 2-category SVM's.
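The HMM decision stage can be sketched as follows: one HMM per class scores a trajectory of SVM outputs, and the sequence is assigned to the higher-likelihood model. Below is a log-domain forward algorithm for a 1-D Gaussian-emission HMM with hand-set, purely illustrative parameters (the system itself trains mixture-of-Gaussians HMM's on real SVM trajectories):

```python
import numpy as np

def log_likelihood(seq, pi, A, means, sds):
    """Forward-algorithm log-likelihood for a 1-D Gaussian-emission HMM."""
    def log_emis(x):
        return -0.5 * ((x - means) / sds) ** 2 - np.log(sds * np.sqrt(2 * np.pi))
    alpha = np.log(pi) + log_emis(seq[0])
    for x in seq[1:]:
        m = alpha.max()                               # log-sum-exp trick
        alpha = m + np.log(np.exp(alpha - m) @ A) + log_emis(x)
    m = alpha.max()
    return float(m + np.log(np.exp(alpha - m).sum()))

pi = np.array([0.9, 0.1])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
# A "blink" model expects an excursion to high SVM margins; a "random"
# model expects the margin to stay near zero. All values are hand-set.
blink = dict(pi=pi, A=A, means=np.array([0.0, 2.0]), sds=np.array([0.5, 0.5]))
rand  = dict(pi=pi, A=A, means=np.array([0.0, 0.0]), sds=np.array([0.5, 0.5]))

seq = np.array([0.1, 0.5, 1.8, 2.1, 1.9, 0.4, 0.0])   # blink-like trajectory
is_blink = log_likelihood(seq, **blink) > log_likelihood(seq, **rand)
```

Because the forward algorithm sums over all state paths, no knowledge of which frame contains the action peak is needed, matching how the HMM's are used here.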
As for the blinks, the HMM's were trained on the "test" outputs of the SVM's. The HMM's achieved 78.2% accuracy using 10 states, 7 Gaussians, and including the first derivatives of the observation sequence in the input. Separate HMM's were also trained to perform each of the 2-category brow movement discriminations in image sequences. These results are summarized in Table 1. Figure 5c shows example output trajectories for the SVM trained to discriminate Brow Raise from matched random sequences. As with the blinks, we see that despite not being trained to indicate AU intensity, an emergent property of the SVM output was the magnitude of the brow raise. Maximum SVM output for each sequence was positively correlated with action unit intensity, as scored by the human FACS expert. The contribution of Gabors was examined by comparing linear and nonlinear SVM's applied directly to the difference images versus to Gabor outputs. Consistent with our previous findings [12], Gabor filters made the space more linearly separable than the raw difference images. For blink detection, a linear SVM on the Gabors performed significantly better (93.5%) than a linear SVM applied directly to difference images (78.3%). Using a nonlinear SVM with difference images improved performance substantially to 95.9%, whereas the nonlinear SVM on Gabors gave only a small increment in performance, also to 95.9%. A similar pattern was obtained for the brow movements, except that nonlinear SVM's applied directly to difference images did not perform as well as nonlinear SVM's applied to Gabors.

Table 1: Summary of results. All performances are for generalization to novel subjects. Random: random sequences matched by subject and length. N: total number of positive (and also negative) examples.

    Action                            % Correct (HMM)    N
    Blink vs. Non-blink                    98.2         168
    Brow Raise vs. Random                  90.6          48
    Brow Lower vs. Random                  75.0          14
    Brow Raise vs. Brow Lower              93.5          31
    Brow Raise vs. Lower vs. Random        78.2          62
The details of this analysis, and also an analysis of the contribution of SVM's to system performance, are available in [1]. 5 Conclusions We explored an approach for handling out-of-plane head rotations in automatic recognition of spontaneous facial expressions from freely behaving individuals. The approach fits a 3D model of the face and rotates it back to a canonical pose (e.g., frontal view). We found that machine learning techniques applied directly to the warped images are a promising approach for automatic coding of spontaneous facial expressions. This approach employed general purpose learning mechanisms that can be applied to the recognition of any facial action. The approach is parsimonious and does not require defining a different set of feature parameters or image operations for each facial action. While the database we used was rather large for current digital video storage standards, in practice the number of spontaneous examples of each action unit in the database was relatively small. We therefore prototyped the system on the three actions which had the most examples. Inspection of the performance of our system shows that 14 examples were sufficient to successfully learn an action, on the order of 50 examples was sufficient to achieve performance over 90%, and on the order of 150 examples was sufficient to achieve over 98% accuracy and learn smooth trajectories. Based on these results, we estimate that a database of 250 minutes of coded, spontaneous behavior would be sufficient to train the system on the vast majority of facial actions. One exciting finding is the observation that important measurements emerged out of filters derived from the statistics of the images. For example, the output of the SVM filter matched to the blink detector could potentially be used to measure the dynamics of eyelid closure, even though the system was not designed to explicitly detect the contours of the eyelid and measure the closure (see Figure 5).
The results presented here employed hand-labeled feature points for the head pose tracking step. We are presently developing a fully automated head pose tracker that integrates particle filtering with a system developed by Matthew Brand for automatic real-time 3D tracking based on optic flow [3]. All of the pieces of the puzzle are ready for the development of automated systems that recognize spontaneous facial actions at the level of detail required by FACS. Collection of a much larger, realistic database to be shared by the research community is a critical next step. Acknowledgments Support for this project was provided by ONR N00014-02-1-0616, NSF-ITR IIS-0220141 and IIS-0086107, DCI contract No. 2000-I-058500-000, and California Digital Media Innovation Program DiMI 01-10130. References [1] M.S. Bartlett, B. Braathen, G. Littlewort-Ford, J. Hershey, I. Fasel, T. Marks, E. Smith, T.J. Sejnowski, and J.R. Movellan. Automatic analysis of spontaneous facial behavior: A final project report. Technical Report UCSD MPLab TR 2001.08, University of California, San Diego, 2001. [2] B. Braathen, M.S. Bartlett, G. Littlewort-Ford, and J.R. Movellan. 3-D head pose estimation from video by nonlinear stochastic particle filtering. In Proceedings of the 8th Joint Symposium on Neural Computation, 2001. [3] M. Brand. Flexible flow for 3D nonrigid tracking and shape recovery. In CVPR, 2001. [4] J.F. Cohn, T. Kanade, T. Moriyama, Z. Ambadar, J. Xiao, J. Gao, and H. Imamura. A comparative study of alternative FACS coding algorithms. Technical Report CMU-RI-TR-02-06, Robotics Institute, Carnegie Mellon University, 2001. [5] P. Doenges, F. Lavagetto, J. Ostermann, I.S. Pandzic, and E. Petajan. MPEG-4: Audio/video and synthetic graphics/audio for real-time, interactive media delivery. Image Communications Journal, 5(4), 1997. [6] P. Ekman. Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage. W.W. Norton, New York, 3rd edition, 2001. [7] P. Ekman and W. Friesen.
Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, Palo Alto, CA, 1978. [8] I. Essa and A. Pentland. Coding, analysis, interpretation, and recognition of facial expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):757–763, 1997. [9] I.R. Fasel, M.S. Bartlett, and J.R. Movellan. A comparison of Gabor filter methods for automatic detection of facial landmarks. In Proceedings of the 5th International Conference on Face and Gesture Recognition, 2002. Accepted. [10] M.G. Frank, P. Perona, and Y. Yacoob. Automatic extraction of facial action codes: Final report and panel recommendations for automatic facial action coding. Unpublished manuscript, Rutgers University, 2001. [11] G. Kitagawa. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics, 5(1):1–25, 1996. [12] G. Littlewort-Ford, M.S. Bartlett, and J.R. Movellan. Are your eyes smiling? Detecting genuine smiles with support vector machines and Gabor wavelets. In Proceedings of the 8th Joint Symposium on Neural Computation, 2001. [13] J.R. Movellan. Visual speech recognition with stochastic networks. In G. Tesauro, D.S. Touretzky, and T. Leen, editors, Advances in Neural Information Processing Systems, volume 7, pages 851–858. MIT Press, Cambridge, MA, 1995. [14] Frédéric Pighin, Jamie Hecker, Dani Lischinski, Richard Szeliski, and David H. Salesin. Synthesizing realistic facial expressions from photographs. Computer Graphics, 32(Annual Conference Series):75–84, 1998. [15] W. E. Rinn. The neuropsychology of facial expression: A review of the neurological and psychological mechanisms for producing facial expressions. Psychological Bulletin, 95(1):52–77, 1984. [16] Y. Yacoob and L. Davis. Recognizing human facial expressions from long image sequences using optical flow. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(6):636–642, 1996.
2002
Exact MAP Estimates by (Hyper)tree Agreement Martin J. Wainwright, Department of EECS, UC Berkeley, Berkeley, CA 94720 martinw@eecs.berkeley.edu Tommi S. Jaakkola and Alan S. Willsky, Department of EECS, Massachusetts Institute of Technology, Cambridge, MA 02139 {tommi, willsky}@mit.edu Abstract We describe a method for computing provably exact maximum a posteriori (MAP) estimates for a subclass of problems on graphs with cycles. The basic idea is to represent the original problem on the graph with cycles as a convex combination of tree-structured problems. A convexity argument then guarantees that the optimal value of the original problem (i.e., the log probability of the MAP assignment) is upper bounded by the combined optimal values of the tree problems. We prove that this upper bound is met with equality if and only if the tree problems share an optimal configuration in common. An important implication is that any such shared configuration must also be the MAP configuration for the original problem. Next we develop a tree-reweighted max-product algorithm for attempting to find convex combinations of tree-structured problems that share a common optimum. We give necessary and sufficient conditions for a fixed point to yield the exact MAP estimate. An attractive feature of our analysis is that it generalizes naturally to convex combinations of hypertree-structured distributions. 1 Introduction Integer programming problems arise in various fields, including machine learning, statistical physics, communication theory, and error-correcting coding. In many cases, such problems can be formulated in terms of undirected graphical models [e.g., 1], in which the cost function corresponds to a graph-structured probability distribution, and the problem of interest is to find the maximum a posteriori (MAP) configuration. In previous work [2], we have shown how to use convex combinations of tree-structured distributions in order to upper bound the log partition function.
In this paper, we apply similar ideas to upper bound the log probability of the MAP configuration. As we show, this upper bound is met with equality whenever there is a configuration that is optimal for all trees, in which case it must also be a MAP configuration for the original problem. The work described here also makes connections with the max-product algorithm [e.g., 3, 4, 5], a well-known method for attempting to compute the MAP configuration, one which is exact for trees but approximate for graphs with cycles. In the context of coding problems, Frey and Koetter [4] developed an attenuated version of max-product, which is guaranteed to find the MAP codeword if it converges. One contribution of this paper is to develop a tree-reweighted max-product algorithm that attempts to find a collection of tree-structured problems that share a common optimum. This algorithm, though similar to both the standard and attenuated max-product updates [4], differs in key ways. The remainder of this paper is organized as follows. The next two subsections provide background on exponential families and convex combinations. In Section 2, we introduce the basic form of the upper bounds on the log probability of the MAP assignment, and then develop necessary and sufficient conditions for it to be tight (i.e., met with equality). In Section 3, we develop tree-reweighted max-product algorithms for attempting to find a convex combination of trees that yields a tight bound. We prove that for positive compatibility functions, the algorithm always has at least one fixed point; moreover, if a key uniqueness condition is satisfied, the configuration specified by a fixed point must be MAP optimal. We also illustrate how the algorithm, like the standard max-product algorithm [5], can fail if the uniqueness condition is not satisfied. We conclude in Section 4 with pointers to related work, and extensions of the current work. 1.1 Notation and set-up Consider an undirected (simple) graph $G = (V, E)$.
For each vertex $s \in V$, let $x_s$ be a random variable taking values in the discrete space $\mathcal{X}_s = \{0, 1, \ldots, m_s - 1\}$. We use the letters $j, k$ to denote particular elements of the sample space $\mathcal{X}_s$. The overall random vector $\mathbf{x} = \{x_s \mid s \in V\}$ takes values in the Cartesian product space $\mathcal{X}^N = \mathcal{X}_1 \times \cdots \times \mathcal{X}_N$, where $N = |V|$. We make use of the following exponential representation of a graph-structured distribution $p(\mathbf{x})$. For some index set $\mathcal{A}$, we let $\phi = \{\phi_\alpha \mid \alpha \in \mathcal{A}\}$ denote a collection of potential functions defined on the cliques of $G$, and let $\theta = \{\theta_\alpha \mid \alpha \in \mathcal{A}\}$ be a vector of real-valued weights on these potential functions. The exponential family determined by $\phi$ is the collection of distributions $p(\mathbf{x}; \theta) \propto \exp\{\sum_{\alpha \in \mathcal{A}} \theta_\alpha \phi_\alpha(\mathbf{x})\}$. In a minimal exponential representation, the functions $\{\phi_\alpha\}$ are affinely independent. For example, one minimal representation of a binary process (i.e., $\mathcal{X}_s = \{0, 1\}$ for all $s$) using pairwise potential functions is the usual Ising model, in which the collection of potentials is $\phi = \{x_s \mid s \in V\} \cup \{x_s x_t \mid (s,t) \in E\}$. In this case, the index set is $\mathcal{A} = V \cup E$. In most of our analysis, we use an overcomplete representation, in which there are linear dependencies among the potentials $\{\phi_\alpha\}$. In particular, we use indicator functions as potentials:

$\delta_j(x_s)$, for $s \in V$, $j \in \mathcal{X}_s$   (1a)
$\delta_j(x_s)\,\delta_k(x_t)$, for $(s,t) \in E$, $(j,k) \in \mathcal{X}_s \times \mathcal{X}_t$   (1b)

where the indicator function $\delta_j(x_s)$ is equal to one if $x_s = j$, and zero otherwise. In this case, the index set $\mathcal{A}$ consists of the union of the node indices $\{(s; j) \mid s \in V,\ j \in \mathcal{X}_s\}$ with the edge indices $\{(st; jk) \mid (s,t) \in E,\ (j,k) \in \mathcal{X}_s \times \mathcal{X}_t\}$. Of interest to us is the maximum a posteriori configuration $\hat{\mathbf{x}} = \arg\max_{\mathbf{x} \in \mathcal{X}^N} p(\mathbf{x}; \bar{\theta})$.
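Read numerically, the overcomplete representation makes the energy a sum of table lookups: each node contributes the parameter selected by $x_s$, and each edge the parameter selected by the pair $(x_s, x_t)$. The toy two-node binary graph and parameter values below are our own illustration; the MAP configuration is then recovered by the exhaustive maximization that defines the integer program.

```python
# Energy in the overcomplete indicator representation: node and edge
# table lookups. The two-node binary graph below is hypothetical and
# favors the agreeing configuration (1, 1).

def energy(x, theta_node, theta_edge, edges):
    val = sum(theta_node[s][x[s]] for s in range(len(x)))
    val += sum(theta_edge[(s, t)][x[s]][x[t]] for (s, t) in edges)
    return val

theta_node = {0: [0.0, 0.5], 1: [0.0, 0.5]}
theta_edge = {(0, 1): [[1.0, 0.0], [0.0, 1.0]]}   # rewards agreement
edges = [(0, 1)]

# The integer program: maximize the energy by exhaustive enumeration.
configs = [(x0, x1) for x0 in (0, 1) for x1 in (0, 1)]
x_map = max(configs, key=lambda x: energy(x, theta_node, theta_edge, edges))
```

Exhaustive enumeration is exponential in the number of nodes, which is exactly why the paper seeks tree-based upper bounds instead.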
Equivalently, we can express this MAP configuration as the solution of the integer program F(θ̄) = max_{x ∈ X^N} G(x; θ̄), where

G(x; θ) ≡ ⟨θ, φ(x)⟩ = Σ_{s ∈ V} Σ_{j ∈ X_s} θ_{s;j} φ_{s;j}(x_s) + Σ_{(s,t) ∈ E} Σ_{(j,k)} θ_{st;jk} φ_{st;jk}(x_s, x_t)   (2)

Note that the function F(θ) is the maximum of a collection of linear functions, and hence is convex [6] as a function of θ, which is a key property for our subsequent development.

1.2 Convex combinations of trees

Let θ̄ be a particular parameter vector for which we are interested in computing F(θ̄). In this section, we show how to derive upper bounds via the convexity of F. Let T denote a particular spanning tree of G, and let 𝒯 denote the set of all spanning trees. For each spanning tree T ∈ 𝒯, let θ(T) be an exponential parameter vector of the same dimension as θ̄ that respects the structure of T. To be explicit, if T is defined by an edge set E(T) ⊂ E, then θ(T) must have zeros in all elements corresponding to edges not in E(T). However, given an edge (s, t) belonging to two trees T_1 and T_2, the quantity θ_st(T_1) can be different than θ_st(T_2). For compactness, let θ ≡ {θ(T) | T ∈ 𝒯} denote the full collection, where the notation θ(T) specifies those subelements of θ corresponding to spanning tree T. In order to define a convex combination, we require a probability distribution μ over the set of spanning trees — that is, a vector μ ≡ {μ(T) | T ∈ 𝒯, μ(T) ≥ 0} such that Σ_{T ∈ 𝒯} μ(T) = 1. For any distribution μ, we define its support, denoted by supp(μ), to be the set of trees to which it assigns strictly positive probability. In the sequel, we will also be interested in the probability μ_e = Pr_μ{e ∈ T} that a given edge e ∈ E appears in a spanning tree T chosen randomly under μ. We let μ_E = {μ_e | e ∈ E} represent a vector of edge appearance probabilities, which must belong to the spanning tree polytope [see 2]. We say that a distribution μ (or the vector μ_E) is valid if μ_e > 0 for every edge e ∈ E.
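The edge appearance probabilities μ_e can be checked numerically on a toy graph. The following sketch is our own (the 4-cycle is a made-up example): it enumerates spanning trees by brute force and verifies that, under the uniform tree distribution, every edge has μ_e = 3/4, and that the vector sums to |V| − 1, as any vector in the spanning tree polytope must.

```python
import itertools

# 4-cycle: vertices 0..3, edges around the cycle (a made-up example).
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]

def is_spanning_tree(edges):
    """n-1 edges with no cycle span a connected n-vertex graph;
    cycles are detected with a small union-find."""
    parent = list(range(len(V)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for (s, t) in edges:
        rs, rt = find(s), find(t)
        if rs == rt:
            return False  # this edge would close a cycle
        parent[rs] = rt
    return len(edges) == len(V) - 1

trees = [T for T in itertools.combinations(E, len(V) - 1) if is_spanning_tree(T)]
mu = 1.0 / len(trees)  # uniform distribution over spanning trees
mu_e = {e: sum(mu for T in trees if e in T) for e in E}
```

On the 4-cycle there are exactly four spanning trees (drop any one edge), so each edge appears in three of the four.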
A convex combination of exponential parameter vectors is defined via the weighted sum Σ_{T ∈ 𝒯} μ(T) θ(T), which we denote compactly as E_μ[θ(T)]. Of particular importance are collections of exponential parameters for which there exists a convex combination that is equal to θ̄. Accordingly, we define the set A(θ̄) ≡ { (θ; μ) | E_μ[θ(T)] = θ̄ }. For any valid distribution μ, it can be seen that there exist pairs (θ; μ) ∈ A(θ̄).

Example 1 (Single cycle). To illustrate these definitions, consider a binary distribution (X_s = {0, 1} for all nodes s ∈ V) defined by a single cycle on 4 nodes. Consider a target distribution in the minimal Ising form p(x; θ̄) ∝ exp{x_1 x_2 + x_2 x_3 + x_3 x_4 + x_4 x_1}; otherwise stated, the target distribution is specified by the minimal parameter θ̄ = [0 0 0 0 | 1 1 1 1], where the zeros represent the fact that θ̄_s = 0 for all s ∈ V.

Figure 1: A convex combination of four distributions p(x; θ(T_i)), each defined by a spanning tree T_i, is used to approximate the target distribution p(x; θ̄) on the single-cycle graph.

The four possible spanning trees 𝒯 = {T_i | i = 1, ..., 4} of a single cycle on four nodes are illustrated in Figure 1. We define a set of associated exponential parameters θ = {θ(T_i)} as follows:

θ(T_1) = (4/3) [0 0 0 0 | 1 1 1 0]
θ(T_2) = (4/3) [0 0 0 0 | 1 1 0 1]
θ(T_3) = (4/3) [0 0 0 0 | 1 0 1 1]
θ(T_4) = (4/3) [0 0 0 0 | 0 1 1 1]

Finally, we choose μ(T_i) = 1/4 for all T_i ∈ 𝒯. With this uniform distribution over trees, we have μ_e = 3/4 for each edge e, and E_μ[θ(T)] = θ̄, so that (θ; μ) ∈ A(θ̄).

2 Optimal upper bounds

With the set-up of the previous section, the basic form of the upper bounds follows by applying Jensen's inequality [6]. In particular, for any pair (θ; μ) ∈ A(θ̄), we have the upper bound F(θ̄) ≤ E_μ[F(θ(T))]. The goal of this section is to examine this bound, and understand when it is met with equality.
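Example 1 and the Jensen-type bound can both be verified by exhaustive search. The sketch below is ours, using the (4/3)-weighted tree parameters of the single-cycle example; it confirms admissibility and that the bound holds, here with equality, since the all-ones configuration is optimal for every tree.

```python
import itertools

# Minimal Ising representation on the 4-cycle: theta indexed by the 4 edge
# weights (all node weights are zero, as in the single-cycle example).
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
theta_bar = [1.0, 1.0, 1.0, 1.0]

# Each spanning tree T_i drops one edge; its parameter scales the rest by 4/3.
thetas = [[0.0 if i == drop else 4.0 / 3.0 for i in range(4)] for drop in range(4)]
mu = [0.25] * 4

# Admissibility: the convex combination of tree parameters equals theta_bar.
combo = [sum(mu[t] * thetas[t][i] for t in range(4)) for i in range(4)]

def F(theta):
    """F(theta) = max over binary x of sum_e theta_e * x_s * x_t."""
    best = float("-inf")
    for x in itertools.product([0, 1], repeat=4):
        val = sum(theta[i] * x[s] * x[t] for i, (s, t) in enumerate(E))
        best = max(best, val)
    return best

bound = sum(mu[t] * F(thetas[t]) for t in range(4))
# Here the bound is tight: x = (1, 1, 1, 1) is optimal for every tree.
```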
In more explicit terms, the upper bound can be written as:

F(θ̄) ≤ Σ_T μ(T) F(θ(T)) = Σ_T μ(T) max_{x ∈ X^N} ⟨θ(T), φ(x)⟩   (3)

Now suppose that there exists an x̂ ∈ X^N that attains the maximum defining F(θ(T)) for each tree T ∈ supp(μ). In this case, it is clear that the bound (3) is met with equality. An important implication is that the configuration x̂ also attains the maximum defining F(θ̄), so that it is an optimal solution to the original problem. In fact, as we show below, the converse to this statement also holds. More formally, for any exponential parameter vector θ(T), let OPT(θ(T)) be the collection of configurations x that attain the maximum defining F(θ(T)), defined as follows:

OPT(θ(T)) = { x ∈ X^N | ⟨θ(T), φ(x)⟩ ≥ ⟨θ(T), φ(x')⟩ for all x' ∈ X^N }   (4)

With this notation, the critical property is that the intersection OPT(θ; μ) ≡ ∩_{T ∈ supp(μ)} OPT(θ(T)) of configurations optimal for all tree-structured problems is non-empty. We thus have the following result:

Proposition 1 (Tightness of bound). The bound of equation (3) is tight if and only if there exists a configuration x̂ ∈ X^N that for each T ∈ supp(μ) achieves the maximum defining F(θ(T)). In other words, x̂ ∈ OPT(θ; μ).

Proof. Consider some pair (θ; μ) ∈ A(θ̄). Let x̂ be a configuration that attains the maximum defining F(θ̄). We write the difference of the RHS and the LHS of equation (3) as follows:

0 ≤ [ Σ_T μ(T) F(θ(T)) ] − F(θ̄)
  = [ Σ_T μ(T) F(θ(T)) ] − ⟨θ̄, φ(x̂)⟩
  = Σ_T μ(T) [ F(θ(T)) − ⟨θ(T), φ(x̂)⟩ ]

Now for each T ∈ supp(μ), the term F(θ(T)) − ⟨θ(T), φ(x̂)⟩ is non-negative, and equal to zero only when x̂ belongs to OPT(θ(T)). Therefore, the bound is met with equality if and only if x̂ achieves the maximum defining F(θ(T)) for all trees T ∈ supp(μ).

Proposition 1 motivates the following strategy: given a spanning tree distribution μ, find a collection of exponential parameters θ* = {θ*(T)} such that the following holds:

(a) Admissibility: The pair (θ*; μ) satisfies Σ_T μ(T) θ*(T) = θ̄.
(b) Mutual agreement: The intersection ∩_T OPT(θ*(T)) of tree-optimal configurations is non-empty.

If (for a fixed μ) we are able to find a collection θ* satisfying these two properties, then Proposition 1 guarantees that all configurations in the (non-empty) intersection ∩_T OPT(θ*(T)) achieve the maximum defining F(θ̄). As discussed above, assuming that μ assigns strictly positive probability to every edge in the graph, satisfying the admissibility condition is not difficult. It is the second condition of mutual optimality on all trees that poses the challenge.

3 Mutual agreement via equal max-marginals

We now develop an algorithm that attempts to find, for a given spanning tree distribution μ, a collection θ* = {θ*(T)} satisfying both of these properties. Interestingly, this algorithm is related to the ordinary max-product algorithm [3, 5], but differs in several key ways. While this algorithm can be formulated in terms of reparameterization [e.g., 5], here we present a set of message-passing updates.

3.1 Max-marginals

The foundation of our development is the fact [1] that any tree-structured distribution p(x; θ(T)) can be factored in terms of its max-marginals. In particular, for each node s ∈ V, the corresponding single-node max-marginal is defined as follows:

ν_s(x_s) = max_{ {x' | x'_s = x_s} } p(x'; θ(T))   (5)

In words, for each x_s ∈ X_s, ν_s(x_s) is the maximum probability over the subset of configurations x' with element x'_s fixed to x_s. For each edge (s, t) ∈ E, the pairwise max-marginal is defined analogously as ν_st(x_s, x_t) = max_{ {x' | (x'_s, x'_t) = (x_s, x_t)} } p(x'; θ(T)). With these definitions, the max-marginal tree factorization [1] is given by:

p(x; θ(T)) ∝ Π_{s ∈ V} ν_s(x_s) Π_{(s,t) ∈ E(T)} [ ν_st(x_s, x_t) / ( ν_s(x_s) ν_t(x_t) ) ]   (6)

One interpretation of the ordinary max-product algorithm for trees, as shown in our related work [5], is as computing this alternative representation. Suppose moreover that for each node s ∈ V, the following uniqueness condition holds:

Uniqueness Condition: For each s ∈ V
, the max-marginal ν_s has a unique optimum x*_s. In this case, the vector x* = {x*_s | s ∈ V} is the MAP configuration for the tree-structured distribution [see 5].

3.2 Tree-reweighted max-product

The tree-reweighted max-product method is a message-passing algorithm, with fixed points that specify a collection of tree exponential parameters θ* = {θ*(T)} satisfying the admissibility condition. The defining feature of θ* is that the associated tree distributions p(x; θ*(T)) all share a common set ν* = {ν*_s, ν*_st} of max-marginals. In particular, for a given tree T with edge set E(T), the distribution p(x; θ*(T)) is specified compactly by the subcollection ν*(T) = {ν*_s | s ∈ V} ∪ {ν*_st | (s, t) ∈ E(T)} as follows:

p(x; θ*(T)) = κ Π_{s ∈ V} ν*_s(x_s) Π_{(s,t) ∈ E(T)} [ ν*_st(x_s, x_t) / ( ν*_s(x_s) ν*_t(x_t) ) ]   (7)

where κ is a constant¹ independent of x. As long as ν* satisfies the Uniqueness Condition, the configuration x* = {x*_s | s ∈ V} must be the MAP configuration for each tree-structured distribution p(x; θ*(T)). This mutual agreement on trees, in conjunction with the admissibility of θ*, implies that x* is also the MAP configuration for p(x; θ̄).

For each valid μ_E, there exists a tree-reweighted max-product algorithm designed to find the requisite set ν* of max-marginals via a sequence of message-passing operations. For each edge (s, t) ∈ E, let M_ts(x_s) be the message passed from node t to node s. It is a vector of length m_s, with one element for each state j ∈ X_s. We use φ_s(x_s; θ̄_s) as a shorthand for Σ_j θ̄_{s;j} φ_{s;j}(x_s), with the quantity φ_st(x_s, x_t; θ̄_st) similarly defined. We use the messages M = {M_ts} to specify a set of functions ν = {ν_s, ν_st} as follows:

ν_s(x_s) = exp( φ_s(x_s; θ̄_s) ) Π_{v ∈ Γ(s)} [ M_vs(x_s) ]^{μ_vs}   (8a)

ν_st(x_s, x_t) = φ_st(x_s, x_t; θ̄) × { Π_{v ∈ Γ(s)\t} [ M_vs(x_s) ]^{μ_vs} / [ M_ts(x_s) ]^{(1 − μ_st)} } × { Π_{v ∈ Γ(t)\s} [ M_vt(x_t) ]^{μ_vt} / [ M_st(x_t) ]^{(1 − μ_st)} }   (8b)

where φ_st(x_s, x_t; θ̄) ≡ exp{ (1/μ_st) φ_st(x_s, x_t; θ̄_st) + φ_s(x_s; θ̄_s) + φ_t(x_t; θ̄_t) }.

¹ We use this notation throughout the paper, where the value of κ may change from line to line.

For each tree T, the subcollection ν(T) can be used to define a tree-structured distribution p_ν(x; T), in a manner analogous to equation (7). By expanding the expectation E_μ[log p_ν(x; T)] and making use of the definitions of ν_s and ν_st, we can prove the following:

Lemma 1 (Admissibility). Given any collection ν = {ν_s, ν_st} defined by a set of messages as in equations (8a) and (8b), the convex combination Σ_T μ(T) log p_ν(x; T) is equivalent to log p(x; θ̄) up to an additive constant.

We now need to ensure that {ν_s, ν_st} are a consistent set of max-marginals for each tree distribution p_ν(x; T). It is sufficient [1, 5] to impose, for each edge (s, t) ∈ E, the edgewise consistency condition max_{x'_t ∈ X_t} ν_st(x_s, x'_t) = κ ν_s(x_s). In order to enforce this condition, we update the messages in the following manner:

Algorithm 1 (Tree-reweighted max-product).
1. Initialize the messages M⁰ = {M⁰_ts} with arbitrary positive real numbers.
2. For iterations n = 0, 1, 2, ..., update the messages as follows:

M^{n+1}_ts(x_s) = κ max_{x'_t ∈ X_t} { exp( (1/μ_st) φ_st(x_s, x'_t; θ̄_st) + φ_t(x'_t; θ̄_t) ) × Π_{v ∈ Γ(t)\s} [ M^n_vt(x'_t) ]^{μ_vt} / [ M^n_st(x'_t) ]^{(1 − μ_st)} }   (9)

Using the definitions of ν_s and ν_st, as well as the message update equation (9), the following result can be proved:

Lemma 2 (Edgewise consistency). Let M* be a fixed point of the message update equation (9), and let ν* = {ν*_s, ν*_st} be defined via M* as in equations (8a) and (8b) respectively. Then the edgewise consistency condition is satisfied.

The message update equation (9) is similar to the standard max-product algorithm [3, 5]. Indeed, if G is actually a tree, then we must have μ_st = 1 for every edge (s, t) ∈ E, in which case equation (9) is precisely equivalent to the ordinary max-product update. However, if G has cycles, then it is impossible to have μ_st = 1 for every edge (s, t) ∈ E, so that the updates in equation (9) differ from ordinary max-product in some key ways.
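As a toy illustration of the reweighted update (9) (our own sketch, not the authors' code), the following runs the message updates on a 3-cycle with binary states, where the uniform spanning-tree distribution gives μ_e = 2/3 for every edge; the potentials and the bias toward state 1 are made up.

```python
import math

# 3-cycle with binary states; mu_e = 2/3 for every edge under the uniform
# spanning-tree distribution (each edge lies in 2 of the 3 spanning trees).
V = [0, 1, 2]
E = [(0, 1), (1, 2), (2, 0)]
rho = 2.0 / 3.0                              # edge appearance probability
theta_s = {v: [0.0, 0.3] for v in V}         # made-up bias toward state 1
theta_st = {e: {(a, b): (1.0 if a == b else 0.0)
                for a in (0, 1) for b in (0, 1)} for e in E}

def nbrs(t):
    return [v for e in E for v in e if t in e and v != t]

# One message per directed edge; key (t, s) means "from t to s".
M = {(t, s): [1.0, 1.0] for e in E for (t, s) in (e, e[::-1])}
for _ in range(50):
    new = {}
    for (t, s) in M:
        e = (t, s) if (t, s) in theta_st else (s, t)
        vals = []
        for xs in (0, 1):
            # Reweighted update: coupling scaled by 1/rho, incoming messages
            # raised to rho, divided by the reverse message to the power 1-rho.
            best = max(
                math.exp(theta_st[e][(xt, xs) if e == (t, s) else (xs, xt)] / rho
                         + theta_s[t][xt])
                * math.prod(M[(v, t)][xt] ** rho for v in nbrs(t) if v != s)
                / (M[(s, t)][xt] ** (1.0 - rho))
                for xt in (0, 1))
            vals.append(best)
        new[(t, s)] = [v / sum(vals) for v in vals]
    M = new

# Single-node pseudo-max-marginals as in (8a), and the decoded configuration.
nu = {v: [math.exp(theta_s[v][x])
          * math.prod(M[(u, v)][x] ** rho for u in nbrs(v))
          for x in (0, 1)] for v in V}
x_star = tuple(max((0, 1), key=lambda x: nu[v][x]) for v in V)
```

For this attractive, biased model the pseudo-max-marginals favor state 1 at every node, so the decoded configuration is the true MAP assignment (1, 1, 1).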
First of all, the weight θ̄_st on the potential function φ_st is scaled by the (inverse of the) edge appearance probability 1/μ_st. Secondly, for each neighbor v ∈ Γ(t)\s, the incoming message M_vt is scaled by the corresponding edge appearance probability μ_vt. Third of all, in sharp contrast to standard [3] and attenuated [4] max-product updates, the update of message M_ts — that is, from t to s along edge (s, t) — depends on the reverse-direction message M_st from s to t along the same edge. Despite these differences, the messages can be updated synchronously as in ordinary max-product. It is also possible to perform reparameterization updates over spanning trees, analogous to but distinct from those for ordinary max-product [5]. Such tree-based updates can be terminated once the trees agree on a common configuration, which may happen prior to message convergence [7].

3.3 Analysis of fixed points

In related work [5], we established the existence of fixed points for the ordinary max-product algorithm for positive compatibility functions on an arbitrary graph. The same proof can be adapted to show that the tree-reweighted max-product algorithm also has at least one fixed point M*. Any such fixed point M* defines pseudo-max-marginals ν* via equations (8a) and (8b), which (by design of the algorithm) have the following property:

Theorem 1 (Exact MAP). If ν* satisfies the Uniqueness Condition, then the configuration x* with elements x*_s = arg max_{x_s ∈ X_s} ν*_s(x_s) is a MAP configuration for p(x; θ̄).

Proof. For each spanning tree T ∈ supp(μ), the fixed point M* defines a tree-structured distribution p(x; θ*(T)) via equation (7). By Lemma 2, the elements of ν* are edgewise consistent. By the equivalence of edgewise and global consistency for trees [1], the subcollection ν*(T) = {ν*_s | s ∈ V} ∪ {ν*_st | (s, t) ∈ E(T)} are exact max-marginals for the tree-structured distribution p(x; θ*(T)).
As a consequence, the configuration x* must belong to OPT(θ*(T)) for each tree T, so that mutual agreement is satisfied. By Lemma 1, the convex combination Σ_T μ(T) log p(x; θ*(T)) is equal to log p(x; θ̄) up to an additive constant, so that admissibility is satisfied. Proposition 1 then implies that x* is a MAP configuration for p(x; θ̄).

3.4 Failures of tree-reweighted max-product

In all of our experiments so far, the message updates of equation (9), if suitably relaxed, have always converged.² Rather than convergence problems, the breakdown of the algorithm appears to stem primarily from failure of the Uniqueness Condition. If this assumption is not satisfied, we are no longer guaranteed that the mutual agreement condition is satisfied (i.e., OPT(θ*; μ) may be empty). Indeed, a configuration x* belongs to OPT(θ*; μ) if and only if the following conditions hold:

Node optimality: The element x*_s must achieve max_{x_s} ν*_s(x_s) for every s ∈ V.
Edge optimality: The pair (x*_s, x*_t) must achieve max_{(x_s, x_t)} ν*_st(x_s, x_t) for all (s, t) ∈ E.

For a given fixed point ν* that fails the Uniqueness Condition, it may or may not be possible to satisfy these conditions, as the following example illustrates.

Example 2. Consider the single cycle on three vertices, as illustrated in Figure 2. We define a distribution p(x; θ̄) in an indirect manner, by first defining a set of pseudo-max-marginals ν* in panel (a). Here β > 0 is a parameter to be specified. Observe that the symmetry of this construction ensures that ν* satisfies the edgewise consistency condition (Lemma 2) for any β > 0. For each of the three spanning trees of this graph, the collection ν* defines a tree-structured distribution p_ν*(x; T) as in equation (7). We define the underlying distribution via log p(x; θ̄) = Σ_T μ(T) log p_ν*(x; T), where μ is the uniform distribution (weight 1/3 on each tree).
In the case β > 1, illustrated in panel (b), it can be seen that two configurations — namely (0, 0, 0) and (1, 1, 1) — satisfy the node and edgewise optimality conditions. Therefore, each of these configurations is a global maximum for the cost function Σ_T μ(T) log p_ν*(x; T). On the other hand, when β < 1, as illustrated in panel (c), any configuration x* that is edgewise optimal for all three edges must satisfy x*_s ≠ x*_t for all (s, t) ∈ E. This is clearly impossible, so that the fixed point ν* cannot be used to specify a MAP assignment. Of course, it should be recognized that this example was contrived to break down the algorithm. It should also be noted that, as shown in our related work [5], the standard max-product algorithm can also break down when this Uniqueness Condition is not satisfied.

² In a relaxed message update, we take an α-step towards the new (log) message, where α ∈ (0, 1] is the step-size parameter. To date, we have not been able to prove that relaxed updates will always converge.

Figure 2: Cases where the Uniqueness Condition fails. (a) Specification of the pseudo-max-marginals ν*. (b) For β > 1, both (0, 0, 0) and (1, 1, 1) are node and edgewise optimal. (c) For β < 1, no configurations are node and edgewise optimal on the full graph.
For problems involving a binary-valued random vector, we have isolated a class of problems for which the upper bound is guaranteed to be tight. We have also investigated the Lagrangian dual associated with the upper bound (3). The dual has a natural interpretation as a tree-relaxed linear program, and has been applied to turbo decoding [7]. Finally, the analysis and upper bounds of this paper can be extended in a straightforward manner to hypertrees of higher width. In this context, hypertree-reweighted forms of generalized max-product updates [see 5] can again be used to find optimal upper bounds, which (when they are tight) again yield exact MAP configurations.

References

[1] R. G. Cowell, A. P. Dawid, S. L. Lauritzen, and D. J. Spiegelhalter. Probabilistic Networks and Expert Systems. Statistics for Engineering and Information Science. Springer-Verlag, 1999.

[2] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. A new class of upper bounds on the log partition function. In Proc. Uncertainty in Artificial Intelligence, volume 18, pages 536-543, August 2002.

[3] W. T. Freeman and Y. Weiss. On the optimality of solutions of the max-product belief propagation algorithm in arbitrary graphs. IEEE Trans. Info. Theory, 47:736-744, 2001.

[4] B. J. Frey and R. Koetter. Exact inference using the attenuated max-product algorithm. In Advanced Mean Field Methods: Theory and Practice. MIT Press, 2000.

[5] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. Tree consistency and bounds on the max-product algorithm and its generalizations. LIDS Tech. Report P-2554, MIT; available online at http://www.eecs.berkeley.edu/~martinw, July 2002.

[6] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, 1995.

[7] J. Feldman, M. J. Wainwright, and D. R. Karger. Linear programming-based decoding and its relation to iterative approaches. In Proc. Allerton Conf. Comm. Control and Computing, October 2002.
Prediction of Protein Topologies Using Generalized IOHMMs and RNNs

Gianluca Pollastri and Pierre Baldi, Department of Information and Computer Science, University of California, Irvine, Irvine, CA 92697-3425, gpollast,pfbaldi@ics.uci.edu
Alessandro Vullo and Paolo Frasconi, Dipartimento di Sistemi e Informatica, Università di Firenze, Via di Santa Marta 3, 50139 Firenze, ITALY, vullo,paolo@dsi.unifi.it

Abstract

We develop and test new machine learning methods for the prediction of topological representations of protein structures in the form of coarse- or fine-grained contact or distance maps that are translation and rotation invariant. The methods are based on generalized input-output hidden Markov models (GIOHMMs) and generalized recursive neural networks (GRNNs). The methods are used to predict topology directly in the fine-grained case and, in the coarse-grained case, indirectly by first learning how to score candidate graphs and then using the scoring function to search the space of possible configurations. Computer simulations show that the predictors achieve state-of-the-art performance.

1 Introduction: Protein Topology Prediction

Predicting the 3D structure of protein chains from the linear sequence of amino acids is a fundamental open problem in computational molecular biology [1]. Any approach to the problem must deal with the basic fact that protein structures are translation and rotation invariant. To address this invariance, we have proposed a machine learning approach to protein structure prediction [4] based on the prediction of topological representations of proteins, in the form of contact or distance maps. The contact or distance map is a 2D representation of neighborhood relationships consisting of an adjacency matrix at some distance cutoff (typically in the range of 6 to 12 Å), or a matrix of pairwise Euclidean distances. Fine-grained maps are derived at the amino acid or even atomic level.
Coarse maps are obtained by looking at secondary structure elements, such as helices, and the distance between their centers of gravity or, as in the simulations below, the minimal distances between their Cα atoms. Reasonable methods for reconstructing 3D coordinates from contact/distance maps have been developed in the NMR literature and elsewhere [14] using distance geometry and stochastic optimization techniques.

Figure 1: Bayesian network for bidirectional IOHMMs consisting of input units, output units, and both forward and backward Markov chains of hidden states.

Thus the main focus here is on the more difficult task of contact map prediction. Various algorithms for the prediction of contact maps have been developed, in particular using feedforward neural networks [6]. The best contact map predictor in the literature and at the last CASP prediction experiment reports an average precision [True Positives/(True Positives + False Positives)] of 21% for distant contacts, i.e. with a linear distance of 8 amino acids or more [6] for fine-grained amino acid maps. While this result is encouraging and well above chance level by a factor greater than 6, it is still far from providing sufficient accuracy for reliable 3D structure prediction. A key issue in this area is the amount of noise that can be tolerated in a contact map prediction without compromising the 3D-reconstruction step. While systematic tests in this area have not yet been published, preliminary results appear to indicate that recovery of as little as half of the distant contacts may suffice for proper reconstruction, at least for proteins up to 150 amino acids long (Rita Casadio and Piero Fariselli, private communication and oral presentation during CASP4 [10]).
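For concreteness, a fine-grained contact map at a given cutoff is just a thresholded pairwise distance matrix over Cα coordinates; the sketch below is ours, with made-up coordinates rather than real PDB data.

```python
import math

# Toy C-alpha coordinates for a 5-residue "protein" (made-up numbers,
# roughly 3.8 A apart, not real PDB data).
coords = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 0.0, 0.0),
          (7.6, 3.8, 0.0), (3.8, 3.8, 0.0)]
CUTOFF = 8.0  # Angstroms, within the 6-12 A range mentioned above

def contact_map(xyz, cutoff):
    """Adjacency matrix: 1 if the Euclidean distance is below the cutoff."""
    n = len(xyz)
    return [[1 if i != j and math.dist(xyz[i], xyz[j]) < cutoff else 0
             for j in range(n)] for i in range(n)]

cmap = contact_map(coords, CUTOFF)
```

The resulting matrix is symmetric with a zero diagonal; only the most distant residue pair here falls outside the 8 Å cutoff.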
It is important to realize that the input to a fine-grained contact map predictor need not be confined to the sequence of amino acids only, but may also include evolutionary information in the form of profiles derived by multiple alignment of homologue proteins, or structural feature information, such as secondary structure (alpha helices, beta strands, and coils), or solvent accessibility (surface/buried), derived by specialized predictors [12, 13]. In our approach, we use different GIOHMM and GRNN strategies to predict both structural features and contact maps.

2 GIOHMM Architectures

Loosely speaking, GIOHMMs are Bayesian networks with input, hidden, and output units that can be used to process complex data structures such as sequences, images, trees, chemical compounds and so forth, built on work in, for instance, [5, 3, 7, 2, 11]. In general, the connectivity of the graphs associated with the hidden units matches the structure of the data being processed. Often multiple copies of the same hidden graph, but with different edge orientations, are used in the hidden layers to allow direct propagation of information in all relevant directions.

Figure 2: 2D GIOHMM Bayesian network for processing two-dimensional objects such as contact maps, with nodes regularly arranged in one input plane, one output plane, and four hidden planes (NE, NW, SW, SE). In each hidden plane, nodes are arranged on a square lattice, and all edges are oriented towards the corresponding cardinal corner. Additional directed edges run vertically in column from the input plane to each hidden plane, and from each hidden plane to the output plane.

To illustrate the general idea, a first example of GIOHMM is provided by the bidirectional IOHMMs (Figure 1) introduced in [2] to process sequences and predict protein structural features, such as secondary structure.
Unlike standard HMMs or IOHMMs used, for instance, in speech recognition, this architecture is based on two hidden Markov chains running in opposite directions to leverage the fact that biological sequences are spatial objects rather than temporal sequences. Bidirectional IOHMMs have been used to derive a suite of structural feature predictors [12, 13, 4] available through http://promoter.ics.uci.edu/BRNN-PRED/. These predictors have accuracy rates in the 75-80% range on a per amino acid basis.

2.1 Direct Prediction of Topology

To predict contact maps, we use a 2D generalization of the previous 1D Bayesian network. The basic version of this architecture (Figure 2) contains 6 layers of units: input, output, and four hidden layers, one for each cardinal corner. Within each column indexed by i and j, connections run from the input to the four hidden units, and from the four hidden units to the output unit. In addition, the hidden units in each hidden layer are arranged on a square or triangular lattice, with all the edges oriented towards the corresponding cardinal corner. Thus the parameters of this two-dimensional GIOHMM, in the square lattice case, are the conditional probability distributions:

P(O_{i,j} | I_{i,j}, H^NE_{i,j}, H^NW_{i,j}, H^SW_{i,j}, H^SE_{i,j})
P(H^NE_{i,j} | I_{i,j}, H^NE_{i-1,j}, H^NE_{i,j-1})
P(H^NW_{i,j} | I_{i,j}, H^NW_{i+1,j}, H^NW_{i,j-1})
P(H^SW_{i,j} | I_{i,j}, H^SW_{i+1,j}, H^SW_{i,j+1})
P(H^SE_{i,j} | I_{i,j}, H^SE_{i-1,j}, H^SE_{i,j+1})   (1)

In a contact map prediction at the amino acid level, for instance, the (i, j) output represents the probability of whether amino acids i and j are in contact or not. This prediction depends directly on the (i, j) input and the four hidden units in the same column, associated with omni-directional contextual propagation in the hidden planes.
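Because each hidden plane in (1) depends only on previously computed lattice neighbors, it can be filled in a single plane-specific sweep. The sketch below is ours, with toy integer states standing in for the conditional distributions, and shows the row-by-row computation for the NE plane.

```python
# The NE hidden plane of the 2D GIOHMM is a DAG: H[i][j] depends only on
# H[i-1][j] and H[i][j-1] (plus the input), so a single West-to-East,
# South-to-North sweep computes every value (a structural sketch, not a
# trained model; the inputs and update function are made up).
N = 4
I = [[(i + j) % 3 for j in range(N)] for i in range(N)]  # toy inputs

def sweep_NE(inputs, f):
    """Fill the NE plane; out-of-grid parents take the boundary value 0."""
    H = [[0] * N for _ in range(N)]
    for i in range(N):          # South to North
        for j in range(N):      # West to East
            south = H[i - 1][j] if i > 0 else 0
            west = H[i][j - 1] if j > 0 else 0
            H[i][j] = f(inputs[i][j], south, west)
    return H

# Any deterministic function of (input, parents) works in place of the
# conditional distribution for this structural check.
H = sweep_NE(I, lambda x, a, b: x + a + b)
```

The other three planes work identically, with the scan direction reversed along one or both axes so that each node's parents are always computed first.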
In the simulations reported below, we use a more elaborate input consisting of a 20 × 20 probability matrix over amino acid pairs derived from a multiple alignment of the given protein sequence and its homologues, as well as the structural features of the corresponding amino acids, including their secondary structure classification and their relative exposure to the solvent, derived from our corresponding predictors. It should be clear how GIOHMM ideas can be generalized to other data structures and problems in many ways. In the case of 3D data, for instance, a standard GIOHMM would have an input cube, an output cube, and up to 8 cubes of hidden units, one for each corner, with connections inside each hidden cube oriented towards the corresponding corner. In the case of data with an underlying tree structure, the hidden layers would correspond to copies of the same tree with different orientations and so forth. Thus a fundamental advantage of GIOHMMs is that they can process a wide range of data structures of variable sizes and dimensions.

2.2 Indirect Prediction of Topology

Although GIOHMMs allow flexible integration of contextual information over ranges that often exceed what can be achieved, for instance, with fixed-input neural networks, the models described above still suffer from the fact that the connections remain local and therefore long-ranged propagation of information during learning remains difficult. Introduction of large numbers of long-ranged connections is computationally intractable but in principle not necessary since the number of contacts in proteins is known to grow linearly with the length of the protein, and hence connectivity is inherently sparse. The difficulty of course is that the location of the long-ranged contacts is not known.
To address this problem, we have also developed a complementary GIOHMM approach described in Figure 3, where a candidate graph structure is proposed in the hidden layers of the GIOHMM, with the two different orientations naturally associated with a protein sequence. Thus the hidden graphs change with each protein. In principle the output ought to be a single unit (Figure 3b) which directly computes a global score for the candidate structure presented in the hidden layer. In order to cope with long-ranged dependencies, however, it is preferable to compute a set of local scores (Figure 3c), one for each vertex, and combine the local scores into a global score by averaging. More specifically, consider a true topology represented by the undirected contact graph G* = (V, E*), and a candidate undirected prediction graph G = (V, E). A global measure of how well E approximates E* is provided by the information-retrieval F1 score defined by the normalized edge-overlap F1 = 2|E ∩ E*|/(|E| + |E*|) = 2PR/(P + R), where P = |E ∩ E*|/|E| is the precision (or specificity) and R = |E ∩ E*|/|E*| is the recall (or sensitivity) measure. Obviously, 0 ≤ F1 ≤ 1 and F1 = 1 if and only if E = E*. The scoring function F1 has the property of being monotone in the sense that if |E| = |E'| then F1(E) < F1(E') if and only if |E ∩ E*| < |E' ∩ E*|. Furthermore, if E' = E ∪ {e} where e is an edge in E* but not in E, then F1(E') > F1(E). Monotonicity is important to guide the search in the space of possible topologies. It is easy to check that a simple search algorithm based on F1 takes on the order of O(|V|³) steps to find E*, basically by trying all possible edges one after the other. The problem then is to learn F1, or rather a good approximation to F1.

Figure 3: Indirect prediction of contact maps. (a) Target contact graph to be predicted. (b) GIOHMM with two hidden layers: the two hidden layers correspond to two copies of the same candidate graph oriented in opposite directions from one end of the protein to the other end. The single output O is the global score of how well the candidate graph approximates the true contact map. (c) Similar to (b) but with a local score O(v) at each vertex. The local scores can be averaged to produce a global score. In (b) and (c) I(v) represents the input for vertex v, and H^F(v) and H^B(v) are the corresponding hidden variables.

To approximate F1, we first consider a similar local measure F_v by considering the set E_v of edges adjacent to vertex v and F_v = 2|E_v ∩ E*_v|/(|E_v| + |E*_v|), with the global average F̄ = Σ_v F_v/|V|. If n and n* are the average degrees of G and G*, it can be shown that:

F1 = (1/|V|) Σ_v 2|E_v ∩ E*_v| / (n + n*)   and   F̄ = (1/|V|) Σ_v 2|E_v ∩ E*_v| / (n + ε_v + n* + ε*_v)   (2)

where n + ε_v (resp. n* + ε*_v) is the degree of v in G (resp. in G*). In particular, if G and G* are regular graphs, then F1(E) = F̄(E), so that F̄ is a good approximation to F1. In the contact map regime where the number of contacts grows linearly with the length of the sequence, we should have in general |E| ≈ |E*| ≈ (1 + α)|V|, so that each node on average has n = n* = 2(1 + α) edges. The value of α depends of course on the neighborhood cutoff. As in reinforcement learning, to learn the scoring function one is faced with the problem of generating good training sets in a high-dimensional space, where the states are the topologies (graphs), and the policies are algorithms for adding a single edge to a given graph. In the simulations we adopt several different strategies including static and dynamic generation. Within dynamic generation we use three exploration strategies: random exploration (successor graph chosen at random), pure exploitation (successor graph maximizes the current scoring function), and semi-uniform exploitation to find a balance between exploration and exploitation [with probability ε (resp. 1 − ε) we choose random exploration (resp. pure exploitation)].
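The edge-overlap score above is easy to compute directly; the following sketch is ours, with a made-up target and candidate graph, and checks both the F1 value and the monotonicity property (adding a true edge strictly increases F1).

```python
# Edge-overlap F1 between a candidate edge set E and a target E*, as defined
# above; edges are stored as frozensets so orientation does not matter.
# The graphs are made-up examples.
def f1_score(E, E_star):
    inter = len(E & E_star)
    if not E or not E_star:
        return 0.0
    P = inter / len(E)        # precision (specificity)
    R = inter / len(E_star)   # recall (sensitivity)
    return 2 * P * R / (P + R) if P + R > 0 else 0.0

edge = lambda a, b: frozenset((a, b))
E_star = {edge(0, 1), edge(1, 2), edge(2, 3)}   # true contact graph
E = {edge(0, 1), edge(1, 2)}                    # candidate prediction

base = f1_score(E, E_star)
# Monotonicity: adding an edge of E* not already in E strictly increases F1.
better = f1_score(E | {edge(2, 3)}, E_star)
```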
3 GRNN Architectures

Inference and learning in the protein GIOHMMs we have described are computationally intensive due to the large number of undirected loops they contain. This problem can be addressed using a neural network reparameterization, assuming that: (a) all the nodes in the graphs are associated with a deterministic vector (note that in the case of the output nodes this vector can represent a probability distribution, so that the overall model remains probabilistic); (b) each vector is a deterministic function of its parents; (c) each function is parameterized using a neural network (or some other class of approximators); and (d) weight-sharing or stationarity is used between similar neural networks in the model. For example, in the 2D GIOHMM contact map predictor, we can use a total of 5 neural networks to recursively compute the four hidden states and the output in each column in the form:

O_ij = N_O(I_ij, H^NW_ij, H^NE_ij, H^SW_ij, H^SE_ij)
H^NE_ij = N_NE(I_ij, H^NE_{i−1,j}, H^NE_{i,j−1})
H^NW_ij = N_NW(I_ij, H^NW_{i+1,j}, H^NW_{i,j−1})
H^SW_ij = N_SW(I_ij, H^SW_{i+1,j}, H^SW_{i,j+1})
H^SE_ij = N_SE(I_ij, H^SE_{i−1,j}, H^SE_{i,j+1})   (3)

In the NE plane, for instance, the boundary conditions are set to H^NE_ij = 0 for i = 0 or j = 0. The activity vector associated with the hidden unit H^NE_ij depends on the local input I_ij and the activity vectors of the units H^NE_{i−1,j} and H^NE_{i,j−1}. Activity in the NE plane can be propagated row by row, West to East, from the first row to the last (South to North), or column by column, South to North, from the first column to the last. These GRNN architectures can be trained by gradient descent by unfolding the structures in space, leveraging the acyclic nature of the underlying GIOHMMs.

4 Data

Many data sets are available or can be constructed for training and testing purposes, as described in the references.
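A minimal numpy sketch of the NE-plane recursion in Eq. (3), with a single shared tanh layer standing in for the neural network N_NE; the dimensions, nonlinearity, and random weights are illustrative assumptions, not the paper's architecture:

```python
# Sketch of the NE-plane recursion: H_NE[i,j] depends on the local input
# I[i,j] and the already-computed neighbors H_NE[i-1,j] and H_NE[i,j-1],
# so activity can be propagated row by row from the south-west corner.
# One shared weight matrix implements the stationarity assumption (d).
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_h = 6, 8, 5                              # toy sizes (assumptions)
I = rng.normal(size=(n, n, d_in))                   # pairwise inputs I_ij
W = rng.normal(size=(d_h, d_in + 2 * d_h)) * 0.1    # shared N_NE weights

def ne_plane(I, W):
    n = I.shape[0]
    d_h = W.shape[0]
    H = np.zeros((n + 1, n + 1, d_h))   # H[0,:] and H[:,0] are the zero boundary
    for i in range(1, n + 1):           # South -> North
        for j in range(1, n + 1):       # West -> East
            x = np.concatenate([I[i - 1, j - 1], H[i - 1, j], H[i, j - 1]])
            H[i, j] = np.tanh(W @ x)
    return H[1:, 1:]

H_NE = ne_plane(I, W)   # each H_NE[i, j] summarizes the south-west context
```

The other three planes follow the same pattern with their propagation directions reversed, and the output network would combine all four hidden vectors at each (i, j).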
The data sets used in the present simulations are extracted from the publicly available Protein Data Bank (PDB) and then redundancy reduced, or from the non-homologous subset of PDB Select (ftp://ftp.embl-heidelberg.de/pub/databases/). In addition, we typically exclude structures with poor resolution (less than 2.5–3 Å), sequences containing less than 30 amino acids, and structures containing multiple sequences or sequences with chain breaks. For coarse contact maps, we use the DSSP program [9] (CMBI version) to assign secondary structures, and we also remove sequences for which DSSP crashes. The results we report for fine-grained contact maps are derived using 424 proteins with lengths in the 30–200 range for training and an additional non-homologous set of 48 proteins in the same length range for testing. For the coarse contact maps, we use a set of 587 proteins of length less than 300. Because the average length of a secondary structure element is slightly above 7, the size of a coarse map is roughly 2% the size of the corresponding amino acid map.

5 Simulation Results and Conclusions

We have trained several 2D GIOHMM/GRNN models on the direct prediction of fine-grained contact maps. Training of a single model typically takes on the order of a week on a fast workstation. A sample of validation results is reported in Table 1 for four different distance cutoffs. Overall percentages of correctly predicted contacts and non-contacts at all linear distances, as well as precision results for distant contacts (|i − j| ≥ 8), are reported for a single GIOHMM/GRNN model. The model has k = 14 hidden units in the hidden and output layers of the four hidden networks, as well as in the hidden layer of the output network. In the last row, we also report as an example the results obtained at 12 Å by an ensemble of 5 networks with k = 11, 12, 13, 14 and 15. Note that precision for distant contacts exceeds all previously reported results and is well above 50%.

Table 1: Direct prediction of amino acid contact maps. Column 1: four distance cutoffs. Columns 2, 3, and 4: overall percentages of amino acids correctly classified as contacts, non-contacts, and in total. Column 5: precision percentage for distant contacts (|i − j| ≥ 8) with a threshold of 0.5. Single-model results, except for the last line, which corresponds to an ensemble of 5 models.

Cutoff   Contact   Non-Contact   Total   Precision (P)
6 Å      .714      .998          .985    .594
8 Å      .638      .998          .970    .670
10 Å     .512      .993          .931    .557
12 Å     .433      .987          .878    .549
12 Å     .445      .990          .883    .717

For the prediction of coarse-grained contact maps, we use the indirect GIOHMM/GRNN strategy and compare different exploration/exploitation strategies: random exploration, pure exploitation, and their convex combination (semi-uniform exploitation). In the semi-uniform case we set the probability of random uniform exploration to ε = 0.4. In addition, we also try a fourth, hybrid strategy in which the search proceeds greedily (i.e., the best successor is chosen at each step, as in pure exploitation), but the network is trained by randomly sub-sampling the successors of the current state. Eight numerical features encode the input label of each node: one-hot encoding of secondary structure classes; normalized linear distances from the N to C terminus; and average, maximum and minimum hydrophobic character of the segment (based on the Kyte-Doolittle scale with a moving window of length 7). A sample of results obtained with 5-fold cross-validation is shown in Table 2. Hidden state vectors have dimension k = 5 with no hidden layers. For each strategy we measure performance by means of several indices: micro- and macro-averaged precision (mP, MP), recall (mR, MR) and F1 measure (mF1, MF1).
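To make the averaging conventions behind these indices concrete, here is a small sketch on hypothetical per-protein confusion counts (the helper names are our own, not the paper's):

```python
# Illustration (hypothetical data): micro-averages pool all element pairs
# before scoring, whereas macro-averages score each protein first and then
# average the per-protein scores.
def prf(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Per-protein confusion counts (tp, fp, fn) for two toy proteins.
proteins = [(8, 2, 4), (1, 3, 1)]

# Micro: sum the counts over proteins, then score once.
mP, mR, mF1 = prf(*[sum(c) for c in zip(*proteins)])

# Macro: score each protein, then average the scores.
per_protein = [prf(*c) for c in proteins]
MP, MR, MF1 = [sum(v) / len(v) for v in zip(*per_protein)]
```

A protein with few contacts thus weighs as much as a large one in the macro averages, but not in the micro averages.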
Micro-averages are derived based on each pair of secondary structure elements in each protein, whereas macro-averages are obtained on a per-protein basis, by first computing precision and recall for each protein and then averaging over the set of all proteins. In addition, we also measure the micro and macro averages for specificity, in the sense of the percentage of correct predictions for non-contacts (mP(nc), MP(nc)). Note the tradeoffs between precision and recall across the training methods, with the hybrid method achieving the best F1 results.

Table 2: Indirect prediction of coarse contact maps with dynamic sampling.

Strategy             mP     mP(nc)   mR     mF1    MP     MP(nc)   MR     MF1
Random exploration   .715   .769     .418   .518   .767   .709     .469   .574
Semi-uniform         .454   .787     .631   .526   .507   .767     .702   .588
Pure exploitation    .431   .806     .726   .539   .481   .793     .787   .596
Hybrid               .417   .834     .790   .546   .474   .821     .843   .607

We have presented two approaches, based on a very general IOHMM/RNN framework, that achieve state-of-the-art performance in the prediction of protein contact maps at fine- and coarse-grained levels of resolution. In principle both methods can be applied at both resolution levels, although the indirect prediction is computationally too demanding for fine-grained prediction of large proteins. Several extensions are currently under development, including the integration of these methods into complete 3D structure predictors. While these systems require long training periods, once trained they can rapidly sift through large proteomic data sets.

Acknowledgments
The work of PB and GP is supported by a Laurel Wilkening Faculty Innovation award and awards from NIH, BREP, Sun Microsystems, and the California Institute for Telecommunications and Information Technology. The work of PF and AV is partially supported by a MURST grant.

References
[1] D. Baker and A. Sali. Protein structure prediction and structural genomics. Science, 294:93–96, 2001.
[2] P. Baldi, S. Brunak, P. Frasconi, G. Soda, and G. Pollastri. Exploiting the past and the future in protein secondary structure prediction. Bioinformatics, 15(11):937–946, 1999.
[3] P. Baldi and Y. Chauvin. Hybrid modeling, HMM/NN architectures, and protein applications. Neural Computation, 8(7):1541–1565, 1996.
[4] P. Baldi and G. Pollastri. Machine learning structural and functional proteomics. IEEE Intelligent Systems, Special Issue on Intelligent Systems in Biology, 17(2), 2002.
[5] Y. Bengio and P. Frasconi. Input-output HMM's for sequence processing. IEEE Trans. on Neural Networks, 7:1231–1249, 1996.
[6] P. Fariselli, O. Olmea, A. Valencia, and R. Casadio. Prediction of contact maps with neural networks and correlated mutations. Protein Engineering, 14:835–843, 2001.
[7] P. Frasconi, M. Gori, and A. Sperduti. A general framework for adaptive processing of data structures. IEEE Trans. on Neural Networks, 9:768–786, 1998.
[8] Z. Ghahramani and M. I. Jordan. Factorial hidden Markov models. Machine Learning, 29:245–273, 1997.
[9] W. Kabsch and C. Sander. Dictionary of protein secondary structure: pattern recognition of hydrogen-bonded and geometrical features. Biopolymers, 22:2577–2637, 1983.
[10] A. M. Lesk, L. Lo Conte, and T. J. P. Hubbard. Assessment of novel fold targets in CASP4: predictions of three-dimensional structures, secondary structures, and interresidue contacts. Proteins, 45, S5:98–118, 2001.
[11] G. Pollastri and P. Baldi. Prediction of contact maps by GIOHMMs and recurrent neural networks using lateral propagation from all four cardinal corners. Proceedings of the 2002 ISMB (Intelligent Systems for Molecular Biology) Conference. Bioinformatics, 18, S1:62–70, 2002.
[12] G. Pollastri, D. Przybylski, B. Rost, and P. Baldi. Improving the prediction of protein secondary structure in three and eight classes using recurrent neural networks and profiles. Proteins, 47:228–235, 2002.
[13] G. Pollastri, P. Baldi, P. Fariselli, and R. Casadio. Prediction of coordination number and relative solvent accessibility in proteins. Proteins, 47:142–153, 2002.
[14] M. Vendruscolo, E. Kussell, and E. Domany. Recovery of protein structure from contact maps. Folding and Design, 2:295–306, 1997.
2002
Mismatch String Kernels for SVM Protein Classification

Christina Leslie, Department of Computer Science, Columbia University, cleslie@cs.columbia.edu
Eleazar Eskin, Department of Computer Science, Columbia University, eeskin@cs.columbia.edu
Jason Weston, Max-Planck Institute, Tuebingen, Germany, weston@tuebingen.mpg.de
William Stafford Noble, Department of Genome Sciences, University of Washington, noble@gs.washington.edu

Abstract
We introduce a class of string kernels, called mismatch kernels, for use with support vector machines (SVMs) in a discriminative approach to the protein classification problem. These kernels measure sequence similarity based on shared occurrences of k-length subsequences, counted with up to m mismatches, and do not rely on any generative model for the positive training sequences. We compute the kernels efficiently using a mismatch tree data structure and report experiments on a benchmark SCOP dataset, where we show that the mismatch kernel used with an SVM classifier performs as well as the Fisher kernel, the most successful method for remote homology detection, while achieving considerable computational savings.

1 Introduction
A fundamental problem in computational biology is the classification of proteins into functional and structural classes based on homology (evolutionary similarity) of protein sequence data. Known methods for protein classification and homology detection include pairwise sequence alignment [1, 2, 3], profiles for protein families [4], consensus patterns using motifs [5, 6] and profile hidden Markov models [7, 8, 9]. We are most interested in discriminative methods, where protein sequences are seen as a set of labeled examples (positive if they are in the protein family or superfamily and negative otherwise) and we train a classifier to distinguish between the two classes.
We focus on the more difficult problem of remote homology detection, where we want our classifier to detect (as positives) test sequences that are only remotely related to the positive training sequences. One of the most successful discriminative techniques for protein classification, and the best performing method for remote homology detection, is the Fisher-SVM [10, 11] approach of Jaakkola et al. In this method, one first builds a profile hidden Markov model (HMM) for the positive training sequences, defining a log-likelihood function log P(x | θ) for any protein sequence x. If θ̂ is the maximum likelihood estimate for the model parameters, then the gradient vector ∇_θ log P(x | θ), evaluated at θ = θ̂, assigns to each (positive or negative) training sequence x an explicit vector of features called Fisher scores. This feature mapping defines a kernel function, called the Fisher kernel, that can then be used to train a support vector machine (SVM) [12, 13] classifier. One of the strengths of the Fisher-SVM approach is that it combines the rich biological information encoded in a hidden Markov model with the discriminative power of the SVM algorithm. However, one generally needs a lot of data or sophisticated priors to train the hidden Markov model, and because calculating the Fisher scores requires computing forward and backward probabilities from the Baum-Welch algorithm (quadratic in sequence length for profile HMMs), in practice it is very expensive to compute the kernel matrix. In this paper, we present a new string kernel, called the mismatch kernel, for use with an SVM for remote homology detection. The (k, m)-mismatch kernel is based on a feature map to a vector space indexed by all possible subsequences of amino acids of a fixed length k; each instance of a fixed k-length subsequence in an input sequence contributes to all feature coordinates differing from it by at most m mismatches.

Footnote: William Stafford Noble was formerly William Noble Grundy; see http://www.cs.columbia.edu/˜noble/name-change.html.
Thus, the mismatch kernel adds the biologically important idea of mismatching to the computationally simpler spectrum kernel presented in [14]. In the current work, we also describe how to compute the new kernel efficiently using a mismatch tree data structure; for values of (k, m) useful in this application, the kernel is fast enough to use on real datasets and is considerably less expensive than the Fisher kernel. We report results from a benchmark dataset on the SCOP database [15] assembled by Jaakkola et al. [10] and show that the mismatch kernel used with an SVM classifier achieves performance equal to the Fisher-SVM method while outperforming all other methods tested. Finally, we note that the mismatch kernel does not depend on any generative model and could potentially be used in other sequence-based classification problems.

2 Spectrum and Mismatch String Kernels

The basis for our approach to protein classification is to represent protein sequences as vectors in a high-dimensional feature space via a string-based feature map. We then train a support vector machine (SVM), a large-margin linear classifier, on the feature vectors representing our training sequences. Since SVMs are a kernel-based learning algorithm, we do not calculate the feature vectors explicitly but instead compute their pairwise inner products using a mismatch string kernel, which we define in this section.

2.1 Feature Maps for Strings

The (k, m)-mismatch kernel is based on a feature map from the space of all finite sequences from an alphabet Σ of size |Σ| = l to the l^k-dimensional vector space indexed by the set of k-length subsequences ("k-mers") from Σ. (For protein sequences, Σ is the alphabet of amino acids, l = 20.) For a fixed k-mer α = a1 a2 ... ak, with each ai a character in Σ, the (k, m)-neighborhood generated by α is the set of all k-length sequences β from Σ that differ from α by at most m mismatches. We denote this set by N_(k,m)(α).
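A brute-force sketch of the (k, m)-neighborhood over a toy three-letter alphabet (the paper's mismatch tree enumerates this set implicitly; explicit enumeration is only practical for tiny k):

```python
# Sketch: N_(k,m)(alpha) = all k-mers within Hamming distance m of alpha,
# built by exhaustive enumeration over a toy alphabet. The real alphabet
# would be the 20 amino acids.
from itertools import product

def neighborhood(alpha, m, alphabet):
    k = len(alpha)
    return {
        "".join(beta)
        for beta in product(alphabet, repeat=k)
        if sum(a != b for a, b in zip(alpha, beta)) <= m
    }

N = neighborhood("AL", 1, "ALV")   # (k, m) = (2, 1) over a 3-letter alphabet
# |N| = sum_{i<=m} C(k, i) (l-1)^i = 1 + 2*2 = 5
```

Each k-mer instance in a sequence contributes a unit of weight to every coordinate in such a neighborhood.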
We define our feature map Φ_(k,m) as follows: if α is a k-mer, then

Φ_(k,m)(α) = (φ_β(α))_{β ∈ Σ^k}   (1)

where φ_β(α) = 1 if β belongs to N_(k,m)(α), and φ_β(α) = 0 otherwise. Thus, a k-mer contributes weight to all the coordinates in its mismatch neighborhood. For a sequence x of any length, we extend the map additively by summing the feature vectors for all the k-mers in x:

Φ_(k,m)(x) = Σ_{k-mers α in x} Φ_(k,m)(α)

Note that the β-coordinate of Φ_(k,m)(x) is just a count of all instances of the k-mer β occurring with up to m mismatches in x. The (k, m)-mismatch kernel K_(k,m) is the inner product in feature space of the feature vectors:

K_(k,m)(x, y) = ⟨Φ_(k,m)(x), Φ_(k,m)(y)⟩.

For m = 0, we retrieve the k-spectrum kernel defined in [14].

2.2 Fisher Scores and the Spectrum Kernel

While we define the spectrum and mismatch feature maps without any reference to a generative model for the positive class of sequences, there is some similarity between the k-spectrum feature map and the Fisher scores associated to an order k − 1 Markov chain model. More precisely, suppose the generative model for the positive training sequences is given by

P(x | θ) = P(x1 ... x_{k−1}) Π_{i≥k} P(x_i | x_{i−k+1} ... x_{i−1})

for a string x = x1 x2 ... xn, with parameters θ_{s | s1...s_{k−1}} = P(x_i = s | x_{i−k+1} ... x_{i−1} = s1 ... s_{k−1}) for characters s, s1, ..., s_{k−1} in the alphabet Σ. Denote by θ̂ the maximum likelihood estimate for θ on the positive training set. To calculate the Fisher scores for this model, we follow [10] and reparameterize with independent variables, one per conditional probability, satisfying the normalization constraints. The Fisher score for the parameter associated with the k-mer s1 ... s_{k−1} s then takes the form

c(s1 ... s_{k−1} s)/θ̂_{s | s1...s_{k−1}} − c(s1 ... s_{k−1}),

where c(s1 ... s_{k−1} s) is the number of instances of the k-mer s1 ... s_{k−1} s in x, and c(s1 ... s_{k−1}) is the number of instances of the (k−1)-mer s1 ... s_{k−1} in x. Thus the Fisher score captures the degree to which the k-mer s1 ... s_{k−1} s is over- or under-represented relative to the positive model. For the k-spectrum kernel, the corresponding feature coordinate looks similar but simply uses the unweighted count: φ_{s1...s_{k−1}s}(x) = c(s1 ... s_{k−1} s).

3 Efficient Computation of the Mismatch Kernel

Unlike the Fisher vectors used in [10], our feature vectors are sparse vectors in a very high-dimensional feature space. Thus, instead of calculating and storing the feature vectors, we directly and efficiently compute the kernel matrix for use with an SVM classifier.

3.1 Mismatch Tree Data Structure

We use a mismatch tree data structure (similar to a trie or suffix tree [16, 17]) to represent the feature space (the set of all k-mers) and perform a lexical traversal of all k-mers occurring in the sample dataset with up to m mismatches; the entire kernel matrix K(x_i, x_j), i, j = 1 ... n, for the sample of n sequences is computed in one traversal of the tree. A (k, m)-mismatch tree is a rooted tree of depth k where each internal node has |Σ| = l branches and each branch is labeled with a symbol from Σ. A leaf node represents a fixed k-mer in our feature space (obtained by concatenating the branch symbols along the path from root to leaf) and an internal node represents the prefix for those k-mer features which are its descendants in the tree. We use a depth-first search of this tree to store, at each node that we visit, a set of pointers to all instances of the current prefix pattern that occur with mismatches in the sample data. Thus at each node of depth d, we maintain pointers to all substrings from the sample data set whose d-length prefixes are within m mismatches of the d-length prefix represented by the path down from the root. Note that the set of valid substrings at a node is a subset of the set of valid substrings of its parent.
When we encounter a node with an empty list of pointers (no valid occurrences of the current prefix), we do not need to search below it in the tree. When we reach a leaf node, we sum the contributions of all instances occurring in each source sequence to obtain the feature values corresponding to the current k-mer, and we update the kernel matrix entry K(x_i, x_j) for each pair of source sequences x_i and x_j having non-zero feature values.

Figure 1: A mismatch tree for the sequence AVLALKAVLL, showing the valid instances at each node down a path: (a) at the root node; (b) after expanding the path A; and (c) after expanding the path AL. The number of mismatches for each instance is also indicated.

3.2 Efficiency of the Kernel Computation

Since we compute the kernel in one depth-first traversal, we do not actually need to store the entire mismatch tree but instead compute the kernel using a recursive function, which makes more efficient use of memory and allows kernel computations for large datasets. The number of k-mers within m mismatches of any given fixed k-mer is

N(k, m, l) = Σ_{i=0..m} C(k, i)(l − 1)^i = O(k^m l^m).

Thus the effective number of k-mer instances that we need to traverse grows as O(N k^m l^m), where N is the total length of the sample data. At a leaf node, if exactly q input sequences contain valid instances of the current k-mer, one performs O(q²) updates to the kernel matrix. For n sequences each of length c (total length N = nc), the worst case for the kernel computation occurs when the n feature vectors are all equal and have the maximal number of non-zero entries, giving worst-case overall running time O(n² c k^m l^m).

For the application we discuss here, small values of m are most useful, and the kernel calculations are quite inexpensive. When mismatch kernels are used in combination with SVMs, the learned classifier

f(x) = Σ_i y_i c_i ⟨Φ_(k,m)(x_i), Φ_(k,m)(x)⟩

(where the x_i are the training sequences that map to support vectors, the y_i are labels, and the c_i are weights) can be implemented by pre-computing and storing per-k-mer scores. Then the prediction f(x) can be calculated in linear time by look-up of k-mer scores. In practice, one usually wants to use a normalized feature map, so one would also need to compute the norm of the vector Φ_(k,m)(x), with complexity O(c k^m l^m) for a sequence of length c. Simpler normalization schemes, like dividing by sequence length, can also be used.

4 Experiments: Remote Protein Homology Detection

We test the mismatch kernel with an SVM classifier on the SCOP [15] (version 1.37) datasets designed by Jaakkola et al. [10] for the remote homology detection problem. In these experiments, remote homology is simulated by holding out all members of a target SCOP family from a given superfamily. Positive training examples are chosen from the remaining families in the same superfamily, and negative test and training examples are chosen from disjoint sets of folds outside the target family's fold. The held-out family members serve as positive test examples. In order to train HMMs, Jaakkola et al. used the SAM-T98 algorithm to pull in domain homologs from the non-redundant protein database and added these sequences as positive examples in the experiments. Details of the datasets are available at www.soe.ucsc.edu/research/compbio/discriminative. Because the test sets are designed for remote homology detection, we use small values of k. We tested (k, m) = (5, 1) and (6, 1), where we normalized the kernel via

K^Norm_(k,m)(x, y) = K_(k,m)(x, y) / sqrt( K_(k,m)(x, x) K_(k,m)(y, y) ).

We found that (k, m) = (5, 1) gave slightly better performance, though results were similar for the two choices. (Data for (k, m) = (6, 1) not shown.) We use a publicly available SVM implementation (www.cs.columbia.edu/compbio/svm) of the soft margin optimization algorithm described in [10]. For comparison, we include results from three other methods. These include the original experimental results from Jaakkola et al. for two methods: the SAM-T98 iterative HMM, and the Fisher-SVM method. We also test PSI-BLAST [3], an alignment-based method widely used in the biological community, on the same data using the methodology described in [14]. Figure 2 illustrates the mismatch-SVM method's performance relative to three existing homology detection methods as measured by ROC scores. The figure includes results for all 33 SCOP families, and each series corresponds to one homology detection method. Qualitatively, the curves for Fisher-SVM and mismatch-SVM are quite similar. When we compare the overall performance of two methods using a two-tailed signed rank test [18, 19] based on ROC scores over the 33 families, with a p-value threshold of 0.05 and including a Bonferroni adjustment to account for multiple comparisons, we find only the following significant differences: Fisher-SVM and mismatch-SVM perform better than SAM-T98 (with p-values 1.3e-02 and 2.7e-02, respectively); and these three methods all perform significantly better than PSI-BLAST in this experiment. Figure 3 shows a family-by-family comparison of the performance of the (5,1)-mismatch-SVM and Fisher-SVM, using ROC scores in plot (A) and ROC-50 scores in plot (B).¹ In both plots, the points fall approximately evenly above and below the diagonal, indicating little difference in performance between the two methods. Figure 4 shows the improvement provided by including mismatches in the SVM kernel.
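The one-pass depth-first kernel computation described in Section 3 can be sketched as follows; this is an unoptimized transcription of the pruned-traversal idea, not the authors' code:

```python
# Sketch of the mismatch-tree traversal: carry the surviving
# (sequence, offset, mismatch-count) instances for the current prefix,
# prune nodes with no instances, and at depth k add the outer-product
# update to the kernel matrix for the current k-mer coordinate.
import numpy as np

def mismatch_kernel(seqs, k, m, alphabet="ACGT"):
    n = len(seqs)
    K = np.zeros((n, n))

    def dfs(depth, instances):
        if not instances:                 # empty node: prune the subtree
            return
        if depth == k:                    # leaf: one k-mer feature coordinate
            counts = np.zeros(n)
            for i, _, _ in instances:
                counts[i] += 1
            K[:, :] += np.outer(counts, counts)
            return
        for sym in alphabet:              # expand each branch of the node
            nxt = []
            for i, off, mis in instances:
                mis2 = mis + (seqs[i][off + depth] != sym)
                if mis2 <= m:
                    nxt.append((i, off, mis2))
            dfs(depth + 1, nxt)

    roots = [(i, off, 0) for i, s in enumerate(seqs) for off in range(len(s) - k + 1)]
    dfs(0, roots)
    return K

K = mismatch_kernel(["ACGT", "ACGA"], k=3, m=1)
```

Summing the outer products over all reachable leaves reproduces exactly the inner products of the explicit (k, m)-mismatch feature vectors, without ever materializing them.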
The figures plot ROC scores (plot (A)) and ROC-50 scores (plot (B)) for two string kernel SVM methods: using the k = 5, m = 1 mismatch kernel, and using the k = 3 (no mismatch) spectrum kernel, the best-performing choice with m = 0. Almost all of the families perform better with mismatching than without, showing that mismatching gives significantly better generalization performance.

¹The ROC-50 score is the area under the graph of the number of true positives as a function of false positives, up to the first 50 false positives, scaled so that both axes range from 0 to 1. This score is sometimes preferred in the computational biology community, motivated by the idea that a biologist might be willing to sift through about 50 false positives.

Figure 2: Comparison of four homology detection methods ((5,1)-mismatch-SVM, Fisher-SVM, SAM-T98, and PSI-BLAST). The graph plots the total number of families for which a given method exceeds an ROC score threshold.

5 Discussion

We have presented a class of string kernels that measure sequence similarity without requiring alignment or depending upon a generative model, and we have given an efficient method for computing these kernels. For the remote homology detection problem, our discriminative approach, combining support vector machines with the mismatch kernel, performs as well in the SCOP experiments as the most successful known method. A practical protein classification system would involve fast multi-class prediction (potentially involving thousands of binary classifiers) on massive test sets. In such applications, computational efficiency of the kernel function becomes an important issue. Chris Watkins [20] and David Haussler [21] have recently defined a set of kernel functions over strings, and one of these string kernels has been implemented for a text classification problem [22]. However, the cost of computing each kernel entry is quadratic in the length of the input sequences. Similarly, the Fisher kernel of Jaakkola et al. requires quadratic-time computation for each Fisher vector calculated. The (k, m)-mismatch kernel is relatively inexpensive to compute for values of m that are practical in applications, allows computation of multiple kernel values in one pass, and significantly improves performance over the previously presented (mismatch-free) spectrum kernel. Many family-based remote homology detection algorithms incorporate a method for selecting probable domain homologs from unannotated protein sequence databases for additional training data. In these experiments, we used the domain homologs that were identified by SAM-T98 (an iterative HMM-based algorithm) as part of the Fisher-SVM method and included in the datasets; these homologs may be more useful to the Fisher kernel than to the mismatch kernel. We plan to extend our method by investigating semi-supervised techniques for selecting unannotated sequences for use with the mismatch-SVM.

Figure 3: Family-by-family comparison of (5,1)-mismatch-SVM with Fisher-SVM. The coordinates of each point in the plot are the ROC scores (plot (A)) or ROC-50 scores (plot (B)) for one SCOP family, obtained using the mismatch-SVM with k = 5, m = 1 (x-axis) and Fisher-SVM (y-axis). The dotted line is y = x.

Figure 4: Family-by-family comparison of (5,1)-mismatch-SVM with spectrum-SVM.
The coordinates of each point in the plot are the ROC scores (plot (A)) or ROC-50 scores (plot (B)) for one SCOP family, obtained using the mismatch-SVM with k = 5, m = 1 (x-axis) and the spectrum-SVM with k = 3 (y-axis). The dotted line is y = x.

Many interesting variations on the mismatch kernel can be explored using the framework presented here. For example, explicit k-mer feature selection can be implemented during calculation of the kernel matrix, based on a criterion enforced at each leaf or internal node. Potentially, a good feature selection criterion could improve performance in certain applications while decreasing kernel computation time. In biological applications, it is also natural to consider weighting each k-mer instance's contribution to a feature coordinate by evolutionary substitution probabilities. Finally, one could use linear combinations of kernels K_(k,m) to capture similarity of different-length k-mers. We believe that further experimentation with mismatch string kernels could be fruitful for remote protein homology detection and other biological sequence classification problems.

Acknowledgments
CL is partially supported by NIH grant LM07276-02. WSN is supported by NSF grants DBI-0078523 and ISI-0093302. We thank Nir Friedman for pointing out the connection with Fisher scores for Markov chain models.

References
[1] M. S. Waterman, J. Joyce, and M. Eggert. Computer alignment of sequences, chapter Phylogenetic Analysis of DNA Sequences. Oxford, 1991.
[2] S. F. Altschul, W. Gish, W. Miller, E. W. Myers, and D. J. Lipman. A basic local alignment search tool. Journal of Molecular Biology, 215:403–410, 1990.
[3] S. F. Altschul, T. L. Madden, A. A. Schaffer, J. Zhang, Z. Zhang, W. Miller, and D. J. Lipman. Gapped BLAST and PSI-BLAST: A new generation of protein database search programs. Nucleic Acids Research, 25:3389–3402, 1997.
[4] M. Gribskov, A. D. McLachlan, and D. Eisenberg. Profile analysis: Detection of distantly related proteins. PNAS, pages 4355–4358, 1987.
[5] A. Bairoch. The PROSITE database, its status in 1995. Nucleic Acids Research, 24:189–196, 1995.
[6] T. K. Attwood, M. E. Beck, D. R. Flower, P. Scordis, and J. N. Selley. The PRINTS protein fingerprint database in its fifth year. Nucleic Acids Research, 26(1):304–308, 1998.
[7] A. Krogh, M. Brown, I. Mian, K. Sjolander, and D. Haussler. Hidden Markov models in computational biology: Applications to protein modeling. Journal of Molecular Biology, 235:1501–1531, 1994.
[8] S. R. Eddy. Multiple alignment using hidden Markov models. In Proceedings of the Third International Conference on Intelligent Systems for Molecular Biology, pages 114–120. AAAI Press, 1995.
[9] P. Baldi, Y. Chauvin, T. Hunkapiller, and M. A. McClure. Hidden Markov models of biological primary sequence information. PNAS, 91(3):1059–1063, 1994.
[10] T. Jaakkola, M. Diekhans, and D. Haussler. A discriminative framework for detecting remote protein homologies. Journal of Computational Biology, 2000.
[11] T. Jaakkola, M. Diekhans, and D. Haussler. Using the Fisher kernel method to detect remote protein homologies. In Proceedings of the Seventh International Conference on Intelligent Systems for Molecular Biology, pages 149–158. AAAI Press, 1999.
[12] V. N. Vapnik. Statistical Learning Theory. Springer, 1998.
[13] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge, 2000.
[14] C. Leslie, E. Eskin, and W. S. Noble. The spectrum kernel: A string kernel for SVM protein classification. Proceedings of the Pacific Symposium on Biocomputing, 2002.
[15] A. G. Murzin, S. E. Brenner, T. Hubbard, and C. Chothia. SCOP: A structural classification of proteins database for the investigation of sequences and structures. Journal of Molecular Biology, 247:536–540, 1995.
[16] M. Sagot. Spelling approximate or repeated motifs using a suffix tree. Lecture Notes in Computer Science, 1380:111–127, 1998.
[17] G. Pavesi, G. Mauri, and G. Pesole. An algorithm for finding signals of unknown length in DNA sequences. Bioinformatics, 17:S207–S214, July 2001. Proceedings of the Ninth International Conference on Intelligent Systems for Molecular Biology.
[18] S. Henikoff and J. G. Henikoff. Embedding strategies for effective use of information from multiple sequence alignments. Protein Science, 6(3):698–705, 1997.
[19] S. L. Salzberg. On comparing classifiers: Pitfalls to avoid and a recommended approach. Data Mining and Knowledge Discovery, 1:317–328, 1997.
[20] C. Watkins. Dynamic alignment kernels. Technical report, UL Royal Holloway, 1999.
[21] D. Haussler. Convolution kernels on discrete structures. Technical report, UC Santa Cruz, 1999.
[22] H. Lodhi, J. Shawe-Taylor, N. Cristianini, and C. Watkins. Text classification using string kernels. Preprint.
2002
21
2,223
Adaptive Caching by Refetching Robert B. Gramacy, Manfred K. Warmuth, Scott A. Brandt, Ismail Ari; Department of Computer Science, UCSC, Santa Cruz, CA 95064; {rbgramacy, manfred, scott, ari}@cs.ucsc.edu Abstract We are constructing caching policies that have 13-20% lower miss rates than the best of twelve baseline policies over a large variety of request streams. This represents an improvement of 49-63% over Least Recently Used, the most commonly implemented policy. We achieve this not by designing a specific new policy but by using on-line Machine Learning algorithms to dynamically shift between the standard policies based on their observed miss rates. A thorough experimental evaluation of our techniques is given, as well as a discussion of what makes caching an interesting on-line learning problem. 1 Introduction Caching is ubiquitous in operating systems. It is useful whenever we have a small, fast main memory and a larger, slower secondary memory. In file system caching, the secondary memory is a hard drive or a networked storage server while in web caching the secondary memory is the Internet. The goal of caching is to keep within the smaller memory data objects (files, web pages, etc.) from the larger memory which are likely to be accessed again in the near future. Since the future request stream is not generally known, heuristics, called caching policies, are used to decide which objects should be discarded as new objects are retained. More precisely, if a requested object already resides in the cache then we call it a hit, corresponding to a low-latency data access. Otherwise, we call it a miss, corresponding to a high-latency data access as the data must be fetched from the slower secondary memory into the faster cache memory. In the case of a miss, room must be made in the cache memory for the new object. To accomplish this, a caching policy discards from the cache objects which it thinks will cause the fewest or least expensive future misses.
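As a concrete illustration of this hit/miss accounting, here is a minimal sketch of one common baseline policy, LRU, replayed over a request stream (unit-size objects assumed; the function name is ours):

```python
from collections import OrderedDict

def simulate_lru(requests, capacity):
    """Replay a request stream through an LRU cache holding `capacity`
    objects and return the number of misses (unit-size objects assumed)."""
    cache = OrderedDict()   # keys ordered least- to most-recently used
    misses = 0
    for obj in requests:
        if obj in cache:
            cache.move_to_end(obj)         # hit: low-latency access
        else:
            misses += 1                    # miss: fetch from secondary memory
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict the least recently used
            cache[obj] = True
    return misses
```

The other baseline policies differ only in the priority used to pick the eviction victim (recency here; frequency, size, or fetch cost elsewhere).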
In this work we consider twelve baseline policies including seven common policies (RAND, FIFO, LIFO, LRU, MRU, LFU, and MFU), and five more recently developed and very successful policies (SIZE and GDS [CI97], GD* [JB00], GDSF and LFUDA [ACD+99]). These algorithms employ a variety of directly observable criteria including recency of access, frequency of access, size of the objects, cost of fetching the objects from secondary memory, and various combinations of these. The primary difficulty in selecting the best policy lies in the fact that each of these policies may work well in different situations or at different times due to variations in workload, system architecture, request size, type of processing, CPU speed, relative speeds of the different memories, load on the communication network, etc. (Support footnotes from the title page: partial support from NSF grant CCR 9821087; supported by Hewlett Packard Labs, Storage Technologies Department.) Thus the difficult question is: In a given situation, which policy should govern the cache? For example, the request stream from disk accesses on a PC is quite different from the request stream produced by web-proxy accesses via a browser, or that of a file server on a local network. The relative performance of the twelve policies varies greatly depending on the application. Furthermore, the characteristics of a single request stream can vary temporally for a fixed application. For example, a file server can behave quite differently during the middle of the night while making tape archives in order to back up data, whereas during the day its purpose is to serve file requests to and from other machines and/or users. Because of their differing decision criteria, different policies perform better given different workload characteristics. The request streams become even more difficult to characterize when there is a hierarchy or a network of caches handling a variety of file-type requests.
In these cases, choosing a fixed policy for each cache in advance is doomed to be sub-optimal.

Figure 1: Miss rates (y-axis) of a) the twelve fixed policies (calculated w.r.t. a window of 300 requests) over 30,000 requests (x-axis), b) the same policies on a random permutation of the data set, c) and d) the policies with the lowest miss rates in the figures above (in both cases the lowest-miss-rate policy switches between SIZE, GDS, GDSF, and GD*).

The usual answer to the question of which policy to employ is either to select one that works well on average, or to select one that provides the best performance on some past workload that is believed to be representative. However, these strategies have two inherent costs. First, the selection (and perhaps tuning) of the single policy to be used in any given situation is done by hand and may be both difficult and error-prone, especially in complex system architectures with unknown and/or time-varying workloads. And second, the performance of the chosen policy with the best expected average case performance may in fact be worse than that achievable by another policy at any particular moment. Figure 1 (a) shows the miss rate of the twelve policies described above on a representative portion of one of our data sets (described below in Section 3) and Figure 1 (b) shows the miss rate of the same policies on a random permutation of the request stream. As can clearly be seen, the miss rates on the permuted data set are quite different from those of the original data set, and it is this difference that our algorithms aim to exploit.
Figures 1 (c) and (d) show which policy is best at each instant of time for the data segment and the permuted data segment. It is clear from these (representative) figures that the best policy changes over time. To avoid the perils associated with trying to hand-pick a single policy, one would like to be able to automatically and dynamically select the best policy for any given situation. In other words, one wants a cache replacement policy which is “adaptive”. In our Storage Systems Research Group, we have identified the need for such a solution in the context of complex network architectures and time-varying workloads and suggested a preliminary framework in which a solution could operate [AAG+ar], but without giving specific algorithmic solutions to the adaptation problem. This paper presents specific algorithmic solutions that address the need identified in that work. It is difficult to give a precise definition of “adaptive” when the data stream is continually changing. We use the term “adaptive” only informally and when we want to be precise we use off-line comparators to judge the performance of our on-line algorithms, as is commonly done in on-line learning [LW94, CBFH+97, KW97]. An on-line algorithm is called adaptive if it performs well when measured up against off-line comparators. In this paper we use two off-line comparators: BestFixed and BestShifting(K). BestFixed is the a posteriori selected policy with the lowest miss rate on the entire request stream for our twelve policies. BestShifting(K) considers all possible partitions of the request stream into at most K segments along with the best policy for each segment. BestShifting(K) chooses the partition with the lowest total miss rate over the entire dataset and can be computed in time O(nKp) using dynamic programming, where n is the total number of requests, K a bound on the number of segments, and p the number of baseline policies.
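A sketch of this dynamic program, assuming per-request miss indicators for each policy have already been recorded (function and variable names are ours):

```python
def best_shifting(miss, K):
    """Offline comparator BestShifting(K): minimum total misses over all
    partitions of the request stream into at most K contiguous segments,
    each governed by the single best baseline policy for that segment.
    miss[i][t] = 1 iff policy i misses request t.  O(n*K*p) time."""
    p, n = len(miss), len(miss[0])
    INF = float("inf")
    # A[k][i]: min misses so far using at most k segments, with the
    # current (last) segment governed by policy i
    A = [[0.0] * p for _ in range(K + 1)]
    # B[k] = min_i A[k][i]; B[0] = 0 only before any request is seen
    B = [0.0] * (K + 1)
    for t in range(n):
        newA = [[INF] * p for _ in range(K + 1)]
        for k in range(1, K + 1):
            for i in range(p):
                # either extend policy i's current segment (A[k][i]),
                # or close the previous segment and start a new one (B[k-1])
                newA[k][i] = miss[i][t] + min(A[k][i], B[k - 1])
        A = newA
        B = [INF] + [min(A[k]) for k in range(1, K + 1)]
    return B[K] if n > 0 else 0.0
```

For example, with two complementary policies that each miss half the stream, one allowed shift recovers a zero-miss partition while a single fixed policy cannot.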
Figure 2: Optimal offline comparators for the Work-Week dataset (x-axis: K = number of shifts; y-axis: miss rate %; reference lines: BestFixed = SIZE and All Virtual Caches).

Figure 2 shows graphically each of the comparators mentioned above. Notice that BestFixed = BestShifting(1), and that most of the advantage of shifting policies occurs with relatively few shifts (out of roughly 300,000 requests). Rather than developing a new caching policy (well-plowed ground, to say the least), this paper uses a master policy to dynamically determine the success rate of all the other policies and switch among them based on their relative performance on the current request stream. We show that with no additional fetches, the master policy works about as well as BestFixed. We define a refetch as a fetch of a previously seen object that was favored by the current policy but discarded from the real cache by a previously active policy. With refetching, it can outperform BestFixed. In particular, when all required objects are refetched instantly, this policy has a 13-20% lower miss rate than BestFixed, and almost the same performance as BestShifting(K) for modest K. For reference, when compared with LRU, this policy has a 49-63% lower miss rate. Disregarding misses on objects never seen before (compulsory misses), the performance improvements are even greater. Because refetches are themselves potentially costly, it is important to note that they can be done in the background. Our preliminary experiments show this to be both feasible and effective, capturing most of the advantage of instant refetching. A more detailed discussion of our results is given in Section 3.

2 The Master Policy

We seek to develop an on-line master policy that determines which of a set of baseline policies should govern the real cache at any time. Appropriate switch points need to be found and switches must be facilitated. Our key idea is “virtual caches”.
A virtual cache simulates the operation of a single baseline policy. Each virtual cache records a few bytes of metadata about each object in its cache: ID, size, and calculated priority. Object data is only kept in the real cache, making the cost of maintaining the virtual caches negligible (see Footnote 1).

Figure 3: Virtual caches embedded in the cache memory.

Via the virtual caches, the master policy can observe the miss rates of each policy on the actual request stream in order to determine their performance on the current workload. To be fair, virtual caches reside in the memory space which could have been used to cache real objects, as is illustrated in Figure 3. Thus, the space used by the real cache is reduced by the space occupied by the virtual caches. We set the virtual size of each virtual cache equal to the size of the full cache. The caches used for computing the comparators BestFixed and BestShifting(K) are based on caches of the full size. A simple heuristic the master policy can use to choose which caching policy should control at any given time is to continuously monitor the number of misses incurred by each policy in a past window of, for example, 300 requests (depicted in Figure 1 (a)). The master policy then gives control of the real cache to the policy with the least misses in this window (shown in Figure 1 (c)). While this works well in practice, maintaining such a window for many fixed policies is expensive, further reducing the space for the real cache. It is also hard to tune the window size. A better master policy keeps just one weight per policy (non-negative and summing to one) which represents an estimate of its current relative performance. The master policy is always governed by the policy with the maximum weight (see Footnote 2). Weights are updated by using the combined loss and share updates of Herbster and Warmuth [HW98] and Bousquet and Warmuth [BW02] from the expert framework [CBFH+97] for on-line learning. Here the experts are the caching policies.
This technique is preferred to the window-based master policy because it uses much less memory, and because the parameters of the weight updates are easier to tune than the window size. This also makes the resulting master policy more robust (not shown).

2.1 The Weight Updates

Updating the weight vector w_t after each trial is a two-part process. First, the weights of all policies that missed the new request are multiplied by a factor β and then renormalized. We call this the loss update. Since the weights are renormalized, they remain unchanged if all policies miss the new request. As noticed by Herbster and Warmuth [HW98], multiplicative updates drive the weights of poor experts to zero so quickly that it becomes difficult for them to recover if their experts subsequently start doing well. Therefore, the second share update prevents the weights of experts that did well in the past from becoming too small, allowing them to recover quickly, as shown in Figure 4. Figure 1(a) shows the current absolute performance of the policies in a rolling window (of 300 requests), whereas Figure 4 depicts relative performance and shows how the policies compete over time. (Recall that the policy with the highest weight always controls the real cache.) There are a number of share updates [HW98, BW02] with various recovery properties. We chose the FIXED SHARE TO UNIFORM PAST (FSUP) update because of its simplicity and efficiency.

Footnote 1: As an additional optimization, we record the ID and size of each object only once, regardless of the number of virtual caches it appears in.
Footnote 2: This can be sub-optimal in the worst case, since it is always possible to construct a data stream where two policies switch back and forth after each request. However, real request streams appear to be divided into segments that favor one of the twelve policies for a substantial number of requests (see Figure 1).
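In code, one trial of the combined loss and share update can be sketched as follows (a hedged illustration; the β and α defaults and all names are ours, not the paper's values):

```python
def fsup_update(w, missed, avg_past, t, beta=0.9, alpha=0.01):
    """One trial of the combined loss + Fixed-Share-to-Uniform-Past update.
    w: current weight per policy (sums to 1); missed[i] = 1 if policy i's
    virtual cache missed the request; avg_past: running average of past
    weight vectors; t: number of vectors averaged into avg_past so far."""
    # loss update: shrink the weights of policies that missed, renormalize
    wm = [wi * (beta if m else 1.0) for wi, m in zip(w, missed)]
    z = sum(wm)
    wm = [x / z for x in wm]
    # share update: mix with the uniform average of past weight vectors,
    # so policies that were good in the past can recover quickly
    new_w = [(1.0 - alpha) * x + alpha * r for x, r in zip(wm, avg_past)]
    # maintain the running average incrementally for the next trial
    new_avg = [(t * r + x) / (t + 1) for r, x in zip(avg_past, new_w)]
    return new_w, new_avg
```

Renormalization keeps the weights a probability distribution, and mixing in the past average is what prevents a previously good policy's weight from being driven irrecoverably close to zero.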
Note that the loss bounds proven in the expert framework for the combined loss and share update do not apply in this context. This is because we use the mixture weights only to select the best policy. However, our experimental results suggest that we are exploiting the recovery properties of the combined update that are discussed extensively by Bousquet and Warmuth [BW02].

Figure 4: Weights of baseline policies (FSUP weight of each of the twelve policies over time).

Formally, for each trial t, the loss update is

w^m_{t,i} = w_{t-1,i} β^{miss_i(t)} / Σ_j w_{t-1,j} β^{miss_j(t)}, for i = 1, ..., p,

where β is a parameter in (0,1) and miss_i(t) is 1 if the t-th object is missed by policy i and 0 otherwise. The initial distribution is uniform, i.e. w_{0,i} = 1/p. The Fixed-Share to Uniform Past update mixes the current weight vector with the past average weight vector r_t = (1/t) Σ_{q<t} w_q, which is easy to maintain:

w_{t,i} = (1 - α) w^m_{t,i} + α r_{t,i},

where α is a parameter in (0,1). A small β parameter causes high weight to decay quickly if its corresponding policy starts incurring more misses than other policies with high weights. The higher the α, the more quickly past good policies will recover. In our experiments we used fixed values of β and α.

2.2 Demand vs. Instantaneous Rollover

When space is needed to cache a new request, the master policy discards objects not present in the governing policy’s virtual cache (see Footnote 3). This causes the content of the real cache to “roll over” to the content of the current governing virtual cache. We call this demand rollover because objects in the governing virtual cache are refetched into the real cache on demand. While this master policy works almost as well as BestFixed, we were not satisfied and wanted to do as well as BestShifting(K) (for a reasonably large bound K on the number of segments).
We noticed that the content of the real cache lagged behind the content of the governing virtual cache and had more misses, and conjectured that “quicker” rollover strategies would improve overall performance. Our search for a better master policy began by considering an extreme and unrealistic rollover strategy that assures no lag time: after each switch, instantaneously refetch all the objects in the new governing virtual cache that were not retained in the real cache. We call this refetching policy instantaneous rollover. By appropriate tuning of the update parameters β and α, the number of instantaneous rollovers can be kept reasonably small and the miss rates of our master policy are almost as good as BestShifting(K) for K much larger than the actual number of shifts used on-line. Note that the comparator BestShifting(K) is also not penalized for its instantaneous rollovers. While this makes sense for defining a comparator, we now give more realistic rollover strategies that reduce the lag time.

Footnote 3: We update the virtual caches before the real cache, so there are always objects in the real cache that are not in the governing virtual cache when the master policy goes to find space for a new request.

2.3 Background Rollover

Because instantaneous rollover immediately refetches everything in the governing virtual cache that is not already in the real cache, it may cause a large number of refetches even when the number of policy switches is kept small. If all refetches are counted as misses, then the miss rate of such a master policy is comparable to that of BestFixed. The same holds for BestShifting. However, from a user perspective, refetching is advantageous because of the latency advantage gained by having required objects in memory before they are needed. And from a system perspective, refetches can be “free” if they are done when the system is idle. To take advantage of these “free” refetches, we introduce the concept of background rollover.
The exact criteria for when to refetch each missing object will depend heavily on the system, workload, and expected cost and benefit of each object. To characterize the performance of background rollover without addressing these architectural details, the following background refetching strategies were examined: 1 refetch for every cache miss; 1 for every hit; 1 for every request; 2 for every request; 1 for every hit and 5 for every miss, etc. Each background technique gave fewer misses than BestFixed, approaching and nearly matching the performance obtained by the master policy using instantaneous rollover. Of course, techniques which reduce the number of policy switches (by tuning β and α) also reduce the number of refetches. Figure 5 compares the performance of each master policy with that of BestFixed and shows that the three master policies almost always outperform BestFixed.

Figure 5: BestFixed minus P, for P in {Instantaneous, Demand, Background Rollover 2}. The baseline 0 is BestFixed; deviations from the baseline show how the miss rates of our on-line shifting policies differ. Above (below) 0 corresponds to fewer (more) misses than BestFixed.
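The budget-style strategies above can be sketched as follows (the event encoding and function name are ours; a hedged illustration, not the paper's implementation):

```python
def background_rollover(stream, budget_per_hit=1, budget_per_miss=5):
    """Sketch of background refetching: after a policy switch, the objects
    in the governing virtual cache but not in the real cache form a
    pending set; each hit or miss earns a refetch budget that drains it.
    stream: list of ('hit' or 'miss', new_pending) events, where
    new_pending counts objects newly owed after a switch (0 otherwise).
    Returns the pending-refetch count after each event."""
    pending, out = 0, []
    for kind, new_pending in stream:
        pending += new_pending
        budget = budget_per_hit if kind == 'hit' else budget_per_miss
        done = min(pending, budget)
        pending -= done          # refetches performed in the background
        out.append(pending)
    return out
```

With the "1 per hit, 5 per miss" schedule, a switch that leaves 10 objects owed drains within a few requests, which is why these schedules approach instantaneous rollover.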
All of the policies do significantly better than LRU. Discounting the compulsory misses, our best policies have 1/3 fewer “real” misses than BestFixed and 1/2 the “real” misses of LRU. Figure 8 summarizes the performance of our algorithms over three large datasets. These were gathered using Carnegie Mellon University’s DFSTrace system [MS96] and had durations ranging from a single day to over a year. The traces we used represent a variety of workloads including a personal workstation (Work-Week), a single user (User-Month), and a remote storage system with a large number of clients, filtered by LRU on the clients’ local caches (Server-Month-LRU). For each data set, the table shows the number of requests, % of requests skipped (size > cache size), the compulsory miss rate for objects not previously seen, and the number of policy shifts (rollovers). For each policy (including BestShifting(K)), the table shows miss rate and % improvement over BestFixed (labeled '% over BestF') and over LRU. In each case all 12 virtual caches consumed on average less than 2% of the real cache space. We fixed β and α for all experiments. As already mentioned, BestShifting(K) is never penalized for rollovers.

Figure 6: “Tracking” the best policy: miss rates over time for the twelve baseline policies and for the master policy with instantaneous rollover (labeled 'roll').

Figure 7: Online shifting policies against offline comparators and LRU for the Work-Week dataset (x-axis: K = number of shifts; y-axis: miss rate %; K = 76 marked).
Dataset                   Work-Week   User-Month   Server-Month-LRU
#Requests                 138k        382k         48k
Cache size                900KB       2MB          4MB
%Skipped                  6.5%        12.8%        15.7%
Compulsory miss rate      0.020       0.015        0.152
#Shifts                   88          485          93
LRU miss rate             0.088       0.076        0.450
BestFixed policy          SIZE        GDS          GDSF
  miss rate               0.055       0.075        0.399
  % over LRU              36.8%       54.7%        54.2%
Demand miss rate          0.061       0.076        0.450
  % over BestF            -9.6%       -0.5%        -12.8%
  % over LRU              30.9%       54.4%        48.5%
Background 1 miss rate    0.053       0.068        0.401
  % over BestF            5.1%        9.8%         -0.7%
  % over LRU              40.1%       59.4%        55.5%
Background 2 miss rate    0.047       0.067        0.349
  % over BestF            15.4%       11.9%        12.4%
  % over LRU              46.6%       60.1%        60.3%
Instantaneous miss rate   0.044       0.065        0.322
  % over BestF            19.7%       13.4%        19.3%
  % over LRU              49.2%       60.8%        63%
BestShifting miss rate    0.042       0.039        0.312
  % over BestF            23.6%       48.0%        21.8%
  % over LRU              52.2%       48.7%        30.1%

Figure 8: Performance Summary.

4 Conclusion

Operating systems have many hidden parameter tweaking problems which are ideal applications for on-line Machine Learning algorithms. These parameters are often set to values which provide good average case performance on a test workload. For example, we have identified candidate parameters in device management, file systems, and network protocols. Previously the on-line algorithms for predicting as well as the best shifting expert were used to tune the time-out for spinning down the disk of a PC [HLSS00]. In this paper we use the weight updates of these algorithms for dynamically determining the best caching policy. This application is more elaborate because we needed to actively gather performance information about the caching policies via virtual caches. In future work we plan to do a more thorough study of the feasibility of background rollover by building actual systems. Acknowledgements: Thanks to David P. Helmbold for an efficient dynamic programming approach to BestShifting(K), Ahmed Amer for data, and Ethan Miller for many helpful insights. References [AAG+ar] Ismail Ari, Ahmed Amer, Robert Gramacy, Ethan Miller, Scott Brandt, and Darrell D. E. Long. ACME: Adaptive caching using multiple experts.
In Proceedings of the 2002 Workshop on Distributed Data and Structures (WDAS 2002). Carleton Scientific, (to appear). [ACD+99] Martin Arlitt, Ludmilla Cherkasova, John Dilley, Rich Friedrich, and Tai Jin. Evaluating content management techniques for Web proxy caches. In Proceedings of the Workshop on Internet Server Performance (WISP99), May 1999. [BW02] O. Bousquet and M. K. Warmuth. Tracking a small set of experts by mixing past posteriors. Journal of Machine Learning Research, 3(Nov):363–396, 2002. Special issue for COLT01. [CBFH+97] N. Cesa-Bianchi, Y. Freund, D. Haussler, D. P. Helmbold, R. E. Schapire, and M. K. Warmuth. How to use expert advice. Journal of the ACM, 44(3):427–485, 1997. [CI97] Pei Cao and Sandy Irani. Cost-aware WWW proxy caching algorithms. In Proceedings of the 1997 Usenix Symposium on Internet Technologies and Systems (USITS-97), 1997. [HLSS00] David P. Helmbold, Darrell D. E. Long, Tracey L. Sconyers, and Bruce Sherrod. Adaptive disk spin-down for mobile computers. ACM/Baltzer Mobile Networks and Applications (MONET), pages 285–297, 2000. [HW98] M. Herbster and M. K. Warmuth. Tracking the best expert. Machine Learning, 32(2):151–178, August 1998. Special issue on concept drift. [JB00] Shudong Jin and Azer Bestavros. GreedyDual* web caching algorithm: Exploiting the two sources of temporal locality in web request streams. Technical Report 2000-011, 2000. [KW97] J. Kivinen and M. K. Warmuth. Additive versus exponentiated gradient updates for linear prediction. Information and Computation, 132(1):1–64, January 1997. [LW94] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994. [MS96] Lily Mummert and Mahadev Satyanarayanan. Long term distributed file reference tracing: Implementation and experience. Software - Practice and Experience (SPE), 26(6):705–736, June 1996.
2002
22
2,224
Timing and Partial Observability in the Dopamine System Nathaniel D. Daw1,3, Aaron C. Courville2,3, and David S. Touretzky1,3 1Computer Science Department, 2Robotics Institute, 3Center for the Neural Basis of Cognition Carnegie Mellon University, Pittsburgh, PA 15213 {daw,aaronc,dst}@cs.cmu.edu Abstract According to a series of influential models, dopamine (DA) neurons signal reward prediction error using a temporal-difference (TD) algorithm. We address a problem not convincingly solved in these accounts: how to maintain a representation of cues that predict delayed consequences. Our new model uses a TD rule grounded in partially observable semi-Markov processes, a formalism that captures two largely neglected features of DA experiments: hidden state and temporal variability. Previous models predicted rewards using a tapped delay line representation of sensory inputs; we replace this with a more active process of inference about the underlying state of the world. The DA system can then learn to map these inferred states to reward predictions using TD. The new model can explain previously vexing data on the responses of DA neurons in the face of temporal variability. By combining statistical model-based learning with a physiologically grounded TD theory, it also brings into contact with physiology some insights about behavior that had previously been confined to more abstract psychological models. 1 Introduction A series of models [1, 2, 3, 4, 5] based on temporal-difference (TD) learning [6] has explained most responses of primate dopamine (DA) neurons during conditioning [7] as an error signal for predicting reward, and has also identified the DA system as a substrate for conditioning behavior [8]. We address a troublesome issue from these models: how to maintain a representation of cues that predict delayed consequences. For this, we use a formalism that extends the Markov processes in which previous models were grounded. 
Even in the laboratory, the world is often poorly described as Markov in immediate sensory observations. In trace conditioning, for instance, nothing observable spans the delay between a transient stimulus and the reward it predicts. For DA models, this raises problems of coping with hidden state and of tracking temporal intervals. Most previous models address these issues using a tapped delay line representation of the world’s state. This augments the representation of current sensory observations with remembered past observations, dividing temporal intervals into a series of states to mark the passage of time. But linear combinations of tapped delay lines do not properly model variability in the intervals between events. Also, the augmented representation may poorly match the contingency structure of the experimental situation: for instance, depending on the amount of history retained, it may be insufficient to span delays, or it may contain old, irrelevant data. We propose a model that better reflects experimental situations by using a formalism that explicitly incorporates hidden state and temporal variability: a partially observable semi-Markov process. The proposal envisions the interaction between a cortical perceptual system that infers the world’s hidden state using an internal world model, and a dopaminergic TD system that learns reward predictions for these inferred states. This model improves on its predecessors’ descriptions of neuronal firing in situations involving temporal variability, and suggests additional connections with animal behavior.

2 DA models and temporal variability
Figure 1: S: stimulus; R: reward. (a,b) State spaces for the Markov tapped delay line (a) and our semi-Markov (b) TD models of a trace conditioning experiment. (c,d) Modeled DA activity (TD error) when an expected reward is delivered early (top), on time (middle) or late (bottom). The tapped delay line model (c) produces spurious negative error after an early reward, while, in accord with experiments, our semi-Markov model (d) does not. Shaded stripes under (d) and (f) track the model’s belief distribution over the world’s hidden state (given a one-timestep backward pass), with the ISI in white, the ITI in black, and gray for uncertainty between the two. (e,f) Modeled DA activity when reward timing varies uniformly over a range. The tapped delay line model (e) incorrectly predicts identical excitation to rewards delivered at all times, while, in accord with experiment, our model (f) predicts a response that declines with delay.

Several models [1, 2, 3, 4, 5] identify the firing of DA neurons with the reward prediction error signal δt of a TD algorithm [6]. In the models, DA neurons are excited by positive error in reward prediction (caused by unexpected rewards or reward-predicting stimuli) and inhibited by negative prediction error (caused by the omission of expected reward). If a reward arrives as expected, the models predict no change in firing rate. These characteristics have been demonstrated in recordings of primate DA neurons [7].
In idealized form (neglecting some instrumental contingencies), these experiments and the others that we consider here are all variations on trace conditioning, in which a phasic stimulus such as a flash of light signals that reward will be delivered after a delay. TD systems map a representation of the state of the world to a prediction of future reward, but previous DA modeling exploited few experimental constraints on the form of this representation. Houk et al. [1] computed values using only immediately observable stimuli and allowed learning about rewards to accrue to previously observed stimuli using eligibility traces. But in trace conditioning, DA neurons show a timed pause in their background firing when an expected reward fails to arrive [7]. Because the Houk et al. [1] model does not learn temporal relationships, it cannot produce well timed inhibition. Montague et al. [2] and Schultz et al. [3] addressed these data using a tapped delay line representation of stimulus history [8]: at time t, each stimulus is represented by a vector whose nth element codes whether the stimulus was observed at time t − n. This representation allows the models to learn the temporal relationship between stimulus and reward, and to correctly predict phasic inhibition time-locked to omitted rewards. These models, however, mispredict the behavior of DA neurons when the interval between stimulus and reward varies. In one experiment [9], animals were trained to expect a constant stimulus-reward interval, which was later varied. When a reward is delivered earlier than expected, the tapped delay line models correctly predict that it should trigger positive error (dopaminergic excitation), but also incorrectly predict a further burst of negative error (inhibition, not seen experimentally) when the reward fails to arrive at the time it was originally expected (Figure 1c, top).
In part, this occurs because the models do not represent the reward as an observation, so its arrival can have no effect on later predictions. More fundamentally, this is a problem with how the models partition events into a state space. Figure 1a illustrates how the tapped delay lines mark time in the interval between stimulus and reward using a series of states, each of which learns its own reward prediction. After the stimulus occurs, the model’s representation marches through each state in succession. But this device fails to capture a distribution over the interval between two events. If the second event has occurred, the interval is complete and the system should not expect reward again, but the tapped delay line continues to advance. This may be correctable, though awkwardly, by representing the reward with its own delay line, which can then learn to suppress further reward expectation after a reward occurs [10]. However, to our knowledge it is experimentally unclear whether the suppression of this response requires repeated experience with the situation, as this account predicts. Also, whether this works depends on how information from multiple cues is combined into an aggregate reward prediction (i.e. on the function approximator used: it is easy to verify that a standard linear combination of the delay lines does not suffice). The models have a similar problem with a related experiment [11] (Figure 1e) where the stimulus-reward interval varied uniformly over a range of delays throughout training. In this case, all substates within the interval see reward with the same (low) probability, so each produces identical positive error when reward occurs there. In animal experiments, however, stronger dopaminergic activity is seen for earlier rewards [11]. 3 A new model Both of these experiments demonstrate that current TD models of DA do not adequately treat variability in event timing. 
We address them with a TD model grounded in a formalism that incorporates temporal variability, a partially observable [12] semi-Markov [13] process. Such a process is described by three functions, O, Q, and D, operating over two sets: the hidden states S and observations O. Q associates each state with a probability distribution over possible successors. If the process is in state $s \in S$, then the next state is $s'$ with probability $Q_{ss'}$. These discrete state transitions can occur irregularly in continuous time (which we approximate to arbitrarily fine discretization). The dwell time $\tau$ spent in s before making a transition is distributed with probability $D_{s\tau}$; we define the indicator $\phi_t$ as one if the state transitioned between t and t + 1 and zero otherwise. On entering s, the process emits some observation $o \in O$ with probability $O_{so}$. Some observations are distinguished as rewarding; we separately write the reward magnitude of an observation as r. Note that the processes we consider in this paper do not contain decisions. In this formalism, a trace conditioning experiment can be treated as alternation between two states (Figure 1b). The states correspond to the intervals between stimulus and reward (interstimulus interval: ISI) and between reward and stimulus (intertrial interval: ITI). A stimulus is the likely observation when entering the ISI and a reward when entering the ITI. We will index variables both by the time t and by a discrete index n which counts state transitions; e.g. the nth state, $s_n$, is entered at time $t = \sum_{k=1}^{n-1} \tau_k$ and can thus also be written as $s_t$. If $\phi_t = 0$ (if the state did not transition between t and t+1) then $s_{t+1} = s_t$, $o_{t+1}$ is null and $r_{t+1} = 0$ (i.e., nonempty observations and rewards occur only on transitions). State transitions may be unsignaled: $o_{t+1}$ may be null even if $\phi_t = 1$. An unsignaled transition into the ITI state occurs in our model when reward is omitted, a common experimental manipulation [7].
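The generative process just described (successor distribution Q, dwell-time distributions D, and an observation emitted on each state entry) can be sketched as a small simulation. The two-state world below matches the trace-conditioning setup, but the particular dwell-time numbers are illustrative choices, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state trace-conditioning world: state 0 = ISI, state 1 = ITI.
# Q[s, s']: successor distribution; here the two states strictly alternate.
Q = np.array([[0.0, 1.0],
              [1.0, 0.0]])
# D[s, tau-1]: dwell-time distribution over tau = 1..4 (illustrative numbers).
D = np.array([[0.0, 0.5, 0.5, 0.0],   # ISI lasts 2-3 timesteps
              [0.0, 0.0, 0.5, 0.5]])  # ITI lasts 3-4 timesteps
# Observation emitted on entering a state: the stimulus starts the ISI,
# the reward starts the ITI.
OBS = {0: 'S', 1: 'R'}

def simulate(n_transitions, s=1):
    """Generate (time, observation) pairs from the semi-Markov process."""
    events, t = [], 0
    for _ in range(n_transitions):
        s = rng.choice(2, p=Q[s])        # draw successor state
        events.append((t, OBS[s]))       # observation on entry
        t += 1 + rng.choice(4, p=D[s])   # dwell time tau spent in s
    return events

print(simulate(6))
```

Omitted rewards or probabilistic emissions would be modeled by making OBS stochastic, which is exactly what makes the process partially observable.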
This example demonstrates the relationship between temporal variability and partial observability: if reward timing can vary, nothing in the observable state reveals whether a late reward is still coming or has been omitted completely. TD algorithms [6] approximate a function mapping each state to its value, defined as the expectation (with respect to variability in reward magnitude, state succession, and dwell times) of summed, discounted future reward, starting from that state. In the semi-Markov case [13], a state's value is defined as the reward expectation at the moment it is entered; we do not count rewards received on the transition in. The value of the nth state entered is: $V_{s_n} = E\left[\gamma^{\tau_n} r_{n+1} + \gamma^{\tau_n + \tau_{n+1}} r_{n+2} + \cdots\right] = E\left[\gamma^{\tau_n} (r_{n+1} + V_{s_{n+1}})\right]$ where $\gamma < 1$ is a discounting parameter. We address partial observability by using model-based inference to determine a distribution over the hidden states, which then serves as a basis over which a modified TD algorithm can learn values. The approach is similar to the Q-learning algorithm of Chrisman [14]. In our setting, however, values can in principle be learned exactly, since without decisions, they are linear in the space of hidden states. For state inference, we assume that the brain's sensory processing systems use an internal model of the semi-Markov process, that is, the functions O, Q, and D. Here we take the model as given, though we have treated parts of the problem of learning such models elsewhere [15]. A key assumption about this internal model is that its distributions over intervals, rewards and observations contain asymptotic uncertainty, that is, they are not arbitrarily sharp. When learning internal models, such uncertainty can result from an assumption that parameters of the world are constantly changing [16]. Thus, in the inference model for the trace conditioning experiment, the ISI duration is modeled with a probability distribution with some nonzero variance rather than an impulse function.
The model likewise assigns a small probability to anomalous transitions and observations (e.g. unrewarded transitions into the ITI state). This uncertainty is present only in the internal model: most anomalous events never occur in our simulations. Given the model and a series of observations $o_1 \ldots o_t$, we can determine the likelihood that each hidden state is active using a standard forward-backward algorithm for hidden semi-Markov models [17]. The important quantity is the probability, for each state, that the system left that state at time t. With a one-timestep backward pass (to match the one-timestep value backups in the TD rule), this is: $\beta_{s,t} = P(s_t = s, \phi_t = 1 \mid o_1 \ldots o_{t+1})$. By Bayes' theorem, $\beta_{s,t} \propto P(o_{t+1} \mid s_t = s, \phi_t = 1) \cdot P(s_t = s, \phi_t = 1 \mid o_1 \ldots o_t)$. The first term can be computed by integrating over $s_{t+1}$ in the model: $P(o_{t+1} \mid s_t = s, \phi_t = 1) = \sum_{s' \in S} Q_{ss'} \cdot O_{s' o_{t+1}}$; the second requires integrating over possible state sequences and dwell times: $P(s_t = s, \phi_t = 1 \mid o_1 \ldots o_t) = \sum_{\tau=1}^{d_{lastO}} D_{s\tau} \cdot O_{s o_{t-\tau+1}} \cdot P(s_{t-\tau+1} = s, \phi_{t-\tau} = 1 \mid o_1 \ldots o_{t-\tau})$ where $d_{lastO}$ is the number of timesteps since the last non-null observation and $P(s_{t-\tau+1} = s, \phi_{t-\tau} = 1 \mid o_1 \ldots o_{t-\tau})$, the chance that the process entered s at $t - \tau + 1$, equals $\sum_{s' \in S} Q_{s's} \cdot P(s_{t-\tau} = s', \phi_{t-\tau} = 1 \mid o_1 \ldots o_{t-\tau})$, allowing recursive computation. β is used for TD learning because it represents the probability of a transition, which is the event that triggers a value update in fully observable semi-Markov TD. Due to partial observability, we may not be certain when transitions have occurred or from which states, so we perform TD updates to every state at every timestep, weighted by β. We denote our estimate of the value of state s as $\hat{V}_s$, to distinguish it from the true value $V_s$. The update to $\hat{V}_s$ at time t is proportional to the TD error: $\delta_{s,t} = \beta_{s,t}\left(E[\gamma^\tau] \cdot (r_{t+1} + E[\hat{V}_{s'}]) - \hat{V}_s\right)$ where $E[\gamma^\tau] = \sum_k \gamma^k P(\tau_t = k \mid s_t = s, \phi_t = 1, o_1 \ldots$
$o_{t+1})$ is the expected discounting (since dwell time may be uncertain) and $E[\hat{V}_{s'}] = \sum_{s' \in S} \hat{V}_{s'} P(s_{t+1} = s' \mid s_t = s, \phi_t = 1, o_{t+1})$ is the expected subsequent value. Both expectations are conditioned on the process having left state s at time t, and computed using the internal world model. As in previous models, we associate the error signal δ with DA activity. However, because of uncertainty as to the state of the world, the TD error signal is vector-valued rather than scalar. DA neurons could code this vector in a distributed manner, which might explain experimentally observed response variability between neurons [7]. Alternatively, $\delta_{s,t}$ can be approximated with a scalar, which performs well if the inferred state occupancy is sharply peaked. In our figures, we use such an approximation, plotting DA activity as the cumulative TD error over states (implicitly weighted by β): $\delta_t = \sum_{s \in S} \delta_{s,t}$. An approximate version of the vector signal could be reconstructed at target areas by multiplying by $\beta_{s,t} / \sum_{s' \in S} \beta_{s',t}$. Note that with full observability, the (vector) learning rule reduces to standard semi-Markov TD, and conversely with full unobservability, it nudges states in the direction of a value iteration backup. In fact, the algorithm is exact in that it has the same fixed point as value iteration, assuming the inference model matches the contingencies of the world. (Due to uncertainty it does so only approximately in our simulations.) We sketch the proof. With each TD update, $\hat{V}_s$ is nudged toward some target value with some step size $\beta_{s,t}$; the fixed point is the average of the targets, weighted by their probabilities and their step sizes. Fixing some arbitrary t, the update targets and β are functions of the observations $o_1 \ldots o_{t+1}$, which are generated according to $P(o_1 \ldots o_{t+1})$. The fixed point is: $\hat{V}_s = \sum_{o_1 \ldots o_{t+1}} P(o_1 \ldots o_{t+1}) \cdot \beta_{s,t} \cdot E[\gamma^\tau] \cdot (r_{t+1} + E[\hat{V}_{s'}]) \Big/ \sum_{o_1 \ldots o_{t+1}} P(o_1 \ldots$
$o_{t+1}) \cdot \beta_{s,t}$. Marginalizing out the observations reduces this to Bellman's equation for $\hat{V}_s$, which is also, of course, the fixed-point equation for value iteration. 4 Results When expected reward is delivered early, the semi-Markov model assumes that this signals an early transition into the ITI state, and it thus does not expect further reward or produce spurious negative error (Figure 1d, top). Because of variability in the model's ISI estimate, an early transition, while improbable, better explains the data than some other path through the state space. The early reward is worth more than expected, due to reduced discounting, and is thus accompanied by positive error. The model can also infer a state transition from the passage of time, absent any observations. In Figure 1d (bottom), when the reward is delivered late, the system infers that the world has entered the ITI state without reward, producing negative error. Figure 1f shows our model's behavior when the ISI is uniformly distributed [11]. (The dwell time distribution D in the inference model was changed to reflect this distribution, as an animal should learn a different model here.) Earlier-than-average rewards are worth more than expected (due to discounting) and cause positive prediction error, while later-than-average rewards cause negative error because they are more heavily discounted. This is broadly consistent with the experimental finding of decreasing response with increasing delay [11]. Inhibition at longer delays has not so far been observed in this experiment, though inhibition is in general difficult to detect. If discovered, such inhibition would support the semi-Markov model. Because it combines a conditional probability model with TD learning, our approach can incorporate insights from previous behavioral theories into a physiological model.
Our state inference approach is based on a hidden Markov model (HMM) account we previously advanced to explain animal learning about the temporal relationships of events [15]. The present theory (with the model learning scheme from that paper) would account for the same data. Our model also accommodates two important theoretical ideas from more abstract models of animal learning that previous TD models cannot. One is the notion of uncertainty in some of its internal parameters, which Kakade and Dayan [16] use to explain interval timing and attentional effects in learning. Second, Gallistel has suggested that animal learning processes are timescale invariant. For example, altering the speed of events has no effect on the number of trials it takes animals to learn a stimulus-reward association [18]. This is not true of Markov TD models because their transitions are clocked to a fixed timescale. With tapped delay lines, timescale dilation increases the number of marker states in Figure 1a and slows learning. But our semi-Markov model is timescale invariant: learning is induced by state transitions which in turn are triggered by events or by the passage of time on a scale controlled by the internal model. (The form of temporal discounting we use is not timescale invariant, but this can be corrected as in [5].) 5 Discussion We have presented a model of the DA system that improves on previous models’ accounts of data involving temporal variability and partial observability, because, unlike prior models, it is grounded in a formalism that explicitly incorporates these considerations. Like previous models, ours identifies the DA response with reward prediction error, but it differs in the representational systems driving the predictions. Previous models assumed that tapped delay lines transcribed raw sensory events; ours envisions that these events inform a more active process of inference about the underlying state of the world. 
This is a principled approach to the problem of representing state when events can be separated by delays. Simpler schemes may capture the neuronal data, which are sparse, but without addressing the underlying computational issues we identify, they are unlikely to generalize. For instance, Suri and Schultz [4] propose that reward delivery overrides stimulus representations, canceling pending predictions and eliminating the spurious negative error in Figure 1c (top). But this would disrupt the behaviorally demonstrated ability of animals to learn that a stimulus predicts a series of rewards. Such static representational rules are insufficient since different tasks have different mnemonic requirements. In our account, unlike more ad-hoc theories, the problem of learning an appropriate representation for a task is well specified: it is the problem of modeling the task. Though we have not simulated model learning here (this is an important area for future work), it is possible using online HMM learning, and we have used this technique in a model of conditioning [15]. Another issue for the future is extending our theory to encompass action selection. DA models often assume an actor-critic framework [1] in which reward predictions are used to evaluate action selection policies. Partial observability complicates such an extension here, since policies must be defined over belief states (distributions over the hidden states S) to accommodate uncertainty; our use of S as a linear basis for value predictions is thus an oversimplification. Puzzlingly, the data we consider suggest that animals build internal models but also use sample-based TD methods to predict values. Given a full world model (which could in principle be solved directly for V ), it seems unclear why TD learning should be necessary. But since the world model must be learned incrementally online, it may be infeasible to continually re-solve it, and parts of the model may be poorly specified. 
In this case, TD learning in the inferred state space could maintain a reasonably current and observationally grounded value function. (Our particular formulation, which relies extensively on the model in the TD rule, may not be ideal from this perspective.) Suri [19] and Dayan [20] have also proposed TD theories of DA that incorporate world models to explain behavioral effects, though they do not address the theoretical issues or dopaminergic data considered here. While those accounts use the world model for directly anticipating future events, we have proposed another role for it in state inference. Also unlike our theory, the others cannot explain the experiments discussed in [15] because their internal models cannot represent simultaneous or backward contingencies. However, they treat the two major issues we have neglected: world model learning and action planning. The formal models in question have roughly equivalent explanatory power: a semi-Markov model can be simulated (to arbitrarily fine temporal discretization) by a Markov model that subdivides its states by dwell time. There is also an isomorphism between higher-order and partially observable Markov models. Thus it would be possible to devise a state representation for a Markov model that copes properly with temporal variability. But doing so by elaborating the tapped delay line architecture would amount to building a clockwork engine for the inference process we describe, without the benefit of useful abstractions such as distributions over intervals; a clearer approach would subdivide the states in our model. Though there exist isomorphisms between the formal models, there are algorithmic differences that may make our proposal experimentally distinguishable from others. The inhibitory responses in Figure 1f reflect the way semi-Markov models account for the costs of delays; they would not be seen in a Markov model with subdivided states.
Such inhibition is somewhat parameter-dependent, since if inference parameters assign high probability to unsignaled transitions the decrease in reward value with delay can be mitigated by increasing uncertainty about the hidden state. Nonetheless, should data not uphold our prediction of inhibitory responses to late rewards, they would suggest a different definition of a state’s value. One choice would be the subdivision of our semi-Markov states by dwell time discussed above, which in the experiment of Figure 1f would decrease TD error toward but not past zero for longer delays. In this case, later rewards are less surprising because the conditional probability of reward increases as time passes without reward. A related prediction suggested by our model is that DA responses not just to rewards but also to stimuli that signal reward might be modulated by their timing relative to expectation. Responses to reward-predicting stimuli disappear in overtrained animals, presumably because the stimuli come to be predicted by events in the previous trial [7]. In tapped delay line models, this is possible only for a constant ITI (since if expectancy is divided between a number of states, stimulus delivery in any one of them cannot be completely predicted away). But the response to a stimulus in the semi-Markov model can show behavior exactly analogous to the reward response in Figure 1f — positive or negative error depending on the time of delivery relative to expectation. So, even in an experiment involving a randomized ITI, the net stimulus response (averaged over the range of ITIs) could be attenuated. Such behavior occurred in our simulations; the modeled DA responses to the stimuli in Figures 1d and 1f are positive because they were taken after shorter-than-average ITIs. It is difficult to evaluate this observation against available data, since the experiment involving overtrained monkeys [7] contained minimal ITI variability. 
We have suggested that the TD error may be a vector signal, with different neurons signaling errors for different elements of a state distribution. This could be investigated experimentally by recording DA neurons as a situation of ambiguous reward expectancy (e.g. one reward or three) resolved into a situation of intermediate, determinate reward expectancy (e.g. two rewards). Neurons carrying an aggregate error should uniformly report no error, but with a vector signal, different neurons might report both positive and negative error. Acknowledgments This work was supported by National Science Foundation grants IIS-9978403 and DGE9987588. Aaron Courville was funded in part by a Canadian NSERC PGS B fellowship. We thank Sham Kakade and Peter Dayan for helpful discussions. References [1] JC Houk, JL Adams, and AG Barto. A model of how the basal ganglia generate and use neural signals that predict reinforcement. In JC Houk, JL Davis, and DG Beiser, editors, Models of Information Processing in the Basal Ganglia, pages 249–270. MIT Press, 1995. [2] PR Montague, P Dayan, and TJ Sejnowski. A framework for mesencephalic dopamine systems based on predictive Hebbian learning. J Neurosci, 16:1936–1947, 1996. [3] W Schultz, P Dayan, and PR Montague. A neural substrate of prediction and reward. Science, 275:1593–1599, 1997. [4] RE Suri and W Schultz. A neural network with dopamine-like reinforcement signal that learns a spatial delayed response task. Neurosci, 91:871–890, 1999. [5] ND Daw and DS Touretzky. Long-term reward prediction in TD models of the dopamine system. Neural Comp, 14:2567–2583, 2002. [6] RS Sutton. Learning to predict by the method of temporal differences. Machine Learning, 3:9–44, 1988. [7] W Schultz. Predictive reward signal of dopamine neurons. J Neurophys, 80:1–27, 1998. [8] RS Sutton and AG Barto. Time-derivative models of Pavlovian reinforcement. 
In M Gabriel and J Moore, editors, Learning and Computational Neuroscience: Foundations of Adaptive Networks, pages 497–537. MIT Press, 1990. [9] JR Hollerman and W Schultz. Dopamine neurons report an error in the temporal prediction of reward during learning. Nature Neurosci, 1:304–309, 1998. [10] DS Touretzky, ND Daw, and EJ Tira-Thompson. Combining configural and TD learning on a robot. In ICDL 2, pages 47–52. IEEE Computer Society, 2002. [11] CD Fiorillo and W Schultz. The reward responses of dopamine neurons persist when prediction of reward is probabilistic with respect to time or occurrence. In Soc. Neurosci. Abstracts, volume 27: 827.5, 2001. [12] LP Kaelbling, ML Littman, and AR Cassandra. Planning and acting in partially observable stochastic domains. Artif Intell, 101:99–134, 1998. [13] SJ Bradtke and MO Duff. Reinforcement learning methods for continuous-time Markov Decision Problems. In NIPS 7, pages 393–400. MIT Press, 1995. [14] L Chrisman. Reinforcement learning with perceptual aliasing: The perceptual distinctions approach. In AAAI 10, pages 183–188, 1992. [15] AC Courville and DS Touretzky. Modeling temporal structure in classical conditioning. In NIPS 14, pages 3–10. MIT Press, 2001. [16] S Kakade and P Dayan. Acquisition in autoshaping. In NIPS 12, pages 24–30. MIT Press, 2000. [17] Y Guedon and C Cocozza-Thivent. Explicit state occupancy modeling by hidden semi-Markov models: Application of Derin’s scheme. Comp Speech and Lang, 4:167–192, 1990. [18] CR Gallistel and J Gibbon. Time, rate and conditioning. Psych Rev, 107(2):289–344, 2000. [19] RE Suri. Anticipatory responses of dopamine neurons and cortical neurons reproduced by internal model. Exp Brain Research, 140:234–240, 2001. [20] P Dayan. Motivated reinforcement learning. In NIPS 14, pages 11–18. MIT Press, 2001.
Multiple Cause Vector Quantization David A. Ross and Richard S. Zemel Department of Computer Science University of Toronto {dross,zemel}@cs.toronto.edu Abstract We propose a model that can learn parts-based representations of high-dimensional data. Our key assumption is that the dimensions of the data can be separated into several disjoint subsets, or factors, which take on values independently of each other. We assume each factor has a small number of discrete states, and model it using a vector quantizer. The selected states of each factor represent the multiple causes of the input. Given a set of training examples, our model learns the association of data dimensions with factors, as well as the states of each VQ. Inference and learning are carried out efficiently via variational algorithms. We present applications of this model to problems in image decomposition, collaborative filtering, and text classification. 1 Introduction Many collections of data exhibit a common underlying structure: they consist of a number of parts or factors, each of which has a small number of discrete states. For example, in a collection of facial images, every image contains eyes, a nose, and a mouth (except under occlusion), each of which has a range of different appearances. A specific image can be described as a composite sketch: a selection of the appearance of each part, depending on the individual depicted. In this paper, we describe a stochastic generative model for data of this type. This model is well-suited to decomposing images into parts (it can be thought of as a Mr. Potato Head model), but also applies to domains such as text and collaborative filtering in which the parts correspond to latent features, each having several alternative instantiations.
This representational scheme is powerful due to its combinatorial nature: while a standard clustering/VQ method containing N states can represent at most N items, if we divide the N states into N/j VQs of j states each, we can represent $j^{N/j}$ items. MCVQ is also especially appropriate for high-dimensional data in which many values may be unspecified for a given input case. 2 Generative Model In MCVQ we assume there are K factors, each of which is modeled by a vector quantizer with J states. To generate an observed data example of D dimensions, $x \in \Re^D$, we stochastically select one state for each VQ, and one VQ for each dimension. Given these selections, a single state from a single VQ determines the value of each data dimension $x_d$. Figure 1: Graphical model representation of MCVQ (nodes $s_k$, $r_d$, $x_d$, with priors $b_k$, $a_d$, parameters $\mu_{kj}$, $\sigma_{kj}$, and plates over K, D, and J). We let $r_{d=1}$ represent all the variables $r_{d=1,k}$, which together select a VQ for $x_1$. Similarly, $s_{k=1}$ represents all $s_{k=1,j}$, which together select a state of VQ 1. The plates depict repetitions across the appropriate dimensions for each of the three variables: the K VQs, the J states (codebook vectors) per VQ, and the D input dimensions. The selections are represented as binary latent variables, $S = \{s_{kj}\}$, $R = \{r_{dk}\}$, for $d = 1 \ldots D$, $k = 1 \ldots K$, and $j = 1 \ldots J$. The variable $s_{kj} = 1$ if and only if state j has been selected from VQ k. Similarly $r_{dk} = 1$ when VQ k has been selected for data dimension d. These variables can be described equivalently as multinomials, $s_k \in 1 \ldots J$, $r_d \in 1 \ldots K$; their values are drawn according to their respective priors. The graphical model representation of MCVQ is given in Fig. 1.
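A quick arithmetic check of this combinatorial claim (N = 30 and j = 5 are arbitrary example values, not from the paper):

```python
# One VQ with N states can represent at most N items; splitting the same
# N states into N/j VQs of j states each yields j**(N/j) combinations.
N, j = 30, 5
single_vq_capacity = N
mcvq_capacity = j ** (N // j)
print(single_vq_capacity, mcvq_capacity)  # 30 15625
```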
Assuming each VQ state specifies the mean as well as the standard deviation of a Gaussian distribution, and the noise in the data dimensions is conditionally independent, we have (where $\theta = \{\mu_{dkj}, \sigma_{dkj}\}$): $P(x \mid R, S, \theta) = \prod_d \prod_{k,j} N(x_d \,;\, \mu_{dkj}, \sigma_{dkj})^{r_{dk} s_{kj}}$ The resulting model can be thought of as a two-dimensional mixture model, in which $J * K$ possible states exist for each data dimension ($x_d$). The selections of states for the different data dimensions are joined along the J dimension and occur independently along the K dimension. 3 Learning and Inference The joint distribution over the observed vector x and the latent variables is $P(x, R, S \mid \theta) = P(R \mid \theta) P(S \mid \theta) P(x \mid R, S, \theta) = \prod_{d,k} a_{dk}^{r_{dk}} \prod_{k,j} b_{kj}^{s_{kj}} \prod_{d,k,j} N(x_d \,;\, \theta)^{r_{dk} s_{kj}}$ Given an input x, the posterior distribution over the latent variables, $P(R, S \mid x, \theta)$, cannot tractably be computed, since all the latent variables become dependent. We apply a variational EM algorithm to learn the parameters θ, and infer hidden variables given observations. We approximate the posterior distribution using a factored distribution, where g and m are variational parameters related to r and s respectively: $Q(R, S \mid x, \theta) = \left[\prod_{d,k} g_{dk}^{r_{dk}}\right]\left[\prod_{k,j} m_{kj}^{s_{kj}}\right]$ The variational free energy, $F(Q, \theta) = E_Q\left[-\log P(x, R, S \mid \theta) + \log Q(R, S \mid x, \theta)\right]$, is: $F = E_Q\left[\sum_{d,k} r_{dk} \log(g_{dk}/a_{dk}) + \sum_{k,j} s_{kj} \log(m_{kj}/b_{kj}) - \sum_{d,k,j} r_{dk} s_{kj} \log N(x_d \,;\, \theta)\right] = \sum_{k,j} m_{kj} \log m_{kj} + \sum_{d,k} g_{dk} \log g_{dk} + \sum_{d,k,j} g_{dk} m_{kj} \epsilon_{dkj}$ where $\epsilon_{dkj} = \log \sigma_{dkj} + \frac{(x_d - \mu_{dkj})^2}{2\sigma_{dkj}^2}$, and we have assumed uniform priors for the selection variables. The negative of the free energy, $-F$, is a lower bound on the log likelihood of generating the observations. The variational EM algorithm improves this bound by iteratively improving $-F$ with respect to Q (E-step) and to θ (M-step). Let C be the set of training cases, and $Q^c$ be the approximation to the posterior distribution over latent variables given the training case (observation) $c \in C$.
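For concreteness, ancestral sampling from this generative model can be sketched as follows; all sizes and parameter values below are toy choices of our own, not learned values:

```python
import numpy as np

rng = np.random.default_rng(1)
D, K, J = 6, 2, 3  # data dimensions, VQs, states per VQ (toy sizes)

# Toy parameters: uniform selection priors, random Gaussian means,
# fixed standard deviations.
a = np.full((D, K), 1.0 / K)    # prior over which VQ explains dimension d
b = np.full((K, J), 1.0 / J)    # prior over the state of VQ k
mu = rng.normal(size=(D, K, J))
sigma = np.full((D, K, J), 0.1)

# Ancestral sampling: one state per VQ, one VQ per dimension, then a
# Gaussian draw for each dimension from its selected state.
s = np.array([rng.choice(J, p=b[k]) for k in range(K)])
r = np.array([rng.choice(K, p=a[d]) for d in range(D)])
x = np.array([rng.normal(mu[d, r[d], s[r[d]]], sigma[d, r[d], s[r[d]]])
              for d in range(D)])
print(x.shape)  # (6,)
```

Note how the two selections play different roles: s picks one appearance per part, while r partitions the dimensions among the parts.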
We further constrain this variational approach, forcing the $\{g^c_{dk}\}$ to be consistent across all observations $x^c$. Hence these parameters relating to the gating variables that govern the selection of a factor for a given observation dimension, are not dependent on the observation. This approach encourages the model to learn representations that conform to this constraint. That is, if there are several posterior distributions consistent with an observed data vector, it favours distributions over $\{r_d\}$ that are consistent with those of other observed data vectors. Under this formulation, only the $\{m^c_{kj}\}$ parameters are updated during the E step for each observation c: $m^c_{kj} = \exp\left(-\sum_d g_{dk} \epsilon^c_{dkj}\right) \Big/ \sum_{\alpha=1}^{J} \exp\left(-\sum_d g_{dk} \epsilon^c_{dk\alpha}\right)$ The M step updates the parameters, µ and σ, from each hidden state kj to each input dimension d, and the gating variables $\{g_{dk}\}$: $g_{dk} = \exp\left(-\frac{1}{C}\sum_{c,j} m^c_{kj} \epsilon^c_{dkj}\right) \Big/ \sum_{\beta=1}^{K} \exp\left(-\frac{1}{C}\sum_{c,j} m^c_{\beta j} \epsilon^c_{d\beta j}\right)$ $\mu_{dkj} = \sum_c m^c_{kj} x^c_d \Big/ \sum_c m^c_{kj}$ $\sigma^2_{dkj} = \sum_c m^c_{kj}(x^c_d - \mu_{dkj})^2 \Big/ \sum_c m^c_{kj}$ A slightly different model formulation restricts the selections of VQs, $\{r_{dk}\}$, to be the same for each training case. Variational EM updates for this model are identical to those above, except that the 1/C terms in the updates for $g_{dk}$ disappear. In practice, we obtain good results by replacing this 1/C term with an inverse temperature parameter, that is annealed during learning. This can be thought of as gradually moving from a generative model in which the $r_{dk}$'s can vary across examples, to one in which they are the same for each example. The inferred values of the variational parameters specify a posterior distribution over the VQ states, which in turn implies a mixture of Gaussians for each input dimension. Below we use the mean of this mixture, $\hat{x}^c_d = \sum_{k,j} m^c_{kj} g_{dk} \mu_{dkj}$, to measure the model's reconstruction error on case c.
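The E-step and M-step updates above can be sketched in vectorized form. The array shapes and toy data below are our own choices, and for brevity the sketch updates only m, g, and µ (the σ update has the same structure as the µ update):

```python
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def e_step(eps, g):
    # m[c,k,j] proportional to exp(-sum_d g[d,k] * eps[c,d,k,j]),
    # normalized over the states j of each VQ k.
    return softmax(-np.einsum('dk,cdkj->ckj', g, eps), axis=2)

def m_step_g(eps, m, C):
    # g[d,k] proportional to exp(-(1/C) sum_{c,j} m[c,k,j] * eps[c,d,k,j]),
    # normalized over VQs k for each dimension d.
    return softmax(-np.einsum('ckj,cdkj->dk', m, eps) / C, axis=1)

def m_step_mu(m, x):
    # mu[d,k,j] = sum_c m[c,k,j] x[c,d] / sum_c m[c,k,j]
    return np.einsum('ckj,cd->dkj', m, x) / m.sum(axis=0)

# Toy problem: C cases, D dimensions, K VQs, J states per VQ.
C, D, K, J = 10, 4, 2, 3
rng = np.random.default_rng(0)
x = rng.normal(size=(C, D))
mu = rng.normal(size=(D, K, J))
sig = np.ones((D, K, J))
# eps[c,d,k,j] = log sigma + (x - mu)^2 / (2 sigma^2)
eps = np.log(sig) + (x[:, :, None, None] - mu) ** 2 / (2 * sig ** 2)

g = np.full((D, K), 1.0 / K)
m = e_step(eps, g)                 # E step
g = m_step_g(eps, m, C)            # M step: gating variables
mu = m_step_mu(m, x)               # M step: means
print(m.shape, g.shape, mu.shape)  # (10, 2, 3) (4, 2) (4, 2, 3)
```

One such E/M pass per epoch, with eps recomputed from the new parameters, iterates the variational EM described in the text.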
4 Related models MCVQ falls into the expanding class of unsupervised algorithms known as factorial methods, in which the aim of the learning algorithm is to discover multiple independent causes, or factors, that can well characterize the observed data. Its direct ancestor is Cooperative Vector Quantization [1, 2, 3], which models each data vector as a linear combination of VQ selections. Another part-seeking algorithm, non-negative matrix factorization (NMF) [4], utilizes a non-negative linear combination of non-negative basis functions. MCVQ entails another round of competition, from amongst the VQ selections rather than the linear combination of CVQ and NMF, which leads to a division of input dimensions into separate causes. The contrast between these approaches mirrors the development of the competitive mixture-of-experts algorithm which grew out of the inability of a cooperative, linear combination of experts to decompose inputs into separable experts. MCVQ also resembles a wide range of generative models developed to address image segmentation [5, 6, 7]. These are generally complex, hierarchical models designed to focus on a different aspect of this problem than that of MCVQ: to dynamically decide which pixels belong to which objects. The chief obstacle faced by these models is the unknown pose (primarily limited to position) of an object in an image, and they employ learned object models to find the single object that best explains each pixel. MCVQ adopts a more constrained solution w.r.t. part locations, assuming that these are consistent across images, and instead focuses on the assembling of input dimensions into parts, and the variety of instantiations of each part. The constraints built into MCVQ limit its generality, but also lead to rapid learning and inference, and enable it to scale up to high-dimensional data. 
Finally, MCVQ also closely relates to sparse matrix decomposition techniques, such as the aspect model [8], a latent variable model which associates an unobserved class variable, the aspect z, with each observation. Observations consist of co-occurrence statistics, such as counts of how often a specific word occurs in a document. The latent Dirichlet allocation model [9] can be seen as a proper generative version of the aspect model: each document/input vector is not represented as a set of labels for a particular vector in the training set, and there is a natural way to examine the probability of some unseen vector. MCVQ shares the ability of these models to associate multiple aspects with a given document, yet it achieves this by sampling from multiple aspects in parallel, rather than repeated sampling of an aspect within a document. It also imposes the additional selection of an aspect for each input dimension, which leads to a soft decomposition of these dimensions based on their choice of aspect. Below we present some initial experiments examining whether MCVQ can match the successful application of the aspect model to information retrieval and collaborative filtering problems, after evaluating it on image data. 5 Experimental Results 5.1 Parts-based Image Decomposition: Shapes and Faces The first dataset used to test our model consisted of 11 × 11 gray-scale images, as pictured in Fig. 2a. Each image in the set contains three shapes: a box, a triangle, and a cross. The horizontal position of each shape is fixed, but the vertical position is allowed to vary, uniformly and independently of the positions of the other shapes. A model containing 3 VQs, 5 states each, was trained on a set of 100 shape images. In this experiment, and all experiments reported herein, annealing proceeded linearly from an integer less than C to 1. The learned representation, pictured in Fig. 2b, clearly shows the specialization of each VQ to one of the shapes. 
The training set was selected so that none of the examples depict cases in which all three shapes are located near the top of the image. Despite this handicap, MCVQ is able to learn the full range of shape positions, and can accurately reconstruct such an image (Fig. 2c). In contrast, standard unsupervised methods such as Vector Quantization (Fig. 3a) and Principal Component Analysis (Fig. 3b) produce holistic representations of the data, in which each basis vector tries to account for variation observed across the entire image. Non-negative matrix factorization does produce a parts-based representation (Fig. 3c), but captures less of the data's structure. Unlike MCVQ, NMF does not group related parts, and its generative model does not limit the combination of parts to produce only valid images. As an empirical comparison, we tested the reconstruction error of each of the aforementioned methods on an independent test set of 629 images. Since each method has one or more free parameters (e.g. the number of principal components), we chose to relate models with similar description lengths.¹ Using a description length of about 5.9 × 10^5 bits, and pixel values ranging from -1 to 1, the average r.m.s. reconstruction error was 0.21 for MCVQ (3 VQs), 0.22 for PCA, 0.35 for NMF, and 0.49 for VQ. Note that this metric may be useful in determining the number of VQs, e.g., MCVQ with 6 VQs had an error of 0.6.

¹We define description length to be the number of bits required to represent the model, plus the number of bits to encode all the test examples using the model. This metric balances the large model cost and small encoding cost of VQ/MCVQ with the small model cost and large encoding cost of PCA/NMF.

Figure 2: a) A sample of 24 training images from the Shapes dataset. b) A typical representation learned by MCVQ with 3 VQs and 5 states per VQ. c) Reconstruction of a test image: original (left) and reconstruction (right).

Figure 3: Other methods trained on shape images: a) VQ, b) PCA, and c) NMF. d) Reconstruction of a test image by the three methods (cf. Fig. 2c).

As a more interesting visual application, we trained our model on a database of face images (www.ai.mit.edu/cbcl/projects). The dataset consists of 19 × 19 gray-scale images, each containing a single frontal or near-frontal face. A model of 6 VQs with 12 states each was trained on 2000 images, requiring 15 iterations of EM to converge. As with the shape images, the model learned a parts-based representation of the faces. The reconstruction of two test images, along with the specific parts used to generate each, is illustrated in Fig. 4. It is interesting to note that the pixels comprising a single part need not be physically adjacent (e.g. the eyes) as long as their appearances are correlated. We again compared the reconstruction error of MCVQ with VQ, PCA, and NMF. The training and testing sets contained 1800 and 629 images respectively. Using a description length of 1.5 × 10^6 bits, and pixel values ranging from -1 to 1, the average r.m.s. reconstruction error was 0.12 for PCA, 0.20 for NMF, 0.23 for MCVQ (both 3 and 6 VQs), and 0.28 for VQ.

Figure 4: The reconstruction of two test images from the Faces dataset. Beside each reconstruction are the parts (the most active state in each of six VQs) used to generate it. Each part j ∈ k is represented by its gated prediction (g_dk ∗ m_kj) for each image pixel.

5.2 Collaborative Filtering

The application of MCVQ to image data assumes that the images are normalized, i.e., that the head is in a similar pose in each image. Normalization can be difficult to achieve in some image contexts; however, in many other types of applications the input representation is more stable. For example, many information retrieval applications employ bag-of-words representations, in which a given word always occupies the same input element.
We test MCVQ on a collaborative filtering task using the EachMovie dataset, in which the input vectors are users' ratings of movies, and a given element always corresponds to the same movie. The original dataset contains ratings, on a scale from 1 to 6, of 1649 movies by 74,424 users. To reduce the sparseness of the dataset, since many users rated only a few movies, we included only users who rated at least 75 movies and movies rated by at least 126 users, leaving a total of 1003 movies and 5831 users. The remaining dataset was still very sparse: the most prolific user rated 928 movies, and the most-rated movie was rated by 5401 users. We split the data randomly into a training set of 4831 users and a test set of 1000 users. We ran MCVQ with 8 VQs and 6 states per VQ on this dataset. An example of the results, after 18 iterations of EM, is shown in Fig. 5. Note that in the MCVQ graphical model (Fig. 1), all the observation dimensions are leaves, so an input variable whose value is not specified in a particular observation vector plays no role in inference or learning. This makes inference and learning with sparse data rapid and efficient. We compare the performance of MCVQ on this dataset to the aspect model. We implemented a version of the aspect model with 50 aspects and truncated Gaussians for ratings, and used "tempered EM" (with smoothing) to fit the parameters [10]. For both models, we train on the 4831 users in the training set, and then, for each test user, we let the model observe some fixed number of ratings and hold out the rest. We evaluate the models by measuring the absolute difference between their predictions for a held-out rating and the user's true rating, averaged over all held-out ratings for all test users (Fig. 6).
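Because all observation dimensions are leaves in the graphical model, missing entries simply drop out of the sum over d in the E step. A hedged sketch of this sparse inference, and of the posterior-mean prediction x̂_d = Σ_{k,j} m_kj g_dk μ_dkj used for held-out ratings (function and variable names are ours, not the authors'):

```python
import numpy as np

def infer_m(x, mask, g, mu, var):
    """Variational E step for a single user with missing entries: unobserved
    dimensions drop out of the sum over d. Unobserved entries of x may hold
    any finite placeholder value (they are masked out).
    x: (D,), mask: (D,) in {0, 1}, g: (D, K), mu, var: (D, K, J)."""
    # per-dimension Gaussian cost (up to an additive constant)
    eps = 0.5 * (x[:, None, None] - mu) ** 2 / var + 0.5 * np.log(var)
    s = -np.einsum('dk,dkj->kj', g * mask[:, None], eps)
    s -= s.max(axis=1, keepdims=True)
    m = np.exp(s)
    return m / m.sum(axis=1, keepdims=True)

def predict(m, g, mu):
    """Posterior-mean prediction for every dimension (observed or not)."""
    return np.einsum('kj,dk,dkj->d', m, g, mu)
```

Since the prediction is a convex combination of the state means for each dimension, every predicted rating lies within the range of the learned state predictions for that movie.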
Figure 5: The MCVQ representation of two test users in the EachMovie dataset. The 3 most conspicuously high-rated (bold) and low-rated movies by the most active states of 4 of the 8 VQs are shown, where conspicuousness is the deviation from the mean rating for a given movie. Each state's predictions, μ_dkj, can be compared to the test user's true ratings (in parentheses); the model's prediction is a convex combination of state predictions. Note the intuitive decomposition of movies into separate VQs, and that different states within a VQ may predict very different rating patterns for the same movies.

Figure 6: The average absolute deviation of predicted and true values of held-out ratings is compared for MCVQ and the aspect model.
Note that the number of users per x-bin decreases with increasing x, as a user must rate at least x+1 movies to be included.

5.3 Text Classification

MCVQ can also be used for information retrieval from text documents, by employing the bag-of-words representation. We present preliminary results on the NIPS corpus (available at www.cs.toronto.edu/~roweis/data.html), which consists of the full text of the NIPS conference proceedings, volumes 1 to 12. The data was pre-processed to remove common words (e.g. the) and words appearing in fewer than five documents, resulting in a vocabulary of 14,265 words. For each of the 1740 papers in the corpus, we generated a vector containing the number of occurrences of each word in the vocabulary. These vectors were normalized so that each contained the same number of words. A model of 8 VQs, 8 states each, was trained on the data, converging after 15 iterations of EM. A sample of the results is shown in Fig. 7. When trained on text data, the values of {g_dk} provide a segmentation of the vocabulary into subsets of words with correlated frequencies. Within a particular subset, the words can be positively correlated, indicating that they tend to appear in the same documents, or negatively correlated, indicating that they seldom appear together.

Figure 7: The representation of two documents ("Predictive Sequence Learning in Recurrent Neocortical Circuits" by R. P. N. Rao & T. J. Sejnowski, and "The Relevance Vector Machine" by Michael E. Tipping) by an MCVQ model with 8 VQs and 8 states per VQ. For each document we show the states selected for it from 4 VQs. The bold (plain) words for each state are those most conspicuous by their above (below) average predicted frequency.

6 Conclusion

We have presented a novel method for learning factored representations of data, which can be efficiently learned and employed across a wide variety of problem domains. MCVQ combines the cooperative nature of methods such as CVQ, NMF, and LSA, which use multiple causes to generate input, with the competitive aspects of clustering methods. In addition, it gains combinatorial power by splitting the input into subsets, and can readily handle sparse, high-dimensional data. One direction of further research involves extending the applications described above, including applying MCVQ to other dimensions of the NIPS corpus, such as authors, to find groupings of authors based on word-use frequency. An important theoretical direction is to incorporate Bayesian learning for selecting the number and size of each VQ.

References

[1] R.S. Zemel. A Minimum Description Length Framework for Unsupervised Learning. PhD thesis, Dept. of Computer Science, University of Toronto, Toronto, Canada, 1993.
[2] G. Hinton and R.S. Zemel. Autoencoders, minimum description length, and Helmholtz free energy. In J.D. Cowan, G. Tesauro, and J. Alspector, editors, Advances in Neural Information Processing Systems 6. Morgan Kaufmann Publishers, San Mateo, CA, 1994.
[3] Z. Ghahramani. Factorial learning and the EM algorithm. In G. Tesauro, D.S. Touretzky, and T.K.
Leen, editors, Advances in Neural Information Processing Systems 7. MIT Press, Cambridge, MA, 1995. [4] D.D. Lee and H.S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401:788–791, October 1999. [5] C. Williams and N. Adams. DTs: Dynamic trees. In M.J. Kearns, S.A. Solla, and D.A. Cohn, editors, Advances in Neural Information Processing Systems 11. MIT Press, Cambridge, MA, 1999. [6] G.E. Hinton, Z. Ghahramani, and Y.W. Teh. Learning to parse images. In S.A. Solla, T.K. Leen, and K.R. Muller, editors, Advances in Neural Information Processing Systems 12. MIT Press, Cambridge, MA, 2000. [7] N. Jojic and B.J. Frey. Learning flexible sprites in video layers. In CVPR, 2001. [8] T. Hofmann. Probabilistic latent semantic analysis. In Proc. of Uncertainty in Artificial Intelligence, UAI’99, Stockholm, 1999. [9] D.M. Blei, A.Y. Ng, and M.I. Jordan. Latent Dirichlet allocation. In T.K. Leen, T. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13. MIT Press, Cambridge, MA, 2001. [10] T. Hofmann. Learning what people (don’t) want. In European Conference on Machine Learning, 2001.
Unsupervised Color Constancy Kinh Tieu Artificial Intelligence Laboratory Massachusetts Institute of Technology Cambridge, MA 02139 tieu@ai.mit.edu Erik G. Miller Computer Science Division UC Berkeley Berkeley, CA 94720 egmil@cs.berkeley.edu Abstract In [1] we introduced a linear statistical model of joint color changes in images due to variation in lighting and certain non-geometric camera parameters. We did this by measuring the mappings of colors in one image of a scene to colors in another image of the same scene under different lighting conditions. Here we increase the flexibility of this color flow model by allowing flow coefficients to vary according to a low order polynomial over the image. This allows us to better fit smoothly varying lighting conditions as well as curved surfaces without endowing our model with too much capacity. We show results on image matching and shadow removal and detection. 1 Introduction The number of possible images of an object or scene, even when taken from a single viewpoint with a fixed camera, is very large. Light sources, shadows, camera aperture, exposure time, transducer non-linearities, and camera processing (such as auto-gain-control and color balancing) can all affect the final image of a scene. These effects have a significant impact on the images obtained with cameras and hence on image processing algorithms, often hampering or eliminating our ability to produce reliable recognition algorithms. Addressing the variability of images due to these photic parameters has been an important problem in machine vision. We distinguish photic parameters from geometric parameters, such as camera orientation or blurring, that affect which parts of the scene a particular pixel represents. We also note that photic parameters are more general than “lighting parameters” and include anything which affects the final RGB values in an image given that the geometric parameters and the objects in the scene have been fixed. 
We present a statistical linear model of color change space that is learned by observing how the colors in static images change jointly under common, naturally occurring lighting changes. Such a model can be used for a number of tasks, including synthesis of images of new objects under different lighting conditions, image matching, and shadow detection. Results for each of these tasks will be reported. Several aspects of our model merit discussion. First, it is obtained from video data in a completely unsupervised fashion. The model uses no prior knowledge of lighting conditions, surface reflectances, or other parameters during data collection and modeling. It also has no built-in knowledge of the physics of image acquisition or “typical” image color changes, such as brightness changes. Second, it is a single global model and does not need to be re-estimated for new objects or scenes. While it may not apply to all scenes equally well, it is a model of frequently occurring joint color changes, which is meant to apply to all scenes. Third, while our model is linear in color change space, each joint color change that we model (a 3-D vector field) is completely arbitrary, and is not itself restricted to being linear. This gives us great modeling power, while capacity is controlled through the number of basis fields allowed. After discussing previous work in Section 2, we introduce the color flow model and how it is obtained from observations in Section 3. In Section 4, we show how the model and a single observed image can be used to generate a large family of related images. We also give an efficient procedure for finding the best fit of the model to the difference between two images. In Section 5 we give preliminary results for image matching (object recognition) and shadow detection. 2 Previous work The color constancy literature contains a large body of work on estimating surface reflectances and various photic parameters from images. 
A common approach is to use linear models of reflectance and illuminant spectra [2]. Gray-world algorithms [3] assume the average reflectance of all the surfaces in a scene is gray. White-world algorithms [4] assume the brightest pixel corresponds to a scene point with maximal reflectance. Brainard and Freeman attacked this problem probabilistically [5] by defining prior distributions on particular illuminants and surfaces; they used a new maximum local mass estimator to choose a single best estimate of the illuminant and surface. Another technique is to estimate the relative illuminant, or the mapping of colors under an unknown illuminant to a canonical one. Color gamut mapping [6] uses the convex hull of all achievable RGB values to represent an illuminant; the intersection of the mappings for each pixel in an image is used to choose a "best" mapping. [7] trained a back-propagation multi-layer neural network to estimate the parameters of a linear color mapping. The approach in [8] works in the log color spectra space, where the effect of a relative illuminant is a set of constant shifts in the scalar coefficients of linear models for the image colors and illuminant; the shifts are computed as differences between the modes of the distributions of coefficients of randomly selected pixels over some set of representative colors. [9] bypasses the need to predict specific scene properties by proving that the set of images of a gray Lambertian convex object under all lighting conditions forms a convex cone.¹ We wanted a model which, based upon a single image (instead of the three required by [9]), could make useful predictions about other images of the same scene. This work is in the same spirit, although we use a statistical method rather than a geometric one.

3 Color flows

In the following, let C = {(r, g, b)^T ∈ R³ : 0 ≤ r ≤ 255, 0 ≤ g ≤ 255, 0 ≤ b ≤ 255} be the set of all possible observable image color 3-vectors, and let the vector-valued color of an image pixel p be denoted by c(p) ∈ C.
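For reference, the gray-world baseline mentioned above can be sketched in a few lines. This is one common variant of the algorithm (scaling each channel so its mean matches the global mean intensity), not necessarily the exact formulation of [3]:

```python
import numpy as np

def gray_world(img):
    """Gray-world correction: scale each RGB channel so its mean equals the
    image's overall mean intensity, on the assumption that the average
    scene reflectance is gray. img: (..., 3) array on a 0-255 scale."""
    img = img.astype(float)
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    gain = means.mean() / means               # gains that equalize the means
    return np.clip(img * gain, 0, 255)
```

After correction, each channel of the output has the same mean (up to clipping), which is the sense in which the algorithm "linearly matches the mean R, G, B values" as described in Section 5.1.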
Suppose we are given two P-pixel RGB color images I1 and I2 of the same scene taken under two different photic parameters θ1 and θ2 (the images are registered). Each pair of corresponding image pixels p^k_1 and p^k_2, 1 ≤ k ≤ P, in the two images represents a single color mapping c(p^k_1) ↦ c(p^k_2) that is conveniently represented by the vector difference

$$d(p^k_1, p^k_2) = c(p^k_2) - c(p^k_1). \qquad (1)$$

By computing P vector differences (one for each pair of pixels) and placing each at the point c(p^k_1) in color space C, we have a partially observed color flow:

$$\Phi'(c(p^k_1)) = d(p^k_1, p^k_2), \quad 1 \le k \le P, \qquad (2)$$

defined at the points in C for which there are colors in image I1. To obtain a full color flow (i.e. a vector field Φ defined at all points in C) from a partially observed color flow Φ′, we must address two issues. First, there will be many points in C at which no vector difference is defined. Second, there may be multiple pixels of a particular color in image I1 that are mapped to different colors in image I2. We use a radial basis function estimator which defines the flow at a color point (r, g, b)^T as the weighted proximity-based average of nearby observed "flow vectors". We found empirically that σ² = 16 (with colors on a 0-255 scale) worked well.

¹This result depends upon the important assumption that the camera, including the transducers, the aperture, and the lens, introduces no non-linearities into the system. The authors' results on color images also do not address the issue of metamers, and assume that light is composed of only the wavelengths red, green, and blue.

Figure 1: Matching non-linear color changes. b is the result of squaring the value of a (in HSV) and re-normalizing it to 255. c-f are attempts to match b with a using four different algorithms. Our algorithm (f) was the only one to capture the non-linearity.
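A minimal sketch of Eqs. (1)-(2) and of the radial basis function estimator, using the stated σ² = 16. The paper does not spell out the kernel or its normalization, so a Gaussian kernel with proximity-normalized weights is assumed here, and all names are illustrative:

```python
import numpy as np

def observed_flow(img1, img2):
    """Eqs. (1)-(2): per-pixel color difference vectors between two
    registered images, anchored at the colors of img1 in color space.
    Returns (anchors, vectors), each of shape (P, 3)."""
    a = img1.reshape(-1, 3).astype(float)
    return a, (img2.astype(float) - img1.astype(float)).reshape(-1, 3)

def full_flow(anchors, vectors, grid, sigma2=16.0):
    """Radial-basis estimate of the flow at each color grid point: the
    proximity-weighted average of observed flow vectors. grid: (G, 3)."""
    d2 = ((grid[:, None, :] - anchors[None, :, :]) ** 2).sum(axis=-1)  # (G, P)
    w = np.exp(-d2 / (2.0 * sigma2))
    w /= np.maximum(w.sum(axis=1, keepdims=True), 1e-300)  # guard empty rows
    return w @ vectors                                     # (G, 3)
```

The weighted average also resolves the second issue above: multiple pixels of the same color that map to different colors simply contribute several flow vectors at the same anchor, and the estimate averages them.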
Note that color flows are defined so that a color point with only a single nearby neighbor will inherit a flow vector that is nearly parallel to its neighbor's. The idea is that if a particular color, under a photic parameter change θ1 ↦ θ2, is observed to get a little bit darker and a little bit bluer, for example, then its neighbors in color space are also defined to exhibit this behavior.

3.1 Structure in the space of color flows

Consider a flat Lambertian surface that may have different reflectances as a function of wavelength. While in principle it is possible for a change in lighting to map any color from such a surface to any other color independently of all other colors², we know from experience that many such joint maps are unlikely. This suggests that while the marginal distribution of mappings for a particular color is broadly distributed, the space of possible joint color maps (i.e., color flows) is much more compact³. In learning a statistical model of color flows, many common color flows can be anticipated, such as ones that make colors a little darker, lighter, or more red. These types of flows can be well modeled with a simple global 3×3 matrix A that maps a color c1 in image I1 to a color c2 in image I2 via

$$c_2 = A c_1. \qquad (3)$$

However, there are many effects which linear maps cannot model. Perhaps the most significant is the combination of a large brightness change coupled with a non-linear gain-control adjustment or brightness re-normalization by the camera. Such photic changes will tend to leave the bright and dim parts of the image alone, while spreading the central colors of color space toward the margins. For a linear imaging process, the ratio of the brightnesses of two images, or quotient image [12], should vary smoothly except at surface normal boundaries. However, as shown in Figure 2, the quotient image is a function not only of surface normal, but also of albedo: direct evidence of a non-linear imaging process. Another pair of images exhibiting a non-linear color flow is shown in Figures 1a and b. Notice that the brighter areas of the original image get brighter and the darker portions get darker.

²By carefully choosing properties such as the surface reflectance of a point as a function of wavelength and lighting, any mapping Φ̃ can, in principle, be observed even on a flat Lambertian surface. However, the metamerism which would cause such effects is uncommon in practice [10, 11].

³We will address below the significant issue of non-flat surfaces and shadows, which can cause highly "incoherent" maps.

Figure 2: Evidence of non-linear color changes. The first two images are of the top and side of a box covered with multi-colored paper. The quotient image is shown next. The rightmost image is an ideal quotient image, corresponding to a linear lighting model.

Figure 3: Effects of the first three eigenflows. See text.

3.2 Color eigenflows

We wanted to capture the structure in color flow space by observing real-world data in an unsupervised fashion. A one square meter color palette was printed on standard non-glossy plotter paper using every color that could be produced by a Hewlett-Packard DesignJet 650C. The poster was mounted on a wall in our office so that it was in the direct line of overhead lights and computer monitors, but not the single office window. An inexpensive video camera (the PC-75WR, Supercircuits, Inc.) with auto-gain-control was aimed at the poster so that the poster occupied about 95% of the field of view. Images of the poster were captured using the video camera under a wide variety of lighting conditions, including various intervals during sunrise, sunset, and midday, and with various combinations of office lights and outdoor lighting (controlled by adjusting blinds). People used the office during the acquisition process as well, thus affecting the ambient lighting conditions.
It is important to note that a variety of non-linear normalization mechanisms built into the camera were operating during this process. We chose image pairs I^j = (I^j_1, I^j_2), 1 ≤ j ≤ 800, by randomly and independently selecting individual images from the set of raw images. Each image pair was then used to estimate a full color flow Φ(I^j). We used 4096 distinct RGB colors (equally spaced in RGB space), so Φ(I^j) was represented by a vector of 3 × 4096 = 12288 components. We modeled the space of color flows using principal components analysis (PCA) because: 1) the flows are well represented (in an L2 sense) by a small number of principal components, and 2) finding the optimal description of a difference image in terms of color flows is computationally efficient in this representation (see Section 4). We call the principal components of the color flow data "color eigenflows", or just eigenflows⁴ for short. We emphasize that these principal components of color flows have nothing to do with the distribution of colors in images; they model only the distribution of changes in color. This is a key and potentially confusing point. Our work is very different from approaches that compute principal components in the intensity or color space itself [14, 15]. Perhaps the most important difference is that our model is a global model for all images, while the above methods are models only for a particular set of images, such as faces.

⁴PCA has been applied to motion vector fields [13], and these have also been termed "eigenflows".

Figure 4: Image matching. Top row: original images. Bottom row: best approximation to the original images using eigenflows and the source image a. Reconstruction errors per pixel component for four methods are shown in b.

4 Using color flows to synthesize novel images

How do we generate a new image from a source image and a color flow Φ?
For each pixel p in the new image, its color c′(p) can be computed as

$$c'(p) = c(p) + \alpha\,\Phi(\hat{c}(p)), \qquad (4)$$

where c(p) is the color in the source image and α is a scalar multiplier that represents the "quantity of flow". ĉ(p) is interpreted to be the color vector closest to c(p) (in color space) at which Φ has been computed. RGB values are clipped to 0-255. Figure 3 shows the effect of the first three eigenflows on an image of a face. The original image is in the middle of each row, while the other images show the application of each eigenflow with α values between ±4 standard deviations. The first eigenflow (top row) represents a generic brightness change that could probably be represented well with a linear model. Notice, however, the third row: moving right from the middle image, the contrast grows. The shadowed side of the face grows darker while the lighted part of the face grows lighter. This effect cannot be achieved with a simple matrix multiplication as given in Equation 3. It is precisely these types of non-linear flows we wish to model. We stress that the eigenflows were computed only once (on the color palette data), and that they were applied to the face image without any knowledge of the parameters under which the face image was taken.

4.1 Flowing one image to another

Suppose we have two images and we pose the question of whether they are images of the same object or scene. We suggest that if we can "flow" one image to another, then the images are likely to be of the same scene. Let us treat an image I as a function that takes a color flow and returns a difference image D by placing at each (x, y) pixel in D the color change vector Φ(c(p_{x,y})). The difference image basis for I and a set of eigenflows Ψ_i, 1 ≤ i ≤ E, is D_i = I(Ψ_i). The set of images S that can be formed using a source image and a set of eigenflows is

$$S = \Big\{S : S = I + \sum_{i=1}^{E} \gamma_i D_i\Big\},$$

where the γ_i's are scalars, and here I is just an image, and not a function.
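Equation 4 can be sketched as follows, with a nearest-neighbour lookup for ĉ(p) over the grid of colors at which the flow was computed (the discretization and all names are ours):

```python
import numpy as np

def apply_flow(img, grid, flow, alpha):
    """Eq. (4): c'(p) = c(p) + alpha * Phi(c_hat(p)), where c_hat(p) is the
    color grid point closest to c(p) at which the flow was computed.
    img: (..., 3); grid, flow: (G, 3); results clipped to 0-255."""
    pix = img.reshape(-1, 3).astype(float)
    d2 = ((pix[:, None, :] - grid[None, :, :]) ** 2).sum(axis=-1)
    nearest = d2.argmin(axis=1)            # index of c_hat(p) for each pixel
    out = pix + alpha * flow[nearest]
    return np.clip(out, 0, 255).reshape(img.shape)
```

Sweeping α between negative and positive values reproduces the rows of Figure 3: the same eigenflow applied with different quantities of flow.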
In our experiments, we used E = 30 of the top eigenvectors. We can only flow image I1 to another image I2 if it is possible to represent the difference image as a linear combination of the D_i's, i.e. if I2 ∈ S. We find the optimal (in the least-squares sense) γ_i's by solving the system

$$D = \sum_{i=1}^{E} \gamma_i D_i \qquad (5)$$

using the pseudo-inverse, where D = I2 − I1. The error residual represents a match score for I1 and I2. We point out again that this analysis ignores clipping effects. While clipping can only reduce the error between a synthetic image and a target image, it may change which solution is optimal in some cases.

Figure 5: Modeling lighting changes with color flows. a. Image with strong shadow. b. Same image under more uniform lighting conditions. c. Flow from a to b using eigenflows. d. Flow from a to b using the linear method. Evaluating the capacity of the color flow model: e. Mirror image of b. f. Failure to flow b to e implies that the model is not overparameterized.

5 Experiments

5.1 Image matching

One use of the color change model is image matching. An ideal system would flow matching images with zero error, and have large errors for non-matching images. We first examined our ability to flow a source image to a matching target image under different photic parameters. We compared our system to three other commonly used methods: linear, diagonal, and gray world. The linear method finds the matrix A in Equation 3 that minimizes the L2 error between the synthetic and target images; diagonal does the same with a diagonal A; gray world linearly matches the mean R, G, B values of the synthetic and target images. While our goal was to reduce the numerical difference between two images using flows, it is instructive to examine one example that was particularly visually compelling, shown in Figure 1. In a second experiment (Figure 4), we matched images of a face taken under various camera parameters but with constant lighting.
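A sketch of the least-squares fit of Eq. 5 via the pseudo-inverse (here through `np.linalg.lstsq`); as in the text, clipping effects are ignored, and the r.m.s. residual serves as the match score. Names are ours:

```python
import numpy as np

def match_score(I1, I2, diff_basis):
    """Eq. (5): solve D = sum_i gamma_i D_i in the least-squares sense and
    return (gammas, rms residual). diff_basis holds the E difference images
    D_i = I(Psi_i), one per eigenflow; lower score means a better match."""
    D = (I2.astype(float) - I1.astype(float)).reshape(-1)
    A = diff_basis.reshape(len(diff_basis), -1).T   # columns are the D_i's
    gammas, *_ = np.linalg.lstsq(A, D, rcond=None)  # pseudo-inverse solution
    resid = D - A @ gammas
    return gammas, np.sqrt(np.mean(resid ** 2))
```

When the difference image lies exactly in the span of the basis, the residual is zero; for images of different scenes the residual stays large, which is what makes it usable as a match score.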
The color-flow method outperforms the other methods in all but one task, on which it was second.

5.2 Local flows

In another test, the source and target images were taken under very different lighting conditions. Furthermore, shadowing effects and lighting direction changed between the two images. None of the methods could handle these effects when applied globally. Thus we repeatedly applied each method on small patches of the image. Our method again performed the best, with an RMS error of 13.8 per pixel component, compared with errors of 17.3, 20.1, and 20.6 for the other methods. Figure 5 shows obvious visual artifacts with the linear method, while our method produced a much better synthetic image, especially in the shadow region at the edge of the poster.

Figure 6: Backgrounding with color flows. a. A background image. b. A new object and shadow have appeared. c. For each of the two regions (from background subtraction), a “flow” was done between the original image and the new image based on the pixels in each region. d. The color flow of the original image using the eigenflow coefficients recovered from the shadow region. The color flow using the coefficients from the non-shadow region is unable to give a reasonable reconstruction of the new image.

Synthesis on patches of images greatly increases the capacity of the model. We performed one experiment to measure the over-fitting of our method versus the others by trying to flow an original image to its reflection (Figure 5). The RMS error per pixel component was 33.2 for our method versus 41.5, 47.3, and 48.7 for the other methods. Note that while our method had the lowest error (which is undesirable here, since a non-matching pair should not be fit well), there was still a significant spread between matching images and non-matching images. We believe we can improve differentiation between matching and non-matching image pairs by assigning a cost to the change in γ_i across each image patch.
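The patch-wise (“local flows”) variant can be sketched by running the same least-squares fit independently on each block; the patch size here is an illustrative choice, not taken from the paper:

```python
import numpy as np

def local_flow_error(I1, I2, basis, patch=8):
    """Patch-wise flow matching: fit D = sum_i gamma_i D_i independently on
    each patch x patch block, and return the overall RMS residual per pixel
    component. `basis` is a list of eigenflow difference images for I1."""
    I1 = np.asarray(I1, dtype=np.float64)
    I2 = np.asarray(I2, dtype=np.float64)
    H, W = I1.shape[:2]
    sq_err, count = 0.0, 0
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            D = (I2 - I1)[y:y + patch, x:x + patch].ravel()
            # Columns are the difference images restricted to this patch.
            A = np.stack([np.asarray(Di, dtype=np.float64)[y:y + patch, x:x + patch].ravel()
                          for Di in basis], axis=1)
            gamma, *_ = np.linalg.lstsq(A, D, rcond=None)
            r = D - A @ gamma
            sq_err += r @ r
            count += r.size
    return np.sqrt(sq_err / count)
```

Because each patch gets its own coefficients, this fits local lighting changes that a single global flow cannot, at the cost of the extra capacity discussed in the text.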
For non-matching images, we would expect the γ_i's to vary rapidly to accommodate the changing image. For matching images, sharp changes would only be necessary at shadow boundaries or changes in the surface orientation relative to directional light sources.

5.3 Shadows

Shadows confuse tracking algorithms [16], backgrounding schemes and object recognition algorithms. For example, shadows can have a dramatic effect on the magnitude of difference images, despite the fact that no “new objects” have entered a scene. Shadows can also move across an image and appear as moving objects. Many of these problems could be eliminated if we could recognize that a particular region of an image is equivalent to a previously seen version of the scene, but under different lighting. Figure 6a shows how color flows may be used to distinguish between a new object and a shadow by flowing both regions. A constant color flow across an entire region may not model the image change well. However, we can extend our basic model to allow linearly or quadratically (or other low-order polynomially) varying fields of eigenflow coefficients. That is, we can find the best least-squares fit of the difference image allowing our γ estimates to vary linearly or quadratically over the image. We implemented this technique by computing flows γ_{x,y} between corresponding image patches (indexed by x and y), and then minimizing the following form:

arg min_M Σ_{x,y} (γ_{x,y} − M c_{x,y})^T Σ_{x,y}^{-1} (γ_{x,y} − M c_{x,y}).   (6)

Here, each c_{x,y} is a vector polynomial of the form [x y 1]^T for the linear case and [x² xy y² x y 1]^T for the quadratic case. M is an E × 3 matrix in the linear case and an E × 6 matrix in the quadratic case. The Σ_{x,y}^{-1}'s are the inverses of the error covariances in the estimates of the γ_{x,y}'s for each patch.
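Eq. 6 is a weighted least-squares problem in M with a closed-form solution via the vec trick: in column-major order, vec(W M c cᵀ) = (c cᵀ ⊗ W) vec(M). A sketch (the per-patch covariances and polynomial vectors are supplied by the caller):

```python
import numpy as np

def fit_flow_field(gammas, cs, sigmas):
    """Weighted least-squares fit of M minimizing Eq. 6:
        sum_p (gamma_p - M c_p)^T Sigma_p^{-1} (gamma_p - M c_p).

    gammas : (P, E) per-patch flow coefficients
    cs     : (P, k) per-patch polynomial vectors, e.g. [x, y, 1]
    sigmas : (P, E, E) per-patch error covariances
    Returns M of shape (E, k).
    """
    P, E = gammas.shape
    k = cs.shape[1]
    A = np.zeros((E * k, E * k))
    b = np.zeros(E * k)
    for g, c, S in zip(gammas, cs, sigmas):
        W = np.linalg.inv(S)
        # Normal equations in vec(M), column-major: vec(W M c c^T) = (c c^T kron W) vec(M)
        A += np.kron(np.outer(c, c), W)
        b += (W @ np.outer(g, c)).ravel(order="F")
    return np.linalg.solve(A, b).reshape(E, k, order="F")
```

With identity covariances this reduces to ordinary least squares over all patches; non-trivial Σ's downweight patches whose γ estimates are unreliable.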
Allowing the γ's to vary over the image greatly increases the capacity of a matcher, but by limiting this variation to linear or quadratic variation, the capacity is still not able to qualitatively match “non-matching” images. Note that this smooth variation in eigenflow coefficients can model either a nearby light source or a smoothly curving surface, since either of these conditions will result in a smoothly varying lighting change.

Table 1: Error residuals for shadow and non-shadow regions after color flows.

              constant   linear   quadratic
shadow          36.5      12.5      12.0
non-shadow     110.6      64.8      59.8

We consider three versions of the experiment: 1) a single vector of flow coefficients, 2) linearly varying γ's, 3) quadratically varying γ's. In each case, the residual error for the shadow region is much lower than for the non-shadow region (Table 1).

5.4 Conclusions

Except for the synthesis experiments, most of the experiments in this paper are preliminary and only a proof of concept. Much larger experiments need to be performed to establish the utility of the color change model for particular applications. However, since the color change model represents a compact description of lighting changes, including nonlinearities, we are optimistic about these applications.

References

[1] E. Miller and K. Tieu. Color eigenflows: Statistical modeling of joint color changes. In IEEE ICCV, volume 1, pages 607–614, 2001.
[2] D. H. Marimont and B. A. Wandell. Linear models of surface and illuminant spectra. J. Opt. Soc. Amer., 11, 1992.
[3] G. Buchsbaum. A spatial processor model for object color perception. J. Franklin Inst., 310, 1980.
[4] J. J. McCann, J. A. Hall, and E. H. Land. Color mondrian experiments: The study of average spectral distributions. J. Opt. Soc. Amer., A(67), 1977.
[5] D. H. Brainard and W. T. Freeman. Bayesian color constancy. J. Opt. Soc. Amer., 14(7):1393–1411, 1997.
[6] D. A. Forsyth. A novel algorithm for color constancy. IJCV, 5(1), 1990.
[7] V. C. Cardei, B. V.
Funt, and K. Barnard. Modeling color constancy with neural networks. In Proc. Int. Conf. Vis., Recog., and Action: Neural Models of Mind and Machine, 1997.
[8] R. Lenz and P. Meer. Illumination independent color image representation using log-eigenspectra. Technical Report LiTH-ISY-R-1947, Linköping University, April 1997.
[9] P. N. Belhumeur and D. Kriegman. What is the set of images of an object under all possible illumination conditions? IJCV, 28(3):1–16, 1998.
[10] W. S. Stiles, G. Wyszecki, and N. Ohta. Counting metameric object-color stimuli using frequency limited spectral reflectance functions. J. Opt. Soc. Amer., 67(6), 1977.
[11] L. T. Maloney. Evaluation of linear models of surface spectral reflectance with small numbers of parameters. J. Opt. Soc. Amer., A1, 1986.
[12] A. Shashua and T. Riklin-Raviv. The quotient image: Class-based re-rendering and recognition with varying illuminations. IEEE PAMI, 23(2):129–139, 2001.
[13] J. J. Lien. Automatic Recognition of Facial Expressions Using Hidden Markov Models and Estimation of Expression Intensity. PhD thesis, Carnegie Mellon University, 1998.
[14] M. Turk and A. Pentland. Eigenfaces for recognition. J. Cog. Neuro., 3(1):71–86, 1991.
[15] M. Soriano, E. Marszalec, and M. Pietikainen. Color correction of face images under different illuminants by RGB eigenfaces. In Proc. 2nd Int. Conf. on Audio- and Video-Based Biometric Person Authentication, pages 148–153, 1999.
[16] K. Toyama, J. Krumm, B. Brumitt, and B. Meyers. Wallflower: Principles and practice of background maintenance. In IEEE CVPR, pages 255–261, 1999.
2002
25
2,227
Value-Directed Compression of POMDPs Pascal Poupart Department of Computer Science University of Toronto Toronto, ON, M5S 3H5 ppoupart@cs.toronto.edu Craig Boutilier Department of Computer Science University of Toronto Toronto, ON, M5S 3H5 cebly@cs.toronto.edu

Abstract

We examine the problem of generating state-space compressions of POMDPs in a way that minimally impacts decision quality. We analyze the impact of compressions on decision quality, observing that compressions that allow accurate policy evaluation (prediction of expected future reward) will not affect decision quality. We derive a set of sufficient conditions that ensure accurate prediction in this respect, illustrate interesting mathematical properties these confer on lossless linear compressions, and use these to derive an iterative procedure for finding good linear lossy compressions. We also elaborate on how structured representations of a POMDP can be used to find such compressions.

1 Introduction

Partially observable Markov decision processes (POMDPs) provide a rich framework for modeling a wide range of sequential decision problems in the presence of uncertainty. Unfortunately, the application of POMDPs to real world problems remains limited due to the intractability of current solution algorithms, in large part because of the exponential growth of state spaces with the number of relevant variables. Ideally, we would like to mitigate this source of intractability by compressing the state space as much as possible without compromising decision quality. Our aim in solving a POMDP is to maximize future reward based on our current beliefs about the world. By compressing its belief state, an agent may lose relevant information, which results in suboptimal policy choice. Thus an important aspect of belief state compression lies in distinguishing relevant information from that which can be safely discarded. A number of schemes have been proposed for either directly or indirectly compressing POMDPs.
For example, approaches using bounded memory [8, 10] and state aggregation, either dynamic [2] or static [5, 9], can be viewed in this light. In this paper, we study the effect of static state-space compression on decision quality. We first characterize lossless compressions, those that do not lead to any error in expected value, by deriving a set of conditions that guarantee decision quality will not be impaired. We also characterize the specific case of linear compressions. This analysis leads to algorithms that find good compression schemes, including methods that exploit structure in the POMDP dynamics (as exhibited, e.g., in graphical models). We then extend these concepts to lossy compressions. We derive a (somewhat loose) upper bound on the loss in decision quality when the conditions for lossless compression (of some required dimensionality) are not met. Finally we propose a simple optimization program to find linear lossy compressions that minimizes this bound, and describe how structured POMDP models can be used to implement this scheme efficiently.

2 Background and Notation

2.1 POMDPs

A POMDP is defined by: a set S of states s; a set A of actions a; a set Z of observations z; a transition function T, where T^a(s, s') denotes the transition probability Pr(s' | s, a); an observation function Z, where Z^a(s', z) denotes the probability Pr(z | s', a) of making observation z in state s'; and a reward function R, where R(s) denotes the immediate reward associated with state s.¹ We assume discrete state, action and observation sets and we focus on discounted, infinite horizon POMDPs with discount factor 0 ≤ γ < 1. Policies and value functions for POMDPs are typically defined over belief space, where a belief state b is a distribution over S capturing an agent's knowledge about the current state of the world. Belief state b can be updated in response to a specific action-observation pair (a, z) using Bayes rule: b'(s') = k Σ_s b(s) T^a(s, s') Z^a(s', z) (k is a normalization constant).
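The Bayes-rule belief update can be sketched directly from these definitions; the matrix layout below (beliefs as row vectors, T^a as an S × S matrix, Z^a as an S × |Z| matrix) follows the conventions in the text:

```python
import numpy as np

def belief_update(b, T_a, Z_a, z):
    """Bayes-rule belief update for action a and observation z:
        b'(s') = k * sum_s b(s) T^a(s, s') Z^a(s', z).

    b   : (S,) belief state (row vector over states)
    T_a : (S, S) transition matrix for action a, T_a[s, s'] = Pr(s'|s, a)
    Z_a : (S, O) observation probabilities, Z_a[s', z] = Pr(z|s', a)
    """
    unnorm = (b @ T_a) * Z_a[:, z]   # b T^{a,z}, the unnormalized mapping
    return unnorm / unnorm.sum()     # k is the normalization constant
```

The intermediate quantity `(b @ T_a) * Z_a[:, z]` is exactly the unnormalized mapping b T^{a,z} used throughout the paper.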
We denote the (unnormalized) mapping by T^{a,z}, where, in matrix form, we have T^{a,z}_{s,s'} = T^a(s, s') Z^a(s', z). Note that a belief state b and reward function R can be viewed respectively as |S|-dimensional row and column vectors. We define R(b) = b·R. Solving a POMDP consists of finding an optimal policy π mapping belief states to actions. The value V^π of a policy π is the expected sum of discounted rewards and is defined as:

V^π(b) = R(b) + γ Σ_z V^π(b T^{π(b),z})   (1)

A number of techniques [11] based on value iteration or policy iteration can be used to compute optimal or approximately optimal policies for POMDPs.

2.2 Conditional Independence and Additive Separability

When our state space is defined by a set of variables, POMDPs can often be represented concisely in a factored way by specifying the transition, observation and reward functions using a dynamic Bayesian network (DBN). Such representations exploit the fact that the transitions associated with each variable depend only on a small subset of variables. These representations can often be exploited to solve POMDPs without state space enumeration [2]. Recently, Pfeffer [13] showed that conditional independence combined with some form of additive separability can enable efficient inference in many DBNs. Roughly, a function can be additively separated when it decomposes into a sum of smaller terms. For instance, Pr(X | R, S) is separable if there exist conditional distributions Pr_1(X | R) and Pr_2(X | S), and k ∈ [0, 1], such that Pr(X | R, S) = k Pr_1(X | R) + (1 − k) Pr_2(X | S). This ensures that one need only know the marginals of R and S (instead of their joint distribution) to infer X. Pfeffer shows how additive separability in the CPTs of a DBN can be exploited to identify families of self-sufficient variables. A self-sufficient family consists of a set of subsets of variables such that the marginals of each subset are sufficient to predict the marginals of the same subsets at the next time step.
Hence, if we require the probabilities of a few variables, and can identify a self-sufficient family containing those variables, then we need only compute marginals over this family when monitoring the belief state.

¹The ideas presented in this paper generalize to cases when Z and R also depend on actions.

Figure 1: a) Functional flow of a POMDP (dotted arrows) and a compressed POMDP (solid arrows) where the next belief state is accurately predicted. b) Functional flow of a POMDP (dotted arrows) and a compressed POMDP (solid arrows) where the next compressed belief state is accurately predicted.

2.3 Invariant and Krylov Subspaces

We briefly review several linear algebraic concepts used later (see [15] for more details). Let V be a vector subspace. We say V is invariant with respect to a matrix A if it is closed under multiplication by A (i.e., Av ∈ V for all v ∈ V). The Krylov subspace K(A, v) is the smallest subspace that contains v and is invariant with respect to A. A basis for a Krylov subspace can easily be generated by repeatedly multiplying by A (i.e., v, Av, A²v, ...). If K(A, v) is k-dimensional, one can show that A^{k−1}v is the last linearly independent vector in this sequence and that all subsequent vectors are linear combinations of v, Av, ..., A^{k−1}v. In a DBN, families of self-sufficient variables naturally correspond to invariant subspaces. For instance, suppose f is a linear function that depends only on a self-sufficient family of variables. If we regress f through the dynamics of the DBN, i.e., if we multiply f by the transition matrix T^{a,z}, the resulting function will also be defined over the truth values of that family. Hence, when a family of variables is self-sufficient, the subspace of linear functions defined over the truth values of that family is invariant w.r.t. T^{a,z}.
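Generating a Krylov basis by repeated multiplication can be sketched as follows; a Gram-Schmidt step detects when the new vector becomes linearly dependent on those already collected, which is the stopping point described above:

```python
import numpy as np

def krylov_basis(A, v, tol=1e-10):
    """Orthonormal basis for the Krylov subspace K(A, v) = span{v, Av, A^2 v, ...}.

    Keep multiplying by A; stop when the new vector is (numerically) a linear
    combination of the basis vectors collected so far."""
    basis = [v / np.linalg.norm(v)]
    while True:
        w = A @ basis[-1]
        # Gram-Schmidt: remove the components already in the span.
        for u in basis:
            w = w - (u @ w) * u
        if np.linalg.norm(w) < tol:
            return np.column_stack(basis)   # columns form the basis
        basis.append(w / np.linalg.norm(w))
```

For example, with A = diag(1, 1, 2) and v = (1, 1, 1), the sequence v, Av, A²v, ... spans only a 2-dimensional subspace, so the loop stops after two basis vectors.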
3 Lossless Compressions

If a compression of the state space of a POMDP allows us to accurately evaluate all policies, we say the compression is lossless, since we have sufficient information to select the optimal policy. We provide one characterization of lossless compressions. We then specialize this to the linear case, and discuss the use of compact POMDP representations. Let f be a compression function that maps each belief state b into some lower-dimensional compressed belief state ~b (see Figure 1(a)). Here ~b can be viewed as a bottleneck (e.g., in the sense of the information bottleneck [17]) that filters the information contained in b before it is used to estimate future rewards. We desire a compression f such that ~b corresponds to the smallest statistic sufficient for accurately predicting the current reward R as well as the next belief state b' (since we can accurately predict all following rewards from b'). Such a compression f exists if we can also find mappings g^{a,z} and ~R such that:

R = ~R ∘ f  and  T^{a,z} = g^{a,z} ∘ f,  ∀a ∈ A, ∀z ∈ Z   (2)

Since we are only interested in predicting future rewards, we don't really need to accurately estimate the next belief state b'; we could just predict the next compressed belief state ~b', since it captures all the information in b' relevant for estimating future rewards. Figure 1(b) illustrates the resulting functional flow, where ~T^{a,z} represents the transition function that directly maps one compressed belief state to the next compressed belief state. Eq. 2 can then be replaced by the following weaker but still sufficient conditions:

R = ~R ∘ f  and  f ∘ T^{a,z} = ~T^{a,z} ∘ f,  ∀a ∈ A, ∀z ∈ Z   (3)

Given an f, ~R and ~T^{a,z} satisfying Eq. 3, we can evaluate a policy π using the compressed POMDP dynamics as follows:

~V^π(~b) = ~R(~b) + γ Σ_z ~V^π(~T^{π(~b),z}(~b))   (4)

Once ~V^π is found, we can recover the original value function V^π = ~V^π ∘ f. Indeed, Eq. 1 and Eq.
4 are equivalent:

Theorem 1 Let f, ~R and ~T^{a,z} satisfy Eq. 3 and let V^π = ~V^π ∘ f. Then Eq. 1 holds iff Eq. 4 does.

Proof:
V^π(b) = R(b) + γ Σ_z V^π(b T^{π(b),z})
⟺ ~V^π(f(b)) = ~R(f(b)) + γ Σ_z ~V^π(f(b T^{π(b),z}))
⟺ ~V^π(f(b)) = ~R(f(b)) + γ Σ_z ~V^π(~T^{π(b),z}(f(b)))
⟺ ~V^π(~b) = ~R(~b) + γ Σ_z ~V^π(~T^{π(b),z}(~b))

3.1 Linear compressions

We say f is a linear compression when f is a linear function, representable by some matrix F. In this case, the approximate transition and reward functions ~T^{a,z} and ~R must also be linear (assuming Eq. 3 is satisfied). Eq. 3 can be rewritten in matrix notation:

R = F ~R  and  T^{a,z} F = F ~T^{a,z},  ∀a ∈ A, ∀z ∈ Z   (5)

In a linear compression, F can be viewed as effecting a change of basis for the value function, with the columns of F defining a subspace in which the compressed value function lies. Furthermore, the rank of F indicates the dimensionality of the compressed state space. When Eq. 5 is satisfied, the columns of F span a subspace that contains R and that is invariant with respect to each T^{a,z}. Intuitively, Eq. 5 says that a sufficient statistic must be able to “predict itself” at the next time step (hence the subspace is invariant), and that it must predict the current reward (hence the subspace contains R). Formally:

Theorem 2 Let ~T^{a,z}, ~R and F satisfy Eq. 5. Then the range of F contains R and is invariant with respect to each T^{a,z}.

Proof: Eq. 5 ensures R is a linear combination of the columns of F, so it lies in the range of F. It also requires that the columns of each T^{a,z} F are linear combinations of the columns of F, so the range of F is invariant with respect to each T^{a,z}.

Thus, the best linear lossless compression corresponds to the smallest invariant subspace that contains R. This is by definition the Krylov subspace K({T^{a,z}}, R). Using this fact we can easily compute the best lossless linear compression by iteratively multiplying R by each T^{a,z} until the Krylov basis is obtained.
We then let the Krylov basis form the columns of F, and compute ~R and each ~T^{a,z} by solving each part of Eq. 5. Finally, we can solve the POMDP in the compressed state space by using ~R and ~T^{a,z}. Note that this technique can be viewed as a generalization of Givan et al.'s MDP model minimization technique [3]. It is interesting to note that Littman et al. [9] proposed a similar iterative algorithm to compress POMDPs based on predicting future observations.²

²Assuming that rewards are functions of the observations.

3.2 Structured Linear Compressions

When a POMDP is specified compactly, say, using a DBN, the size of the state space may be exponentially larger than the specification. The practical need to avoid state enumeration is a key motivation for POMDP compression. However, the complexity of the search for a good compression must also be independent of the state space size. Unfortunately, the iterative Krylov algorithm involves repeatedly multiplying explicit transition matrices and basis vectors. We consider several ways in which a compact POMDP specification can be exploited to construct a linear compression without state enumeration. One solution lies in exploiting DBN structure and context-specific independence. If transition, observation and reward functions are represented using DBNs and structured CPTs (e.g., decision trees or algebraic decision diagrams), then the matrix operations required by the Krylov algorithm can be implemented effectively [1, 7]. Although this approach can offer substantial savings, the DTs or ADDs that represent the basis vectors of the Krylov subspace may still be much larger than the dimensionality of the compressed state space and the original DBN specifications. Alternatively, families of self-sufficient variables corresponding to invariant subspaces can be identified by exploiting additive separability.
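For intuition on small, explicitly enumerated models (the paper's structured methods exist precisely to avoid this enumeration), the Krylov construction and the solve for Eq. 5 can be sketched as:

```python
import numpy as np

def lossless_compression(R, Ts, tol=1e-10):
    """Best lossless linear compression (Sec. 3.1): grow an orthonormal basis F
    for the smallest subspace containing R and invariant under every T^{a,z},
    then solve Eq. 5 for the compressed model:
        R = F R_tilde,   T^{a,z} F = F T_tilde^{a,z}.

    R  : (S,) reward vector
    Ts : list of (S, S) matrices T^{a,z}, one per action-observation pair
    """
    basis = [R / np.linalg.norm(R)]
    frontier = [basis[0]]
    while frontier:                       # multiply by each T^{a,z} until closure
        v = frontier.pop()
        for T in Ts:
            w = T @ v
            for u in basis:               # Gram-Schmidt: project out current span
                w = w - (u @ w) * u
            if np.linalg.norm(w) > tol:
                w = w / np.linalg.norm(w)
                basis.append(w)
                frontier.append(w)
    F = np.column_stack(basis)
    # Solve each part of Eq. 5 in the least-squares sense (exact if lossless).
    R_t = np.linalg.lstsq(F, R, rcond=None)[0]
    Ts_t = [np.linalg.lstsq(F, T @ F, rcond=None)[0] for T in Ts]
    return F, R_t, Ts_t
```

When the dynamics have structure, the returned F has far fewer columns than there are states, and the compressed model (R_t, Ts_t) reproduces the original exactly.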
Starting with the variables upon which R depends, we can recursively grow a family of variables until it is self-sufficient with respect to each T^{a,z}. The corresponding subspace is invariant and necessarily contains R. Assuming a tractable self-sufficient family is found, a compact basis can then be constructed by using all indicator functions for each subset of variables in this family (e.g., if one such subset consists of three binary variables, then eight basis vectors will correspond to this set). This approach allows us to quickly identify a good compression by a simple inspection of the additive separability structure of the DBN. The resulting compression is not necessarily optimal; however, it is the best among those corresponding to some such family. It is important to note that the dynamics ~T^{a,z} and reward ~R of the compressed POMDP can be constructed easily (i.e., without state enumeration) from this F and the original DBN model. Pfeffer [13] notes that observations tend to reduce the amount of additive separability present in a DBN, thereby increasing the size of self-sufficient families. Therefore, we should point out that lossless compressions of POMDPs that exploit self-sufficiency and offer an acceptable degree of compression may not exist. Hence lossy compressions are likely to be required in many cases. Finally, we ask whether the existence of lossless compressions requires some form of structure in the POMDP. We argue that this is almost always the case. Suppose a transition matrix T^{a,z} and a reward vector R are chosen uniformly at random. The odds that R falls into a proper invariant subspace of T^{a,z} are essentially zero, since there are infinitely more vectors in the full space than in all the proper invariant subspaces put together. This means that if a POMDP can be compressed, it must almost certainly be because its dynamics exhibit some structure.
We have described how context-specific independence and additive separability can be exploited to identify some linear lossless compressions. However, they do not guarantee that the optimal compression will be found, so it remains an open question whether other types of structure could be used in similar ways.

4 Lossy compressions

Since we cannot generally find effective lossless compressions, we also consider lossy compressions. We propose a simple approach to find linear lossy compressions that “almost satisfy” Eq. 5. Table 1 outlines a simple optimization program to find lossy compressions that minimize a weighted sum of the max-norm residual errors, ε_R and ε_T, in Eq. 5. Here α and β are weights that allow us to vary the degree to which the two components of Eq. 5

min α ε_R + β ε_T
s.t.  ‖R − F ~R‖_∞ ≤ ε_R   (6)
      ‖T^{a,z} F − F ~T^{a,z}‖_∞ ≤ ε_T,  ∀a ∈ A, ∀z ∈ Z   (7)
      ‖F‖_∞ = 1

Table 1: Optimization program for linear lossy compressions

should be satisfied. The unknowns of the program are all the entries of ~R, ~T^{a,z} and F, as well as ε_R and ε_T. The constraint ‖F‖_∞ = 1 is necessary to preserve scale; otherwise ε_T could be driven down to 0 simply by setting all the entries of F to 0. Since ~T^{a,z} and ~R multiply F, some constraints are nonlinear. However, it is possible to solve this optimization program by solving a series of LPs (linear programs). We alternate solving the LP that adjusts ~R and ~T^{a,z} while keeping F fixed, and solving the LP that adjusts F while keeping ~R and ~T^{a,z} fixed. This guarantees that the objective function decreases at each iteration and will converge, but not necessarily to a local optimum.

4.1 Max-norm Error Bound

The quality of the compression resulting from this program depends on the weights α and β. Ideally, we would like to set α and β in a way that α ε_R + β ε_T represents the loss in decision quality associated with compressing the state space.
If we can bound the error δ of evaluating any policy using the compressed POMDP, then the difference in expected total return between the policy that is best w.r.t. the compressed POMDP and the true optimal policy is at most 2δ. Let δ be sup_π ‖V^π − ~V^π ∘ f‖_∞. Theorem 3 gives an upper bound on δ as a linear combination of the max-norm residual errors in Eq. 5.

Theorem 3 Let δ = sup_π ‖V^π − ~V^π ∘ f‖_∞, ε_R = ‖R − F ~R‖_∞, ε_T = max_{a,z} ‖T^{a,z} F − F ~T^{a,z}‖_∞, and ~V* = sup_π ‖~V^π‖_∞. Then δ ≤ (ε_R + γ |Z| ε_T ~V*) / (1 − γ).

We omit the proof due to lack of space; it essentially consists of a sequence of norm-inequality substitutions. We suspect that the above error bound will grossly overestimate the loss in decision quality; however, we intend to use it mostly as a guide for setting α and β. Here the coefficient γ |Z| ~V* / (1 − γ) of ε_T is typically much greater than the coefficient 1 / (1 − γ) of ε_R because of the factor ~V*, which means that ε_T has a much higher impact on the loss in decision quality than ε_R. Intuitively, this makes sense because the error ε_T in predicting the next compressed belief state may compound over time, so we should set β significantly higher than α.

4.2 Structured Compressions

As with lossless compressions, solving the program in Table 1 may be intractable due to the size of S. There are O(|S|²) constraints and |S| |~S| unknown entries in the matrix F.³ We describe several techniques that allow one to exploit problem structure to find an acceptable lossy compression without state space enumeration. One approach is related to the basis function model proposed in [4], in which we restrict the columns of F to be functions over some small set of factors (subsets of state variables). This ensures that the number of unknown parameters in any column of F (which we optimize in Table 1) is

³Assuming |~S| is small, the |~S|² variables in each ~T^{a,z} and the |~S| variables in ~R are unproblematic.

linear in the number of instantiations of each factor. By keeping factors small, we maintain a manageable set of unknowns.
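The alternating scheme of Table 1 can be sketched with ordinary least squares standing in for the paper's max-norm LPs (the L2 objective, weights and iteration count below are illustrative assumptions). Both residuals in Eq. 5 are linear in vec(F), so the F-step collapses to a single least-squares solve:

```python
import numpy as np

def lossy_compression(R, Ts, dim, alpha=1.0, beta=10.0, iters=25, seed=0):
    """Alternating-minimization sketch of the lossy program in Table 1.
    Minimizes alpha*||R - F R_t||^2 + beta*sum ||T F - F T_t||^2 (an L2
    stand-in for the max-norm objective); beta > alpha since transition
    error compounds over time. Returns F, R_t, Ts_t and objective history."""
    rng = np.random.default_rng(seed)
    S = R.shape[0]
    F = rng.standard_normal((S, dim))
    I_S, I_d = np.eye(S), np.eye(dim)
    history = []
    for _ in range(iters):
        # Step 1: F fixed -> least-squares fit of R_tilde and each T_tilde.
        R_t = np.linalg.lstsq(F, R, rcond=None)[0]
        Ts_t = [np.linalg.lstsq(F, T @ F, rcond=None)[0] for T in Ts]
        # Step 2: (R_tilde, T_tilde) fixed -> F. Using column-major vec:
        #   F @ R_t        = kron(R_t^T, I_S) vec(F)
        #   T F - F T_t    = (kron(I_d, T) - kron(T_t^T, I_S)) vec(F)
        A = [np.sqrt(alpha) * np.kron(R_t[None, :], I_S)]
        b = [np.sqrt(alpha) * R]
        for T, T_t in zip(Ts, Ts_t):
            A.append(np.sqrt(beta) * (np.kron(I_d, T) - np.kron(T_t.T, I_S)))
            b.append(np.zeros(S * dim))
        vecF = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
        F = vecF.reshape(S, dim, order="F")
        obj = alpha * np.sum((R - F @ R_t) ** 2) + beta * sum(
            np.sum((T @ F - F @ T_t) ** 2) for T, T_t in zip(Ts, Ts_t))
        history.append(obj)
    return F, R_t, Ts_t, history
```

Each half-step is an exact minimization of the (fixed-weight) objective over its own block of variables, so the objective is non-increasing across iterations, mirroring the convergence guarantee stated for the LP alternation.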
To deal with the O(|S|²) constraints, we can exploit the structure imposed on F and the DBN structure to reduce the number of constraints to something (in many cases) polynomial in the number of state variables. This can be achieved using the techniques described in [4, 16] to rewrite the LP with many fewer constraints or to generate small subsets of constraints incrementally. These techniques are rather involved, so we refer to the cited papers for details. By searching within a restricted set of structured compressions and by exploiting DBN structure, it is possible to efficiently solve the optimization program in Table 1. The question of factor selection remains: on what factors should the columns of F be defined? A version of this question has been tackled in [12, 14] in the context of selecting a basis to approximately solve MDPs. The techniques proposed in those papers could be adapted to our optimization program. An alternative method for structuring the computation of F involves additive separability. Let X_i (1 ≤ i ≤ n) be subsets of variables, and f_i(X_i, ~s) be a function over X_i and the compressed state space ~S. We restrict each column of F to be a separable function of the X_i; that is, column j (corresponding to compressed state ~s_j) is Σ_i w_i f_i(X_i, ~s_j) for some parameters w_i. Here the w_i can be viewed as weights indicating the importance of the contribution of each f_i in the separable function. Given a family of subsets, the parameters over which we optimize to determine F are now the w_i and the entries of each function f_i(X_i, ~s). While nonlinear, the same alternating minimization scheme described earlier can be used to optimize these two classes of parameters of F in turn. Note that the number of variables depends only on the size of the subsets X_i and the compressed state space ~S. Furthermore, this form of additive separability lends itself to the same compact constraint generation techniques mentioned above.
Finally, the (discrete) search for decent subsets X_i can be interleaved with optimization of the compression mapping for fixed sets X_i.

5 Preliminary Experiments

We report on preliminary experiments with the coffee problem described in [2]. Given its relatively small size (32 states, 3 observations and 2 actions), these results should be viewed as simply illustrating the feasibility and potential of the algorithms proposed in Secs. 3.1 and 4.1. Further experiments for the structured versions (Secs. 3.2 and 4.2) are necessary to assess the degree of compression achievable with large, realistic problems. The 32-dimensional belief space can be compressed without any loss to a 7-dimensional subspace using the Krylov subspace algorithm described in Section 3.1. For further compression, we applied the optimization program described in Table 1, setting the weight β substantially larger than α, as suggested by the error bound. The alternating variable technique was iterated a fixed number of times, with the best solution chosen from several random restarts (to mitigate the effects of local optima). Figure 2 shows the loss in expected return (w.r.t. the optimal policy) when the policy computed using varying degrees of compression is executed for a fixed number of stages. The loss is sampled from 100,000 random initial belief states, averaged over 10 runs. These policies manage to achieve expected returns with only a small loss, under a few percent (see Figure 2). In contrast, the average loss of a random policy is far larger.

6 Concluding Remarks

We have presented an in-depth theoretical analysis of the impact of static compressions on decision quality. We derived a set of conditions that guarantee compression does not impair decision quality, leading to interesting mathematical properties for linear compressions that allow us to exploit structure in the POMDP dynamics.
We also proposed a simple optimization program to search for good lossy compressions. Preliminary results suggest that significant compression can be achieved with little impact on decision quality.

Figure 2: Average loss for various lossy compressions (average loss, absolute and relative, vs. the dimensionality of the compressed space).

This research can be extended in various directions. It would be interesting to carry out a similar analysis in terms of information theory (instead of linear algebra), since the problem of identifying the information in a belief state relevant to predicting future rewards can be modeled naturally using information-theoretic concepts [6]. Dynamic compressions could also be analyzed since, as we solve a POMDP, the set of reasonable policies shrinks, allowing greater compression.

References

[1] C. Boutilier, R. Dearden, and M. Goldszmidt. Stochastic dynamic programming with factored representations. Artificial Intelligence, 121:49–107, 2000.
[2] C. Boutilier and D. Poole. Computing optimal policies for partially observable decision processes using compact representations. Proc. AAAI-96, pp. 1168–1175, Portland, OR, 1996.
[3] R. Givan, T. Dean, and M. Greig. Equivalence notions and model minimization in Markov decision processes. Artificial Intelligence, to appear, 2002.
[4] C. Guestrin, D. Koller, and R. Parr. Max-norm projections for factored MDPs. Proc. IJCAI-01, pp. 673–680, Seattle, WA, 2001.
[5] C. Guestrin, D. Koller, and R. Parr. Solving factored POMDPs with linear value functions. IJCAI-01 Workshop on Planning under Uncertainty and Incomplete Information, Seattle, WA, 2001.
[6] C. Guestrin and D. Ormoneit. Information-theoretic features for reinforcement learning. Unpublished manuscript.
[7] J. Hoey, R. St-Aubin, A. Hu, and C. Boutilier. SPUDD: Stochastic planning using decision diagrams. Proc. UAI-99, pp. 279–288, Stockholm, 1999.
[8] M. L. Littman.
Memoryless policies: theoretical limitations and practical results. In D. Cliff, P. Husbands, J. Meyer, and S. W. Wilson, eds., Proc. 3rd Intl. Conf. on Simulation of Adaptive Behavior, Cambridge, 1994. MIT Press.
[9] M. L. Littman, R. S. Sutton, and S. Singh. Predictive representations of state. Proc. NIPS-01, Vancouver, 2001.
[10] R. A. McCallum. Hidden state and reinforcement learning with instance-based state identification. IEEE Transactions on Systems, Man, and Cybernetics, 26(3):464–473, 1996.
[11] K. Murphy. A survey of POMDP solution techniques. Technical Report, U.C. Berkeley, 2000.
[12] R. Patrascu, P. Poupart, D. Schuurmans, C. Boutilier, and C. Guestrin. Greedy linear value-approximation for factored Markov decision processes. Proc. AAAI-02, pp. 285–291, Edmonton, 2002.
[13] A. Pfeffer. Sufficiency, separability and temporal probabilistic models. Proc. UAI-01, pp. 421–428, Seattle, WA, 2001.
[14] P. Poupart, C. Boutilier, R. Patrascu, and D. Schuurmans. Piecewise linear value function approximation for factored MDPs. Proc. AAAI-02, pp. 292–299, Edmonton, 2002.
[15] Y. Saad. Iterative Methods for Sparse Linear Systems. PWS, Boston, 1996.
[16] D. Schuurmans and R. Patrascu. Direct value-approximation for factored MDPs. Proc. NIPS-01, Vancouver, 2001.
[17] N. Tishby, F. C. Pereira, and W. Bialek. The information bottleneck method. 37th Annual Allerton Conf. on Communication, Control and Computing, pp. 368–377, 1999.
2002
26
2,228
Constraint Classification for Multiclass Classification and Ranking Sariel Har-Peled Dan Roth Dav Zimak Department of Computer Science University of Illinois Urbana, IL 61801 {sariel,danr,davzimak}@uiuc.edu Abstract The constraint classification framework captures many flavors of multiclass classification including winner-take-all multiclass classification, multilabel classification and ranking. We present a meta-algorithm for learning in this framework that learns via a single linear classifier in high dimension. We discuss distribution-independent as well as margin-based generalization bounds and present empirical and theoretical evidence showing that constraint classification benefits over existing methods of multiclass classification. 1 Introduction Multiclass classification is a central problem in machine learning, as applications that require a discrimination among several classes are ubiquitous. In machine learning, these include handwritten character recognition [LS97, LBD+89], part-of-speech tagging [Bri94, EZR01], speech recognition [Jel98] and text categorization [ADW94, DKR97]. While binary classification is well understood, relatively little is known about multiclass classification. Indeed, the most common approach to multiclass classification, the one-versus-all (OvA) approach, makes direct use of standard binary classifiers to encode and train the output labels. The OvA scheme assumes that for each class there exists a single (simple) separator between that class and all the other classes. Another common approach, all-versus-all (AvA) [HT98], is a more expressive alternative which assumes the existence of a separator between any two classes. OvA classifiers are usually implemented using a winner-take-all (WTA) strategy that associates a real-valued function with each class in order to determine class membership. Specifically, an example belongs to the class which assigns it the highest value (i.e., the "winner") among all classes.
While it is known that WTA is an expressive classifier [Maa00], it has limited expressivity when trained using the OvA assumption, since OvA assumes that each class can be easily separated from the rest. In addition, little is known about the generalization properties or convergence of the algorithms used. This work is motivated by several successful practical approaches, such as multiclass support vector machines (SVMs) and the sparse network of winnows (SNoW) architecture, that rely on the WTA strategy over linear functions. Our aim is to improve the understanding of such classifier systems and to develop more theoretically justifiable algorithms that realize the full potential of WTA. An alternative interpretation of WTA is that every example provides an ordering of the classes (sorted in descending order by the assigned values), where the "winner" is the first class in this ordering. It is thus natural to specify the ordering of the classes for an example directly, instead of implicitly through WTA. In Section 2, we introduce constraint classification, where each example is labeled with a set of constraints relating multiple classes. Each such constraint specifies the relative order of two classes for this example. The goal is to learn a classifier consistent with these constraints. Learning is made possible by a simple transformation mapping each example into a set of examples (one for each constraint) and the application of any binary classifier on the mapped examples. In Section 3, we present a new algorithm for constraint classification that takes on the properties of the binary classification algorithm used. Therefore, using the Perceptron algorithm, it is able to learn a consistent classifier if one exists; using the Winnow algorithm it can learn attribute-efficiently; and using the SVM, it provides a simple implementation of multiclass SVM.
The algorithm can be implemented with a subtle change to the standard (via OvA) approach to training a network of linear threshold gates. In Section 4, we discuss both VC-dimension and margin-based generalization bounds presented in a companion paper [HPRZ02]. Our generalization bounds apply to WTA classifiers over linear functions, for which VC-style bounds were not previously known. In addition to multiclass classification, constraint classification generalizes multilabel classification, ranking on labels, and of course, binary classification. As a result, our algorithm provides new insight into these problems, as well as new, powerful tools for solving them. For example, in Section 3.3, we show that the commonly used OvA assumption can cause learning to fail, even when a consistent classifier exists. Section 5 provides empirical evidence that constraint classification outperforms the OvA approach.

2 Constraint Classification

Learning problems often assume that examples $(x, y)$ are drawn i.i.d. from a fixed probability distribution $D$ over $\mathcal{X} \times \mathcal{Y}$, where $\mathcal{X}$ is referred to as the instance space and $\mathcal{Y}$ as the output space (label set).

Definition 2.1 (Learning) Given $m$ examples $S = \{(x_1, y_1), \ldots, (x_m, y_m)\}$ drawn i.i.d. from $D$ over $\mathcal{X} \times \mathcal{Y}$, a hypothesis class $\mathcal{H}$, and an error function $err : \mathcal{X} \times \mathcal{Y} \times \mathcal{H} \to \{0, 1\}$, a learning algorithm $L(S, \mathcal{H})$ attempts to output a function $h \in \mathcal{H}$, where $h : \mathcal{X} \to \mathcal{Y}$, that minimizes the expected error on a randomly drawn example.

Definition 2.2 (Permutations) Denote by $S_k$ the set of full orders over $\{1, \ldots, k\}$, consisting of all permutations of $\{1, \ldots, k\}$. Similarly, $S_k^p$ denotes the set of all partial orders over $\{1, \ldots, k\}$. A partial order $\lambda \in S_k^p$ defines a binary relation $\succ_\lambda$ and can be represented by the set of pairs on which $\succ_\lambda$ holds, $\lambda = \{(i, j) : i \succ_\lambda j\}$. In addition, for any set of pairs $\lambda = \{(i_1, j_1), \ldots, (i_p, j_p)\}$, we refer to $\lambda$ both as a set of pairs and as the partial order produced by the transitive closure of $\lambda$ with respect to $\succ_\lambda$.
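Definition 2.2 treats a set of pairs as the partial order it generates under transitive closure. A minimal sketch of that closure, assuming labels are hashable integers (the function name is ours, not the paper's):

```python
def transitive_closure(pairs):
    # Partial order generated by a set of pairs: repeatedly add (i, l)
    # whenever (i, j) and (j, l) are both present, until a fixpoint.
    order = set(pairs)
    changed = True
    while changed:
        changed = False
        for (i, j) in list(order):
            for (a, b) in list(order):
                if j == a and (i, b) not in order:
                    order.add((i, b))
                    changed = True
    return order
```

For instance, the pairs {(1, 2), (2, 3)} generate the additional pair (1, 3).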
Given two partial orders $\lambda, \nu \in S_k^p$, $\lambda$ is consistent with $\nu$ (denoted $\lambda \subseteq \nu$) if for every $(i, j) \in \{1, \ldots, k\}^2$, $i \succ_\nu j$ holds whenever $i \succ_\lambda j$. If $\lambda \in S_k$ is a full order, then it can be represented by a list of $k$ integers where $i \succ_\lambda j$ if $i$ precedes $j$ in the list. The size of a partial order, $|\lambda|$, is the number of pairs specified in $\lambda$.

Definition 2.3 (Constraint Classification) Constraint classification is a learning problem where each example $(x, y) \in \mathcal{X} \times S_k^p$ is labeled according to a partial order $y \in S_k^p$. A constraint classifier, $h : \mathcal{X} \to S_k^p$, is consistent with example $(x, y)$ if $y$ is consistent with $h(x)$ ($y \subseteq h(x)$). When $|y| \leq p$, we call it $p$-constraint classification.

Problem          Output space                 Hypothesis                           Size of mapping
binary           $\{-1, 1\}$                  $\mathrm{sign}(w \cdot x)$           1
multiclass       $\{1, \ldots, k\}$           $\mathrm{argmax}_i\, w_i \cdot x$    $k - 1$
$l$-multilabel   $\{1, \ldots, k\}^l$         $\mathrm{argmax}^l_i\, w_i \cdot x$  $l(k - l)$
ranking          $S_k$                        $\mathrm{argsort}_i\, w_i \cdot x$   $k - 1$
constraint*      $S_k^p$                      $\mathrm{argsort}_i\, w_i \cdot x$   –
$p$-constraint*  $S_k^p$                      $\mathrm{argsort}_i\, w_i \cdot x$   $p$

Table 1: Definitions for various learning problems (notice that the hypothesis for constraint classification is always a full order) and the size of the resultant mapping to $p$-constraint classification. The internal representation is $(w_1, \ldots, w_k) \in \mathbb{R}^{kd}$ (a single $w \in \mathbb{R}^d$ for binary). $\mathrm{argmax}^l$ is a variant of $\mathrm{argmax}$ that returns the $l$ maximal indices with respect to $w_i \cdot x$. $\mathrm{argsort}$ is a linear sorting function (see Definition 2.6).

Definition 2.4 (Error Indicator Function) For any $(x, y) \in \mathcal{X} \times S_k^p$ and hypothesis $h : \mathcal{X} \to S_k^p$, the indicator function $err(x, y, h)$ indicates an error on example $x$: $err(x, y, h) = 1$ if $y \not\subseteq h(x)$, and $0$ otherwise. For example, if $k = 4$ and example $(x, y) = (x, \{(2, 3), (2, 4)\})$, $h_1(x) = (2, 3, 1, 4)$, and $h_2(x) = (4, 2, 3, 1)$, then $h_1$ is correct since 2 precedes 3 and 2 precedes 4 in the full order $(2, 3, 1, 4)$, whereas $h_2$ is incorrect since 4 precedes 2 in $(4, 2, 3, 1)$.
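The error indicator of Definition 2.4 only needs the positions of labels in the predicted full order. A minimal sketch, with function names of our own choosing:

```python
def is_consistent(partial_order, full_order):
    # y ⊆ h(x): every constrained pair (i, j) must have i before j
    # in the full order output by the classifier.
    position = {label: rank for rank, label in enumerate(full_order)}
    return all(position[i] < position[j] for i, j in partial_order)

def err(partial_order, full_order):
    # Definition 2.4: 1 on error (y not consistent with h(x)), else 0.
    return 0 if is_consistent(partial_order, full_order) else 1
```

For instance, err({(2, 3), (2, 4)}, (2, 3, 1, 4)) is 0, while err({(2, 3), (2, 4)}, (4, 2, 3, 1)) is 1, since 4 precedes 2 in the second full order.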
Definition 2.5 (Error) Given an example $(x, y)$ drawn i.i.d. from $D$ over $\mathcal{X} \times \mathcal{Y}$, the true error of $h \in \mathcal{H}$ is defined to be $err_D(h) = \mathrm{E}_D[\,err(x, y, h)\,]$. Given $S = \{(x_1, y_1), \ldots, (x_m, y_m)\}$, the empirical error of $h \in \mathcal{H}$ with respect to $S$ is defined to be $err_S(h) = \frac{1}{m} \sum_{(x, y) \in S} err(x, y, h)$.

In this paper, we consider constraint classification problems where hypotheses are functions from $\mathcal{X}$ to $S_k$ that output a permutation of $\{1, \ldots, k\}$.

Definition 2.6 (Linear Sorting Function) Let $\mathbf{w} = (w_1, \ldots, w_k)$ be a set of $k$ vectors, where $w_i \in \mathbb{R}^d$. Given $x \in \mathbb{R}^d$, a linear sorting classifier is a function $h : \mathbb{R}^d \to S_k$ computed as $h(x) = \mathrm{argsort}_{i \in \{1, \ldots, k\}}\, w_i \cdot x$, where $\mathrm{argsort}$ returns a permutation of $\{1, \ldots, k\}$ in which $i$ precedes $j$ if $w_i \cdot x > w_j \cdot x$. In the case that $w_i \cdot x = w_j \cdot x$, $i$ precedes $j$ if $i < j$.

Constraint classification can model many well-studied learning problems including multiclass classification, ranking and multilabel classification. Table 1 shows a few interesting classification problems expressible as constraint classification. It is easy to show:

Lemma 2.7 (Problem mappings) All of the learning problems in Table 1 can be expressed as constraint classification problems.

Consider a 4-class multiclass example $(x, 3)$. It is transformed into the 3-constraint example $(x, \{(3, 1), (3, 2), (3, 4)\})$. If we find a constraint classifier that correctly labels $x$ according to the given constraints, so that $w_3 \cdot x > w_1 \cdot x$, $w_3 \cdot x > w_2 \cdot x$, and $w_3 \cdot x > w_4 \cdot x$, then $3 = \mathrm{argmax}_{i \in \{1, \ldots, 4\}}\, w_i \cdot x$. If instead we are given a ranking example $(x, (3, 2, 1, 4))$, it can be transformed into $(x, \{(3, 2), (2, 1), (1, 4)\})$.

3 Learning

In this section, $k$-class constraint classification is transformed into binary classification in higher dimension. Each example $(x, y) \in \mathbb{R}^d \times S_k^p$ becomes a set of examples in $\mathbb{R}^{kd} \times \{-1, 1\}$, with each constraint $(i, j) \in y$ contributing a single 'positive' and a single 'negative' example.
Then, a separating hyperplane for the expanded example set (in $\mathbb{R}^{kd}$) can be viewed as a linear sorting function over $k$ linear functions, each in $d$-dimensional space.

3.1 Kesler's Construction

Kesler's construction for multiclass classification was first introduced by Nilsson in 1965 [Nil65, pp. 75–77] and can also be found more recently in [DH73]. This subsection extends the Kesler construction to constraint classification.

Definition 3.1 (Chunk) A vector $v = (v_1, \ldots, v_{kd}) \in \mathbb{R}^{kd}$ is broken into $k$ chunks $(v^1, \ldots, v^k)$, where the $i$-th chunk $v^i = (v_{(i-1)d+1}, \ldots, v_{id})$.

Definition 3.2 (Expansion) Let $0^l$ denote the zero vector of length $l$. Denote by $x^i = (0^{(i-1)d}, x, 0^{(k-i)d}) \in \mathbb{R}^{kd}$ the vector obtained by embedding $x \in \mathbb{R}^d$ in the $i$-th chunk of a vector in $\mathbb{R}^{kd}$. Finally, $x^{ij} = x^i - x^j$ is the embedding of $x$ in the $i$-th chunk and $-x$ in the $j$-th chunk of a vector in $\mathbb{R}^{kd}$.

Definition 3.3 (Expanded Example Sets) Given an example $(x, y)$, where $x \in \mathbb{R}^d$ and $y \in S_k^p$, we define the expansion of $(x, y)$ into a set of positive examples as $P(x, y) = \{(x^{ij}, 1) : (i, j) \in y\} \subseteq \mathbb{R}^{kd} \times \{1\}$. A set of negative examples is defined as the reflection of each expanded example through the origin, specifically $N(x, y) = \{(-x^{ij}, -1) : (i, j) \in y\} \subseteq \mathbb{R}^{kd} \times \{-1\}$, and the set of both positive and negative examples is denoted by $PN(x, y) = P(x, y) \cup N(x, y)$. The expansion of a set of examples $S$ is defined as the union of all of the expanded examples in the set, $PN(S) = \bigcup_{(x, y) \in S} PN(x, y) \subseteq \mathbb{R}^{kd} \times \{-1, 1\}$.

Note that the original Kesler construction produces only $P(S)$. We also create $N(S)$ to simplify the analysis and to maintain consistency when learning non-linear functions (such as SVM).

3.2 Algorithm

Figure 1(a) shows a meta-learning algorithm for constraint classification that finds a linear sorting function by using any algorithm for learning a binary classifier. Given a set of examples $S \subseteq \mathbb{R}^d \times S_k^p$, the algorithm simply finds a separating hyperplane $h'(v) = w \cdot v$ for $PN(S) \subseteq \mathbb{R}^{kd} \times \{-1, 1\}$.
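The expansion of Definitions 3.1–3.3 is mechanical. A minimal NumPy sketch (helper names ours; chunks indexed 0-based), whose key property is that $w \cdot x^{ij} = w^i \cdot x - w^j \cdot x$ for any stacked weight vector $w \in \mathbb{R}^{kd}$:

```python
import numpy as np

def embed(x, i, j, k):
    # x^{ij}: x placed in chunk i, -x in chunk j, zeros elsewhere (R^{kd}).
    d = len(x)
    v = np.zeros(k * d)
    v[i * d:(i + 1) * d] = x
    v[j * d:(j + 1) * d] = -x
    return v

def expand(x, y, k):
    # PN(x, y): one positive point per constraint (i, j) in y, plus its
    # reflection through the origin as a negative point.
    pos = [(embed(x, i, j, k), +1) for (i, j) in y]
    neg = [(-v, -1) for (v, _) in pos]
    return pos + neg
```

Any binary learner that separates the expanded points therefore satisfies every original constraint.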
Suppose $h'$ correctly classifies $(x^{ij}, 1) \in P(x, y) \subseteq PN(S)$; then $w \cdot x^{ij} = w^i \cdot x - w^j \cdot x > 0$, and the constraint $(i, j)$ on $x$ (dictating that $w^i \cdot x > w^j \cdot x$) is consistent with the linear sorting function defined by $w$. Therefore, if $h'$ correctly classifies all of $PN(S)$, then $h(x) = \mathrm{argsort}_{i \in \{1, \ldots, k\}}\, w^i \cdot x$ is a consistent linear sorting function.

This framework is significant to multiclass classification in many ways. First, the hypothesis learned above is more expressive than when the OvA assumption is used. Second, it is easy to verify that other algorithm-specific properties are maintained by the above transformation. For example, attribute efficiency is preserved when using the Winnow algorithm. Finally, the multiclass support vector machine can be implemented by learning a hyperplane that separates $PN(S)$ with maximal margin.

Algorithm CONSTRCLASSLEARN
  Input: $S = \{(x_1, y_1), \ldots, (x_m, y_m)\}$, where $(x, y) \in \mathbb{R}^d \times S_k^p$
  Output: A classifier $h$
  begin
    Calculate $PN(S) \subseteq \mathbb{R}^{kd} \times \{-1, 1\}$
    Learn $h' = A(PN(S))$, a separating hyperplane with weight vector $w = (w^1, \ldots, w^k) \in \mathbb{R}^{kd}$
    Set $h(x) = \mathrm{argsort}_{i \in \{1, \ldots, k\}}\, w^i \cdot x$
  end

Algorithm ONLINECONCLASSLEARN
  Input: $S = \{(x_1, y_1), \ldots, (x_m, y_m)\}$, where $(x, y) \in \mathbb{R}^d \times S_k^p$
  Output: A classifier $h$
  begin
    Initialize $w^1, \ldots, w^k \in \mathbb{R}^d$
    Repeat until convergence:
      for each example $(x, y) \in S$:
        for each constraint $(i, j) \in y$:
          if $w^i \cdot x \leq w^j \cdot x$ then promote($w^i$, $x$) and demote($w^j$, $x$)
    Set $h(x) = \mathrm{argsort}_{i \in \{1, \ldots, k\}}\, w^i \cdot x$
  end

Figure 1: (a) Meta-learning algorithm for constraint classification with linear sorting functions (see Definition 2.6). $A(\cdot)$ is any binary learning algorithm returning a separating hyperplane. (b) Online meta-algorithm for constraint classification with linear sorting functions (see Definition 2.6). The particular online algorithm used determines how $w^1, \ldots, w^k$ are initialized and the promotion and demotion strategies.

3.3 Comparison to "One-Versus-All"

A common approach to multiclass classification ($\mathcal{Y} = \{1, \ldots, k\}$) is to make the one-versus-all (OvA) assumption, namely, that each class can be separated from the rest using a binary classification algorithm.
Learning proceeds by training $k$ independent binary classifiers, one corresponding to each class, where example $(x, y)$ is considered positive for classifier $y$ and negative for all others. It is easy to construct an example where the OvA assumption causes learning to fail even when there exists a consistent linear sorting function (see Figure 2). Notice that since the existence of a consistent linear sorting function (w.r.t. $S$) implies the existence of a separating hyperplane (w.r.t. $PN(S)$), any learning algorithm guaranteed to separate two separable point sets (e.g. the Perceptron algorithm) is guaranteed to find a consistent linear sorting function. In Section 5, we use the Perceptron algorithm to find a consistent classifier for an extension of the example in Figure 2 to $\mathbb{R}^{100}$, where OvA fails.

3.4 Comparison to Networks of Linear Threshold Gates (Perceptron)

It is possible to implement the algorithm in Section 3.2 using a network of linear classifiers such as the multi-output Perceptron [AB99], SNoW [CCRR99, Rot98], and multiclass SVM [CS00, WW99]. Such a network has $x \in \mathbb{R}^d$ as input and $k$ outputs, each represented by a weight vector $w^i \in \mathbb{R}^d$, where the $i$-th output computes $w^i \cdot x$ (see Figure 1(b)). Typically, a label is mapped, via a fixed transformation, into a $k$-dimensional output vector, and each output is trained separately, as in the OvA case. Alternately, if the online Perceptron algorithm is plugged into the meta-algorithm in Section 3.2, then updates are performed according to a dynamic transformation. Specifically, given $(x, y)$, for every constraint $(i, j) \in y$, if $w^i \cdot x \leq w^j \cdot x$, then $w^i$ is 'promoted' and $w^j$ is 'demoted'. Using a network in this way results in an ultraconservative online algorithm for multiclass classification [CS01]. This subtle change enables the commonly used network of linear threshold gates to learn every hypothesis it is capable of representing.
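The online meta-algorithm of Figure 1(b) with additive Perceptron-style updates can be sketched as follows (a minimal version with our own function names; labels 0-based):

```python
import numpy as np

def train_constraint_perceptron(examples, k, d, epochs=100):
    # For each violated constraint (i, j): promote w_i by +x and
    # demote w_j by -x (ultraconservative additive updates).
    W = np.zeros((k, d))
    for _ in range(epochs):
        mistakes = 0
        for x, y in examples:
            for i, j in y:
                if W[i] @ x <= W[j] @ x:
                    W[i] += x
                    W[j] -= x
                    mistakes += 1
        if mistakes == 0:  # every constraint satisfied: consistent sorter
            break
    return W

def predict(W, x):
    # Full order: classes sorted by decreasing score (Definition 2.6);
    # a stable sort breaks ties in favor of the smaller index.
    return tuple(np.argsort(-(W @ x), kind="stable"))
```

If the constraint set is realizable by a linear sorting function, the separability of the expanded points gives the usual Perceptron convergence guarantee.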
Figure 2: A 3-class classification example in the plane showing that one-versus-all (OvA) does not converge to a consistent hypothesis. Three classes (squares, triangles, and circles) should each be separated from the rest. Solid points act as multiple points in their respective classes. The OvA assumption will attempt to separate the circles from the squares and triangles with a single separating hyperplane, as well as the other two combinations. Because the solid points are weighted, all OvA classifiers are required to classify them correctly or suffer many mistakes, thus restricting what the final hypotheses will be. As a result, the OvA assumption will misclassify the point outlined with a double square, since the square classifier predicts "not square" and the circle classifier predicts "circle". One can verify that there exists a WTA classifier for this example.

Dataset     Features  Classes  Training Examples  Testing Examples
glass       9         6        214                –
vowel       10        11       528                462
soybean     35        19       307                376
audiology   69        24       200                26
ISOLET      617       26       6238               1559
letter      16        26       16000              4000
Synthetic*  100       3        50000              50000

Table 2: Summary of problems from the UCI repository. The synthetic data is sampled from a random linear sorting function (see Section 5).

4 Generalization Bounds

A PAC-style analysis of multiclass functions that uses an extended notion of VC-dimension for the multiclass case [BCHL95] provides poor bounds on generalization for WTA, and the current best bounds rely on a generalized notion of margin [ASS00]. In this section, we prove tighter bounds using the new framework. We seek generalization bounds for learning with $\mathcal{H}$, the class of linear sorting functions (Definition 2.6). Although both VC-dimension-based (based on the growth function) and margin-based bounds for the class of hyperplanes in $\mathbb{R}^{kd}$ are known [Vap98, AB99], they cannot directly be applied, since $PN(S)$ produces points that are random, but not independently drawn.
It turns out that bounds can be derived indirectly by using known bounds for constraint classification. Due to space considerations, see [HPRZ02], where natural extensions to the growth function and margin are used to develop generalization bounds.

5 Experiments

As in previous multiclass classification work [DB95, ASS00], we tested our algorithm on a suite of problems from the Irvine Repository of machine learning [BM98] (see Table 2). In addition, we created a simple experiment using synthetic data. The data was generated according to a WTA function over $k$ randomly generated linear functions in $\mathbb{R}^{100}$, each with weight vectors inside the unit ball. Then, 50,000 training and 50,000 testing examples were randomly sampled within a ball around the origin and labeled with the linear function that produced the highest value.

Figure 3: Comparison of the constraint classification meta-algorithm using the Perceptron algorithm to a multi-output Perceptron using the OvA assumption (% error per dataset). All of the results for the constraint classification algorithm are competitive with known results. The synthetic data would converge to zero error using constraint classification but would not converge using the OvA approach.

A comparison is made between the OvA approach (Section 3.3) and the constraint classification approach. Both were implemented on the same multi-output Perceptron network with $k(d+1)$ weights (with one threshold per class). Constraint classification used the modified update rule discussed in Section 3.4. Each update was performed as follows: $w^i \leftarrow w^i + x$ for promotion and $w^j \leftarrow w^j - x$ for demotion. The networks were initialized with all weights 0. For each multiclass example $(x, y)$, $y \in \{1, \ldots, k\}$, a constraint classification example $(x, y') \in \mathcal{X} \times S_k^p$ was created, where $y' = \{(y, j) : j \in \{1, \ldots, k\}, j \neq y\}$.
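The synthetic protocol above can be sketched as follows. This is a hedged approximation, not the paper's code: we use Gaussian inputs rather than exact uniform sampling within a ball, 0-based labels, and names of our own choosing:

```python
import numpy as np

def synthetic_wta(m, k=3, d=100, seed=0):
    # Random linear sorting target: k weight vectors scaled into the unit
    # ball; points labeled by the highest-scoring linear function (WTA).
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(k, d))
    W /= np.maximum(1.0, np.linalg.norm(W, axis=1, keepdims=True))
    X = rng.normal(size=(m, d))
    y = np.argmax(X @ W.T, axis=1)
    return X, y

def to_constraints(label, k):
    # Multiclass label y becomes the (k-1) constraints {(y, j) : j != y}.
    return {(label, j) for j in range(k) if j != label}
```

Feeding the converted constraints to the constraint learner, and the raw labels to $k$ OvA classifiers, reproduces the comparison described in the text.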
Notice that the error (Definition 2.4) of $(x, y')$ corresponds to the traditional error for multiclass classification. Figure 3 shows that constraint classification outperforms the multi-output Perceptron when using the OvA assumption.

6 Discussion

We think constraint classification provides two significant contributions to multiclass classification. First, it provides a conceptual generalization that encompasses multiclass classification, multilabel classification, and label ranking problems, in addition to problems with more complex relationships between labels. Second, it reminds the community that the Kesler construction can be used to extend any learning algorithm for binary classification to the multiclass (or constraint) setting. Section 5 showed that the constraint approach to learning is advantageous over the one-versus-all approach on both real-world and synthetic data sets. However, preliminary experiments using various natural language data sets, such as part-of-speech tagging, do not yield any significant difference between the two approaches. We used a common transformation [EZR01] to convert raw data to approximately three million examples in a one-hundred-thousand-dimensional boolean feature space. There were about 50 different part-of-speech tags. Because the constraint approach is more expressive than the one-versus-all approach, and because both approaches use the same hypothesis space ($k$ linear functions), we expected the constraint approach to achieve higher accuracy. Is it possible that a difference would emerge if more data were used? We find it unlikely, since both methods use identical representations. Perhaps it is instead a result of the fact that we are working in a very high dimensional space. Again, we think this is not the case, since it seems that "most" random winner-take-all problems (as with the synthetic data) would cause the one-versus-all assumption to fail.
Rather, we conjecture that for some reason, natural language problems (along with the transformation) are suited to the one-versus-all approach and do not require a more complex hypothesis. Why, and how, this is so is a direction for future speculation and research.

7 Conclusions

The view of multiclass classification presented here simplifies the implementation, analysis, and understanding of many preexisting approaches. Multiclass support vector machines, ultraconservative online algorithms, and traditional one-versus-all approaches can be cast in this framework. It would be interesting to see if it could be combined with the error-correcting output coding method in [DB95], which provides another way to extend the OvA approach. Furthermore, this view allows for a very natural extension of multiclass classification to constraint classification, capturing within it complex learning tasks such as multilabel classification and ranking. Because constraint classification is a very intuitive approach and its implementation can be carried out by any discriminant technique, and not only by optimization techniques, we think it will have useful real-world applications.

References

[AB99] M. Anthony and P. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, Cambridge, England, 1999.
[ADW94] C. Apte, F. Damerau, and S. M. Weiss. Automated learning of decision rules for text categorization. Information Systems, 12(3):233–251, 1994.
[ASS00] E. Allwein, R. E. Schapire, and Y. Singer. Reducing multiclass to binary: A unifying approach for margin classifiers. In Proc. 17th International Conf. on Machine Learning, pages 9–16. Morgan Kaufmann, San Francisco, CA, 2000.
[BCHL95] S. Ben-David, N. Cesa-Bianchi, D. Haussler, and P. Long. Characterizations of learnability for classes of $\{0, \ldots, n\}$-valued functions. J. Comput. Sys. Sci., 50(1):74–86, 1995.
[BM98] C. L. Blake and C. J. Merz. UCI repository of machine learning databases, 1998.
[Bri94] E. Brill.
Some advances in transformation-based part of speech tagging. In AAAI, Vol. 1, pages 722–727, 1994.
[CCRR99] A. Carlson, C. Cumby, J. Rosen, and D. Roth. The SNoW learning architecture. Technical Report UIUCDCS-R-99-2101, UIUC Computer Science Department, May 1999.
[CS00] K. Crammer and Y. Singer. On the learnability and design of output codes for multiclass problems. In Computational Learning Theory, pages 35–46, 2000.
[CS01] K. Crammer and Y. Singer. Ultraconservative online algorithms for multiclass problems. In COLT/EuroCOLT, pages 99–115, 2001.
[DB95] T. Dietterich and G. Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2:263–286, 1995.
[DH73] R. Duda and P. Hart. Pattern Classification and Scene Analysis. Wiley, New York, 1973.
[DKR97] I. Dagan, Y. Karov, and D. Roth. Mistake-driven learning in text categorization. In EMNLP-97, The Second Conference on Empirical Methods in Natural Language Processing, pages 55–63, 1997.
[EZR01] Y. Even-Zohar and D. Roth. A sequential model for multi-class classification. In EMNLP-2001, the SIGDAT Conference on Empirical Methods in Natural Language Processing, pages 10–19, 2001.
[HPRZ02] S. Har-Peled, D. Roth, and D. Zimak. Constraint classification: A new approach to multiclass classification. In Proc. 13th International Conf. on Algorithmic Learning Theory, pages 365–397, 2002.
[HT98] T. Hastie and R. Tibshirani. Classification by pairwise coupling. In NIPS-10, The 1997 Conference on Advances in Neural Information Processing Systems, pages 507–513. MIT Press, 1998.
[Jel98] F. Jelinek. Statistical Methods for Speech Recognition. The MIT Press, Cambridge, Massachusetts, 1998.
[LBD+89] Y. Le Cun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard, and L. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, 1989.
[LS97] D. Lee and H. Seung. Unsupervised learning by convex and conic coding. In Michael C.
Mozer, Michael I. Jordan, and Thomas Petsche, editors, Advances in Neural Information Processing Systems, volume 9, page 515. The MIT Press, 1997.
[Maa00] W. Maass. On the computational power of winner-take-all. Neural Computation, 12(11):2519–2536, 2000.
[Nil65] N. J. Nilsson. Learning Machines: Foundations of Trainable Pattern-Classifying Systems. McGraw-Hill, New York, NY, 1965.
[Rot98] D. Roth. Learning to resolve natural language ambiguities: A unified approach. In Proc. of AAAI, pages 806–813, 1998.
[Vap98] V. Vapnik. Statistical Learning Theory. Wiley, New York, 1998.
[WW99] J. Weston and C. Watkins. Support vector machines for multiclass pattern recognition. In Proceedings of the Seventh European Symposium on Artificial Neural Networks, April 1999.
2002
27
2,229
Neural Decoding of Cursor Motion Using a Kalman Filter W. Wu, M. J. Black, Y. Gao, E. Bienenstock, M. Serruya, A. Shaikhouni, J. P. Donoghue. Division of Applied Mathematics, Dept. of Computer Science, Dept. of Neuroscience, Division of Biology and Medicine, Brown University, Providence, RI 02912. weiwu@cfm.brown.edu, black@cs.brown.edu, gao@cfm.brown.edu, elie@dam.brown.edu, Mijail_Serruya@brown.edu, Ammar_Shaikhouni@brown.edu, john_donoghue@brown.edu Abstract The direct neural control of external devices such as computer displays or prosthetic limbs requires the accurate decoding of neural activity representing continuous movement. We develop a real-time control system using the spiking activity of approximately 40 neurons recorded with an electrode array implanted in the arm area of primary motor cortex. In contrast to previous work, we develop a control-theoretic approach that explicitly models the motion of the hand and the probabilistic relationship between this motion and the mean firing rates of the cells in 70 ms bins. We focus on a realistic cursor control task in which the subject must move a cursor to "hit" randomly placed targets on a computer monitor. Encoding and decoding of the neural data is achieved with a Kalman filter which has a number of advantages over previous linear filtering techniques. In particular, the Kalman filter reconstructions of hand trajectories in off-line experiments are more accurate than previously reported results and the model provides insights into the nature of the neural coding of movement. 1 Introduction Recent results have demonstrated the feasibility of direct neural control of devices such as computer cursors using implanted electrodes [5, 9, 11, 14]. These results are enabled by a variety of mathematical "decoding" methods that produce an estimate of the system "state" (e.g. hand position) from a sequence of measurements (e.g. the firing rates of a collection of cells).
Here we argue that such a decoding method should (1) have a sound probabilistic foundation; (2) explicitly model noise in the data; (3) indicate the uncertainty in estimates of hand position; (4) make minimal assumptions about the data; (5) require a minimal amount of "training" data; (6) provide on-line estimates of hand position with short delay (less than 200 ms); and (7) provide insight into the neural coding of movement.

Figure 1: Reconstructing 2D hand motion. (a) Training: neural spiking activity is recorded while the subject moves a jointed manipulandum on a 2D plane to control a cursor so that it hits randomly placed targets. (b) Decoding: true target trajectory (dashed (red): dark to light) and reconstruction using the Kalman filter (solid (blue): dark to light).

To that end, we propose a Kalman filtering method that provides a rigorous and well understood framework that addresses these issues. This approach provides a control-theoretic model for the encoding of hand movement in motor cortex and for inferring, or decoding, this movement from the firing rates of a population of cells. Simultaneous recordings are acquired from an array of microelectrodes [6] implanted in the arm area of primary motor cortex (MI) of a Macaque monkey; recordings from this area have been used previously to control devices [5, 9, 10, 11, 14]. The monkey views a computer monitor while gripping a two-link manipulandum that controls the 2D motion of a cursor on the monitor (Figure 1a). We use the experimental paradigm of [9], in which a target dot appears in a random location on the monitor and the task requires moving a feedback dot with the manipulandum so that it hits the target. When the target is hit, it jumps to a new random location. The trajectory of the hand and the neural activity of the cells are recorded simultaneously.
We compute the position, velocity, and acceleration of the hand, along with the mean firing rate for each of the cells, within non-overlapping 70 ms time bins. In contrast to related work [8, 15], the motions of the monkey in this task are quite rapid and more "natural" in that the actual trajectory of the motion is unconstrained. The reconstruction of the hand trajectory from the mean firing rates can be viewed probabilistically as a problem of inferring behavior from noisy measurements. In [15] we proposed a Kalman filter framework [3] for modeling the relationship between firing rates in motor cortex and the position and velocity of the subject's hand. This work focused on off-line reconstruction using constrained motions of the hand [8]. Here we consider new data from the on-line environmental setup [9], which is more natural, varied, and contains rapid motions. With this data we show that, in contrast to our previous results, a model of hand acceleration (in addition to position and velocity) is important for accurate reconstruction. In the Kalman framework, the hand movement (position, velocity and acceleration) is modeled as the system state and the neural firing rate is modeled as the observation (measurement). The approach specifies an explicit generative model that assumes the observation (firing rate in 70 ms bins) is a linear function of the state (hand kinematics) plus Gaussian noise. Similarly, the hand state at each time instant is assumed to be a linear function of the hand state at the previous time instant plus Gaussian noise. (This is a crude assumption, but the firing rates can be square-root transformed [7], making them more Gaussian, and the mean firing rate can be subtracted to achieve zero-mean data.) The Kalman filter approach provides a recursive, on-line estimate of hand kinematics from the firing rate in non-overlapping time bins. The
The results of reconstructing hand trajectories from pre-recorded neural firing rates are compared with those obtained using more traditional fixed linear filtering techniques [9, 12] that use overlapping windows. The results indicate that the Kalman filter decoding is more accurate than that of the fixed linear filter.

1.1 Related Work

Georgopoulos and colleagues [4] showed that hand movement direction may be encoded by the neural ensemble in the arm area of motor cortex (MI). This early work has led to a number of successful algorithms for decoding neural activity in MI to perform off-line reconstruction or on-line control of cursors or robotic arms. Roughly, the primary methods for decoding MI activity include the population vector algorithm [4, 5, 7, 11], linear filtering [9, 12], artificial neural networks [14], and probabilistic methods [2, 10, 15]. The population vector approach is the oldest method and has been used for the real-time neural control of 3D cursor movement [11]. This work has focused primarily on “center out” motions to a discrete set of radial targets (in 2D or 3D) rather than the natural, continuous motion that we address here. Linear filtering [8, 12] is a simple statistical method that is effective for real-time neural control of a 2D cursor [9]. This approach requires the use of data over a long time window spanning many bins. The fixed linear filter, like population vectors and neural networks [14], lacks both a clear probabilistic model and a model of the temporal hand kinematics. Additionally, these methods provide no estimate of uncertainty and hence may be difficult to extend to the analysis of more complex temporal movement patterns. We argue that what is needed is a probabilistically grounded method that uses data in small time windows (e.g. a single 70 ms bin) and integrates that information over time in a recursive fashion.
The CONDENSATION algorithm has recently been introduced as a Bayesian decoding scheme [2]; it provides a probabilistic framework for causal estimation and is shown to be superior to linear filtering when sufficient data is available (e.g. using firing rates for several hundred cells). Note that the CONDENSATION method is more general than the Kalman filter proposed here in that it does not assume linear models and Gaussian noise. While this may be important for neural decoding as suggested in [2], current technology makes the method impractical for real-time control. For real-time neural control we exploit the Kalman filter [3, 13], which has been widely used for estimation problems ranging from target tracking to vehicle control. Here we apply this well understood theory to the problem of decoding hand kinematics from neural activity in motor cortex. This builds on work that uses recursive Bayesian filters to estimate the position of a rat from the firing activity of hippocampal place cells [1, 16]. In contrast to the linear filter or population vector methods, this approach provides a measure of confidence in the resulting estimates. This can be extremely important when the output of the decoding method is to be used for later stages of analysis.

2 Methods

Decoding involves estimating the state of the hand at the current instant in time; i.e. x_k = [x, y, v_x, v_y, a_x, a_y]ᵀ, representing x-position, y-position, x-velocity, y-velocity, x-acceleration, and y-acceleration at time t_k = kΔt, where Δt = 70 ms in our experiments. The Kalman filter [3, 13] model assumes the state is linearly related to the observations z_k, which here is the N × 1 vector containing the firing rates at time t_k for the N observed neurons within Δt. In our experiments, N = 42 cells. We briefly review the Kalman filter algorithm below; for details the reader is referred to [3, 13].
Encoding: We define a generative model of neural firing as

z_k = H x_k + q_k   (1)

where k = 1, 2, ..., M, with M the number of time steps in the trial, and H is the N × 6 matrix that linearly relates the hand state to the neural firing. We assume the noise in the observations is zero mean and normally distributed, i.e. q_k ~ N(0, Q), with Q an N × N covariance matrix. The states are assumed to propagate in time according to the system model

x_{k+1} = A x_k + w_k   (2)

where A is the 6 × 6 coefficient matrix and the noise term w_k ~ N(0, W), with W a 6 × 6 covariance matrix. This states that the hand kinematics (position, velocity, and acceleration) at time k+1 is linearly related to the state at time k. Once again we assume these noise terms are normally distributed. In practice A, H, Q, and W might change with time step k; however, here we make the common simplifying assumption that they are constant. Thus we can estimate the Kalman filter model from training data using least squares estimation:

A = argmin_A Σ_k ||x_{k+1} − A x_k||²,   H = argmin_H Σ_k ||z_k − H x_k||²,

where ||·|| is the L2 norm. Given A and H it is then simple to estimate the noise covariance matrices W and Q; details are given in [15].

Decoding: At each time step k the algorithm has two steps: 1) prediction of the a priori state estimate x̂⁻_k; and 2) updating this estimate with new measurement data to produce an a posteriori state estimate x̂_k. In particular, these steps are:

I. Discrete Kalman filter time update equations: At each time k, we obtain the a priori estimate from the previous time k−1, then compute its error covariance matrix P⁻_k:

x̂⁻_k = A x̂_{k−1}   (3)
P⁻_k = A P_{k−1} Aᵀ + W   (4)

II.
Measurement update equations: Using the a priori estimate x̂⁻_k and the firing rate z_k, we update the estimate with the measurement and compute the posterior error covariance matrix:

x̂_k = x̂⁻_k + K_k (z_k − H x̂⁻_k)   (5)
P_k = (I − K_k H) P⁻_k   (6)

where P_k represents the state error covariance after taking into account the neural data and K_k is the Kalman gain matrix given by

K_k = P⁻_k Hᵀ (H P⁻_k Hᵀ + Q)⁻¹   (7)

This K_k produces a state estimate that minimizes the mean squared error of the reconstruction (see [3] for details). Note that Q is the measurement error covariance and, depending on the reliability of the data, the gain term K_k automatically adjusts the contribution of the new measurement to the state estimate.

Method | Correlation Coefficient (x, y) | MSE
Kalman (0 ms lag) | (0.768, 0.912) | 7.09
Kalman (70 ms lag) | (0.785, 0.932) | 7.07
Kalman (140 ms lag) | (0.815, 0.929) | 6.28
Kalman (210 ms lag) | (0.808, 0.891) | 6.87
Kalman (no acceleration) | (0.817, 0.914) | 6.60
Linear filter | (0.756, 0.915) | 8.30

Table 1: Reconstruction results for the fixed linear and recursive Kalman filter. The table also shows how the Kalman filter results vary with lag times (see text).

3 Experimental Results

To be practical, we must be able to train the model (i.e. estimate A, H, W, and Q) using a small amount of data. Experimentally we found that approximately 3.5 minutes of training data suffices for accurate reconstruction (this is similar to the result for fixed linear filters reported in [9]). As described in the introduction, the task involves moving a manipulandum freely on a tablet within a fixed workspace to hit randomly placed targets on the screen. We gather the mean firing rates and actual hand trajectories for the training data and then learn the models via least squares (the computation time is negligible). We then test the accuracy of the method by reconstructing test trajectories offline using recorded neural data not present in the training set. The results reported here use approximately 1 minute of test data.
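The training and decoding steps above can be written in a few lines. The following is a minimal NumPy sketch (function and variable names are ours, not from the original implementation): least-squares fitting of A and H with residual covariances standing in for W and Q, followed by the time-update and measurement-update recursions of Eqs. 3-7.

```python
import numpy as np

def fit_kalman_models(X, Z):
    """Least-squares estimates of the system matrix A (Eq. 2) and the
    measurement matrix H (Eq. 1) from training data, with the noise
    covariances W and Q taken from the residuals.
    X : (T, d) hand states per bin; Z : (T, N) binned firing rates."""
    X0, X1 = X[:-1], X[1:]
    # A = argmin_A sum ||x_{k+1} - A x_k||^2  (solve X0 @ A.T ~ X1)
    A = np.linalg.lstsq(X0, X1, rcond=None)[0].T
    # H = argmin_H sum ||z_k - H x_k||^2
    H = np.linalg.lstsq(X, Z, rcond=None)[0].T
    W = np.cov((X1 - X0 @ A.T).T)   # system noise covariance
    Q = np.cov((Z - X @ H.T).T)     # measurement noise covariance
    return A, H, W, Q

def kalman_decode(Z, A, H, W, Q, x0):
    """Recursive decoding: time update (Eqs. 3-4) followed by the
    measurement update (Eqs. 5-7) for each bin of firing rates.
    x0 is the state estimate for the bin preceding the first row of Z."""
    d = len(x0)
    x, P = np.asarray(x0, float), np.eye(d) * 1e-2  # initial covariance: our choice
    out = []
    for z in Z:
        x = A @ x                                     # a priori estimate (Eq. 3)
        P = A @ P @ A.T + W                           # a priori covariance (Eq. 4)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Q)  # Kalman gain (Eq. 7)
        x = x + K @ (z - H @ x)                       # a posteriori estimate (Eq. 5)
        P = (np.eye(d) - K @ H) @ P                   # a posteriori covariance (Eq. 6)
        out.append(x)
    return np.array(out)
```

Setting x0 to the true initial state mirrors the initialization described in the experimental section below.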
Optimal Lag: The physical relationship between neural firing and arm movement means there exists a time lag between them [7, 8]. The introduction of a time lag means that the measurements z_k at time k are taken from some previous (or future) instant in time k − m, for some integer m. In the interest of simplicity, we consider a single optimal time lag for all the cells, though evidence suggests that individual time lags may provide better results [15]. Using time lags of 0, 70, 140, and 210 ms we train the Kalman filter and perform reconstruction (see Table 1). We report the accuracy of the reconstructions with a variety of error measures used in the literature, including the correlation coefficient (CC) and the mean squared error (MSE) between the reconstructed and true trajectories. From Table 1 we see that the optimal lag is around two time steps (or 140 ms); this lag will be used in the remainder of the experiments and is similar to our previous findings [15], which suggested that the optimal lag was between 50-100 ms. Decoding: At the beginning of the test trial we let the predicted initial condition equal the real initial condition. Then the update equations in Section 2 are applied. Some examples of the reconstructed trajectory are shown in Figure 2, while Figure 3 shows the reconstruction of each component of the state variable (position, velocity, and acceleration in x and y). From Figure 3 and Table 1 we note that the reconstruction in the y direction is more accurate than in the x direction (the same is true for the fixed linear filter described below); this requires further investigation. Note also that the ground truth velocity and acceleration curves are computed from the position data with simple differencing. As a result these plots are quite noisy, making an evaluation of the reconstruction difficult.
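Introducing a lag of m bins amounts to re-indexing the training pairs so that the state at bin k is paired with the firing rates from bin k − m. A minimal sketch (the helper name is ours):

```python
import numpy as np

def apply_lag(X, Z, m):
    """Pair the hand state at bin k with firing rates from bin k - m.
    With 70 ms bins, m = 2 corresponds to the 140 ms lag found optimal
    in Table 1.  Returns the trimmed, aligned arrays."""
    if m == 0:
        return X, Z
    return X[m:], Z[:-m]
```

The trimmed arrays can then be passed directly to the least-squares training step.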
Figure 2: Reconstructed trajectories (portions of the 1 min test data; each plot shows 50 time instants, i.e. 3.5 s): true target trajectory (dashed (red)) and reconstruction using the Kalman filter (solid (blue)).

3.1 Comparison with linear filtering

Fixed linear filters reconstruct hand position as a linear combination of the firing rates over some fixed time period [4, 9, 12]; that is,

x_k = b + Σ_{n=1}^{N} Σ_{m=0}^{M} f_{n,m} z_{n,k−m}

where x_k is the x-position (or, equivalently, the y-position) at time k, b is the constant offset, z_{n,k−m} is the firing rate of neuron n at time k − m, and the f_{n,m} are the filter coefficients. The coefficients can be learned from training data using a simple least squares technique. In our experiments we choose M so that the hand position is determined from firing data over a long window spanning many bins. This is exactly the method described in [9], which provides a fair comparison for the Kalman filter; for details see [12, 15]. Note that since the linear filter uses data over a long time window, it does not benefit from the use of time-lagged data. Note also that it does not explicitly reconstruct velocity or acceleration. The linear filter reconstruction of position is shown in Figure 4. Compared with Figure 3, we see that the results are visually similar. Table 1, however, shows that the Kalman filter gives a more accurate reconstruction than the linear filter (higher correlation coefficient and lower mean-squared error). While fixed linear filtering is extremely simple, it lacks many of the desirable properties of the Kalman filter. Analysis: In our previous work [15], the experimental paradigm involved carefully designed hand motions that were slow and smooth. In that case we showed that acceleration was redundant and could be removed from the state equation.
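As an aside, the fixed linear filter of Section 3.1 amounts to a single least-squares regression of position on lagged firing rates. A minimal sketch under the formula above (function names and the design-matrix layout are ours):

```python
import numpy as np

def fit_linear_filter(pos, Z, M):
    """Fixed linear filter: regress each position coordinate at bin k on a
    constant offset plus the firing rates of all cells over the preceding
    M bins, via least squares.  pos: (T, 2) positions, Z: (T, N) rates."""
    T = Z.shape[0]
    G = np.array([np.concatenate(([1.0], Z[k - M:k].ravel())) for k in range(M, T)])
    F = np.linalg.lstsq(G, pos[M:], rcond=None)[0]   # (1 + M*N, 2) coefficients
    return F

def linear_filter_predict(Z, F, M):
    """Apply the fitted filter; returns predicted positions for bins M..T-1."""
    T = Z.shape[0]
    G = np.array([np.concatenate(([1.0], Z[k - M:k].ravel())) for k in range(M, T)])
    return G @ F
```

Unlike the Kalman filter, nothing here models the temporal evolution of the state: each prediction is an independent regression over a long window of rates.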
The data used here are more “natural”, varied, and rapid, and we find that modeling acceleration improves the prediction of the system state and the accuracy of the reconstruction; Table 1 shows the decrease in accuracy with only position and velocity in the system state (with 140 ms lag).

4 Conclusions

We have described a discrete linear Kalman filter that is appropriate for the neural control of 2D cursor motion. The model can be easily learned using a few minutes of training data and provides real-time estimates of hand position every 70 ms given the firing rates of 42 cells in primary motor cortex.

Figure 3: Reconstruction of each component of the system state variable (x- and y-position, velocity, and acceleration): true target motion (dashed (red)) and reconstruction using the Kalman filter (solid (blue)). 20 s from a 1 min test sequence are shown.

Figure 4: Reconstruction of position using the linear filter: true target trajectory (dashed (red)) and reconstruction using the linear filter (solid (blue)).

The estimated trajectories are more accurate than the fixed linear filtering results being used currently. The Kalman filter proposed here provides a rigorous probabilistic approach with a well understood theory. By making its assumptions explicit and by providing an estimate of uncertainty, the Kalman filter offers significant advantages over previous methods. The method also estimates hand velocity and acceleration in addition to 2D position. In contrast to previous experiments, we show, for the natural 2D motions in this task, that incorporating acceleration into the system and measurement models improves the accuracy of the decoding.
We also show that, consistent with previous studies, a time lag (approximately 140 ms here) improves the accuracy. Our future work will evaluate the performance of the Kalman filter for on-line neural control of cursor motion in the task described here. Additionally, we are exploring alternative measurement noise models, non-linear system models, and non-linear particle filter decoding methods. Finally, to get a complete picture of current methods, we are pursuing further comparisons with population vector methods [7] and particle filtering techniques [2].

Acknowledgments. This work was supported in part by: the DARPA Brain Machine Interface Program, NINDS Neural Prosthetics Program and Grant #NS25074, and the National Science Foundation (ITR Program award #0113679). We thank J. Dushanova, C. Vargas, L. Lennox, and M. Fellows for their assistance.

References

[1] Brown, E., Frank, L., Tang, D., Quirk, M., and Wilson, M. (1998). A statistical paradigm for neural spike train decoding applied to position prediction from ensemble firing patterns of rat hippocampal place cells. J. of Neuroscience, 18(18):7411–7425.
[2] Gao, Y., Black, M. J., Bienenstock, E., Shoham, S., and Donoghue, J. P. (2002). Probabilistic inference of hand motion from neural activity in motor cortex. Advances in Neural Information Processing Systems 14, The MIT Press.
[3] Gelb, A. (Ed.) (1974). Applied Optimal Estimation. MIT Press.
[4] Georgopoulos, A., Schwartz, A., and Kettner, R. (1986). Neural population coding of movement direction. Science, 233:1416–1419.
[5] Helms Tillery, S., Taylor, D., Isaacs, R., and Schwartz, A. (2000). Online control of a prosthetic arm from motor cortical signals. Soc. for Neuroscience Abst., Vol. 26.
[6] Maynard, E., Nordhausen, C., and Normann, R. (1997). The Utah intracortical electrode array: A recording structure for potential brain-computer interfaces. Electroencephalography and Clinical Neurophysiology, 102:228–239.
[7] Moran, D. and Schwartz, A. (1999).
Motor cortical representation of speed and direction during reaching. J. of Neurophysiology, 82(5):2676–2692.
[8] Paninski, L., Fellows, M., Hatsopoulos, N., and Donoghue, J. P. (2001). Temporal tuning properties for hand position and velocity in motor cortical neurons. Submitted, J. of Neurophysiology.
[9] Serruya, M. D., Hatsopoulos, N. G., Paninski, L., Fellows, M. R., and Donoghue, J. P. (2002). Brain-machine interface: Instant neural control of a movement signal. Nature, 416:141–142.
[10] Serruya, M., Hatsopoulos, N., and Donoghue, J. (2000). Assignment of primate M1 cortical activity to robot arm position with Bayesian reconstruction algorithm. Soc. for Neuroscience Abst., Vol. 26.
[11] Taylor, D., Tillery, S., and Schwartz, A. (2002). Direct cortical control of 3D neuroprosthetic devices. Science, 296(5574):1829–1832.
[12] Warland, D., Reinagel, P., and Meister, M. (1997). Decoding visual information from a population of retinal ganglion cells. J. of Neurophysiology, 78(5):2336–2350.
[13] Welch, G. and Bishop, G. (2001). An introduction to the Kalman filter. Technical Report TR 95-041, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3175.
[14] Wessberg, J., Stambaugh, C., Kralik, J., Beck, P., Laubach, M., Chapin, J., Kim, J., Biggs, S., Srinivasan, M., and Nicolelis, M. (2000). Real-time prediction of hand trajectory by ensembles of cortical neurons in primates. Nature, 408:361–365.
[15] Wu, W., Black, M. J., Gao, Y., Bienenstock, E., Serruya, M., and Donoghue, J. P. (2002). Inferring hand motion from multi-cell recordings in motor cortex using a Kalman filter. SAB’02 Workshop on Motor Control in Humans and Robots: On the Interplay of Real Brains and Artificial Devices, Aug. 10, 2002, Edinburgh, Scotland, pp. 66–73.
[16] Zhang, K., Ginzburg, I., McNaughton, B., and Sejnowski, T. (1998). Interpreting neuronal population activity by reconstruction: Unified framework with application to hippocampal place cells. J. Neurophysiol., 79:1017–1044.
2002
On the Dirichlet Prior and Bayesian Regularization

Harald Steck (Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, harald@ai.mit.edu)
Tommi S. Jaakkola (Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139, tommi@ai.mit.edu)

Abstract

A common objective in learning a model from data is to recover its network structure, while the model parameters are of minor interest. For example, we may wish to recover regulatory networks from high-throughput data sources. In this paper we examine how Bayesian regularization using a product of independent Dirichlet priors over the model parameters affects the learned model structure in a domain with discrete variables. We show that a small scale parameter - often interpreted as "equivalent sample size" or "prior strength" - leads to a strong regularization of the model structure (sparse graph) given a sufficiently large data set. In particular, the empty graph is obtained in the limit of a vanishing scale parameter. This is diametrically opposite to what one may expect in this limit, namely the complete graph from an (unregularized) maximum likelihood estimate. Since the prior affects the parameters as expected, the scale parameter balances a trade-off between regularizing the parameters vs. the structure of the model. We demonstrate the benefits of optimizing this trade-off in the sense of predictive accuracy.

1 Introduction

Regularization is essential when learning from finite data sets. In the Bayesian approach, regularization is achieved by specifying a prior distribution over the parameters and subsequently averaging over the posterior distribution. This regularization provides not only smoother estimates of the parameters compared to maximum likelihood but also guides the selection of model structures.
It was pointed out in [6] that a very large scale parameter of the Dirichlet prior can degrade predictive accuracy due to severe regularization of the parameter estimates. We complement this discussion here and show that a very small scale parameter can lead to poor over-regularized structures when a product of (conjugate) Dirichlet priors is used over multinomial conditional distributions (Section 3). Section 4 demonstrates the effect of the scale parameter and how it can be calibrated. We focus on the class of Bayesian network models throughout this paper.

2 Regularization of Parameters

We briefly review Bayesian regularization of parameters. We follow the assumptions outlined in [6]: multinomial sample, complete data, parameter modularity, parameter independence, and Dirichlet prior. Note that the Dirichlet prior over the parameters is often used for two reasons: (1) the conjugate prior permits analytical calculations, and (2) the Dirichlet prior is intimately tied to the desirable likelihood-equivalence property of network structures [6]. The Dirichlet prior over the parameters θ_{Xi|πi} is given by

p(θ_{Xi|πi}) = [Γ(α_{πi}) / ∏_{xi} Γ(α_{xi,πi})] ∏_{xi} θ_{xi|πi}^{α_{xi,πi} − 1},   (1)

where θ_{xi|πi} pertains to variable Xi in state xi given that its parents Πi are in joint state πi, and α_{πi} = Σ_{xi} α_{xi,πi}. The number of variables in the domain is denoted by n, and i = 1, ..., n. The normalization term in Eq. 1 involves the Gamma function Γ(·). There are a number of approaches to specifying the positive hyper-parameters α_{xi,πi} of the Dirichlet prior [2, 1, 6]; we adopt the common choice

α_{xi,πi} = α · p(xi, πi),   (2)

where p is a (marginal) prior distribution over the (joint) states and α is the scale parameter, as this assignment ensures likelihood equivalence of the network structures [6]. Due to lack of prior knowledge, p is often chosen to be uniform, p(xi, πi) = 1/(|Xi| · |Πi|), where |Xi|, |Πi| denote the numbers of (joint) states [1].
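The uniform assignment of Eq. 2 and the regularized posterior-mean estimate it induces (given explicitly as Eq. 3 below) fit in a few lines. A minimal NumPy sketch (the helper name is ours):

```python
import numpy as np

def posterior_mean(counts, alpha):
    """Regularized estimate theta[x|pi] = (N[x,pi] + a[x,pi]) / (N[pi] + a[pi])
    with a[x,pi] = alpha * p(x, pi) and p uniform (Eq. 2).
    counts: the (|X_i|, |Pi_i|) table of cell counts N[x,pi] for one variable."""
    a = np.full(counts.shape, alpha / counts.size)   # alpha * p(x, pi), p uniform
    # denominators are the parent marginals N[pi] + a[pi], broadcast over states
    return (counts + a) / (counts.sum(axis=0) + a.sum(axis=0))
```

As the scale parameter shrinks the estimate approaches the maximum likelihood ratio N[x,pi]/N[pi]; as it grows the estimate is pulled toward the uniform prior, matching the discussion of Eq. 3.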
The scale parameter α of the Dirichlet prior is positive and independent of i, i.e., α = Σ_{xi,πi} α_{xi,πi}. The average parameter value, which typically serves as the regularized parameter estimate given a network structure m, is given by

E_{p(θ_{xi|πi} | D, m)}[θ_{xi|πi}] = (N_{xi,πi} + α_{xi,πi}) / (N_{πi} + α_{πi}),   (3)

where N_{xi,πi} are the cell counts from data D and E[·] is the expectation. Positive hyper-parameters α_{xi,πi} lead to regularized parameter estimates, i.e., the estimated parameters become "smoother" or "less extreme" when the prior distribution p is close to uniform. An increasing scale parameter α leads to a stronger regularization, while in the limit α → 0, the (unregularized) maximum likelihood estimate is obtained, as expected.

3 Regularization of Structure

In the remainder of this paper, we outline effects due to Bayesian regularization of the Bayesian network structure when using a product of Dirichlet priors. Let us briefly introduce relevant notation. In the Bayesian approach to structure learning, the posterior probability of the network structure m is given by p(m|D) = p(D|m) p(m)/p(D), where p(D) is the (unknown) probability of the given data D, and p(m) denotes the prior distribution over the network structures; we assume p(m) > 0 for all m. Following the assumptions outlined in [6], including the Dirichlet prior over the parameters θ, the marginal likelihood p(D|m) = E_{p(θ|m)}[p(D|m, θ)] can be calculated analytically. Pretending that the (i.i.d.) data arrived in a sequential manner, it can be written as

p(D|m) = ∏_{k=1}^{N} ∏_{i=1}^{n} (N^{(k−1)}_{xi^k,πi^k} + α_{xi^k,πi^k}) / (N^{(k−1)}_{πi^k} + α_{πi^k}),   (4)

where N^{(k−1)} denotes the counts implied by the data D^{(k−1)} seen before step k along the sequence (k = 1, ..., N). The (joint) state of variable Xi and its parents Πi occurring in the kth data point is denoted by xi^k, πi^k. In Eq.
4, we also decomposed the joint probability into a product of conditional probabilities according to the Bayesian network structure m. Eq. 4 is independent of the sequential ordering of the data points, and the ratio in Eq. 3 corresponds to the one in Eq. 4 when based on the data D^{(k−1)} at each step k along the sequence.

3.1 Limit of Vanishing Scale Parameter

This section is concerned with the limit of a vanishing scale parameter of the Dirichlet prior, α → 0. In this limit Bayesian regularization depends crucially on the number of zero cell counts in the contingency table implied by the data, or in other words, on the number of different configurations (data points) contained in the data. Let the Effective Number of Parameters (EP) be defined as

d_EP^{(m)} = Σ_{i=1}^{n} [ Σ_{xi,πi} I(N_{xi,πi}) − Σ_{πi} I(N_{πi}) ],   (5)

where N_{xi,πi}, N_{πi} are the (marginal) cell counts in the contingency table implied by data D; m refers to the Bayesian network structure, and I(·) is an indicator function such that I(z) = 0 if z = 0 and I(z) = 1 otherwise. When all cell counts are positive, EP is identical to the well-known number of parameters (P), d_EP^{(m)} = d_P^{(m)} = Σ_i (|Xi| − 1)|Πi|, which plays an important role in regularizing the learned network structure. The key difference is that EP accounts for zero cell counts implied by the data. Let us now consider the behavior of the marginal likelihood (cf. Eq. 4) in the limit of a small scale parameter α. We find

Proposition 1: Under the assumptions concerning the prior distribution outlined in Section 2, the marginal likelihood of a Bayesian network structure m vanishes in the limit α → 0 if the data D contain two or more different configurations. This property is independent of the network structure. The leading polynomial order is given by

p(D|m) ∼ α^{d_EP^{(m)}}   as α → 0,   (6)

which depends both on the network structure and the data. However, the dependence on the data is through the number of different data points only.
This holds independently of the particular choice of strictly positive prior distributions p(xi, πi). If the prior over the network structures is strictly positive, this limiting behavior also holds for the posterior probability p(m|D). In the following we give a derivation of Proposition 1 that also facilitates the intuitive understanding of the result. First, let us consider the behavior of the Dirichlet distribution in the limit α → 0. The hyper-parameters α_{xi,πi} vanish when α → 0, and thus the Dirichlet prior converges to a discrete distribution over the parameter simplex in the sense that the probability mass concentrates at a particular, randomly chosen corner of the simplex containing θ_{·|πi} (cf. [9]). Since the randomly chosen points (for different πi, i) do not change when sampling (several) data points from the distribution implied by the model, it follows immediately that the marginal likelihood of any network structure vanishes whenever there are two or more different configurations contained in the data. This well-known fact also shows that the limit α → 0 actually corresponds to a very strong prior belief [9, 12]. This is in contrast to many traditional interpretations where the limit α → 0 is considered as "no prior information", often motivated by Eq. 3. As pointed out in [9, 12], the interpretation of the scale parameter α as "equivalent sample size" or as the "strength" of prior belief may be misleading, particularly in the case where α_{xi,πi} < 1 for some configurations xi, πi. A review of different notions of "noninformative" priors (including their limitations) can be found in [7]. Note that the noninformative prior in the sense of entropy is achieved by setting α_{xi,πi} = 1 for each xi, πi and for all i = 1, ..., n. This is the assignment originally proposed in [2]; however, this assignment is generally inconsistent with Eq. 2, and hence with likelihood equivalence [6].
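The corner-concentration behavior of the Dirichlet prior is easy to see by sampling. A small numeric illustration (the particular concentration values are our arbitrary choices):

```python
import numpy as np

# As the scale parameter shrinks, Dirichlet draws concentrate at the corners
# of the parameter simplex: each sample is nearly a point mass on one state.
rng = np.random.default_rng(0)
small = rng.dirichlet([0.05] * 4, size=1000)    # tiny hyper-parameters
large = rng.dirichlet([250.0] * 4, size=1000)   # large hyper-parameters

# with tiny hyper-parameters, almost all mass sits on one randomly chosen state
print(np.mean(small.max(axis=1)))
# with large hyper-parameters, draws cluster near the uniform distribution
print(np.mean(large.max(axis=1)))
```

The first mean is close to 1 while the second stays near 1/4, which is exactly the "discrete distribution over the simplex" picture used in the derivation above.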
In order to explain the behavior of the marginal likelihood in leading order of the scale parameter α, the properties of the Dirichlet distribution are not sufficient by themselves. Additionally, it is essential that the probability distribution described by a Bayesian network decomposes into a product of conditional probabilities, and that there is a Dirichlet prior pertaining to each variable for each parent configuration. All these Dirichlet priors are independent of each other under the standard assumption of parameter independence. Obviously, the ratio (for given k and i) in Eq. 4 can only vanish in the limit α → 0 if N^{(k−1)}_{xi^k,πi^k} = 0 while N^{(k−1)}_{πi^k} > 0; in other words, the parent configuration πi^k must already have occurred previously along the sequence (πi^k is "old"), while the child state xi^k occurs simultaneously with this parent state for the first time (xi^k, πi^k is "new"). In this case, the leading polynomial order of the ratio (for given k and i) is linear in α, assuming p(xi, πi) > 0; otherwise the ratio (for given k and i) converges to a finite positive value in the limit α → 0. Consequently, the dependence of the marginal likelihood in leading polynomial order on α is completely determined by the number of different configurations in the data. It follows immediately that the leading polynomial order in α is given by EP (cf. Eq. 5). This is because the first term counts the number of all the different joint configurations of Xi, Πi in the data, while the second term ensures that EP counts only those configurations where (xi^k, πi^k) is "new" while πi^k is "old". Note that the behavior of the marginal likelihood in Proposition 1 is not entirely determined by the network structure in the limit α → 0, as it still depends on the data. This is illustrated in the following example. First, let us consider two binary variables, X0 and X1, and the data D containing only two data points, say (0,0) and (1,1).
Given data D, three Dirichlet priors are relevant regarding graph m1, X0 → X1, but only two Dirichlet priors pertain to the empty graph, m0. The resulting additional "flexibility" due to an increased number of priors favours more complex models: p(D|m1) ∼ α, while p(D|m0) ∼ α². Second, let us now assume that all possible configurations occur in data D. Then we still have p(D|m0) ∼ α² for the empty graph. Concerning graph m1, however, the marginal likelihood now also involves the vanishing terms due to the two priors pertaining to θ_{X1|X0=0} and θ_{X1|X0=1}, and it hence becomes p(D|m1) ∼ α³. This dependence on the data can be formalized as follows. Let us compare the marginal likelihoods of two graphs, say m+ and m−. In particular, let us consider two graphs that are identical except for a single edge, say A ← B, between the variables A and B. Let the edge be present in graph m+ and absent in m−. The fact that the marginal likelihood decomposes into terms pertaining to each of the variables (cf. Eq. 4) entails that all the terms regarding the remaining variables cancel out in the Bayes factor p(D|m+)/p(D|m−), which is the standard relative Bayesian score. With the definition of the Effective Degrees of Freedom (EDF)¹

d_EDF = d_EP^{(m+)} − d_EP^{(m−)},   (7)

we immediately obtain from Proposition 1 that p(D|m+)/p(D|m−) ∼ α^{d_EDF} in the limit α → 0, and hence

Proposition 2: Let m+ and m− be the two network structures as defined above. Let the prior belief be given according to Eq. 2. Then in the limit α → 0:

log [p(D|m+)/p(D|m−)] → −∞ if d_EDF > 0, and → +∞ if d_EDF < 0.   (8)

The result holds independently of the particular choice of strictly positive prior distributions p(xi, πi). If the prior over the network structures is strictly positive, this limiting behavior also holds for the posterior ratio.

¹Note that EDF is not necessarily non-negative.
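The scalings in this example, p(D|m1) ∼ α and p(D|m0) ∼ α² with exponents given by EP (Eq. 5), can be checked numerically from the standard closed-form Gamma-function expression of the Dirichlet-multinomial marginal likelihood (equivalent to Eq. 4). The sketch below is ours; helper names are not from the paper.

```python
from math import lgamma, log

def effective_num_params(tables):
    """EP of Eq. 5: per variable, nonzero cells minus nonzero parent marginals.
    Each table is a list of rows (states x_i) over columns (parent states pi_i)."""
    ep = 0
    for t in tables:
        ep += sum(1 for row in t for v in row if v)
        ep -= sum(1 for j in range(len(t[0])) if sum(row[j] for row in t))
    return ep

def log_marginal_family(counts, a):
    """Log contribution of one family to p(D|m) in closed Gamma-function form;
    a[i][j] are the Dirichlet hyper-parameters alpha[x,pi]."""
    ll = 0.0
    for j in range(len(counts[0])):           # one Dirichlet per parent state
        Nj = sum(row[j] for row in counts)
        aj = sum(row[j] for row in a)
        ll += lgamma(aj) - lgamma(aj + Nj)
        for i in range(len(counts)):
            ll += lgamma(a[i][j] + counts[i][j]) - lgamma(a[i][j])
    return ll

# D = {(0,0), (1,1)} over binary X0, X1; uniform p(x_i, pi_i) as in Eq. 2
t_single = [[1], [1]]        # counts for a parentless binary variable
t_cond = [[1, 0], [0, 1]]    # counts for X1 given its parent X0

def log_p(al, m):
    lx0 = log_marginal_family(t_single, [[al / 2], [al / 2]])
    if m == "empty":   # m0: X1 also has no parent
        return lx0 + log_marginal_family(t_single, [[al / 2], [al / 2]])
    # m1: X0 -> X1, hyper-parameter alpha/4 per (x1, x0) cell
    return lx0 + log_marginal_family(t_cond, [[al / 4] * 2, [al / 4] * 2])

# slope of log p(D|m) vs. log(alpha) near zero = leading polynomial order
slope = lambda m: (log_p(1e-4, m) - log_p(1e-6, m)) / log(1e2)
print(round(slope("empty"), 2), effective_num_params([t_single, t_single]))  # 2.0 2
print(round(slope("edge"), 2), effective_num_params([t_single, t_cond]))     # 1.0 1
```

The measured slopes match the EP values, confirming Proposition 1 for this tiny data set.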
A positive value of the log Bayes factor indicates that the presence of the edge A ← B is favored, given the parents Π_A; conversely, a negative relative score suggests that the absence of this edge is preferred. The divergence of this relative Bayesian score implies that there exists a (small) positive threshold value α0 > 0 such that, for any α < α0, the same graph(s) are favored as in the limit. Since Proposition 2 applies to every edge in the network, it follows immediately that the empty (complete) graph is assigned the highest relative Bayesian score when EDF are positive (negative). Regularization of network structure in the case of positive EDF is therefore extreme, permitting only the empty graph. This is precisely the opposite of what one may have expected in this limit, namely the complete graph corresponding to the unregularized maximum likelihood estimate (MLE). In contrast, when EDF are negative, the complete graph is favored. This agrees with MLE. Roughly speaking, positive (negative) EDF correspond to large (small) data sets. It is thus surprising that a small data set, where one might expect an increased restriction on model complexity, actually gives rise to the complete graph, while a large data set yields the - most regularized - empty graph in the limit α → 0. Moreover, it is conceivable that a "medium" sized data set may give rise to both positive and negative EDF. This is because the marginal contingency tables implied by the data with respect to a sparse (dense) graph may contain a small (large) number of zero cell counts. The relative Bayesian score can hence become rather unstable in this case, as completely different graph structures are optimal in the limit α → 0, namely graphs where each variable has either the maximal number of parents or none. Note that there are two reasons for the hyper-parameters α_{xi,πi} to take on small values (cf. Eq. 2): (1) a small equivalent sample size α, or (2) a large number of joint states, i.e.
|X_i| · |Π_i| ≫ α, due to a large number of parents (with a large number of states). Thus, these hyperparameters can also vanish in the limit of a large number of configurations (x, π) even though the scale parameter α remains fixed. This is precisely the limit defining Dirichlet processes [4], which, analogously, produce discrete samples. With a finite data set and a large number of joint configurations, only the typical limit in Proposition 2 is possible. This follows from the fact that a large number of zero cell counts forces the EDF to be negative. The surprising behavior implied by Proposition 2 therefore does not carry over to Dirichlet processes. As found in [8], however, the use of a product of Dirichlet process priors in nonparametric inference can also lead to surprising effects. When d_EDF = 0, it is indeed true that the value of the log Bayes factor can converge to any (possibly finite) value as α → 0. Its value is determined by the priors p(x_i, π_i), as well as by the counts implied by the data. The value of the Bayes factor can therefore easily be set by adjusting the prior weights p(x_i, π_i). 3.2 Large Scale-Parameter In the other limiting case, where α → ∞, the Bayes factor approaches a finite value, which in general depends on the given data and on the prior distributions p(x_i, π_i). Figure 1: The log Bayes factor (lBF) is depicted as a function of the scale parameter α, for z = 3 and z = 0. It is assumed that the two variables A and B are binary and have no parents, and that the "data" imply the contingency table N_{A=0,B=0} = N_{A=1,B=1} = 10 + z and N_{A=1,B=0} = N_{A=0,B=1} = 10 − z, where z is a free parameter determining the statistical dependence between A and B. The prior p(x_i, π_i) was chosen to be uniform. This can be seen easily by applying the Stirling approximation in the limit α → ∞ after rewriting Eq. 4 in terms of Gamma functions (cf. also [2, 6]).
When the popular choice of a uniform prior p(x_i, π_i) is used [1], then log [p(D|m+)/p(D|m−)] → 0 as α → ∞, (9) which is independent of the data. Hence, neither the presence nor the absence of the edge between A and B is favored in this limit. Given a uniform prior over the network structures, p(m) = const, the posterior distribution p(m|D) over the graphs thus becomes increasingly spread out as α grows, permitting more variable network structures. The behavior of the Bayes factor between the two limits α → 0 and α → ∞ is exemplified for positive EDF in Figure 1: there are two qualitatively different behaviors, depending on the degree of statistical dependence between A and B. A sufficiently weak dependence results in a monotonically increasing Bayes factor which favors the absence of the edge A ← B at any finite value of α. In contrast, given a sufficiently strong dependence between A and B, the log Bayes factor takes on positive values for all (finite) α exceeding a certain value α+ of the scale parameter. Moreover, α+ grows as the statistical dependence between A and B diminishes. Consequently, given a domain with a range of degrees of statistical dependence, the number of edges in the learned graph increases monotonically with growing scale parameter α when each variable has at most one parent (i.e., in the class of trees or forests). This is because increasingly weaker statistical dependencies between variables are recovered as α grows; the restriction to forests excludes possible "interactions" among (several) parents of a variable. As suggested by our experiments, this increase in the number of edges can also be expected to hold for general Bayesian network structures (although not necessarily in a monotonic way). This reveals that regularization of network structure tends to diminish with a growing scale parameter. Note that this is in the opposite direction to the regularization of parameters (cf. Section 2).
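The two regimes of Figure 1 can be reproduced in a few lines (our sketch; under the uniform prior, the edge-dependent hyperparameters are α/4 for θ_{b|a} and α/2 for the marginal of B, and the terms for variable A cancel in the Bayes factor):

```python
from math import lgamma

def log_ml(counts, pseudo):
    """Dirichlet-multinomial log marginal likelihood of one multinomial."""
    a, n = sum(pseudo), sum(counts)
    return (lgamma(a) - lgamma(a + n)
            + sum(lgamma(p + c) - lgamma(p) for p, c in zip(pseudo, counts)))

def log_bf(z, alpha):
    """log Bayes factor for the edge A <- B with the contingency table of
    Figure 1: N(0,0) = N(1,1) = 10 + z, N(1,0) = N(0,1) = 10 - z."""
    # m-: B parentless, counts (20, 20), hyperparameters (alpha/2, alpha/2)
    lm_absent = log_ml((20, 20), (alpha / 2, alpha / 2))
    # m+: B given A; by symmetry both parent states contribute equally
    lm_present = 2 * log_ml((10 + z, 10 - z), (alpha / 4, alpha / 4))
    return lm_present - lm_absent

for alpha in (10, 50, 100, 300):
    print(alpha, log_bf(0, alpha), log_bf(3, alpha))
```

With z = 0 (exact independence) the log Bayes factor stays negative and rises toward 0 with growing α; with z = 3 it turns positive once α exceeds the threshold α+, as in the figure.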
Hence, the scale parameter α of the Dirichlet prior determines the trade-off between regularizing the parameters vs. the structure of the Bayesian network model. If a uniform prior over the network structures is chosen, p(m) = const, the above discussion also holds for the posterior ratio (instead of the Bayes factor). The behavior is more complicated, however, when a non-uniform prior is assumed. For instance, when a prior is chosen that penalizes the presence of edges, the posterior favours the absence of an edge not only when the scale parameter is sufficiently small, but also when it is sufficiently large. This is apparent from Fig. 1, when the log Bayes factor is compared to a positive threshold value (instead of zero). 4 Example This section exemplifies that the entire model (parameters and structure) has to be considered when learning from data. This is because regularization of model structure diminishes, while regularization of parameters increases, with a growing scale parameter α of the Dirichlet prior, as discussed in the previous sections. When the entire model is taken into account, one can use a sensitivity analysis in order to determine the dependence of the learned model on the scale parameter α, given the prior p(x_i, π_i) (cf. Eq. 2). The influence of the scale parameter α on the predictive accuracy of the model can be assessed by cross-validation or, in a Bayesian approach, by prequential validation [11, 3]. Another possibility is to treat the scale parameter α as an additional parameter of the model to be learned from data. Hence, prior belief regarding the parameters θ can then enter only through the (normalized) distributions p(x_i, π_i). However, note that this is sufficient to determine the (average) prior parameter estimate θ (cf. Eq. 3), i.e., when N = 0. Assuming an (improper) uniform prior distribution over α, its posterior distribution is p(α|D) ∝ p(D|α), given data D.
Then α_D = argmax_α p(D|α), where p(D|α) = Σ_m p(D|α, m) p(m)2 can be calculated exactly if the summation is feasible (as in the example below). Alternatively, assuming that the posterior over α is strongly peaked, the likelihood may also be approximated by summing over only the k most likely graphs m (k = 1 in the most extreme case; empirical Bayes). Subsequently, the model structure m and parameters θ can be learned with respect to the Bayesian score employing α_D. In the following, the effect of various values assigned to the scale parameter α is exemplified on the data set gathered from Wisconsin high-school students by Sewell and Shah [10]. This domain comprises 5 discrete variables, each with 2 or 4 states; the sample size is 10,318. In this small domain, exhaustive search in the space of Bayesian network structures is feasible (29,281 graphs). Both the prior distributions p(m) for all m and p(x_i, π_i) are chosen to be uniform. Figure 2 shows that the number of edges in the graph with the highest posterior probability grows with an increasing value of the scale parameter, as expected (cf. Section 3). In addition, cross-validation indicates best predictive accuracy of the learned model at α ≈ 100, ..., 300, while the likelihood p(D|α) takes on its maximum at α_D ≈ 69. Both approaches agree on the same network structure, which is depicted in Fig. 3. This graph can easily be interpreted in a causal manner, as outlined in [5].3 We note that this graph was also obtained in [5], due, however, to additional constraints concerning network structure, as a rather small prior strength of α = 5 was used. For comparison, Fig. 3 also shows the highest-scoring unconstrained graph due to α = 5, which does not permit a causal interpretation, cf. also [5].
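The maximization α_D = argmax_α p(D|α) can be approximated by a simple grid search whenever the model space is small enough to sum over exactly. The sketch below is entirely our illustration (the toy counts, the two-graph model space, and the grid are made up, not the Wisconsin data):

```python
from math import lgamma, log, exp

def log_ml(counts, pseudo):
    """Dirichlet-multinomial log marginal likelihood of one multinomial."""
    a, n = sum(pseudo), sum(counts)
    return (lgamma(a) - lgamma(a + n)
            + sum(lgamma(p + c) - lgamma(p) for p, c in zip(pseudo, counts)))

# Toy domain: binary X0, X1; model space {empty graph, X0 -> X1}, uniform p(m).
n00, n01, n10, n11 = 15, 7, 3, 15
nx0 = (n00 + n01, n10 + n11)
nx1 = (n00 + n10, n01 + n11)

def log_p_data(alpha):
    """log p(D|alpha) = log sum_m p(D|alpha, m) p(m), computed stably."""
    lm_empty = log_ml(nx0, (alpha / 2,) * 2) + log_ml(nx1, (alpha / 2,) * 2)
    lm_edge = (log_ml(nx0, (alpha / 2,) * 2)
               + log_ml((n00, n01), (alpha / 4,) * 2)
               + log_ml((n10, n11), (alpha / 4,) * 2))
    hi = max(lm_empty, lm_edge)
    return hi + log(0.5 * (exp(lm_empty - hi) + exp(lm_edge - hi)))

grid = [2.0 ** k for k in range(-6, 11)]
alpha_d = max(grid, key=log_p_data)   # crude stand-in for argmax_alpha p(D|alpha)
print(alpha_d, log_p_data(alpha_d))
```

In practice one would refine the grid around the maximizer, or sum over only the k most likely graphs when exhaustive summation is infeasible, as described above.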
This illustrates that the "right" choice of the scale parameter α of the Dirichlet prior, when accounting for both model structure and parameters, can have a crucial impact on the learned network structure and the resulting insight into the ("true") dependencies among the variables in the domain. 2We assume that m and α are independent a priori, p(m|α) = p(m). 3Since we did not impose any constraints on the network structure, unlike [5], Markov equivalence leaves the orientation of the edge between the variables IQ and CP unspecified. Figure 2: As a function of α: number of arcs (a.) in the highest-scoring graph; average KL divergence in 5-fold cross-validation (XV5), std = 0.006; likelihood of α when treated as an additional model parameter (α_D = 69).
α | a. | XV5 | p(D|α)/p(D|α_D)
5 | 6 | 0.045 | 10^-10
50 | 7 | 0.044 | 0.13
100 | 7 | 0.040 | 0.05
200 | 7 | 0.040 | 10^-14
300 | 7 | 0.040 | 10^-30
500 | 7 | 0.042 | 10^-65
1,000 | 8 | 0.047 | 10^-151
Figure 3: Highest-scoring (unconstrained) graphs when α = 5 (left), and when α = 46, ..., 522 (right). Variables: SES: socioeconomic status; SEX: gender of student; PE: parental encouragement; CP: college plans; IQ: intelligence quotient. Note that the latter graph can also be obtained at α = 5 when additional constraints are imposed on the structure, cf. [5]. Acknowledgments We would like to thank Chen-Hsiang Yeang and the anonymous referees for valuable comments. Harald Steck acknowledges support from the German Research Foundation (DFG) under grant STE 1045/1-1. Tommi Jaakkola acknowledges support from Nippon Telegraph and Telephone Corporation, NSF ITR grant IIS-0085836, and from the Sloan Foundation in the form of the Sloan Research Fellowship. References [1] W. Buntine. Theory refinement on Bayesian networks. Conference on Uncertainty in Artificial Intelligence, pages 52–60. Morgan Kaufmann, 1991. [2] G. Cooper and E. Herskovits. A Bayesian method for the induction of probabilistic networks from data. Machine Learning, 9:309–47, 1992. [3] A. P. Dawid. Statistical theory.
The prequential approach. Journal of the Royal Statistical Society, Series A, 147:277–305, 1984. [4] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. Annals of Statistics, 1:209–30, 1973. [5] D. Heckerman. A tutorial on learning with Bayesian networks. In M. I. Jordan (Ed.), Learning in Graphical Models, pages 301–54. Kluwer, 1996. [6] D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: the combination of knowledge and statistical data. Machine Learning, 20:197–243, 1995. [7] R. E. Kass and L. Wasserman. Formal rules for selecting prior distributions: a review and annotated bibliography. Technical Report 583, CMU, 1993. [8] S. Petrone and A. E. Raftery. A note on the Dirichlet process prior in Bayesian nonparametric inference with partial exchangeability. Technical Report 297, University of Washington, Seattle, 1995. [9] J. Sethuraman and R. C. Tiwari. Convergence of Dirichlet measures and the interpretation of their parameter. In S. S. Gupta and J. O. Berger (Eds.), Statistical Decision Theory and Related Topics III, pages 305–15, 1982. [10] W. Sewell and V. Shah. Social class, parental encouragement, and educational aspirations. American Journal of Sociology, 73:559–72, 1968. [11] M. Stone. Cross-validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society, Series B, 36:111–47, 1974. [12] S. G. Walker and B. K. Mallick. A note on the scale parameter of the Dirichlet process. The Canadian Journal of Statistics, 25:473–9, 1997.
A Differential Semantics for Jointree Algorithms James D. Park and Adnan Darwiche Computer Science Department University of California, Los Angeles, CA 90095 {jd,darwiche}@cs.ucla.edu Abstract A new approach to inference in belief networks has been recently proposed, which is based on an algebraic representation of belief networks using multi–linear functions. According to this approach, the key computational question is that of representing multi–linear functions compactly, since inference reduces to a simple process of evaluating and differentiating such functions. We show here that mainstream inference algorithms based on jointrees are a special case of this approach in a very precise sense. We use this result to prove new properties of jointree algorithms, and then discuss some of its practical and theoretical implications. 1 Introduction It was recently shown that the probability distribution of a belief network can be represented using a multi–linear function, and that most probabilistic queries of interest can be retrieved directly from the partial derivatives of this function [2]. Although the multi–linear function has an exponential number of terms, it can be represented using a small arithmetic circuit in certain situations [3].1 Once a belief network is represented as an arithmetic circuit, probabilistic inference is then performed by evaluating and differentiating the circuit, using a very simple procedure which resembles back–propagation in neural networks. We show in this paper that mainstream inference algorithms based on jointrees [14, 8] are a special case of the arithmetic–circuit approach proposed in [2]. Specifically, we show that each jointree is an implicit representation of an arithmetic circuit; that the inward–pass in jointree propagation evaluates this circuit; and that the outward–pass differentiates the circuit. Using these results, we prove new useful properties of jointree propagation.
We also suggest a new interpretation of the process of factoring graphical models into jointrees, as a process of factoring exponentially–sized multi–linear functions into arithmetic circuits of smaller size. 1For example, it was shown recently that real–world belief networks with treewidth up to 60 can be compiled into arithmetic circuits with a few thousand nodes [3]. Such networks have local structure, and are outside the scope of mainstream algorithms for inference in belief networks, whose complexity is exponential in treewidth. Figure 1: The CPTs of belief network B ← A → C. Pr(A): θa = .6, θ¯a = .4. Pr(B|A): θb|a = .2, θ¯b|a = .8, θb|¯a = .7, θ¯b|¯a = .3. Pr(C|A): θc|a = .8, θ¯c|a = .2, θc|¯a = .15, θ¯c|¯a = .85. This paper is structured as follows. Sections 2 and 3 are dedicated to a review of inference approaches based on arithmetic circuits and jointrees. Section 4 shows that the jointree approach is a special case of the arithmetic–circuit approach, and discusses some practical implications of this finding. Finally, Section 5 closes with a new perspective on factoring graphical models. Proofs of all theorems can be found in the long version of this paper [11]. 2 Belief networks as multi–linear functions A belief network is a factored representation of a probability distribution. It consists of two parts: a directed acyclic graph (DAG) and a set of conditional probability tables (CPTs). For each node X and its parents U, we have a CPT that specifies the distribution of X given each instantiation u of the parents; see Figure 1.2 A belief network is a representational factorization of a probability distribution, not a computational one. That is, although the network compactly represents the distribution, it needs to be processed further if one is to obtain answers to arbitrary probabilistic queries.
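As a concrete illustration (our sketch, not the paper's code), the CPTs of Figure 1 can be stored as plain dicts, and any query answered by the naive enumeration over the joint distribution that the jointree and circuit methods the paper develops are designed to avoid:

```python
from itertools import product

# CPTs of Figure 1 for the network B <- A -> C.
theta_A = {True: .6, False: .4}
theta_B = {True: {True: .2, False: .8}, False: {True: .7, False: .3}}    # Pr(B|A)
theta_C = {True: {True: .8, False: .2}, False: {True: .15, False: .85}}  # Pr(C|A)

def prob(evidence):
    """Brute-force query answering: sum Pr(a, b, c) over all joint
    instantiations consistent with the evidence dict, e.g. {'B': True}."""
    total = 0.0
    for a, b, c in product((True, False), repeat=3):
        consistent = all(evidence.get(var, val) == val
                         for var, val in (('A', a), ('B', b), ('C', c)))
        if consistent:
            total += theta_A[a] * theta_B[a][b] * theta_C[a][c]
    return total

print(prob({'B': True, 'C': False}))   # Pr(b, ~c)
```

This enumeration is exponential in the number of variables, which is exactly why a computational factorization is needed.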
Mainstream algorithms for inference in belief networks operate on the network to generate a computational factorization, allowing one to answer queries in time which is linear in the factorization size. A most influential computational factorization of belief networks is the jointree [14, 8, 6]. Standard jointree factorizations are structure–based: their size depends only on the network topology and is invariant to local CPT structure. This observation has triggered much research into alternative, finer–grained factorizations, since real-world networks can exhibit significant local structure that leads to smaller factorizations if exploited. We discuss next one of the latest proposals in this direction, which calls for using arithmetic circuits as a computational factorization of belief networks [2]. This proposal is based on viewing each belief network as a multi–linear function, which can be represented compactly using an arithmetic circuit. The multi–linear function itself contains two types of variables. First, evidence indicators, where for each variable X in the network, we have a variable λx for each value x of X. Second, network parameters, where for each variable X and its parents U in the network, we have a variable θx|u for each value x of X and instantiation u of U. The multi–linear function has a term for each instantiation of the network variables, which is constructed by multiplying all evidence indicators and network parameters that are consistent with that instantiation. For example, the multi–linear function of the network in Figure 1 has eight terms corresponding to the eight instantiations of variables A, B, C: f = λaλbλcθaθb|aθc|a + λaλbλ¯cθaθb|aθ¯c|a + . . . + λ¯aλ¯bλ¯cθ¯aθ¯b|¯aθ¯c|¯a. We will often refer to such a multi–linear function as the network polynomial. 2Variables are denoted by upper–case letters (A) and their values by lower–case letters (a).
Sets of variables are denoted by bold–face upper–case letters (A) and their instantiations are denoted by bold–face lower–case letters (a). For a variable A with values true and false, we use a to denote A = true and ¯a to denote A = false. Finally, for a variable X and its parents U, we use θx|u to denote the CPT entry corresponding to Pr(x | u). Figure 2: On the left: An arithmetic circuit which computes the function λaλbθaθb|a + λaλ¯bθaθ¯b|a + λ¯aλbθ¯aθb|¯a + λ¯aλ¯bθ¯aθ¯b|¯a. The circuit is a DAG, where leaf nodes represent function variables and internal nodes represent arithmetic operations. On the right: A belief network structure over A, B, C, D, E and its corresponding jointree, with clusters ABC (tables λA, θA, θB|A, θC|A), BCD (tables λB, λD, θD|BC) and CE (tables λC, λE, θE|C). Given the network polynomial f, we can answer any query with respect to the belief network. Specifically, let e be an instantiation of some network variables, and suppose we want to compute the probability of e. We can do this by simply evaluating the polynomial f while setting each evidence indicator λx to 1 if x is consistent with e, and to 0 otherwise. For the network in Figure 1, we can compute the probability of evidence e = b¯c by evaluating its polynomial above under λa = 1, λ¯a = 1, λb = 1, λ¯b = 0 and λc = 0, λ¯c = 1. This leads to θaθb|aθ¯c|a + θ¯aθb|¯aθ¯c|¯a, which equals the probability of b, ¯c in this case. We use f(e) to denote the result of evaluating the polynomial f under evidence e as given above. This algebraic representation of belief networks is attractive as it allows us to obtain answers to a large number of probabilistic queries directly from the derivatives of the network polynomial [2]. For example, the posterior marginal Pr(x|e) for a variable X ̸∈E equals (1/f(e)) ∂f(e)/∂λx, where ∂f(e)/∂λx is the partial derivative of f with respect to λx evaluated at e. Second, the probability of evidence e after having retracted the value of some variable X from e, Pr(e − X), equals Σx ∂f(e)/∂λx.
Third, the posterior marginal Pr(x, u|e) for a variable X and its parents U equals (θx|u/f(e)) ∂f(e)/∂θx|u. The network polynomial has an exponential number of terms, yet one can represent it compactly in certain cases using an arithmetic circuit; see Figure 2. The (first) partial derivatives of an arithmetic circuit can all be computed simultaneously in time linear in the circuit size [2, 12]. The procedure resembles the back–propagation algorithm for neural networks as it evaluates the circuit in a single upward–pass, and then differentiates it through a single downward–pass. The main computational question is then that of generating the smallest arithmetic circuit that computes the network polynomial. A structure–based approach for this has been given in [2], which is guaranteed to generate a circuit whose size is bounded by O(n exp(w)), where n is the number of nodes in the network and w is its treewidth. A more recent approach, however, which exploits local structure, has been presented in [3] and was shown experimentally to generate small arithmetic circuits for networks whose treewidth is up to 60. As we show in the rest of this paper, the process of factoring a belief network into a jointree is yet another method for generating an arithmetic circuit for the network. Specifically, we show that the jointree structure is an implicit representation of such a circuit, and that jointree propagation corresponds to circuit evaluation and differentiation. Moreover, the difference between Shenoy–Shafer and Hugin propagation turns out to be a difference in the numeric scheme used for circuit differentiation [11]. 3 Jointree Algorithms We now review jointree algorithms, which are quite influential in graphical models. Let B be a belief network. A jointree for B is a pair (T , L), where T is a tree and L is a function that assigns labels to nodes in T .
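The upward (evaluation) and downward (differentiation) passes can be sketched on the circuit of Figure 2, using the parameter values of Figure 1 for A and B. This is our own minimal implementation (class and function names are illustrative, not from the paper):

```python
import math

class Node:
    def __init__(self, op, children=(), value=0.0):
        self.op, self.children = op, list(children)  # op: 'leaf', '+', or '*'
        self.value, self.deriv = value, 0.0

def evaluate(node):
    """Upward pass: compute node values bottom-up."""
    if node.op != 'leaf':
        vals = [evaluate(c) for c in node.children]
        node.value = sum(vals) if node.op == '+' else math.prod(vals)
    return node.value

def backprop(node, d=1.0):
    """Downward pass: accumulate df/dv into each node (back-propagation style)."""
    node.deriv += d
    for i, c in enumerate(node.children):
        if node.op == '+':
            backprop(c, d)
        else:  # '*': derivative w.r.t. one child is d times the product of the others
            backprop(c, d * math.prod(s.value for j, s in enumerate(node.children) if j != i))

# Circuit of Figure 2: f = la lb ta tb|a + la lnb ta tnb|a + lna lb tna tb|na + lna lnb tna tnb|na
def leaf(v): return Node('leaf', value=v)
la, lna, lb, lnb = leaf(1), leaf(0), leaf(1), leaf(1)   # evidence e = {A = true}
ta, tna = leaf(.6), leaf(.4)
tba, tnba, tbna, tnbna = leaf(.2), leaf(.8), leaf(.7), leaf(.3)
f = Node('+', [Node('*', [la, lb, ta, tba]),   Node('*', [la, lnb, ta, tnba]),
               Node('*', [lna, lb, tna, tbna]), Node('*', [lna, lnb, tna, tnbna])])

evaluate(f)   # f(e) = Pr(a) = 0.6
backprop(f)   # lb.deriv = Pr(a, b); lna.deriv = Pr of e with A flipped, i.e. Pr(~a)
```

The derivative with respect to λ¯a being Pr(¯a) is exactly the "evidence flipping" use of ∂f(e)/∂λx discussed later in the paper.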
A jointree must satisfy three properties: (1) each label L(i) is a set of variables in the belief network; (2) each network variable X and its parents U (a family) must appear together in some label L(i); (3) if a variable appears in the labels of i and j, it must also appear in the label of each node k on the path connecting them. The label of edge ij in T is defined as L(i) ∩ L(j). We will refer to the nodes of a jointree (and sometimes their labels) as clusters. We will also refer to the edges of a jointree (and sometimes their labels) as separators. Figure 2 depicts a belief network and one of its jointrees. Jointree algorithms start by constructing a jointree for a given belief network [14, 8, 6]. They also associate tables (also called potentials) with clusters and separators.3 The conditional probability table (CPT or CP Table) of each variable X with parents U, denoted θX|U, is assigned to a cluster that contains X and U. In addition, an evidence table over variable X, denoted λX, is assigned to a cluster that contains X. Figure 2 depicts the assignments of evidence and CP tables to clusters. Evidence e is entered into a jointree by initializing evidence tables as follows: we set λX(x) to 1 if x is consistent with evidence e, and we set λX(x) to 0 otherwise. Given some evidence e, a jointree algorithm propagates messages between clusters. After passing two messages per edge in the jointree, one can compute the marginals Pr(C, e) for every cluster C. There are two main methods for propagating messages in a jointree: the Shenoy–Shafer architecture [14] and the Hugin architecture [8]. Shenoy–Shafer propagation proceeds as follows [14]. First, evidence e is entered into the jointree. A cluster is then selected as the root and message propagation proceeds in two phases, inward and outward. In the inward phase, messages are passed toward the root. In the outward phase, messages are passed away from the root.
Cluster i sends a message to cluster j only when it has received messages from all its other neighbors k. A message from cluster i to cluster j is a table Mij defined as follows: Mij = Σ_{C\S} φi Π_{k≠j} Mki, where C are the variables of cluster i, S are the variables of separator ij, and φi is the product of all evidence and CP tables assigned to cluster i. Once message propagation is finished, we have Pr(C, e) = φi Π_k Mki, where C are the variables of cluster i. Hugin propagation proceeds similarly to Shenoy–Shafer by entering evidence; selecting a cluster as root; and propagating messages in two phases, inward and outward [8]. The Hugin method, however, differs in some major ways. It maintains a table Φij with each separator, whose entries are initialized to 1s. It also maintains a table Φi with each cluster i, initialized to the product of all CPTs and evidence tables assigned to cluster i. Cluster i passes a message to neighboring cluster j only when i has received messages from all its other neighbors k. When cluster i is ready to send a message to cluster j, it does the following. First, it saves the table of separator ij, Φij, into Φij^old. Second, it computes a new separator table Φij = Σ_{C\S} Φi, where C are the variables of cluster i and S are the variables of separator ij. Third, it computes a message to cluster j: Mij = Φij/Φij^old. Finally, it multiplies the computed message into the table of cluster j: Φj = Φj Mij. After the inward and outward passes of Hugin propagation are completed, we have: Pr(C, e) = Φi, where C are the variables of cluster i. 3A table is an array which is indexed by variable instantiations. Specifically, a table φ over variables X is indexed by the instantiations x of X. Its entries φ(x) are in [0, 1].
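As a tiny worked instance of Shenoy–Shafer message passing (our sketch, using the A and B CPTs of Figure 1), consider the two-cluster jointree with clusters A and AB and separator {A}. Cluster AB has no other neighbors, so its message is just its own table marginalized to the separator:

```python
# Cluster A holds {lambda_A, theta_A}; cluster AB holds {lambda_B, theta_B|A};
# the separator is {A}.  Parameter values are the CPTs of Figure 1.
theta_A = {'a': .6, 'na': .4}
theta_BA = {('a', 'b'): .2, ('a', 'nb'): .8, ('na', 'b'): .7, ('na', 'nb'): .3}

def message_AB_to_A(lam_B):
    """M_ij = sum over C\\S of phi_i: marginalize B out of cluster AB's table."""
    return {a: sum(lam_B[b] * theta_BA[a, b] for b in ('b', 'nb'))
            for a in ('a', 'na')}

def joint_marginal_A(lam_A, lam_B):
    """Pr(A, e) = phi_i * product of incoming messages, at cluster A."""
    m = message_AB_to_A(lam_B)
    return {a: lam_A[a] * theta_A[a] * m[a] for a in ('a', 'na')}

# Evidence e = {B = b}: lambda_B(b) = 1, lambda_B(~b) = 0.
pr = joint_marginal_A({'a': 1, 'na': 1}, {'b': 1, 'nb': 0})
print(pr)   # entries sum to Pr(b)
```

The resulting table is Pr(A, e), and summing its entries gives the probability of evidence, as stated above for Shenoy–Shafer propagation.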
4 Jointrees as arithmetic circuits We now show that every jointree (together with a root cluster and a particular assignment of evidence and CP tables to clusters) corresponds precisely to an arithmetic circuit that computes the network polynomial. We also show that the inward pass of the Shenoy–Shafer architecture evaluates this circuit, while the outward pass differentiates it. We show a similar result for the Hugin architecture. Definition 1 Given a root cluster and a particular assignment of evidence and CP tables to clusters, the arithmetic circuit embedded in a jointree is defined as follows:4 Nodes: The circuit includes: an output addition node f; an addition node s for each instantiation of a separator S; a multiplication node c for each instantiation of a cluster C; an input node λx for each instantiation x of variable X; an input node θx|u for each instantiation xu of family XU. Edges: The children of the output node f are the multiplication nodes generated by the root cluster; the children of an addition node s are all compatible nodes generated by the child cluster; the children of a multiplication node c are all compatible nodes generated by child separators, and all compatible input nodes assigned to cluster C. Hence, separators contribute addition nodes and clusters contribute multiplication nodes. Moreover, the structure of the jointree dictates how these nodes are connected into a circuit. The arithmetic circuit in Figure 2 is embedded in the jointree A–AB, with cluster A as the root, and with tables λA, θA assigned to cluster A and tables λB and θB|A assigned to cluster B. Theorem 1 The circuit embedded in a jointree computes the network polynomial. Therefore, by constructing a jointree one is generating a compact representation of the network polynomial in terms of an arithmetic circuit. We are now ready to state our basic results on the differential semantics of jointree propagation, but we need some notational conventions first.
In the following three theorems: f denotes the circuit embedded in a jointree or its (unique) output node; s denotes a separator instantiation or the addition node generated by that instantiation; and c denotes a cluster instantiation or the multiplication node generated by that instantiation. Moreover, the value that a circuit node v takes under evidence e is denoted v(e). Recall that a circuit (or network polynomial) is evaluated under evidence e by setting each input λx to 1 if x is consistent with e, and to 0 otherwise. Finally, recall that ∂f/∂v represents the derivative of the circuit output with respect to node v. Our first result relates to Shenoy–Shafer propagation. Theorem 2 The messages produced using Shenoy–Shafer propagation on a jointree under evidence e have the following semantics. For each inward message Mij, we have Mij(s) = s(e). For each outward message Mji, we have Mji(s) = ∂f(e)/∂s. Hence, if we interpret separator instantiations as addition nodes in a circuit as given by Definition 1, we get that a message directed towards the jointree root contains the values of these addition nodes, while a message directed outward from the root contains the partial derivatives of the circuit output with respect to these nodes. Shenoy–Shafer propagation does not compute derivatives with respect to input nodes λx and θx|u, but these can be obtained using local computations as follows. 4Given a root cluster, one can direct the jointree by having arrows point away from the root, which also defines a parent/child relationship between clusters and separators. Theorem 3 If evidence table λX is assigned to cluster i with variables C: ∂f(e)/∂λx = (Σ_{C\X} Π_j Mji Π_{ψ≠λX} ψ)(x), (1) where ψ ranges over all evidence and CP tables assigned to cluster i. Moreover, if CPT θX|U is assigned to cluster i with variables C: ∂f(e)/∂θx|u = (Σ_{C\XU} Π_j Mji Π_{ψ≠θX|U} ψ)(xu), (2) where ψ ranges over all evidence and CP tables assigned to cluster i.
Therefore, even though Shenoy–Shafer propagation does not fully differentiate the embedded circuit, the differentiation process can be completed through local computations after propagation has finished.5 We now discuss some applications of the partial derivatives with respect to evidence indicators λx and network parameters θx|u. Fast retraction & evidence flipping. Suppose jointree propagation has been performed using evidence e, which gives us direct access to the probability of e. Suppose now we are interested in the probability of a different evidence e′, which results from changing the value of some variable X in e to a new value x. The probability of e′ in this case is equal to ∂f(e)/∂λx [2], which can be obtained as given by Equation 1. The ability to perform this computation efficiently is crucial for algorithms that try to approximate the maximum a posteriori hypothesis (MAP) using local search [9, 10]. Another application of this derivative is in computing the probability of evidence e′, which results from retracting the value of some variable X from e: Pr(e′) = Σx ∂f(e)/∂λx. This computation is key to analyzing evidence conflict, as it allows us to determine the extent to which one piece of evidence is contradicted by the remaining pieces. Sensitivity analysis & parameter learning. The derivative ∂Pr(e)/∂θx|u is essential for sensitivity analysis: it is the basis for an efficient approach that identifies minimal network parameter changes that are necessary to satisfy constraints on probabilistic queries [1]. This derivative is also crucial for gradient ascent approaches for learning network parameters, as it is required to compute the gradient 5Hugin propagation also corresponds to circuit evaluation/differentiation: Theorem 4 Cluster tables, separator tables and messages produced using Hugin propagation under evidence e have the following semantics: For table Φi of cluster i with variables C: Φi(c) = c(e) ∂f(e)/∂c.
For table Φij of separator ij with variables S: Φij(s) = s(e) ∂f(e)/∂s. For each inward message Mij, we have Mij(s) = s(e). For each outward message Mji, we have Mji(s) = ∂f(e)/∂s if s(e) ≠ 0. Again, Hugin propagation does not compute derivatives with respect to input nodes λx and θx|u. Even for addition and multiplication nodes, it only retains derivatives multiplied by values. Hence, if we want to recover the derivative with respect to, say, multiplication node c, we must know the value of this node, and it must be different from zero. In such a case, we have ∂f(e)/∂c = Φi(c)/c(e), where Φi is the table associated with the cluster i that generates node c. One can also compute the quantity v ∂f/∂v for input nodes using equations similar to those in Theorem 3. But such quantities will be useful for obtaining derivatives only if the values of such input nodes are not zero. Hence, Shenoy–Shafer propagation is more informative than Hugin propagation as far as the computation of derivatives is concerned. used for deciding moves in the search space [13]. This derivative equals ∂f(e)/∂θx|u, and can be obtained as given by Equation 2. The only other method we are aware of to compute this derivative (beyond the one in [2]) is the one using the identity ∂Pr(e)/∂θx|u = Pr(x, u, e)/θx|u, which requires θx|u ≠ 0 [13]. Hence, our results seem to suggest the first general approach for computing this derivative using standard jointree propagation. Bounding rounding errors. Jointree propagation gives exact results only when infinite-precision arithmetic is used. In practice, however, finite-precision floating-point arithmetic is typically used, in which case the differential semantics of jointree propagation can be used to bound the rounding error in the computed probability of evidence. See the full paper [11] for details on computing this bound.
5 A new perspective on factoring graphical models
We have shown in this paper that each jointree can be viewed as an implicit representation of an arithmetic circuit which computes the network polynomial, and that jointree propagation corresponds to an evaluation and differentiation of the circuit. These results have been useful in unifying the circuit approach presented in [2] with jointree approaches, and in uncovering more properties of jointree propagation. Another outcome of these results relates to the level at which it is useful to phrase the problem of factoring graphical probabilistic models. Specifically, the perspective we are promoting here is that probability distributions defined by graphical models should be viewed as multi–linear functions, and the construction of jointrees should be viewed as a process of constructing arithmetic circuits that compute these functions. That is, the fundamental object being factored is a multi–linear function, and the fundamental result of the factorization is an arithmetic circuit. A graphical model is a useful abstraction of the multi–linear function, and a jointree is a useful structure for embedding the arithmetic circuit. This view of factoring is useful since it allows us to cast the factoring problem in more refined terms, which puts us in a better position to exploit the local structure of graphical models in the factorization process. Note that the topology of a graphical model defines the form of the multi–linear function, while the model's local structure (as exhibited in its CPTs) constrains the values of variables appearing in the function. One can factor a multi–linear function without knowledge of such constraints, but the resulting factorizations will not be optimal. For a dramatic example, consider a fully connected network with variables X1, . . . , Xn, where all parameters are equal to 1/2. Any jointree for the network will have a cluster of size n, leading to O(exp(n)) complexity.
There is, however, a circuit of O(n) size here, since the network polynomial can be easily factored as: f = (1/2)^n ∏_{i=1}^{n} (λ_{x_i} + λ_{x̄_i}). Hence, in the presence of local structure, it appears more promising to factor the graphical model into the more refined arithmetic circuit since not every arithmetic circuit can be embedded in a jointree. This promise is made apparent by the results in [3], which we sketch next. First, the multi–linear function of a belief network is "encoded" using a propositional theory, which is expressive enough to capture the form of the multi–linear function in addition to constraints on its variables. The theory is then compiled into a special logical form, known as deterministic decomposable negation normal form. An arithmetic circuit is finally extracted from that form. The method was able to generate relatively small arithmetic circuits for a significant suite of real–world belief networks with treewidths up to 60. It is worth mentioning here that the above perspective is in harmony with recent approaches that represent probabilistic models using algebraic decision diagrams (ADDs), citing the promise of ADDs in exploiting local structure [5]. ADDs and related representations, such as edge–valued decision diagrams, are known to be compact representations of multi–linear functions. Moreover, each of these representations can be expanded in linear time into an arithmetic circuit that satisfies some strong properties [4]. Hence, such representations are special cases of arithmetic circuits as well. We finally note that the relationship between multi–linear functions (polynomials in general) and arithmetic circuits is a classical subject of algebraic complexity theory [15]. In this field of complexity, computational problems are expressed as polynomials, and a central question is that of determining the size of the smallest arithmetic circuit that computes a given polynomial, leading to the notion of circuit complexity.
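The gap in this example is easy to check numerically: the factored polynomial evaluates in O(n), while naive enumeration of the multilinear function costs O(2^n). A minimal sketch (Python; function names are illustrative):

```python
from itertools import product

def f_naive(lam, n):
    """Enumerate all 2^n terms of the multilinear polynomial: O(2^n) work."""
    total = 0.0
    for assignment in product([0, 1], repeat=n):
        term = 0.5 ** n                      # all parameters equal 1/2
        for i, x in enumerate(assignment):
            term *= lam[i][x]                # one indicator per variable per term
        total += term
    return total

def f_factored(lam, n):
    """Factored circuit f = (1/2)^n * prod_i (lam_i0 + lam_i1): O(n) work."""
    prod = 1.0
    for i in range(n):
        prod *= lam[i][0] + lam[i][1]
    return 0.5 ** n * prod

n = 10
lam = [(1.0, 1.0)] * n                       # no evidence: f must equal 1
assert abs(f_naive(lam, n) - f_factored(lam, n)) < 1e-12
lam = [(1.0, 0.0)] * n                       # evidence fixing every variable
assert abs(f_factored(lam, n) - 0.5 ** n) < 1e-12
```

Both routines agree on every indicator setting; only the factored form stays feasible as n grows.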
Using this notion, it is then meaningful to talk about the circuit complexity of a graphical model: the size of the smallest arithmetic circuit that computes the multi–linear function induced by the model.
Acknowledgment This work has been partially supported by NSF grant IIS9988543 and MURI grant N00014-00-1-0617.
References
[1] H. Chan and A. Darwiche. When do numbers really matter? JAIR, 17:265–287, 2002.
[2] A. Darwiche. A differential approach to inference in Bayesian networks. In UAI'00, pages 123–132, 2000. To appear in JACM.
[3] A. Darwiche. A logical approach to factoring belief networks. In KR'02, pages 409–420, 2002.
[4] A. Darwiche. On the factorization of multi–linear functions. Technical Report D–128, UCLA, Los Angeles, CA 90095, 2002.
[5] J. Hoey, R. St-Aubin, A. Hu, and C. Boutilier. SPUDD: Stochastic planning using decision diagrams. In UAI'99, pages 279–288, 1999.
[6] C. Huang and A. Darwiche. Inference in belief networks: A procedural guide. IJAR, 15(3):225–263, 1996.
[7] M. Iri. Simultaneous computation of functions, partial derivatives and estimates of rounding error. Japan J. Appl. Math., 1:223–252, 1984.
[8] F. V. Jensen, S. L. Lauritzen, and K. G. Olesen. Bayesian updating in recursive graphical models by local computation. Comp. Stat. Quart., 4:269–282, 1990.
[9] J. Park. MAP complexity results and approximation methods. In UAI'02, pages 388–396, 2002.
[10] J. Park and A. Darwiche. Approximating MAP using stochastic local search. In UAI'01, pages 403–410, 2001.
[11] J. Park and A. Darwiche. A differential semantics for jointree algorithms. Technical Report D–118, UCLA, Los Angeles, CA 90095, 2001.
[12] G. Rote. Path problems in graphs. Computing Suppl., 7:155–189, 1990.
[13] S. Russell, J. Binder, D. Koller, and K. Kanazawa. Local learning in probabilistic networks with hidden variables. In IJCAI'95, pages 1146–1152, 1995.
[14] P. P. Shenoy and G. Shafer. Propagating belief functions with local computations. IEEE Expert, 1(3):43–52, 1986.
[15] J. von zur Gathen. Algebraic complexity theory. Ann. Rev. Comp. Sci., 3:317–347, 1988.
2002
3
2,232
Scaling of Probability-Based Optimization Algorithms
J. L. Shapiro, Department of Computer Science, University of Manchester, Manchester, M13 9PL, U.K. jls@cs.man.ac.uk
Abstract
Population-based Incremental Learning is shown to require very sensitive scaling of its learning rate. The learning rate must scale with the system size in a problem-dependent way. This is shown in two problems: the needle-in-a-haystack, in which the learning rate must vanish exponentially in the system size, and a smooth function, in which the learning rate must vanish like the inverse square root of the system size. Two methods are proposed for removing this sensitivity. A learning dynamics which obeys detailed balance is shown to give consistent performance over the entire range of learning rates. An analog of mutation is shown to require a learning rate which scales as the inverse system size, but is problem independent.
1 Introduction
There has been much recent work using probability models to search in optimization problems. The probability model generates candidate solutions to the optimization problem. It is updated so that the solutions generated should improve over time. Usually, the probability model is a parameterized graphical model, and updating the model involves changing the parameters and possibly the structure of the model. The general scheme works as follows:
• Initialize the model to some prior (e.g. a uniform distribution);
• Repeat
- Sampling step: generate a data set by sampling from the probability model;
- Testing step: test the data as solutions to the problem;
- Selection step: create an improved data set by selecting the better solutions and removing the worse ones;
- Learning step: create a new probability model from the old model and the improved data set (e.g. as a mixture of the old model and the most likely model given the improved data set);
• until (stopping criterion met)
Different algorithms are largely distinguished by the class of probability models used.
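A minimal concrete instance of this loop, with a single Gaussian as the probability model (all names, the toy fitness, and the settings below are illustrative, not from the paper):

```python
import random
import statistics

def eda_gaussian(fitness, n_pop=100, n_keep=20, n_iter=60, lr=0.5):
    """Minimal EDA instance: the model is a single Gaussian (mean, std),
    updated as a mixture of the old model and the model fit to the
    selected samples."""
    mu, sigma = 0.0, 5.0                                    # broad prior
    for _ in range(n_iter):
        pop = [random.gauss(mu, sigma) for _ in range(n_pop)]  # sampling step
        pop.sort(key=fitness, reverse=True)                    # testing step
        elite = pop[:n_keep]                                   # selection step
        mu = (1 - lr) * mu + lr * statistics.mean(elite)       # learning step
        sigma = (1 - lr) * sigma + lr * (statistics.stdev(elite) + 1e-6)
    return mu

# Maximize a toy fitness peaked at x = 3.
random.seed(0)
x_best = eda_gaussian(lambda x: -(x - 3.0) ** 2)
assert abs(x_best - 3.0) < 0.5
```

Swapping the Gaussian for a product of Bernoulli probabilities over bits recovers PBIL, the algorithm analyzed in the rest of the paper.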
For reviews of the approach, including the different graphical models which have been used, see [3, 6]. These algorithms have been called Estimation of Distribution Algorithms (EDAs); I will use that term here. EDAs are related to genetic algorithms; instead of evolving a population, a generative model which produces the population at each generation is evolved. A motivation for using EDAs instead of GAs is that in EDAs the structure of the graphical model corresponds to the form of the crossover operator in GAs (in the sense that a given graph will produce data whose probability will not change much under a particular crossover operator). If the EDA can learn the structure of the graph, it removes the need to set the crossover operator by hand (but see [2] for evidence against this). In this paper, a very simple EDA is considered on very simple problems. It is shown that the algorithm is extremely sensitive to the value of the learning rate. The learning rate must vanish with the system size in a problem-dependent way, and for some problems it has to vanish exponentially fast. Two corrective measures are considered: a new learning rule which obeys detailed balance in the space of parameters, and an operator analogous to mutation which has been proposed previously.
2 The Standard PBIL Algorithm
The simplest example of an EDA is Population-based Incremental Learning (PBIL), which was introduced by Baluja [1]. PBIL uses a probability model which is a product of independent probabilities for each component of the binary search space. Let Xi denote the ith component of X, an L-component binary vector which is a state of the search space. The probability model is defined by the L-component vector of parameters γ(t), where γi(t) denotes the probability that Xi = 1 at time t. The algorithm works as follows:
• Initialize γi(0) = 1/2 for all i;
• Repeat
- Generate a population of N strings by sampling from the binomial distribution defined by γ(t).
- Find the best string in the population, x*.
- Update the parameters: γi(t + 1) = γi(t) + α[x*i − γi(t)] for all i.
• until (stopping criterion met)
The algorithm has only two parameters, the size of the population N and the learning parameter α.
3 The sensitivity of PBIL to the learning rate
3.1 PBIL on a flat landscape
The source of sensitivity of PBIL to the learning rate lies in its behavior on a flat landscape. In this case all vectors are equally fit, so the "best" vector x* is a random vector and its expected value is
⟨x*i⟩ = γi(t)   (1)
(where ⟨·⟩ denotes the expectation operator). Thus, the parameters remain unchanged on average. In any individual run, however, the parameters converge rapidly to one of the corners of the hypercube. As the parameters deviate from 1/2 they will move towards a corner of the hypercube. Then the population generated will be biased towards that corner, which will move the parameters closer yet to that corner, etc. All of the corners of the hypercube are attractors which, although never reached, are increasingly attractive with increasing proximity. Let us call this phenomenon drift. (In population genetics, the term drift refers to the loss of genetic diversity due to finite population sampling. It is in analogy to this that the term is used here.) Consider the average squared distance between the parameters and 1/2,
D(t) ≡ (1/L) Σi (1/2 − γi(t))².   (2)
Solving this reveals that on average D converges to 1/4 with a characteristic time
τ = −1/log(1 − α²) ≈ 1/α² for α → 0.   (3)
The rate of search on any other search space will have to compete with drift.
3.2 PBIL and the needle-in-the-haystack problem
As a simple example of the interplay between drift and directed search, consider the so-called needle-in-a-haystack problem. Here the fitness of all strings is 0 except for one special string (the "needle") which has a fitness of 1.
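The algorithm is short enough to state directly. The sketch below (Python; the fixed iteration budget and default settings are illustrative simplifications of the stopping rule) also demonstrates the drift phenomenon: on a flat landscape the mean squared distance D(t) of equation (2) approaches its 1/4 limit.

```python
import random

def pbil(fitness, L, alpha, n_pop=20, n_iter=2000, seed=0):
    """Standard PBIL as described above; returns the final parameter vector."""
    rng = random.Random(seed)
    gamma = [0.5] * L                                   # gamma_i(0) = 1/2
    for _ in range(n_iter):
        pop = [[1 if rng.random() < g else 0 for g in gamma]
               for _ in range(n_pop)]                   # sample N strings
        x_best = max(pop, key=fitness)                  # best of the population
        gamma = [g + alpha * (x - g)                    # gamma += alpha (x* - gamma)
                 for g, x in zip(gamma, x_best)]
    return gamma

# On a flat landscape every string is equally fit, so selection is random;
# the parameters nonetheless drift to a hypercube corner.
gamma = pbil(lambda x: 0.0, L=16, alpha=0.1)
drift = sum((0.5 - g) ** 2 for g in gamma) / 16         # D(t) from eq. (2)
assert drift > 0.15                                     # near the 1/4 limit
```

With α = 0.1 the drift time τ ≈ 1/α² = 100 iterations, so 2000 iterations is ample for the corners to capture every component.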
Assume it is the string of all 1's. It is shown here that PBIL will only find the needle if α is exponentially small, and is inefficient at finding the needle when compared to random search. Consider the probability of finding the needle at time t, denoted Ω(t) = ∏_{i=1}^{L} γi(t). Consider times shorter than T, where T is long enough that the needle may be found multiple times, but α²T → 0 as L → ∞. It will be shown for small α that when the needle is not found (during drift), Ω decreases by an amount α²LΩ/2, whereas when the needle is found, Ω increases by the amount αLΩ. Since initially the former happens at a rate 2^L times greater than the latter, α must be less than 2^{−(L−1)} for the system to move towards the hypercube corner near the optimum, rather than towards a random corner. When the needle is not found, the mean of Ω(t) is invariant, ⟨Ω(t + 1)⟩ = Ω(t). However, this is misleading, because Ω is not a self-averaging quantity; its mean is affected by exponentially unlikely events which have an exponentially big effect. A more robust measure of the size of Ω(t) is the exponentiated mean of the log of Ω(t). This will be denoted by [Ω] ≡ exp⟨log Ω⟩. This is the appropriate measure of the central tendency of a distribution which is approximately log-normal [4], as is expected of Ω(t) early in the dynamics, since the log of Ω is the sum of approximately independent quantities. The recursion for Ω expanded to second order in α obeys
[Ω(t + 1)] = [Ω(t)] [1 − (1/2)α²L],   needle not found;
[Ω(t + 1)] = [Ω(t)] [1 + αL + (1/2)α²L(L − 1)],   needle found.   (4)
In these equations, γi(t) has also been expanded around 1/2. Since the needle will be found with probability Ω(t) and not found with probability 1 − Ω(t), the recursion averages to
[Ω(t + 1)] = [Ω(t)] (1 − (1/2)α²L) + [Ω(t)]² [αL − (1/2)α²L(L + 1)].   (5)
The second term actually averages to [Ω(t)]⟨Ω(t)⟩, but the difference between ⟨Ω⟩ and [Ω] is of order α, and can be ignored.
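The fixed-point structure of this averaged recursion can be checked numerically. The sketch below iterates a reconstruction of equation (5) (the second-order coefficients are approximate, recovered from the derivation above) and verifies that Ω grows from its initial value Ω(0) = 2^{−L} only when α is below roughly 2^{−(L−1)}:

```python
def omega_next(om, alpha, L):
    # Averaged recursion (5), truncated at second order in alpha
    # (coefficients as reconstructed above; a sketch of the analysis).
    return (om * (1.0 - 0.5 * alpha**2 * L)
            + om**2 * (alpha * L - 0.5 * alpha**2 * L * (L + 1)))

def grows(alpha, L, steps=50000):
    om = 2.0 ** (-L)                 # Omega(0) for the all-ones needle
    for _ in range(steps):
        om = omega_next(om, alpha, L)
    return om > 2.0 ** (-L)

L = 12
# alpha well above 2^{-(L-1)}: Omega(0) lies below the unstable fixed
# point near alpha/2, so Omega decays towards the stable fixed point 0.
assert not grows(alpha=2.0 ** (-(L - 3)), L=L)
# alpha below 2^{-(L-1)}: Omega(0) lies above the unstable fixed point,
# so Omega increases monotonically.
assert grows(alpha=2.0 ** (-(L + 1)), L=L)
```

The per-step increment −(1/2)α²LΩ + αLΩ² changes sign exactly where the drift and needle-finding terms balance, which is the α < 2^{−(L−1)} condition derived above.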
Equation (5) has a stable fixed point at 0 and an unstable fixed point at α/2 + O(α²L). If the initial value of Ω(0) is less than the unstable fixed point, Ω will decay to zero. If Ω(0) is greater than the unstable fixed point, Ω will grow. The initial value is Ω(0) = 2^{−L}, so the condition for the likelihood of finding the needle to increase rather than decrease is α < 2^{−(L−1)}.
Figure 1: Simulations of PBIL on the needle-in-a-haystack problem for L = 8, 10, 11, 12 (respectively o, +, *, △). The algorithm is run until no parameters are between 0.05 and 0.95, and averaged over 1000 runs. Left: Fitness of best population member at convergence versus α. The non-robustness of the algorithm is clear; as L increases, α must be very finely set to a very small value to find the optimum. Right: As previous, but with α scaled by 2^L. The data approximately collapses, which shows that as L increases, α must decrease like 2^{−L} to get the same performance.
Figure 1 shows simulations of PBIL on the needle-in-a-haystack problem. These confirm the predictions made above: the optimum is found only if α is smaller than a constant times 2^{−L}. The algorithm is inefficient because it requires such small α; the convergence time to the optimum scales like 4^L. This is because the rate of convergence to the optimum goes like Ωα, both of which are O(2^{−L}).
3.3 PBIL and functions of unitation
One might think that the needle-in-the-haystack problem is hard in a special way, and that results on this problem are not relevant to other problems. This is not true, because even smooth functions have flat subspaces in high dimensions. To see this, consider any continuous, monotonic function of unitation u, where u = (1/L) Σi xi, the fraction of 1's in the vector. Assume the optimum occurs when all components are 1. The parameters γ can be decomposed into components parallel and perpendicular to the optimum.
Movement along the perpendicular direction is neutral; only movement towards or away from the optimum changes the fitness. The random strings generated at the start of the algorithm are almost entirely perpendicular to the global optimum, projecting only an amount of order 1/√L towards the optimum. Thus, the situation is like that of the needle-in-a-haystack problem. The perpendicular direction is flat, so there is convergence towards an arbitrary hypercube corner with a drift rate
τ⊥⁻¹ ∼ α²   (6)
from equation (3). Movement towards the global optimum occurs at a rate
τ∥⁻¹ ∼ α/√L.   (7)
Thus, α must be small compared to 1/√L for movement towards the global optimum to win. A rough argument can be used to show how the fitness in the final population depends on α. Making use of the fact that when N random variables are drawn from a Gaussian distribution with mean m and variance σ², the expected largest value drawn is m + √(2σ² log N) for large N (see, for example, [7]), the Gaussian approximation to the binomial distribution, and approximating the expectation of the square root as the square root of the expectation yields
⟨u(t + 1)⟩ = ⟨u(t)⟩ + α√(2⟨v(t)⟩ log N),   (8)
where v(t) is the variance in the probability distribution, v(t) = (1/L²) Σi γi(t)[1 − γi(t)]. Assuming that the convergence of the variance is primarily due to the convergence on the flat subspace, this can be solved as
⟨u(∞)⟩ ≈ 1/2 + √(log N) / (α√(2L)).   (9)
The equation must break down when the fitness approaches one, which is where the Gaussian approximation to the binomial breaks down.
Figure 2: Simulations of PBIL on the unitation function for L = 16, 32, 64, 128, 256 (respectively □, o, +, *, △). The algorithm is run until all parameters are closer to 1 or 0 than 0.05, and averaged over 100 runs. Left: Fitness of best population member at convergence versus α.
The fitness is scaled so that the global optimum has fitness 1 and the expected fitness of a random string is 0. As L increases, α must be set to a decreasing value to find the optimum. Right: As previous, but with α scaled by √L. The data approximately collapses, which shows that as L increases, α must decrease like 1/√L to get the same performance. The smooth curve shows equation (9).
Simulations of PBIL on the unitation function confirm these predictions. PBIL fails to converge to the global optimum unless α is small compared to 1/√L. Figure 2 shows the scaling of fitness at convergence with α√L, and compares simulations with equation (9).
4 Corrective 1: Detailed Balance PBIL
One view of the problem is that it is due to the fact that the learning dynamics does not obey detailed balance. Even on a flat space, the rate of movement of the parameters γi away from 1/2 is greater than the movement back. It is well-known that a Markov process on variables x will converge to a desired equilibrium distribution π(x) if the transition probabilities obey the detailed balance conditions
w(x′|x)π(x) = w(x|x′)π(x′),   (10)
where w(x′|x) is the probability of generating x′ from x. Thus, any search algorithm searching on a flat space should have dynamics which obeys
w(x′|x) = w(x|x′),   (11)
and PBIL does not obey this. Perhaps the sensitive dependence on α would be removed if it did. There is a difficulty in modifying the dynamics of PBIL to satisfy detailed balance, however. PBIL visits a set of points which varies from run to run, and (almost) never revisits points. This can be fixed by constraining the parameters to lie on a lattice. Then the dynamics can be altered to enforce detailed balance. Define the allowed parameters in terms of a set of integers ni. The relationship between them is
γi = 1 − (1/2)(1 − α)^{ni},   ni > 0;
γi = (1/2)(1 − α)^{|ni|},    ni < 0;
γi = 1/2,            ni = 0.
(12)
Learning dynamics now consists of incrementing and decrementing the ni's by 1; when x*i = 1 (0), ni is incremented (decremented). Transforming variables via equation (12), the uniform distribution in γ becomes, in n,
P(n) = (α / (2 − α)) (1 − α)^{|n|}.   (13)
4.0.1 Detailed balance by rejection sampling
One of the easiest methods for sampling from a distribution is to use the rejection method. In this, one has g(x′|x) as a proposal distribution; it is the probability of proposing the value x′ from x. Then A(x′|x) is the probability of accepting this change. The detailed balance condition becomes
g(x′|x)A(x′|x)π(x) = g(x|x′)A(x|x′)π(x′).   (14)
For example, the well-known Metropolis–Hastings algorithm has
A(x′|x) = min(1, g(x|x′)π(x′) / [g(x′|x)π(x)]).   (15)
The analogous equations for PBIL on the lattice are
A(n + 1|n) = min[ ((1 − γ(n+1)) / γ(n)) (1 − α), 1 ]   (16)
A(n − 1|n) = min[ (γ(n−1) / (1 − γ(n))) (1 − α), 1 ].   (17)
In applying the acceptance formula, each component is treated independently. Thus, moves can be accepted on some components and not on others.
4.0.2 Results
Detailed Balance PBIL requires no special tuning of parameters, at least when applied to the two problems of the opening sections. For the needle-in-a-haystack, simulations were performed for 100 values of α equally spaced between 0 and 0.4, for L = 8, 9, 10, 11, 12; 1000 trials of each, population size 20, with the same convergence criterion as before: the simulation halts when all γi's are less than 0.05 or greater than 0.95. On none of those simulations did the algorithm fail to contain the global optimum in the final population. For the function of unitation, Detailed Balance PBIL appears to always find the optimum if run long enough. Stopping it when all parameters fell outside the range (0.05, 0.95), the algorithm did not always find the global optimum.
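The lattice parameterization (12) and the acceptance rules (16)-(17) can be sketched directly. The single-component simulation below (Python; the settings are illustrative) runs the chain on a flat landscape and checks that the lattice index stays near n = 0, i.e. γ hovers near 1/2 instead of drifting to a corner:

```python
import random

def gamma_of(n, alpha):
    """Lattice parameterization, equation (12)."""
    if n > 0:
        return 1.0 - 0.5 * (1.0 - alpha) ** n
    if n < 0:
        return 0.5 * (1.0 - alpha) ** (-n)
    return 0.5

def db_pbil_step(n, x_star, alpha, rng):
    """One Detailed Balance PBIL update for a single component: propose an
    increment if the selected bit is 1 (a decrement if 0) and accept with
    the Metropolis-style probabilities of equations (16)-(17)."""
    g = gamma_of(n, alpha)
    if x_star == 1:
        accept = min((1.0 - gamma_of(n + 1, alpha)) / g * (1.0 - alpha), 1.0)
        return n + 1 if rng.random() < accept else n
    accept = min(gamma_of(n - 1, alpha) / (1.0 - g) * (1.0 - alpha), 1.0)
    return n - 1 if rng.random() < accept else n

# Flat landscape: the "best" bit is just a sample from the model itself.
rng = random.Random(1)
n, alpha = 0, 0.2
for _ in range(20000):
    x = 1 if rng.random() < gamma_of(n, alpha) else 0
    n = db_pbil_step(n, x, alpha, rng)
# The stationary distribution P(n) of eq. (13) is geometric in |n|,
# so n remains close to 0 rather than running off to a corner.
assert abs(n) < 100
```

One can verify analytically that with the sampling proposal g(n+1|n) = γ(n) these acceptance rules satisfy detailed balance with respect to P(n) ∝ (1 − α)^{|n|}.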
It produced an average fitness within 1% of the optimum for α between 0.1 and 0.4 and L = 32, 64, 128, 256 over 100 trials, but for learning rates below 0.1 and L = 256 the average fitness fell as low as 4% below optimum. However, this is much improved over standard PBIL (see figure 2), where the average fitness fell to 60% below the optimum in that range.
5 Corrective 2: Probabilistic mutation
Another approach to control drift is to add an operator analogous to mutation in GAs. Mutation has the property that when repeatedly applied, it converges to a random data set. Mühlenbein [5] has proposed that the analogous operator for EDAs estimates frequencies biased towards a random guess. Suppose fi is the fraction of 1's at site i. Then the appropriate estimate of the probability of a 1 at site i is
γi = (fi + m) / (1 + 2m),   (18)
where m is a mutation-like parameter. This will be recognized as the maximum a posteriori estimate of the binomial distribution using as the prior a β-distribution with both parameters equal to mN + 1; the prior biases the estimate towards 1/2. This can be applied to PBIL by using the following learning rule,
γi(t + 1) = (γi(t) + α[x*i − γi(t)] + m) / (1 + 2m).   (19)
With m = 0 it gives the usual PBIL rule; when repeatedly applied on a flat space it converges to 1/2. Unlike Detailed Balance PBIL, this approach does require special scaling of the learning rate, but the scaling is more benign than in standard PBIL and is problem independent. It is determined from three considerations. First, mutation must be large enough to counteract the effects of drift towards random corners of the hypercube. Thus, the fixed point of the average distance to 1/2, ⟨D(t + 1)⟩ defined in equation (2), must be sufficiently close to zero. Second, mutation must be small enough that it does not interfere with movement towards the parameters near the optimum when the optimum is found. Thus, the fixed point of equation (19) must be sufficiently close to 0 or 1.
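The mutation-corrected rule (19) is a one-line change to PBIL; a sketch (the particular values of α and m below are illustrative):

```python
def pbil_mutation_update(gamma, x_star, alpha, m):
    """Learning rule (19): the standard PBIL move shrunk towards 1/2 by a
    mutation-like parameter m (m = 0 recovers plain PBIL)."""
    return [(g + alpha * (x - g) + m) / (1.0 + 2.0 * m)
            for g, x in zip(gamma, x_star)]

# m = 0 gives the usual PBIL rule.
g1 = pbil_mutation_update([0.5, 0.5], [1, 0], alpha=0.1, m=0.0)
assert abs(g1[0] - 0.55) < 1e-9 and abs(g1[1] - 0.45) < 1e-9

# With m > 0 the rule's fixed point is pulled away from the corner:
# repeatedly rewarding the all-ones string drives gamma towards
# g* = (alpha + m) / (alpha + 2m) < 1 rather than all the way to 1.
g = [0.5]
for _ in range(1000):
    g = pbil_mutation_update(g, [1], alpha=0.1, m=0.01)
assert 0.9 < g[0] < 1.0
```

This illustrates the second design consideration above: m must be small enough relative to α that the fixed point stays close to the corner, which is one of the inequalities in the scaling condition derived next.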
Finally, a sample of size N sampled from the fixed point distribution near the hypercube corner containing the optimum should contain the optimum with a reasonable probability (say, greater than 1 − e⁻¹). Putting these considerations together yields
log N / L ≫ m/α ≫ α/4.   (20)
5.1 Results
To satisfy the conditions in equation (20), the mutation rate was set to m ∝ α², and α was constrained to be smaller than log(N)/L. For the needle-in-a-haystack, the algorithm behaved like Detailed Balance PBIL. It never failed to find the optimum for the needle-in-a-haystack problems for the sizes given previously. For the functions of unitation, no improvement over standard PBIL is expected, since the scaling using mutation is worse, requiring α < 1/L rather than α < 1/√L. However, with tuning of the mutation rate, the range of α's with which the optimum was always found could be increased over standard PBIL.
6 Conclusions
The learning rate of PBIL has to be very small for the algorithm to work, and unpredictably so, as it depends upon the problem size in a problem-dependent way. This was shown in two very simple examples. Detailed balance fixed the problem dramatically in the two cases studied. Using detailed balance, the algorithm consistently finds the optimum over the entire range of learning rates. Mutation also fixed the problem when the parameters were chosen to satisfy a problem-independent set of inequalities. The phenomenon studied here could hold in any EDA, because for any type of model, the probability is high of generating a population which reinforces the move just made. On the other hand, more complex models have many more parameters, and also have more sources of variability, so the issue may be less important. It would be interesting to learn how important this sensitivity is in EDAs using complex graphical models. Of the proposed correctives, detailed balance will be more difficult to generalize to models in which the structure is learned.
It requires an understanding of the algorithm's dynamics on a flat space, which may be very difficult to find in those cases. The mutation-type operator will be easier to generalize, because it only requires a bias towards a random distribution. However, the appropriate setting of the parameters may be difficult to ascertain.
References
[1] S. Baluja. Population-based incremental learning: A method for integrating genetic search based function optimization and competitive learning. Technical Report CMU-CS-94-163, Computer Science Department, Carnegie Mellon University, 1994.
[2] A. Johnson and J. L. Shapiro. The importance of selection mechanisms in distribution estimation algorithms. In Proceedings of the 5th International Conference on Artificial Evolution AE01, 2001.
[3] P. Larrañaga and J. A. Lozano. Estimation of Distribution Algorithms, A New Tool for Evolutionary Computation. Kluwer Academic Publishers, 2001.
[4] Eckhard Limpert, Werner A. Stahel, and Markus Abbt. Log-normal distributions across the sciences: Keys and clues. BioScience, 51(5):341–352, 2001.
[5] H. Mühlenbein. The equation for response to selection and its use for prediction. Evolutionary Computation, 5(3):303–346, 1997.
[6] M. Pelikan, D. E. Goldberg, and F. Lobo. A survey of optimization by building and using probabilistic models. Technical report, University of Illinois at Urbana-Champaign, Illinois Genetic Algorithms Laboratory, 1999.
[7] Jonathan L. Shapiro and Adam Prügel-Bennett. Maximum entropy analysis of genetic algorithm operators. Lecture Notes in Computer Science, 993:14–24, 1995.
2002
30
2,233
Forward-Decoding Kernel-Based Phone Sequence Recognition
Shantanu Chakrabartty and Gert Cauwenberghs, Center for Language and Speech Processing, Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore MD 21218 {shantanu,gert}@jhu.edu
Abstract
Forward decoding kernel machines (FDKM) combine large-margin classifiers with hidden Markov models (HMM) for maximum a posteriori (MAP) adaptive sequence estimation. State transitions in the sequence are conditioned on observed data using a kernel-based probability model trained with a recursive scheme that deals effectively with noisy and partially labeled data. Training over very large data sets is accomplished using a sparse probabilistic support vector machine (SVM) model based on quadratic entropy, and an on-line stochastic steepest descent algorithm. For speaker-independent continuous phone recognition, FDKM trained over 177,080 samples of the TIMIT database achieves 80.6% recognition accuracy over the full test set, without use of a prior phonetic language model.
1 Introduction
Sequence estimation is at the core of many problems in pattern recognition, most notably speech and language processing. Recognizing dynamic patterns in sequential data requires a set of tools very different from classifiers trained to recognize static patterns in data assumed i.i.d. distributed over time. The speech recognition community has predominantly relied on hidden Markov models (HMMs) [1] to produce state-of-the-art results. HMMs are generative models that function by estimating probability densities and therefore require a large amount of data to estimate parameters reliably. If the aim is discrimination between classes, then it might be sufficient to model discrimination boundaries between classes which (in most affine cases) afford fewer parameters.
Recurrent neural networks have been used to extend the dynamic modeling power of HMMs with the discriminant nature of neural networks [2], but learning long term dependencies remains a challenging problem [3]. Typically, neural network training algorithms are prone to local optima, and while they work well in many situations, the quality and consistency of the converged solution cannot be warranted. Large margin classifiers, like support vector machines, have been the subject of intensive research in the neural network and artificial intelligence communities [4]. They are attractive because they generalize well even with relatively few data points in the training set, and bounds on the generalization error can be directly obtained from the training data. Under general conditions, the training procedure finds a unique solution (decision or regression surface) that provides an out-of-sample performance superior to many techniques. Recently, support vector machines (SVMs) [4] have been used for phoneme (or phone) recognition [5] and have shown encouraging results. However, use of a standard SVM
Figure 1: (a) Two-state Markovian maximum-likelihood (ML) model with static state transition probabilities P(1|1), P(1|0), etc., and observation vectors x emitted from the states. (b) Two-state Markovian MAP model, where transition probabilities between states, e.g. P(1|0, x), are modulated by the observation vector x.
classifier by itself implicitly assumes i.i.d. data, unlike the sequential nature of phones. To model inter-phonetic dependencies, maximum likelihood (ML) approaches assume a phonetic language model that is independent of the utterance data [6], as illustrated in Figure 1(a). In contrast, the maximum a posteriori (MAP) approach assumes transitions between states that are directly modulated by the observed data, as illustrated in Figure 1(b).
The MAP approach lends itself naturally to hybrid HMM/connectionist approaches with performance comparable to state-of-the-art HMM systems [7]. FDKM [8] can be seen as a hybrid HMM/SVM MAP approach to sequence estimation. It thereby augments the ability of large margin classifiers to infer sequential properties of the data. FDKMs have shown superior performance for channel equalization in digital communication, where the received symbol sequence is contaminated by intersymbol interference [8]. In the present paper, FDKM is applied to speaker-independent continuous phone recognition. To handle the vast amount of data in the TIMIT corpus, we present a sparse probabilistic model and an efficient implementation of the associated FDKM training procedure.
2 FDKM formulation
The problem of FDKM recognition is formulated in the framework of MAP (maximum a posteriori) estimation, combining Markovian dynamics with kernel machines. A Markovian model is assumed with symbols belonging to S classes, as illustrated in Figure 1(a) for S = 2. Transitions between the classes are modulated in probability by observation (data) vectors x over time.
2.1 Decoding Formulation
The MAP forward decoder receives the sequence X[n] = {x[n], x[n−1], ..., x[1]} and produces an estimate of the probability of the state variable q[n] over all classes i, αi[n] = P(q[n] = i | X[n], w), where w denotes the set of parameters for the learning machine. Unlike hidden Markov models, the states directly encode the symbols, and the observations x modulate transition probabilities between states [7]. Estimates of the posterior probability αi[n] are obtained from estimates of local transition probabilities using the forward-decoding procedure [7]
αi[n] = Σ_{j=0}^{S−1} Pij[n] αj[n − 1]   (1)
where Pij[n] = P(q[n] = i | q[n−1] = j, x[n], w) denotes the probability of making a transition from class j at time n−1 to class i at time n, given the current observation vector x[n].
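The forward recursion (1), together with an argmax read-out of the posterior, can be sketched as follows (Python with NumPy; the transition matrices here are made-up constants standing in for the kernel machine's outputs):

```python
import numpy as np

def forward_decode(P_seq):
    """MAP forward decoding: P_seq is a sequence of S x S matrices with
    P[n][i][j] = Pr(q[n] = i | q[n-1] = j, x[n]); returns the on-line
    symbol estimates.  The kernel machine producing these matrices is
    outside this sketch."""
    S = P_seq[0].shape[0]
    alpha = np.full(S, 1.0 / S)          # uniform initial state posterior
    estimates = []
    for Pn in P_seq:
        alpha = Pn @ alpha               # alpha_i[n] = sum_j P_ij[n] alpha_j[n-1]
        alpha /= alpha.sum()             # renormalize for numerical stability
        estimates.append(int(np.argmax(alpha)))
    return estimates

# Two-state example with strongly diagonal, data-independent transitions:
# the decoder locks on to the state favored by the self-transitions.
sticky = np.array([[0.9, 0.2],
                   [0.1, 0.8]])          # columns sum to 1: Pr(i | j)
est = forward_decode([sticky] * 5)
assert est == [0, 0, 0, 0, 0]
```

In FDKM the matrices would differ at every time step, since each Pij[n] is produced by the kernel-based model conditioned on the observation x[n].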
The forward decoding (1) embeds sequential dependence of the data, wherein the probability estimate at time instant n depends on all the previous data. An on-line estimate of the symbol q[n] is thus obtained:

q^{est}[n] = \arg\max_i \alpha_i[n]    (2)

The BCJR forward-backward algorithm [9] produces in principle a better estimate that accounts for future context, but requires a backward pass through the data, which is impractical in many applications requiring real-time decoding. Accurate estimation of the transition probabilities P_{ij}[n] in (1) is crucial for the decoding (2) to provide good performance. In [8] we used kernel logistic regression [10], with regularized maximum cross-entropy, to model conditional probabilities. A different probabilistic model that offers a sparser representation is introduced below.

2.2 Training Formulation

For training the MAP forward decoder, we assume access to a training sequence with labels (class memberships). For instance, the TIMIT speech database comes labeled with phonemes. Continuous (soft) labels could be assigned rather than binary indicator labels, to signify uncertainty in the training data over the classes. Like probabilities, label assignments are normalized: \sum_{i=0}^{S-1} y_i[n] = 1, with y_i[n] \geq 0. The objective of training is to maximize the cross-entropy of the estimated probabilities \alpha_i[n] given by (1) with respect to the labels y_i[n], over all classes i and training data n:

H = \sum_{n=0}^{N-1} \sum_{i=0}^{S-1} y_i[n] \log \alpha_i[n]    (3)

To provide capacity control we introduce a regularizer \Omega(w) in the objective function [11]. The parameter space w can be partitioned into disjoint parameter vectors w_{ij} and b_{ij} for each pair of classes i, j = 0, ..., S-1, such that P_{ij}[n] depends only on w_{ij} and b_{ij}. (The parameter b_{ij} corresponds to the bias term in the standard SVM formulation.)
The regularizer can then be chosen as the L2 norm of each disjoint parameter vector, and the objective function becomes

H = C \sum_{n=0}^{N-1} \sum_{i=0}^{S-1} y_i[n] \log \alpha_i[n] - \frac{1}{2} \sum_{j=0}^{S-1} \sum_{i=0}^{S-1} |w_{ij}|^2    (4)

where the regularization parameter C controls complexity versus generalization as a bias-variance trade-off [11]. The objective function (4) is similar to the primal formulation of a large margin classifier [4]. Unlike the convex (quadratic) cost function of SVMs, the formulation (4) does not have a unique solution, and direct optimization could lead to poor local optima. However, a lower bound of the objective function can be formulated, so that maximizing this lower bound reduces to a set of convex optimization sub-problems with an elegant dual formulation in terms of support vectors and kernels. Applying the convexity of the -log(.) function to the convex sum in the forward estimation (1), we obtain directly

H \geq \sum_{j=0}^{S-1} H_j    (5)

where

H_j = \sum_{n=0}^{N-1} C_j[n] \sum_{i=0}^{S-1} y_i[n] \log P_{ij}[n] - \frac{1}{2} \sum_{i=0}^{S-1} |w_{ij}|^2    (6)

with effective regularization sequence

C_j[n] = C \, \alpha_j[n-1].    (7)

Disregarding the intricate dependence of (7) on the results of (6), which we defer to the following section, the formulation (6) is equivalent to regression of conditional probabilities P_{ij}[n] from labeled data x[n] and y_i[n], for a given outgoing state j.

2.3 Kernel Logistic Probability Regression

Estimates of conditional probabilities Pr(i|x) from training data x[n] and labels y_i[n] can be obtained using a regularized form of kernel logistic regression [10]. For each outgoing state j, one such probabilistic model can be constructed for the incoming state i conditional on x[n]:

P_{ij}[n] = \exp(f_{ij}(x[n])) \Big/ \sum_{s=0}^{S-1} \exp(f_{sj}(x[n]))    (8)

As with SVMs, dot products in the expression for f_{ij}(x) in (8) convert into kernel expansions over the training data x[m] by transforming the data to feature space [12]:
f_{ij}(x) = w_{ij} \cdot x + b_{ij} = \sum_m \lambda_{ij}^m \, x[m] \cdot x + b_{ij} \;\xrightarrow{\Phi(\cdot)}\; \sum_m \lambda_{ij}^m K(x[m], x) + b_{ij}    (9)

where K(\cdot,\cdot) denotes any symmetric positive-definite kernel¹ that satisfies the Mercer condition, such as a Gaussian radial basis function or a polynomial [11]. Optimization of the lower bound in (5) requires solving M disjoint but similar sub-optimization problems (6). The subscript j is omitted in the remainder of this section for clarity. The (primal) objective function of kernel logistic regression expresses the regularized cross-entropy (6) of the logistic model (8) in the form [13, 14]

H = -\sum_i \frac{1}{2} |w_i|^2 + C \sum_m^N \Big[ \sum_i^M y_i[m] f_i(x[m]) - \log\big(e^{f_1(x[m])} + \cdots + e^{f_M(x[m])}\big) \Big].    (10)

The parameters \lambda_i^m in (9) are determined by minimizing a dual formulation of the objective function (10), obtained through the Legendre transformation, which for logistic regression takes the form of an entropy-based potential function in the parameters [10]

H_e = \sum_i^M \Big[ \frac{1}{2} \sum_l^N \sum_m^N \lambda_i^l Q_{lm} \lambda_i^m + C \sum_m^N (y_i[m] - \lambda_i^m/C) \log(y_i[m] - \lambda_i^m/C) \Big]    (11)

subject to the constraints

\sum_m \lambda_i^m = 0    (12)

\sum_i \lambda_i^m = 0    (13)

\lambda_i^m \leq C \, y_i[m]    (14)

There are two disadvantages of using the logistic regression dual directly:
1. The solution is non-sparse, and all the training points contribute to the final solution. For tasks involving large data sets like phone recognition this turns out to be prohibitive due to memory and run-time constraints.
2. Even though the dual optimization problem is convex, it is not quadratic, and it precludes the use of standard quadratic programming (QP) techniques. One has to resort to Newton-Raphson or other nonlinear optimization techniques, which complicate convergence and require tuning of additional system parameters.

¹K(x, y) = \Phi(x) \cdot \Phi(y). The map \Phi(\cdot) need not be computed explicitly, as it only appears in inner-product form.

2.4 GiniSVM formulation

The GiniSVM probabilistic model [15] provides a sparse alternative to logistic regression. A quadratic ('Gini' [16]) index replaces the entropy in the dual formulation of logistic regression.
The 'Gini' index provides a lower bound of the dual logistic functional, and its quadratic form produces sparse solutions as with support vector machines. The tightness of the bound provides an elegant trade-off between approximation and sparsity. Jensen's inequality (\log p \leq p - 1) formulates the lower bound for the entropy term in (11) in the form of the multivariate Gini impurity index [16]:

1 - \sum_i^M p_i^2 \;\leq\; -\sum_i^M p_i \log p_i    (15)

where 0 \leq p_i \leq 1, \forall i, and \sum_i p_i = 1. Both forms of entropy, -\sum_i^M p_i \log p_i and 1 - \sum_i^M p_i^2, reach their maxima at the same values p_i \equiv 1/M, corresponding to a uniform distribution. As in the binary case, the bound can be tightened by scaling the Gini index with a multiplicative factor \gamma \geq 1, of which the particular value depends on M.²

The GiniSVM dual cost function H_g is then given by

H_g = \sum_i^M \Big[ \frac{1}{2} \sum_l^N \sum_m^N \lambda_i^l Q_{lm} \lambda_i^m + \gamma C \Big( \sum_m^N (y_i[m] - \lambda_i^m/C)^2 - 1 \Big) \Big]    (16)

The convex quadratic cost function (16) with the constraints (12)-(14) can now be minimized directly using standard quadratic programming techniques. The primary advantage of the technique is that it yields sparse solutions and yet approximates the logistic regression solution very well [15].

2.5 Online GiniSVM Training

For very large data sets such as TIMIT, using a QP approach to train GiniSVM may still be prohibitive, even though sparsity drastically reduces the number of support vectors in the trained model. An on-line estimation procedure is presented that computes each coefficient \lambda_i^n in turn from a single presentation of the data {x[n], y_i[n]}. A line search in the parameter \lambda_i^n and the bias b_i performs stochastic steepest descent of the dual objective function (16), of the form

\lambda_i^n \leftarrow C \, y_i[n] - \frac{1}{Q_{nn} + 2\gamma} \Big[ C \, y_i[n] (Q_{nn} + 2\gamma) + f_i[n] + 2 \sum_l \lambda_i^l - z_n \Big]_+    (17)

b_i \leftarrow b_i + \sum_{l=1}^{n} \lambda_i^l    (18)

where [x]_+ denotes the positive part of x. The normalization factor z_n is determined by the equation

\sum_i^M \Big[ C \, y_i[n](Q_{nn} + 2\gamma) + f_i[n] + 2 \sum_l \lambda_i^l - z_n \Big]_+ = C (Q_{nn} + 2\gamma) + 2\gamma    (19)

solved in at most M algorithmic iterations.
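The bound (15) follows from log p ≤ p − 1 and is easy to verify numerically. A small verification sketch (illustration only, not part of the training procedure):

```python
import math
import random

def gini(p):
    # multivariate Gini impurity index: 1 - sum_i p_i^2
    return 1.0 - sum(pi * pi for pi in p)

def entropy(p):
    # Shannon entropy with natural logarithm: -sum_i p_i log p_i
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

random.seed(0)
for _ in range(1000):
    raw = [random.random() + 1e-12 for _ in range(5)]
    total = sum(raw)
    p = [r / total for r in raw]
    # the Gini index lower-bounds the entropy for every distribution
    assert gini(p) <= entropy(p) + 1e-12
```

Both quantities peak at the uniform distribution, which is why replacing the entropy term of the dual by a scaled Gini index yields a quadratic program while preserving the location of the maximum.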
3 Recursive FDKM Training

The weights (7) in (6) are recursively estimated using an iterative procedure reminiscent of (but different from) expectation-maximization. The procedure involves computing new estimates of the sequence \alpha_j[n-1] to train (6), based on estimates of P_{ij} using the previous values of the parameters \lambda_{ij}^m. The training proceeds in a series of epochs, each refining the training estimate of the sequence \alpha_j[n-1] by increasing the size of the time window (decoding depth k) over which it is obtained by the forward algorithm (1).

Figure 2: Iterations involved in training FDKM on a trellis based on the Markov model of Figure 1. During the initial epoch, the parameters of the probabilistic model of the state at time n, conditioned on the observed label for the outgoing state at time n-1, are trained from observed labels at time n. During subsequent epochs, probability estimates of the outgoing state at time n-1 over increasing forward decoding depth k = 1, ..., K determine the weights assigned to data n for training each of the probabilistic models conditioned on the outgoing state.

The training steps are illustrated in Figure 2 and summarized as follows:

1. To bootstrap the iteration for the first training epoch (k = 1), obtain initial values for \alpha_j[n-1] from the labels of the outgoing state, \alpha_j[n-1] = y_j[n-1]. This corresponds to taking the labels y_j[n-1] as true state probabilities, which corresponds to the standard procedure of using fragmented data to estimate transition probabilities.

2. Train the logistic kernel machines, one for each outgoing class j, to estimate the parameters in P_{ij}[n], i, j = 0, ..., S-1, from the training data x[n] and labels y_i[n], weighted by the sequence \alpha_j[n-1].

3. Re-estimate \alpha_j[n-1] using the forward algorithm (1) over increasing decoding depth k, by initializing \alpha_j[n-k] to y_j[n-k].

4.
Re-train, increment the decoding depth k, and re-estimate \alpha_j[n-1], until the final decoding depth is reached (k = K).

The performance of FDKM training depends on the final decoding depth K, although the observed variations in generalization performance for large values of K are relatively small. A suitable value can be chosen a priori to match the extent of temporal dependency in the data. For phoneme classification in speech, the decoding depth can be chosen according to the length of a typical syllable. An efficient procedure to implement the above algorithm is discussed in [15].

4 Experiments and Results

The performance of FDKM was evaluated on the full TIMIT dataset [17], consisting of labeled continuous spoken utterances. The 60 phone classes present in TIMIT were first collapsed onto 39 classes according to standard folding techniques [6]. The training set consisted of 6,300 sentences spoken by 630 speakers, resulting in 177,080 phone instances. The test set consisted of 192 sentences spoken by 24 speakers. The speech signal was first processed by a pre-emphasis filter with transfer function 1 - 0.97 z^{-1}. Subsequently, a 25 ms Hamming window was applied over 10 ms shifts to extract a sequence of phonetic segments. Cepstral coefficients were extracted from the sequence and combined with their first- and second-order time differences into a 39-dimensional vector. Cepstral mean subtraction and speaker normalization were subsequently applied.

²Unlike the binary case (M = 2), the factor \gamma for general M cannot be chosen to match the two maxima at p_i = 1/M.

Table 1: Performance evaluation of FDKM (K = 10) on TIMIT, reporting accuracy together with insertion, substitution and deletion errors.

Figure 3: Recognition rate as a function of decoding depth k = 1, ..., K, for the training and test sets.

Each phone utterance was then subdivided into three segments with relative proportions 4:3:4 [18].
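The 4:3:4 subdivision and per-segment averaging can be sketched as follows (a hypothetical helper, assuming 39-dimensional cepstral frame vectors):

```python
def segment_average(frames, proportions=(4, 3, 4)):
    """Split a phone's frame sequence into contiguous segments with the given
    relative durations, average each segment, and concatenate the averages."""
    n = len(frames)
    total = sum(proportions)
    bounds, acc = [0], 0
    for p in proportions:
        acc += p
        bounds.append(round(n * acc / total))
    out = []
    for a, b in zip(bounds, bounds[1:]):
        seg = frames[a:b]
        if not seg:  # very short phones: reuse the nearest frame
            seg = [frames[min(a, n - 1)]]
        dim = len(seg[0])
        out.extend(sum(f[d] for f in seg) / len(seg) for d in range(dim))
    return out
```

For 39-dimensional frames this yields the 117-dimensional phone vector used in the experiments.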
The features in the three segments were individually averaged and concatenated to obtain a 117-dimensional feature vector. Evaluation on the test set was performed using thresholding of the state probabilities in the MAP forward decoding (2) [19], with threshold 0.25. The decoded phone sequence was then compared with the transcribed sequence using the Levenshtein distance to evaluate the different sources of errors. Multiple runs of identical phones in the decoded and transcribed sequences were collapsed to single phone instances to reflect true insertion errors.

Table 1 summarizes the results of the experiments with FDKM on TIMIT for different values of the regularization constant C. The recognition performance is comparable to the state of the art using HMMs and other approaches, in the upper 70% and lower 80% range [2, 5, 20]. Figure 3 illustrates the improvement in recognition rate with increasing decoding depth k. The optimum value k \approx 10 corresponds to inter-phonetic dependencies on a time scale of 100 ms.

5 Conclusion

Experiments with FDKM on the TIMIT corpus have demonstrated levels of speaker-independent continuous phone recognition accuracy comparable to or better than other approaches that use HMMs and their various extensions. FDKM improves decoding and generalization performance for data with embedded sequential structure, providing an elegant trade-off between learning temporal versus spatial dependencies. The recursive estimation procedure reduces or masks the effect of noisy or missing labels y_j[n]. Further improvements can be expected from tuning of the hyper-parameters and improved representation of the acoustic features.

Acknowledgement

This work was supported by a grant from the Catalyst Foundation.

References

[1] L. Rabiner and B.-H. Juang, Fundamentals of Speech Recognition, Englewood Cliffs, NJ: Prentice-Hall, 1993.

[2] Robinson, A.J., "An application of recurrent nets to phone probability estimation," IEEE Transactions on Neural Networks, vol. 5, no. 2, March 1994.
[3] Bengio, Y., "Learning long-term dependencies with gradient descent is difficult," IEEE Transactions on Neural Networks, vol. 5, pp. 157-166, 1994.

[4] Vapnik, V., The Nature of Statistical Learning Theory, New York: Springer-Verlag, 1995.

[5] Clarkson, P. and Moreno, P.J., "On the use of support vector machines for phonetic classification," IEEE Conf. Proc., 1999.

[6] Lee, K.F. and Hon, H.W., "Speaker-independent phone recognition using hidden Markov models," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 37, pp. 1641-1648, 1989.

[7] Bourlard, H. and Morgan, N., Connectionist Speech Recognition: A Hybrid Approach, Kluwer Academic, 1994.

[8] Chakrabartty, S. and Cauwenberghs, G., "Sequence estimation and channel equalization using forward decoding kernel machines," IEEE Int. Conf. Acoustics, Speech and Signal Processing (ICASSP'2002), Orlando FL, 2002.

[9] Bahl, L.R., Cocke, J., Jelinek, F. and Raviv, J., "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Transactions on Information Theory, vol. IT-20, pp. 284-287, 1974.

[10] Jaakkola, T. and Haussler, D., "Probabilistic kernel regression models," Proceedings of the Seventh International Workshop on Artificial Intelligence and Statistics, 1999.

[11] Girosi, F., Jones, M. and Poggio, T., "Regularization theory and neural networks architectures," Neural Computation, vol. 7, pp. 219-269, 1995.

[12] Schölkopf, B., Burges, C. and Smola, A., Eds., Advances in Kernel Methods: Support Vector Learning, MIT Press, Cambridge, 1998.

[13] Wahba, G., Support Vector Machines, Reproducing Kernel Hilbert Spaces and Randomized GACV, Technical Report 984, Department of Statistics, University of Wisconsin, Madison WI.

[14] Zhu, J. and Hastie, T., "Kernel logistic regression and import vector machine," Advances in Neural Information Processing Systems (NIPS'2001), Cambridge, MA: MIT Press, 2002.

[15] Chakrabartty, S. and Cauwenberghs, G., "Forward decoding kernel machines: A hybrid HMM/SVM approach to sequence recognition," IEEE Int. Conf.
of Pattern Recognition, SVM workshop (ICPR'2002), Niagara Falls, 2002.

[16] Breiman, L., Friedman, J.H. et al., Classification and Regression Trees, Wadsworth and Brooks, Pacific Grove, CA, 1984.

[17] Fisher, W., Doddington, G. et al., "The DARPA speech recognition research database: Specifications and status," Proceedings of the DARPA Speech Recognition Workshop, pp. 93-99, 1986.

[18] Fosler-Lussier, E., Greenberg, S. and Morgan, N., "Incorporating contextual phonetics into automatic speech recognition," Proc. XIVth Int. Congr. Phon. Sci., 1999.

[19] Wald, A., Sequential Analysis, Wiley, New York, 1947.

[20] Chengalvarayan, R. and Deng, L., "Speech trajectory discrimination using the minimum classification error training," IEEE Transactions on Speech and Audio Processing, vol. 6, pp. 505-515, Nov. 1998.
Optoelectronic Implementation of a FitzHugh-Nagumo Neural Model

Alexandre R.S. Romariz*, Kelvin Wagner
Optoelectronic Computing Systems Center
University of Colorado, Boulder, CO, USA 80309-0425
romariz@colorado.edu

Abstract

An optoelectronic implementation of a spiking neuron model based on the FitzHugh-Nagumo equations is presented. A tunable semiconductor laser source and a spectral filter provide a nonlinear mapping from driver voltage to detected signal. Linear electronic feedback completes the implementation, which allows either electronic or optical input signals. Experimental results for a single system and numeric results of model interaction confirm that important features of spiking neural models can be implemented through this approach.

1 Introduction

Biologically-inspired computation paradigms take different levels of abstraction when modeling neural dynamics. The production of action potentials, or spikes, has been abstracted away in many rate-based neurodynamic models, but recently this feature has gained renewed interest [1, 2]. A computational paradigm that takes into account the timing of spikes (instead of spike rates only) might be more efficient for signal representation and processing, especially at short time windows [3, 4, 5].

Optics technology provides high bandwidth and massive parallelism for information processing. However, the implementation of digital primitives has not as yet proved competitive against the scalability and low-power operation of digital electronic gates. It is then natural to explore the features of optics for different computational paradigms. Artificial neural networks promise an excellent match to the capabilities of optics, as they emphasize simple analog operations, parallelism and adaptive interconnection [6, 7, 8, 9]. Optical implementations of artificial neural networks have to deal with the problem of representing the nonlinear activation functions that define the input-output mappings for each neuron.
Although nonlinear optics has been suggested for implementing neurons, hybrid optoelectronic systems, where the task of producing nonlinearity is given to the electronic circuits, may be more practical [10, 11]. In the case of pulsing neurons, the task seems more difficult still, for instead of a nonlinear static map we are required to implement a nonlinear dynamical system. Several possibilities for the implementation of pulsed optical neurons can be considered, including smart-pixel pulsed electronic circuits with optical inputs [12], pulsing laser cavity feedback dynamics [13] and competitive-cooperative phosphor feedback [14].

*On leave from the Electrical Engineering Department, University of Brasília, Brazil.

In this paper we demonstrate and evaluate an optoelectronic implementation of an artificial spiking neuron, based on the FitzHugh-Nagumo equations. The proposed implementation uses the wavelength tunability of a laser source and a birefringent crystal to produce a nonlinear mapping from driving voltage to detected optical output [15]. Linear electronic feedback to the laser drive current completes the physical implementation of this model neuron. Inputs can be presented optically or electronically, and output signals are also readily available as optical or electronic pulses.

This work is organized as follows. Section 2 reviews the FitzHugh-Nagumo equations and describes the particular optoelectronic spiking neuron implementation we propose here. In Section 3 we analyze and illustrate dynamical properties of the model. Experimental results of the optoelectronic system implementing one model are presented in Section 4. Numeric results that illustrate features of the interaction between models are shown in Section 5.
2 Modified FN Neural Model and Optoelectronic Implementation

The FitzHugh-Nagumo (FN) neuron model [16, 17] is appealing for physical implementation, as it is fairly simple and completely described by a pair of coupled differential equations:

\tau_v \frac{dv}{dt} = f(v) - w + I, \qquad \tau_w \frac{dw}{dt} = v - w    (1)

where v is an excitable state variable that exhibits bi-stability as a result of the nonlinear term f(v), and w is a linear recovery variable, bringing the neuron back to a resting state. In the original model proposal, f(v) is a third-degree polynomial [16, 17]. This model has been previously implemented in CMOS integrated electronics [18].

In optical implementations of neural networks, the required nonlinear functions are usually performed through electronic devices, with adaptive linear interconnection done in the optical domain. We here explore the possibility of optical implementation of the required nonlinear function f(v), by using the nonlinear response of linear optical systems to variations of the wavelength. Consider a birefringent material placed between crossed polarizers. Even though propagation of the field through the material is a linear phenomenon (a linear phase difference among orthogonal polarization components is generated), the output power as a function of incident wavelength is sinusoidal, according to

V_{det} = \frac{R \, \rho \, P(i)}{2} \left[ 1 - \cos\!\left( \frac{2\pi \Delta}{\lambda(i)} \right) \right]    (2)

where R is the transimpedance gain of the detector amplifier, \rho is the responsivity (in A/W), P(i) is the optical power incident on the detector, which is a function of the laser drive current i, \Delta is the optical path difference (OPD) resulting from propagation through the birefringent material, and \lambda(i) is the laser wavelength. In semiconductor lasers, and Vertical Cavity Surface Emitting Lasers (VCSELs) in particular, an input current i produces a small modulation in the radiation wavelength \lambda(i). Linearizing the \lambda(i) variation in Equation 2, we find a nonlinear mapping from driving voltage to detected signal:
V_{det}(v) = g(v) \, \frac{1}{2} \left[ 1 - \cos\!\left( 2\pi \, \frac{v - V_0}{V_\lambda} \right) \right]    (3)

where v is the driving voltage (linearly converted to an input current i through the driver transconductance) and the function g(v) includes all conversion factors in the detection process, as well as nonlinear phenomena such as laser threshold and saturation.

Figure 1: (a) Experimental setup for the wavelength-based nonlinear oscillator (driver, VCSEL, collimation optics, polarizing beam splitter, birefringent crystal, mirror and photodetector), with a simplified view of the electronic feedback. (b) Experimental evidence of the nonlinear mapping from driver voltage to detected signal (open loop), as a result of wavelength modulation as well as laser threshold and saturation.

A simple nonlinear feedback loop can now be established by feeding the detected signal back to the driver. This basic arrangement has been used to investigate chaotic behavior in delayed-feedback tunable lasers [15]. It is used here as the nonlinearity for an optical self-pulsing mechanism, in order to implement neural-like pulses based on the following dynamical system:

\tau_v \frac{dv}{dt} = f(v) - w + I, \qquad \tau_w \frac{dw}{dt} = v - w    (4)

with f(v) the detected optical nonlinearity of Equation 3. Again v is a fast state variable, and w a relatively slow recovery variable, so that \tau_w \gg \tau_v.

The experimental setup is shown in Figure 1a. Light from the tunable source is collimated and propagates through a piece of birefringent crystal. The crystal fast and slow axes are at 45 degrees to the polarizer and analyzer passing axes. The effective propagation length through the crystal (and the corresponding wavelength selectivity) is doubled with the use of a mirror. A polarizing beam splitter acts as both polarizer and analyzer. A simplified view of the electronic feedback is also shown. Leaky integrators and linear analog summations implement the linear part of Equation 4, while the nonlinear response (in intensity) of the optical filter implements f(v).
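The voltage-to-intensity mapping just described can be sketched numerically; the offset V0, the voltage period Vlam and the threshold-like envelope g below are illustrative assumptions, not the experimental values:

```python
import math

def detector_response(v, V0=0.05, Vlam=0.2):
    """Detected signal vs. driver voltage: a sinusoidal interference term
    scaled by an envelope g(v) standing in for laser threshold/saturation."""
    g = max(v - V0, 0.0)  # illustrative rectified-linear envelope
    return g * 0.5 * (1.0 - math.cos(2.0 * math.pi * (v - V0) / Vlam))
```

The non-monotonic shape of this map is what later produces the double peak observed in the optical output along a single spiking cycle.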
A VCSEL was used as the tunable laser source. These vertical-cavity semiconductor lasers have, when compared to edge-emitting diode lasers, larger separation between longitudinal modes, more circularly-symmetric beams and lower fabrication costs [19]. As the input current is increased, the heating of the cavity red-shifts the resonant wavelength [20], and this is the main mechanism we are exploring for wavelength modulation.

An experimental verification of the expected sinusoidal variation of detected power with modulation voltage is given in Figure 1b. A slow (800 Hz) modulation ramp was applied to the driver, and the detected power variation was acquired. From this information, the static transfer function shown in the right part of the figure was calculated. Unlike the experiment with a DBR laser diode reported by Goedgebuer et al. [15], it is apparent that current modulation is affecting not only wavelength (and hence the effective optical path difference among polarization components) but overall output power as well. Modulation depth is limited (non-zero troughs in the sinusoidal variation), which we attribute to the multiple transverse modes that the device supports.

Figure 2: Continuous line: trajectory of the system under strong input, obtained by numeric integration (4th-order Runge-Kutta) of Equation 4. Arrows represent the strength of the derivatives at a particular point in state space. Dashed line: nullcline dv/dt = 0. Dash-dotted line: nullcline dw/dt = 0. Stability analysis shows that the equilibrium point where the nullclines meet is unstable, so the limit cycle is the sole attractor.

However, as we are going to be operating near
The relatively smooth curve obtained indicates that no mode hops occurred for this driving current range, which was indeed confirmed with Optical Spectrum Analyzer measurements. 3 Simulations FitzHugh-Nagumo models are known to have so-called class II neural excitability (see [21] for a review). This class is characterized by an Andronov-Hopf bifurcation for increasing excitation, and exhibits some dynamical phenomena that are not present in integrate-andfire dynamics. For equal intensity input pulses, integrators will respond maximally to the pulse train with lowest inter-spike interval. Class II neurons have resonant response to a range of input frequencies. There are non-trivial forms of excitation in resonator models that are not matched by integrators: the former can produce a spike at the end of an inhibitory pulse, and conversely, can have a limit cycle condition interrupted (with the system recovering to rest) by an excitatory pulse. We have verified that these characteristics are maintained in the modified optical model, despite the use of a sinusoidal nonlinearity instead of the original  + degree polynomial function. Stability analysis based on the Jacobian of the dynamical system (Equation 4) shows an Andronov-Hopf bifurcation, as in the original model. Limit cycle interruption through exciting pulses is shown in Section 5. Figure 2 shows a typical limit-cycle trajectory, for parameter values that match conditions of the experiment reported in Section 4. Parameters were chosen so that a typical excursion in modulation voltage goes from the dead zone (below the lasing threshold) to around the first peak in the nonlinear detector transfer function. This is an interesting choice because the optical output is only present during spiking, and can be used directly as an input to other optoelectronic neurons. 
Figure 3: Dynamical system response to strong constant input, showing the driver voltage, the recovery variable and the detected optical signal over time. (a) Simulation results, with parameters as in Figure 2. (b) Experimental results.

Figure 4: Response to a train of input pulses. (a) Simulated response, with parameters as in Figure 2. (b) Experimental results, with parameters as in Figure 3.

4 Experimental Results

Figure 3 presents a comparison between the simulated waveforms for the various dynamic variables involved (as the system performs the trajectory depicted in Figure 2) and the experimental results obtained with the system described in Figure 1, revealing a good agreement between simulated and experimental waveforms. The double peak in the optical variable can be understood by following the trajectory indicated in Figure 2, bearing in mind the non-monotonic mapping from driver voltage to detected signal. The decrease in driver voltage observed as the recovery variable w increases produces initially an increase in detected power, and thus the second, broader peak at the end of the cycle.

The production of sustained oscillations for constant input is one of the desired characteristics of the model, but in a network, neurons will mostly communicate through their pulsed output. The response of the system to pulsed inputs can be seen in Figure 4. The output optical signal response is all-or-none, but sub-threshold integration of weak inputs is being performed, as the waveform for the driver voltage shows for the first pulse. As w slowly returns to 0, a new excitation just after a pulse is less likely, which can be seen in the response to the third pulse.
The experimentally observed waveforms agree with the simulations, though details of the pulsing in the optical output are different.

Figure 5: Numeric illustration of the effect of input timing on the advance of the next spike, in the modified FitzHugh-Nagumo system. (a) Schematic view of the simulation; see text for details. (b) Phase advance as a function of input phase, with a bias of 0.103 V and an input pulse height of 10 mV. Dynamic system parameters as in Figure 2.

5 Coupling

One of the main motivations for using optical technology in neural network implementation is the possibility of massive interconnection, and so the definition of coupling techniques, and the study of adaptation algorithms compatible with the dynamical properties of the experimentally-demonstrated oscillators, are the current focus of this research. The most elegant optical implementation of adaptive interconnection is through dynamic volume holography [6, 11], but that requires a set of coherent optical signals, not what we have with an array of pulse emitters. In contrast, the matrix-vector multiplier architecture allows parallel interconnection of incoherent optical signals, and has been used to demonstrate implementations of the Hopfield model [7] and Boltzmann machines [9].

An interesting aspect of the coupled dynamics in oscillators exhibiting class II excitability is that the timing of an input pulse can result in advance or retardation of the next spike [22]. This is potentially relevant for hardware implementation, as the excitatory (i.e., inducing an early spike) or inhibitory character of the connection might be controlled without changing the signs of the coupling strengths. In Figure 5 we show a simulation illustrating the effect of input pulse timing in advancing the output spike. A constant input to a model neuron (Equation 4) was maintained, producing periodic spiking.
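The periodic-spiking condition just described can be reproduced numerically. The sketch below uses a forward-Euler version of the model with an optional brief perturbing pulse; the bias, pulse amplitude, threshold and time constants are illustrative assumptions, not the values of Figure 5:

```python
import math

def f(v, V0=0.05, Vlam=0.2):
    # illustrative sinusoidal nonlinearity of the optical filter
    return 0.5 * (1.0 - math.cos(2.0 * math.pi * (v - V0) / Vlam))

def spike_times(I_bias, pulse_t=None, pulse_len=0.005, pulse_amp=0.02,
                T=0.5, dt=1e-4, tau_v=0.01, tau_w=0.1, thresh=0.3):
    """Forward-Euler run of the model with an optional brief extra input
    pulse; returns the times of upward threshold crossings of v."""
    v = w = 0.0
    spikes = []
    prev = v
    for k in range(int(T / dt)):
        t = k * dt
        pulsed = pulse_t is not None and pulse_t <= t < pulse_t + pulse_len
        I = I_bias + (pulse_amp if pulsed else 0.0)
        v, w = (v + dt * (f(v) - w + I) / tau_v,
                w + dt * (v - w) / tau_w)
        if prev < thresh <= v:
            spikes.append(t)
        prev = v
    return spikes

# timing of the spikes with and without a perturbing pulse
unperturbed = spike_times(0.05)
perturbed = spike_times(0.05, pulse_t=0.2)
```

Sweeping pulse_t across one inter-spike interval and comparing the two spike lists is the numerical experiment whose outcome is summarized in Figure 5b.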
A second, positive, pulsed input was activated in between spikes, and the effect of this coupling on the advance or retardation of the next spike was verified as the timing of the input was varied. A region of output spike retardation with an excitatory pulsed input can be seen. Even more interesting, for phases around \pi rad relative to the latest spike, the excitatory pulse can terminate periodic spiking altogether.

This phenomenon is seen in detail in Figure 6, where both the time waveforms and the state-space trajectories are shown. For this particular condition, the equilibrium point of the system is stable. When correctly timed, the short excitatory pulse forces the system out of its limit cycle, into the basin of attraction of the stable equilibrium, hence stopping the periodic spiking. As the individual models used in these simulations were shown to match experimental implementations in Section 4, we expect to observe the same kind of effect in the coupling of the optoelectronic oscillators.

Figure 6: (a) Simulated response illustrating return to stability with a correctly timed excitatory pulse; other parameters as in Figure 2. (b) The same results in state space. Continuous line: unperturbed trajectory. Dotted line: trajectory during the excitatory pulse.

6 Ongoing Work and Conclusions

Implementation of a modified FN neuron model, with a nonlinear transfer function realized with a wavelength-tuned VCSEL source, a linear optical spectral filter and linear electronic feedback, was demonstrated.
The system's dynamical behavior agrees with simulated responses, and exhibits some of the basic features of neuron dynamics that are currently being investigated in the area of spiking neural networks. Further experiments are being done to demonstrate coupling effects like the ones described in Section 5. In particular, the injection of external optical signals directly onto the detector to implement optical coupling has been demonstrated. Feedback circuit simplification is another important aspect, since we are interested in implementing large arrays of spiking neurons. With enough detection gain, Equation 4 should be implementable with simple RLC circuits, as in the original work by Nagumo [17]. Results reported here were obtained at low frequency (1-100 kHz), limited by amplifier and detector bandwidths. With faster electronics and detectors, the limiting factor in this arrangement would be the time constant for thermal expansion of the VCSEL cavity. Pulsing operation at 1.2 MHz has been obtained in our latest experiments. Even faster operation is possible when using the internal dynamics of wavelength modulation itself, instead of external electronic feedback. In addition to the thermally induced modulation of wavelength, carrier injection modifies the index of refraction of the active region directly, which results in an opposite wavelength shift. By using this carrier injection effect to implement the recovery variable, the feedback electronics is simplified and a much faster time constant controls the model dynamics. Optical coupling of VCSELs has the potential to generate over 40 GHz pulsations [23]. Our goal is to investigate those optical oscillators as a technology for implementing fast networks of spiking artificial neurons.

Acknowledgments

This research is supported in part by a Doctorate Scholarship to the first author from the Brazilian Council for Scientific and Technological Development, CNPq.

References

[1] F. Rieke, D. Warland, R.R.
de Ruyter van Steveninck, and W. Bialek. Spikes: Exploring the Neural Code. MIT Press, Cambridge, USA, 1997.
[2] T.J. Sejnowski. Neural pulse coding. In W. Maass and C.M. Bishop, editors, Pulsed Neural Networks, Cambridge, USA, 1999. The MIT Press.
[3] W. Maass. Lower bounds for the computational power of spiking neurons. Neural Computation, 8:1-40, 1996.
[4] J.J. Hopfield. Pattern recognition computation using action potential timing for stimulus representation. Nature, 376:33-36, 1995.
[5] R. van Rullen and S.J. Thorpe. Rate coding versus temporal order coding: what the retinal ganglion cells tell the visual cortex. Neural Computation, 13:1255-1283, 2001.
[6] D. Psaltis, D. Brady, and K. Wagner. Adaptive optical networks using photorefractive crystals. Applied Optics, 27(9):334-341, May 1988.
[7] N.H. Farhat, D. Psaltis, A. Prata, and E. Paek. Optical implementation of the Hopfield model. Applied Optics, 24:1469-1475, 1985.
[8] S. Gao, J. Yang, Z. Feng, and Y. Zhang. Implementation of a large-scale optical neural network by use of a coaxial lenslet array for interconnection. Applied Optics, 36(20):4779-4783, 1997.
[9] A.J. Ticknor and H.H. Barrett. Optical implementation of Boltzmann machines. Optical Engineering, 26(1):16-21, January 1987.
[10] K.S. Hung, K.M. Curtis, and J.W. Orton. Optoelectronic implementation of a multifunction cellular neural network. IEEE Transactions on Circuits and Systems II, 43(8):601-608, August 1996.
[11] K. Wagner and T.M. Slagle. Optical competitive learning with VLSI liquid-crystal winner-take-all modulators. Applied Optics, 32(8):1408-1435, March 1993.
[12] K. Hynna and K. Boahen. Space-rate coding in an adaptive silicon neuron. Neural Networks, 14(6):645-656, July 2001.
[13] F. Di Theodoro, E. Cerboneschi, D. Hennequin, and E. Arimondo. Self-pulsing and chaos in an extended-cavity diode laser with intracavity atomic absorber. International Journal of Bifurcation and Chaos, 8(9), September 1998.
[14] J.L. Johnson.
All-optical pulse generators for optical computing. In Proceedings of the 2002 International Topical Meeting on Optics in Computing, pages 195-197, Taipei, Taiwan, 2002.
[15] J. Goedgebuer, L. Larger, and H. Porte. Chaos in wavelength with a feedback tunable laser diode. Physical Review E, 57(3):2795-2798, March 1998.
[16] R. FitzHugh. Impulses and physiological states in theoretical models of nerve membrane. Biophysical Journal, 1:445-466, 1961.
[17] J. Nagumo, S. Arimoto, and S. Yoshizawa. An active pulse transmission line simulating nerve axon. Proceedings of the IRE, 50:2061-2070, 1962.
[18] B. Linares-Barranco, E. Sánchez-Sinencio, A. Rodríguez-Vázquez, and J.L. Huertas. A CMOS implementation of FitzHugh-Nagumo neuron model. IEEE Journal of Solid-State Circuits, 26(7):956-965, July 1991.
[19] A. Yariv. Optical Electronics in Modern Communications. Oxford University Press, New York, USA, fifth edition, 1997.
[20] W. Nakwaski. Thermal aspects of efficient operation of vertical-cavity surface-emitting lasers. Optical and Quantum Electronics, 28:335-352, 1996.
[21] E.M. Izhikevich. Neural excitability, spiking and bursting. International Journal of Bifurcation and Chaos, 2000.
[22] E.M. Izhikevich. Weakly pulse-coupled oscillators, FM interactions, synchronization, and oscillatory associative memory. IEEE Transactions on Neural Networks, 10(3):508-526, May 1999.
[23] C.Z. Ning. Self-sustained ultrafast pulsation in coupled vertical-cavity surface-emitting lasers. Optics Letters, 27(11):912-914, June 2002.
2002
32
2,235
Margin-Based Algorithms for Information Filtering

Nicolò Cesa-Bianchi, DTI, University of Milan, via Bramante 65, 26013 Crema, Italy. cesa-bianchi@dti.unimi.it
Alex Conconi, DTI, University of Milan, via Bramante 65, 26013 Crema, Italy. conconi@dti.unimi.it
Claudio Gentile, CRII, Università dell'Insubria, Via Ravasi 2, 21100 Varese, Italy. gentile@dsi.unimi.it

Abstract

In this work, we study an information filtering model where the relevance labels associated to a sequence of feature vectors are realizations of an unknown probabilistic linear function. Building on the analysis of a restricted version of our model, we derive a general filtering rule based on the margin of a ridge regression estimator. While our rule may observe the label of a vector only by classifying the vector as relevant, experiments on a real-world document filtering problem show that the performance of our rule is close to that of the on-line classifier which is allowed to observe all labels. These empirical results are complemented by a theoretical analysis where we consider a randomized variant of our rule and prove that its expected number of mistakes is never much larger than that of the optimal filtering rule which knows the hidden linear model.

1 Introduction

Systems able to filter out unwanted pieces of information are of crucial importance for several applications. Consider a stream of discrete data that are individually labelled as "relevant" or "nonrelevant" according to some fixed relevance criterion; for instance, news about a certain topic, emails that are not spam, or fraud cases from logged data of user behavior. In all of these cases, a filter can be used to drop uninteresting parts of the stream, forwarding to the user only those data which are likely to fulfil the relevance criterion. From this point of view, the filter may be viewed as a simple on-line binary classifier.
However, unlike standard on-line pattern classification tasks, where the classifier observes the correct label after each prediction, here the relevance of a data element is known only if the filter decides to forward that data element to the user. This learning protocol with partial feedback is known as adaptive filtering in the Information Retrieval community (see, e.g., [14]).

We formalize the filtering problem as follows. Each element of an arbitrary data sequence is characterized by a feature vector x_t and an associated relevance label y_t (say, y_t = +1 for relevant and y_t = -1 for nonrelevant). At each time t = 1, 2, ..., the filtering system observes the t-th feature vector x_t and must decide whether or not to forward it. If the data is forwarded, then its relevance label y_t is revealed to the system, which may use this information to adapt the filtering criterion. If the data is not forwarded, its relevance label remains hidden. We call x_t the t-th instance of the data sequence and the pair (x_t, y_t) the t-th example. For simplicity, we assume the instances are normalized, ∥x_t∥ = 1 for all t. (The research was supported by the European Commission under the KerMIT Project No. IST-2001-25431.)

There are two kinds of errors the filtering system can make in judging the relevance of a feature vector x_t. We say that an example (x_t, y_t) is a false positive if y_t = -1 and x_t is classified as relevant by the system; similarly, we say that (x_t, y_t) is a false negative if y_t = +1 and x_t is classified as nonrelevant by the system. Although false negatives remain unknown, the filtering system is scored according to the overall number of wrong relevance judgements it makes; that is, both false positives and false negatives are counted as mistakes.

In this paper, we study the filtering problem under the assumption that relevance judgements are generated using an unknown probabilistic linear function. We design filtering rules that maintain a linear hypothesis and use the margin information to decide whether to forward the next instance.
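The partial-feedback protocol can be made concrete with a short driver loop; the `decide`/`update` interface below is a hypothetical naming for illustration, not from the paper:

```python
def run_filtering(filter_rule, stream):
    """Run the adaptive-filtering protocol with partial feedback.

    filter_rule.decide(x) -> bool decides whether to forward x;
    filter_rule.update(x, y) is called only when the label y is revealed,
    i.e., only for forwarded items.  The returned mistake count includes
    both false positives (forwarded nonrelevant items) and false negatives
    (dropped relevant items), even though the filter never sees the latter.
    """
    mistakes = 0
    for x, y in stream:
        if filter_rule.decide(x):
            if y == -1:                      # false positive
                mistakes += 1
            filter_rule.update(x, y)         # label revealed on forward only
        elif y == +1:                        # false negative, hidden
            mistakes += 1
    return mistakes
```

A rule that forwards everything is penalized on every nonrelevant item, while one that forwards nothing is penalized on every relevant item; good filters must trade these off without ever observing the labels of dropped items.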
Our performance measure is the regret; i.e., the number of wrong judgements made by a filtering rule over and above those made by the rule knowing the probabilistic function used to generate judgements. We show finite-time (nonasymptotic) bounds on the regret that hold for arbitrary sequences of instances. The only other results of this kind we are aware of are those proven in [9] for the apple tasting model. Since in the apple tasting model relevance judgements are chosen adversarially rather than probabilistically, we cannot compare their bounds with ours. We report some preliminary experimental results which might suggest the superiority of our methods as opposed to the general transformations developed within the apple tasting framework. As a matter of fact, these general transformations do not take margin information into account.

In Section 2, we introduce our probabilistic relevance model and make some preliminary observations. In Section 3, we consider a restricted version of the model within which we prove a regret bound for a simple filtering rule called SIMPLE-FIL. In Section 4, we generalize this filtering rule and show its good performance on the Reuters Corpus Volume 1. The algorithm employed, which we call RIDGE-FIL, is a linear least squares algorithm inspired by [2]. In that section we also prove, within the unrestricted probabilistic model, a regret bound for the randomized variant P-RIDGE-FIL of the general filtering rule. Both RIDGE-FIL and its randomized variant can be run with kernels [13] and adapted to the case when the unknown linear function drifts with time.

2 Learning model, notational conventions and preliminaries

The relevance of x_t is given by a {-1, +1}-valued random variable Y_t (where +1 means "relevant") such that there exists a fixed and unknown vector u, ∥u∥ = 1, for which P(Y_t = 1 | x_t) = (1 + u·x_t)/2 for all t. Hence x_t is relevant with probability (1 + u·x_t)/2.
The random variables Y_1, Y_2, ... are assumed to be independent, whereas we do not make any assumption on the way the sequence x_1, x_2, ... is generated. In this model, we want to perform almost as well as the algorithm that knows u and forwards x_t if and only if u·x_t ≥ 0. We consider linear-threshold filtering algorithms that predict the value of Y_t through SGN(w_{t-1}·x_t + θ_t), where w_{t-1} is a dynamically updated weight vector, which might be intended as the current approximation to u, and θ_t is a suitable time-changing "confidence" threshold. For any fixed sequence x_1, x_2, ... of instances, we use Δ_t to denote the margin u·x_t and Δ̂_t to denote the margin w_{t-1}·x_t. We define the expected regret of the linear-threshold filtering algorithm at time t as P(Y_t(Δ̂_t + θ_t) < 0) - P(Y_t Δ_t < 0). We observe that in the conditional probability space where Δ̂_t + θ_t is given we have

P(Y_t(Δ̂_t + θ_t) < 0) - P(Y_t Δ_t < 0) = [P(Y_t(Δ̂_t + θ_t) < 0) - P(Y_t Δ_t < 0)] {sgn(Δ̂_t + θ_t) ≠ sgn(Δ_t)}
= [P(Y_t = -sgn(Δ̂_t + θ_t)) - P(Y_t = -sgn(Δ_t))] {sgn(Δ̂_t + θ_t) ≠ sgn(Δ_t)}
= |Δ_t| {sgn(Δ̂_t + θ_t) ≠ sgn(Δ_t)},

where we use {φ} to denote the Bernoulli random variable which is 1 if and only if predicate φ is true, and the last equality uses P(Y_t = 1) - P(Y_t = -1) = Δ_t. Integrating over all possible values of Δ̂_t + θ_t we obtain

P(Y_t(Δ̂_t + θ_t) < 0) - P(Y_t Δ_t < 0) = |Δ_t| P(sgn(Δ̂_t + θ_t) ≠ sgn(Δ_t))   (1)
≤ |Δ_t| P(|Δ̂_t + θ_t - Δ_t| ≥ |Δ_t|) ≤ E[(Δ̂_t + θ_t - Δ_t)²] / |Δ_t|,   (2)

where the last inequality is Markov's. These (in)equalities will be used in Sections 3 and 4.2 for the analysis of the SIMPLE-FIL and P-RIDGE-FIL algorithms.
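A quick numerical check of the relevance model P(Y = +1 | x) = (1 + u·x)/2; the target u and the instance distribution below are illustrative assumptions:

```python
import math
import random

def make_stream(u, n, rng):
    """Sample unit-norm instances x and labels y with P(y=+1|x) = (1+u.x)/2."""
    d = len(u)
    stream = []
    for _ in range(n):
        x = [rng.gauss(0.0, 1.0) for _ in range(d)]
        nrm = math.sqrt(sum(c * c for c in x))
        x = [c / nrm for c in x]                       # ||x|| = 1
        margin = sum(uc * xc for uc, xc in zip(u, x))  # u.x, lies in [-1, 1]
        y = 1 if rng.random() < (1.0 + margin) / 2.0 else -1
        stream.append((x, y))
    return stream

rng = random.Random(0)
u = [1.0, 0.0, 0.0]                                    # unit-norm target
stream = make_stream(u, 2000, rng)

def mistakes(rule):
    return sum(1 for x, y in stream if rule(x) != y)

opt = mistakes(lambda x: 1 if x[0] >= 0 else -1)       # rule that knows u
all_fwd = mistakes(lambda x: 1)                        # always forward
```

On this synthetic distribution the rule that knows u makes noticeably fewer mistakes than the always-forward baseline, even though labels are noisy wherever |u·x| < 1.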
3 A simplified model

We start by analyzing a restricted model where each data element has the same unknown probability p of being relevant, and we want to perform almost as well as the filtering rule that consistently does the optimal action (i.e., always forwards if p ≥ 1/2 and never forwards otherwise). The analysis of this model is used in Section 4 to guide the design of good filtering rules for the unrestricted model.

Let Δ = 2p - 1, and let Δ̂_t = 2p̂_t - 1 be the sample average of Y, where f_t is the number of forwarded data elements in the first t time steps and p̂_t is the fraction of true positives among the f_t elements that have been forwarded. Obviously, the optimal rule forwards if and only if Δ ≥ 0. Consider instead the empirical rule that forwards if and only if Δ̂_{t-1} ≥ 0. This rule makes a mistake only when sgn(Δ̂_{t-1}) ≠ sgn(Δ). To make the probability of this event go to zero with t, it suffices that P(|Δ̂_{t-1} - Δ| ≥ |Δ|) → 0 as t → ∞, which can only happen if f_t increases quickly enough with t. Hence, data should be forwarded (irrespective of the sign of the estimate Δ̂_{t-1}) also when the confidence level for Δ̂_{t-1} gets too small with respect to t. A problem with this argument is that large deviation bounds require f_{t-1} to be large with respect to (ln t)/Δ² for making P(|Δ̂_{t-1} - Δ| ≥ |Δ|) small. But in our case Δ is unknown. To fix this, we use the corresponding condition with the empirical margin Δ̂_{t-1} in place of Δ. This looks dangerous, as we use the empirical value of Δ̂_{t-1} to control the large deviations of Δ̂_{t-1} itself; however, we will show that this approach indeed works.

An algorithm, which we call SIMPLE-FIL, implementing the above line of reasoning takes the form of the following simple rule: forward if and only if Δ̂_{t-1} + θ_t ≥ 0, where θ_t is a confidence threshold of order sqrt((ln t)/f_{t-1}). The expected regret at time t of SIMPLE-FIL is defined as the probability that SIMPLE-FIL makes a mistake at time t minus the probability that the optimal filtering rule makes a mistake, that is, P(Y_t(Δ̂_{t-1} + θ_t) < 0) - P(Y_t Δ < 0). The next result shows a logarithmic bound on this regret.

Theorem 1 The expected cumulative regret of SIMPLE-FIL after any number T of time steps is at most of order (ln T)/|Δ|.

Proof sketch. We can bound the actual regret after T time steps as follows. From (1) and the definition of the filtering rule,

Σ_{t=1}^T [P(Y_t(Δ̂_{t-1} + θ_t) < 0) - P(Y_t Δ < 0)] ≤ |Δ| Σ_{t=1}^T P(sgn(Δ̂_{t-1} + θ_t) ≠ sgn(Δ)) ≤ |Δ|(A + B),

where A collects the probabilities of the trials in which the confidence threshold θ_t dominates the empirical margin Δ̂_{t-1}, and B collects the probabilities of the trials in which Δ̂_{t-1} deviates from Δ by more than the confidence radius. Without loss of generality, assume Δ > 0.
We now bound A and B separately. The event underlying A implies that θ_t ≥ Δ̂_{t-1}, so that the rule forwards and f keeps growing; splitting the corresponding sum according to the value of f_{t-1} and applying Chernoff-Hoeffding bounds [11] to p̂, which is a normalized sum of {0, 1}-valued independent random variables, shows that A grows only logarithmically with T. We bound B by adapting a technique from [8]: the trials are partitioned into blocks within which f is nearly constant, and Chernoff-Hoeffding bounds applied within each block show that the total probability mass of B also grows only logarithmically with T. Piecing everything together, we get the desired result. □

4 Linear least squares filtering

In order to generalize SIMPLE-FIL to the original (unrestricted) learning model described in Section 2, we need a low-variance estimate of the target vector u. Let S be the matrix whose columns are the forwarded feature vectors after the first t time steps, and let y be the vector of corresponding observed relevance labels (the index t will be momentarily dropped). Note that E[y] = S^T u holds. Consider the least squares estimator (S S^T)^+ S y of u, where (S S^T)^+ is the pseudo-inverse of S S^T. For all u belonging to the column space of S, this is an unbiased estimator of u, that is, E[(S S^T)^+ S y] = (S S^T)^+ S E[y] = (S S^T)^+ (S S^T) u = u. To remove the assumption on u, we make S S^T full rank by adding the identity I. This also allows us to replace the pseudo-inverse with the standard inverse,

Figure 1: F-measure for each one of the 34 filtering tasks.
The F-measure is defined by F = 2PR/(P + R), where P is precision (the fraction of relevant documents among the forwarded ones) and R is recall (the fraction of forwarded documents among the relevant ones). In the plot, the filtering rule RIDGE-FIL is compared with RIDGE-FULL, which sees the correct label after each classification. While precision and recall of RIDGE-FULL are balanced, RIDGE-FIL's recall is higher than its precision due to the need of forwarding more documents than believed relevant, in order to make the confidence of the estimator converge to 1 fast enough. Note that, in some cases, this imbalance causes RIDGE-FIL to achieve a slightly better F-measure than RIDGE-FULL.

obtaining (I + S S^T)^{-1} S y, a "sparse" version of the ridge regression estimator [12] (the sparsity is due to the fact that we only store in S the forwarded instances, i.e., those for which we have a relevance label). To estimate directly the margin u·x_t, rather than u, we further modify the sparse ridge regression estimator along the lines of the techniques analyzed in [3, 6, 15]. More precisely, we estimate u·x_t with the quantity w_{t-1}·x_t, where w_{t-1} is defined by

w_{t-1} = (I + S_{t-1} S_{t-1}^T + x_t x_t^T)^{-1} S_{t-1} y_{t-1}.   (3)

Using the Sherman-Morrison formula, we can then write out the expectation of w_{t-1}·x_t as

E[w_{t-1}·x_t] = (u·x_t - u^T A_{t-1}^{-1} x_t) / (1 + x_t^T A_{t-1}^{-1} x_t),   where A_{t-1} = I + S_{t-1} S_{t-1}^T,

which holds for all u, all x_t, and all matrices S_{t-1}. Let f_{t-1} be the number of forwarded instances among x_1, ..., x_{t-1}. In order to generalize the analysis of SIMPLE-FIL to the estimator (3), we need to find a large deviation bound of the form P(|w_{t-1}·x_t - Δ_t| ≥ θ(f_{t-1}, t)) ≤ δ_t for all t, where δ_t goes to zero "sufficiently fast" as t → ∞. Though we have not been able to find such bounds, we report some experimental results showing that algorithms based on (3) and inspired by the analysis of SIMPLE-FIL do exhibit a good empirical behavior on real-world data.
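Estimator (3) is small enough to check numerically. Here is a pure-Python sketch (a Gauss-Jordan solver stands in for the matrix inverse; the toy data in the usage note are assumptions):

```python
def solve(A, b):
    """Solve A z = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ridge_margin(S, y, x):
    """Margin estimate w.x with w = (I + S S^T + x x^T)^(-1) S y,
    where S is given as a list of forwarded instance vectors."""
    d = len(x)
    A = [[(1.0 if i == j else 0.0)
          + sum(s[i] * s[j] for s in S) + x[i] * x[j]
          for j in range(d)] for i in range(d)]
    b = [sum(s[i] * yi for s, yi in zip(S, y)) for i in range(d)]
    w = solve(A, b)
    return sum(wi * xi for wi, xi in zip(w, x))
```

For instance, with 50 copies of the instance (1, 0) labelled +1 forty times and -1 ten times, `ridge_margin` at x = (1, 0) returns 30/52 ≈ 0.577, a shrunken version of the empirical margin 0.6 (the shrinkage comes from the identity and the x x^T terms in (3)).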
Moreover, in Section 4.2 we prove a bound (not based on the analysis of SIMPLE-FIL) on the expected regret of a randomized variant of the algorithm used in the experiments. For this variant we are able to prove a regret bound that scales essentially with the square root of T (to be contrasted with the logarithmic regret of SIMPLE-FIL).

4.1 Experimental results

We ran our experiments using the filtering rule that forwards x_t if SGN(w_{t-1}·x_t + θ_t) = 1, where w_{t-1} is the estimator (3) and θ_t is a confidence threshold of order sqrt((ln t)/f_{t-1}). Note that this rule, which we call RIDGE-FIL, is a natural generalization of SIMPLE-FIL to the unrestricted learning model; in particular, SIMPLE-FIL uses a relevance threshold θ_t of the very same form as RIDGE-FIL, although SIMPLE-FIL's "margin" Δ̂ is defined differently. We tested our algorithm on a document filtering problem based on the first 70,000 newswire stories from the Reuters Corpus Volume 1.

[Figure 2: Pseudo-code for the filtering algorithm P-RIDGE-FIL, whose performance is analyzed in Theorem 3. The algorithm takes a real parameter and a probability value ε as inputs; it initializes w_0 = 0 and A_0 = I and, for each t = 1, 2, ...: (1) gets x_t and computes the margin estimate Δ̂_t = w_{t-1}·x_t; (2) if Δ̂_t ≥ 0, it forwards x_t, gets the label y_t, and updates w by an on-line ridge regression step followed by a projection onto the unit ball; (3) otherwise, it forwards x_t (performing the same updates) with probability ε, and makes no update with probability 1 - ε.]

We selected the 34 Reuters topics whose frequency in the set of 70,000 documents was between 1% and 5% (a plausible range for filtering applications). For each topic, we defined a filtering task whose relevance judgements were assigned based on whether or not the document was labelled with that topic. Documents were mapped to real vectors using the bag-of-words representation.
In particular, after tokenization we lemmatized the tokens using a general-purpose finite-state morphological English analyzer and then removed stopwords (we also replaced all digits with a single special character). Document vectors were built by removing all words which did not occur at least three times in the corpus and using a TF-IDF encoding of the form ln(1 + TF) ln(N/DF), where TF is the word frequency in the document, DF is the number of documents containing the word, and N is the total number of documents (if TF = 0 the TF-IDF coefficient was also set to 0). Finally, all document vectors were normalized to length 1.

To measure how the choice of the threshold θ_t affects the filtering performance, we ran RIDGE-FIL with θ_t set to zero on the same dataset as a standard on-line binary classifier (i.e., receiving the correct label after every classification). We call this algorithm RIDGE-FULL. Figure 1 illustrates the experimental results. The average F-measures of RIDGE-FULL and RIDGE-FIL are close; hence the threshold compensates pretty well for the partial feedback in the filtering setup. On the other hand, the standard Perceptron achieves an F-measure inferior to that of RIDGE-FULL in the classification task. Finally, we also tested the apple-tasting filtering rule (see [9], STAP transformation) based on the binary classifier RIDGE-FULL. This transformation, which does not take the margin into consideration, exhibited a poor performance, and we did not include it in the plot.

4.2 Probabilistic ridge filtering

In this section we introduce a probabilistic filtering algorithm, derived from the (on-line) ridge regression algorithm, for the class of linear probabilistic relevance functions. The algorithm, called P-RIDGE-FIL, is sketched in Figure 2. The algorithm takes a real parameter and a probability value ε as inputs and maintains a linear hypothesis w_t.
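In one dimension the scheme collapses to scalars, which makes the forward/update logic of P-RIDGE-FIL easy to see. The following sketch is an assumption-laden toy (scalar ridge update, clipping as the unit-ball projection, synthetic ±1 instances), not the paper's exact procedure:

```python
import random

def p_ridge_fil(stream, eps, rng):
    """One-dimensional sketch of P-RIDGE-FIL: forward when the margin
    estimate is nonnegative, otherwise forward with probability eps;
    on each forward, take a ridge-regression step and clip to [-1, 1]."""
    s_xx, s_xy, w = 0.0, 0.0, 0.0
    forwarded = 0
    for x, y in stream:
        margin = w * x
        if margin >= 0 or rng.random() < eps:
            forwarded += 1
            s_xx += x * x
            s_xy += x * y
            w = s_xy / (1.0 + s_xx)          # (I + S S^T)^{-1} S y, in 1-D
            w = max(-1.0, min(1.0, w))       # projection onto the unit ball
    return w, forwarded

rng = random.Random(1)
u = 0.8                                      # assumed target, |u| <= 1
stream = []
for _ in range(3000):
    x = rng.choice([-1.0, 1.0])
    y = 1 if rng.random() < (1.0 + u * x) / 2.0 else -1
    stream.append((x, y))
w, fwd = p_ridge_fil(stream, eps=0.1, rng=rng)
```

The ε-randomization is what keeps labels flowing in on the "believed nonrelevant" side, so the hypothesis can recover even when its margin estimate has the wrong sign.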
If w_{t-1}·x_t ≥ 0, then x_t is forwarded and w_{t-1} gets updated according to the following two-step ridge-regression-like rule. First, an intermediate vector is computed via the standard on-line ridge regression algorithm using the inverse of the matrix A_t. Then, the new vector w_t is obtained by projecting the intermediate vector onto the unit ball, where the projection is taken with respect to the distance function d_t(u, w) = (u - w)^T A_t (u - w). Note that if the intermediate vector already has norm at most 1, the projection leaves it unchanged. On the other hand, if w_{t-1}·x_t < 0, then x_t is forwarded (and consequently w_{t-1} is updated) with some probability ε.

The analysis of P-RIDGE-FIL is inspired by the analysis in [1] for a related but different problem, and is based on relating the expected regret in a given trial t to a measure of the progress of w_t towards u. The following lemma will be useful.

Lemma 2 Using the notation of Figure 2, let t_k be the trial when the k-th update occurs. Then the following inequality holds:

(Δ̂_{t_k} - Y_{t_k})² - (Δ_{t_k} - Y_{t_k})² ≤ ln(|A_k| / |A_{k-1}|) + d_{k-1}(u, w_{k-1}) - d_k(u, w_k),

where |A| denotes the determinant of the matrix A and d_k(u, w) = (u - w)^T A_k (u - w).

Proof sketch. From Lemma 4.2 and Theorem 4.6 in [3], together with the fact that |Δ̂_{t_k}| ≤ ∥w_{k-1}∥ ≤ 1, the stated inequality holds with the intermediate (unprojected) vector in place of w_k. Now, the function d_k(u, w) = (u - w)^T A_k (u - w) is a Bregman divergence (e.g., [4, 10]), and it can be easily shown that w_k in Figure 2 is the projection of the intermediate vector onto the convex set {w : ∥w∥ ≤ 1} with respect to d_k. By a projection property of Bregman divergences (see, e.g., the appendix in [10]), the projection can only decrease the divergence to any u with ∥u∥ ≤ 1. Putting together gives the desired inequality. □

Theorem 3 Let the relevant problem quantities be bounded as in the model of Section 2. For all T, if algorithm P-RIDGE-FIL of Figure 2 is run with a suitably chosen ε, then its expected cumulative regret, Σ_{t=1}^T [P(Y_t Δ̂_t < 0) - P(Y_t Δ_t < 0)], is at most of order sqrt(T), up to factors logarithmic in T.

Proof sketch.
If t_k is the trial when the k-th forward takes place, we define the random variables D_k = d_{k-1}(u, w_{k-1}) - d_k(u, w_k) and L_k = ln(|A_k| / |A_{k-1}|); if no update occurs in trial t, the corresponding terms are set to 0. Let R_t be the regret of P-RIDGE-FIL in trial t and R'_t be the regret of the deterministic update rule in trial t. If Δ̂_t ≥ 0, then R_t = R'_t and E[D_k] can be lower bounded via Lemma 2. If Δ̂_t < 0, then E[D_k] gets lower bounded via Lemma 2 only with probability ε, while for the regret we can only use the trivial bound R_t ≤ 1; with probability 1 - ε, instead, no update occurs and no progress is guaranteed. Let c be a constant to be specified. We can write

E[R_t + c D_k] = E[(R_t + c D_k) {Δ̂_t ≥ 0}] + E[(R_t + c D_k) {Δ̂_t < 0}].   (4)

Now, it is easy to verify that in the conditional space where Δ̂_t is given we have E[Y_t | Δ̂_t] = Δ_t and E[Y_t Δ̂_t | Δ̂_t] = Δ̂_t Δ_t. Thus, using Lemma 2 and Eq. (4), each of the two terms can be bounded by quantities involving (Δ̂_t - Δ_t)², the log-determinant increments L_k, and the regret R'_t of the deterministic rule; the latter is in turn bounded, by virtue of (2) with θ_t = 0, as E[R'_t] ≤ E[(Δ̂_t - Δ_t)²] / |Δ_t|. One then works in the conditional space where Δ̂_t is given and distinguishes the two cases Δ̂_t ≥ 0 and Δ̂_t < 0, in both cases using Δ̂_t Δ_t ≤ 1, and chooses c to balance the bounds obtained in the two cases. (This parametrization requires prior knowledge of the problem quantities in the statement of the theorem. It turns out one can remove this assumption at the cost of a slightly more involved proof.)
We set c to balance the two cases and sum over t = 1, ..., T. Notice that Σ_k D_k telescopes to at most d_0(u, w_0) ≤ ∥u∥² and that Σ_k L_k telescopes to ln |A_K|, which grows only logarithmically with T (e.g., [3], proof of Theorem 4.6 therein). After a few overapproximations (and taking the worst of the two cases Δ̂_t ≥ 0 and Δ̂_t < 0), we obtain the claimed bound, thereby concluding the proof. □

References

[1] Abe, N., and Long, P.M. (1999). Associative reinforcement learning using linear probabilistic concepts. In Proc. ICML'99, Morgan Kaufmann.
[2] Auer, P. (2000). Using upper confidence bounds for online learning. In Proc. FOCS'00, IEEE, pages 270-279.
[3] Azoury, K., and Warmuth, M.K. (2001). Relative loss bounds for on-line density estimation with the exponential family of distributions. Machine Learning, 43:211-246.
[4] Censor, Y., and Lent, A. (1981). An iterative row-action method for interval convex programming. Journal of Optimization Theory and Applications, 34(3):321-353.
[5] Cesa-Bianchi, N. (1999). Analysis of two gradient-based algorithms for on-line regression. Journal of Computer and System Sciences, 59(3):392-411.
[6] Cesa-Bianchi, N., Conconi, A., and Gentile, C. (2002). A second-order Perceptron algorithm. In Proc. COLT'02, pages 121-137. LNAI 2375, Springer.
[7] Cesa-Bianchi, N., Long, P.M., and Warmuth, M.K. (1996). Worst-case quadratic loss bounds for prediction using linear functions and gradient descent. IEEE Trans. NN, 7(3):604-619.
[8] Gavaldà, R., and Watanabe, O. (2001). Sequential sampling algorithms: Unified analysis and lower bounds. In Proc. SAGA'01, pages 173-187. LNCS 2264, Springer.
[9] Helmbold, D.P., Littlestone, N., and Long, P.M. (2000). Apple tasting. Information and Computation, 161(2):85-139.
[10] Herbster, M., and Warmuth, M.K. (1998). Tracking the best regressor. In Proc. COLT'98, ACM, pages 24-31.
[11] Hoeffding, W. (1963). Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58:13-30.
[12] Hoerl, A., and Kennard, R.
(1970). Ridge regression: biased estimation for nonorthogonal problems. Technometrics, 12:55-67.
[13] Vapnik, V. (1998). Statistical Learning Theory. New York: J. Wiley & Sons.
[14] Voorhees, E., and Harman, D. (2001). The tenth Text REtrieval Conference. TR 500-250, NIST.
[15] Vovk, V. (2001). Competitive on-line statistics. International Statistical Review, 69:213-248.
2002
33
2,236
Half-Lives of EigenFlows for Spectral Clustering

Chakra Chennubhotla & Allan D. Jepson
Department of Computer Science, University of Toronto, Canada M5S 3H5
{chakra,jepson}@cs.toronto.edu

Abstract

Using a Markov chain perspective of spectral clustering, we present an algorithm to automatically find the number of stable clusters in a dataset. The Markov chain's behaviour is characterized by the spectral properties of the matrix of transition probabilities, from which we derive eigenflows along with their half-lives. An eigenflow describes the flow of probability mass due to the Markov chain, and it is characterized by its eigenvalue or, equivalently, by the half-life of its decay as the Markov chain is iterated. An ideal stable cluster is one with zero eigenflow and infinite half-life. The key insight in this paper is that bottlenecks between weakly coupled clusters can be identified by computing the sensitivity of the eigenflow's half-life to variations in the edge weights. We propose a novel EIGENCUTS algorithm to perform clustering that removes these identified bottlenecks in an iterative fashion.

1 Introduction

We consider partitioning a weighted undirected graph (corresponding to a given dataset) into a set of discrete clusters. Ideally, the vertices (i.e., datapoints) in each cluster should be connected with high-affinity edges, while different clusters are either not connected or are connected only by a few edges with low affinity. The practical problem is to identify these tightly coupled clusters and cut the inter-cluster edges. Many techniques have been proposed for this problem, with some recent success being obtained through the use of spectral methods (see, for example, [2, 4, 5, 11, 12]). Here we use the random walk formulation of [4], where the edge weights are used to construct a Markov transition probability matrix M. This matrix M defines a random walk on the graph to be partitioned.
The eigenvalues and eigenvectors of M provide the basis for deciding on a particular segmentation. In particular, it has been shown that for K weakly coupled clusters, the leading K eigenvectors of M will be roughly piecewise constant [4, 13, 5]. This result motivates many of the current spectral clustering algorithms. For example, in [5] the number of clusters K must be known a priori, and the K-means algorithm is used on the K leading eigenvectors of M in an attempt to identify the appropriate piecewise constant regions.

In this paper we investigate the form of the leading eigenvectors of the Markov matrix M. Using some simple image segmentation examples, we confirm that the leading eigenvectors of M are roughly piecewise constant for problems with well-separated clusters. However, we observe that for several segmentation problems that we might wish to solve, the coupling between the clusters is significantly stronger and, as a result, the piecewise constant approximation breaks down. Unlike the piecewise constant approximation, a perfectly general view is that the eigenvectors of M determine particular flows of probability along the edges in the graph. We refer to these as eigenflows, since they are characterized by their associated eigenvalue λ, which specifies the flow's overall rate of decay. Instead of measuring the decay rate in terms of the eigenvalue λ, we find it more convenient to use the flow's half-life β, which is simply defined by λ^β = 1/2. Here β is the number of Markov chain steps needed to reduce the particular eigenflow to half its initial value. Note that as λ approaches 1, the half-life β approaches infinity. From the perspective of eigenflows, a graph representing a set of weakly coupled clusters produces eigenflows between the various clusters which decay with long half-lives. In contrast, the eigenflows within each cluster decay much more rapidly. In order to identify clusters, we therefore consider the eigenflows with long half-lives.
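Since an eigenflow with eigenvalue λ is scaled by λ at every Markov step, its half-life follows in closed form from λ^β = 1/2, i.e. β = ln(1/2)/ln λ; a small sketch:

```python
import math

def half_life(lam):
    """Number of Markov-chain steps needed to halve an eigenflow with
    eigenvalue lam, obtained by solving lam**beta = 1/2 for beta
    (valid here for 0 < lam < 1)."""
    if not 0.0 < lam < 1.0:
        raise ValueError("half-life defined here for 0 < lam < 1")
    return math.log(0.5) / math.log(lam)
```

An eigenvalue of 0.5 halves the flow in a single step, while eigenvalues near 1 give very long half-lives (half_life(0.99) is roughly 69 steps), which is exactly the signature of a slowly decaying inter-cluster eigenflow.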
Given such a slowly decaying eigenflow, we identify particular bottleneck regions in the graph which critically restrict the flow (cf. [12]). To identify these bottlenecks we propose computing the sensitivity of the flow's halflife with respect to perturbations in the edge weights. We implement a simple spectral graph partitioning algorithm which is based on these ideas. We first compute the eigenvectors for the Markov transition matrix, and select those with long halflives. For each such eigenvector, we identify bottlenecks by computing the sensitivity of the flow's halflife with respect to perturbations in the edge weights. In the current algorithm, we simply select one of these eigenvectors in which a bottleneck has been identified, and cut edges within the bottleneck. The algorithm recomputes the eigenvectors and eigenvalues for the modified graph, and continues this iterative process until no further edges are cut.

2 From Affinities to Markov Chains

Following the formulation in [4], we consider an undirected graph G with vertices v_i, for i = 1, ..., N, and edges e_{ij} with non-negative weights a_{ij}. Here the weight a_{ij} represents the affinity of vertices v_i and v_j. The edge affinities are assumed to be symmetric, that is, a_{ij} = a_{ji}. A Markov chain is defined using these affinities by setting the transition probability m_{ji} from vertex v_i to vertex v_j to be proportional to the edge affinity, a_{ji}. That is, m_{ji} = a_{ji}/d_i, where d_i = Σ_j a_{ji} gives the normalizing factor which ensures Σ_j m_{ji} = 1. In matrix notation, the affinities are represented by a symmetric N × N matrix A, with elements a_{ij}, and the transition probability matrix M = (m_{ij}) is given by

    M = A D^{-1},    D = diag(d_1, ..., d_N).    (1)

Notice that the N × N matrix M is not in general symmetric. This transition probability matrix M defines the random walk of a particle on the graph G. Suppose the initial probability of the particle being at vertex v_i is p_{0,i}, for i = 1, ..., N.
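To make Eq. (1) concrete, the sketch below (our illustration, on a hypothetical four-vertex graph of two tight pairs joined by a weak edge) builds M = A D^{-1} from a symmetric affinity matrix, together with the similar symmetric matrix D^{-1/2} A D^{-1/2} discussed in the text that follows:

```python
import numpy as np

def markov_matrix(A):
    """Column-stochastic transition matrix M = A D^{-1} of Eq. (1),
    where d_i = sum_j a_ji normalizes the affinities out of vertex i."""
    d = A.sum(axis=0)
    return A / d          # broadcasting divides column i by d_i

# Hypothetical toy graph: two tightly coupled pairs joined by a weak edge.
A = np.array([[0.0, 1.0, 0.01, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.01, 0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
M = markov_matrix(A)
assert np.allclose(M.sum(axis=0), 1.0)   # each column is a distribution

# L = D^{-1/2} A D^{-1/2} is similar to M, so it shares M's (real)
# spectrum but admits a stable symmetric eigensolver.
d = A.sum(axis=0)
L = A / np.sqrt(np.outer(d, d))
assert np.allclose(np.sort(np.linalg.eigvalsh(L)),
                   np.sort(np.linalg.eigvals(M).real))
```

On a connected graph the leading eigenvalue is 1, corresponding to the stationary distribution of the walk.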
Then, the probability of the particle being initially at vertex v_i and taking edge e_{ij} is m_{ji} p_{0,i}. In matrix notation, the probability of the particle ending up at any of the vertices (v_1, ..., v_N) after one step is given by the distribution p_1 = M p_0, where p_k = (p_{k,1}, ..., p_{k,N})^T. For analysis it is convenient to consider the matrix L = D^{-1/2} M D^{1/2}, which is similar to M (where D is as given in Eq. (1)). The matrix L therefore has the same spectrum as M, and any eigenvector u of L must correspond to an eigenvector D^{1/2} u of M with the same eigenvalue. Note that L = D^{-1/2} M D^{1/2} = D^{-1/2} A D^{-1} D^{1/2} = D^{-1/2} A D^{-1/2}, and therefore L is a symmetric N × N matrix since A is symmetric while D is diagonal. The advantage of considering the matrix L over M is that the symmetric eigenvalue problem is more stable to small perturbations, and is computationally much more tractable.

Figure 1: (a-c) Three random images each having an occluder in front of a textured background. (d-e) A pair of eye images.

Since the matrix L is symmetric, it has an orthogonal decomposition of the form:

    L = U Λ U^T,    (2)

where U = [u_1, u_2, ..., u_N] are the eigenvectors and Λ is a diagonal matrix of eigenvalues λ_1 ≥ λ_2 ≥ ... ≥ λ_N sorted in decreasing order. While the eigenvectors have unit length, ||u_k|| = 1, the eigenvalues are real and have an absolute value bounded by 1, |λ_k| ≤ 1. The eigenvector representation provides a simple way to capture the Markovian relaxation process [12]. For example, consider propagating the Markov chain for β iterations. The transition matrix after β iterations, namely M^β, can be represented as:

    M^β = D^{1/2} U Λ^β U^T D^{-1/2}.    (3)

Therefore the probability distribution for the particle being at vertex v_i after β steps of the random walk, given that the initial probability distribution was p_0, is p_β = M^β p_0 = D^{1/2} U Λ^β r_0, where r_0 = U^T
D^{-1/2} p_0 provides the expansion coefficients of the initial distribution p_0 in terms of the eigenvectors of L. As β → ∞, the Markov chain approaches the stationary distribution π. Assuming the graph is connected with edges having non-zero weights, it is convenient to interpret the Markovian relaxation process as perturbations to the stationary distribution, p_β = π + Σ_{k=2}^N λ_k^β r_{0,k} s_k, where λ_1 = 1 is associated with the stationary distribution π and s_k = D^{1/2} u_k.

3 EigenFlows

Let p_0 be an initial probability distribution for a random particle to be at the vertices of the graph G. By the definition of the Markov chain, recall that the probability of making the transition from vertex v_j to v_i is the probability of starting in vertex v_j, times the conditional probability of taking edge e_{ij} given that the particle is at vertex v_j, namely m_{ij} p_{0,j}. Similarly, the probability of making the transition in the reverse direction is m_{ji} p_{0,i}. The net flow of probability mass along edge e_{ij} from v_j to v_i is therefore the difference m_{ij} p_{0,j} − m_{ji} p_{0,i}. It then follows that the net flow of probability mass from vertex v_j to v_i is given by F_{ij}(p_0), where F_{ij}(p_0) is the (i, j)-element of the N × N matrix

    F(p_0) = M diag(p_0) − diag(p_0) M^T.    (4)

Notice that F(p_0)^T = −F(p_0), and therefore F is antisymmetric (i.e. F_{ji} = −F_{ij}). This expresses the fact that the flow F_{ij} from v_j to v_i is just the opposite sign of the flow in the reverse direction. Furthermore, it can be shown that F(π) = 0 for the stationary distribution π. Therefore, the flow is caused by the eigenvectors s_k with λ_k < 1, and hence we analyze the rate of decay of these eigenflows F(s_k). For illustration purposes we begin by considering an ensemble of random test images formed from two independent samples of 2D Gaussian filtered white noise (see Fig. 1a-c).
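The flow matrix of Eq. (4) can be sketched directly (our illustration, reusing a hypothetical toy graph), checking its antisymmetry and the vanishing net flow at stationarity:

```python
import numpy as np

def flow(M, p):
    """Net probability flow F(p) = M diag(p) - diag(p) M^T of Eq. (4):
    entry (i, j) is the mass moved j -> i minus the mass moved i -> j."""
    Mp = M * p            # (M diag(p))_{ij} = m_ij * p_j via broadcasting
    return Mp - Mp.T

A = np.array([[0.0, 1.0, 0.01, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.01, 0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
d = A.sum(axis=0)
M = A / d                                  # M = A D^{-1}

F = flow(M, np.array([0.7, 0.1, 0.1, 0.1]))
assert np.allclose(F, -F.T)                # flows are antisymmetric

pi = d / d.sum()                           # stationary distribution of M
assert np.allclose(flow(M, pi), 0.0)       # no net flow at stationarity
```

With excess mass placed on vertex 0, the negative entry F[0, 1] records the net flow from vertex 0 to its strongly coupled neighbour.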
One sample is used to form the background image, and a cropped fragment of the second sample is used for the foreground region. A small constant bias is added to the foreground region.

Figure 2: (a) Eigenmode. (b) Corresponding eigenflow. (c) Gray value at each pixel corresponds to the maximum of the absolute sensitivities of all the weights on edges connected to a pixel (not including itself). Dark pixels indicate high absolute sensitivities.

A graph clustering problem is formed where each pixel in a test image is associated with a vertex of the graph G. The edges in G are defined by the standard 8-neighbourhood of each pixel (with pixels at the edges and corners of the image only having 5 and 3 neighbours, respectively). The edge weight between neighbouring vertices v_i and v_j is given by the affinity a_{ij} = exp(−(I(x_i) − I(x_j))² / (2σ²)), where I(x_k) is the test image brightness at pixel x_k and σ is a grey-level standard deviation, chosen proportional to the median absolute difference of gray levels between all neighbouring pixels. This generative process provides an ensemble of clustering problems which we feel are representative of the structure of typical image segmentation problems. In particular, due to the smooth variation in gray-levels, there is some variability in the affinities within both foreground and background regions. Moreover, due to the use of independent samples for the two regions, there is often a significant step in gray-level across the boundary between the two regions. Finally, due to the small bias used, there is also a significant chance for pixels on opposite sides of the boundary to have similar gray-levels, and thus high affinities. This latter property ensures that there are some edges with significant weights between the two clusters in the graph associated with the foreground and background pixels. In Figure 2 we plot one eigenvector, u_k, of the matrix L along with its eigenflow, F(s_k).
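The image-graph construction just described can be sketched as follows (our illustration; the loop-based construction and the tiny two-row "image" are for clarity, not efficiency):

```python
import numpy as np

def pixel_affinities(img, sigma):
    """Dense affinity matrix for an image graph: one vertex per pixel,
    edges on the standard 8-neighbourhood, and weights
    a_ij = exp(-(I_i - I_j)^2 / (2 sigma^2)) as in Sec. 3."""
    h, w = img.shape
    A = np.zeros((h * w, h * w))
    for y in range(h):
        for x in range(w):
            i = y * w + x
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if (dy, dx) != (0, 0) and 0 <= yy < h and 0 <= xx < w:
                        j = yy * w + xx
                        diff = img[y, x] - img[yy, xx]
                        A[i, j] = np.exp(-diff**2 / (2 * sigma**2))
    return A

img = np.array([[0.0, 0.0, 1.0],
                [0.0, 0.0, 1.0]])   # toy 2x3 image with a brightness step
A = pixel_affinities(img, sigma=0.5)
assert np.allclose(A, A.T)          # affinities are symmetric
assert A[0, 1] > A[1, 2]            # similar pixels couple more strongly
```

Note the corner pixel (vertex 0) has exactly three neighbours, matching the boundary handling described above.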
Notice that the displayed eigenmode is not in general piecewise constant. Rather, the eigenvector is more like a vibrational mode of a non-uniform membrane (in fact, they can be modeled in precisely that way). Also, for all but the stationary distribution, there is a significant net flow between neighbours, especially in regions where the magnitude of the spatial gradient of the eigenmode is larger.

4 Perturbation Analysis of EigenFlows

As discussed in the introduction, we seek to identify bottlenecks in the eigenflows associated with long halflives. This notion of identifying bottlenecks is similar to the well-known max-flow, min-cut theorem. In particular, for a graph whose edge weights represent maximum flow capacities between pairs of vertices, instead of the current conditional transition probabilities, the bottleneck edges can be identified as precisely those edges across which the maximum flow is equal to their maximum capacity. However, in the Markov framework, the flow of probability across an edge is only maximal in the extreme cases for which the initial probability of being at one of the edge's endpoints is equal to one, and zero at the other endpoint. Thus the max-flow criterion is not directly applicable here. Instead, we show that the desired bottleneck edges can be conveniently identified by considering the sensitivity of the flow's halflife to perturbations of the edge weights (see Fig. 2c). Intuitively, this sensitivity arises because the flow across a bottleneck will have fewer alternative routes to take and therefore will be particularly sensitive to changes in the edge weights within the bottleneck. In comparison, the flow between two vertices in a strongly coupled cluster will have many alternative routes and therefore will not be particularly sensitive to the precise weight of any single edge. In order to pick out larger halflives, we will use one parameter, β_0, which is a rough estimate of the smallest halflife that one wishes to consider.
Since we are interested in perturbations which significantly change the current halflife of a mode, we choose to use a logarithmic scale in halflife. A simple choice for a function which combines these two effects is H(β) = log(β/β_0), where β is the halflife of the current eigenmode. Suppose we have an eigenvector u of L, with eigenvalue λ. This eigenvector decays with a halflife of β = log(1/2)/log(λ). Consider the effect on H(β) of perturbing the affinity a_{ij} for the (i, j)-edge to a_{ij} + α. In particular, we show in the Appendix that the derivative of H(β(λ(α))) with respect to α, evaluated at α = 0, satisfies

    ∂H(β)/∂α = −[log(1/2) / (β λ log²(λ))] [ 2 u_i u_j / √(d_i d_j) − λ (u_i²/d_i + u_j²/d_j) ].    (5)

Here (u_i, u_j) are the (i, j) elements of the eigenvector u and (d_i, d_j) are the degrees of nodes (v_i, v_j) (Eq. 1). In Figure 2, for a given eigenvector and its flow, we plot the maximum of absolute sensitivities of all the weights on edges connected to a pixel (not including itself). Note that the sensitivities are large in the bottlenecks at the border of the foreground and background.

5 EIGENCUTS: A Basic Clustering Algorithm

We select a simple clustering algorithm to test our proposal of using the derivative of the eigenmode's halflife for identifying bottleneck edges. Given a value of β_0, which is roughly the minimum halflife to consider for any eigenmode, we iterate the following:

1. Form the symmetric N × N affinity matrix A, and initialize β_0.
2. Set d_i = Σ_j a_{ij}, and set a scale factor ρ to be the median of d_i for i = 1, 2, ..., N. Form the symmetric matrix L = D^{-1/2} A D^{-1/2}.
3. Compute the eigenvectors {u_1, u_2, ..., u_N} of L, with eigenvalues |λ_1| ≥ |λ_2| ≥ ... ≥ |λ_N|.
4. For each eigenvector u_k of L with halflife β_k > εβ_0, compute the halflife sensitivities S_k^{ij} = ∂H(β_k)/∂a_{ij} for each edge in the graph. Here ε < 1 is a fixed constant.
5. Do non-maximal suppression within each of the computed sensitivities.
That is, suppress the sensitivity S_k^{ij} if there is a strictly more negative value S_k^{i'j} or S_k^{ij'} for some vertex v_{i'} in the neighbourhood of v_j, or some v_{j'} in the neighbourhood of v_i.
6. Compute the sum Σ_k of |S_k^{ij}| a_{ij} over all non-suppressed edges (i, j) for which S_k^{ij} < τρ, where τ < 0 is a fixed threshold.
7. Select the eigenmode u_{k*} for which Σ_{k*} is maximal.
8. Cut all edges (i, j) in G (i.e. set their affinities to 0) for which S_{k*}^{ij} < τρ and for which this sensitivity was not suppressed during non-maximal suppression.
9. If any new edges have been cut, go to 2. Otherwise stop.

Here steps 2 and 3 are as described previously, other than computing the scaling constant ρ, which is used in step 6 to provide a scale invariant threshold on the computed sensitivities. In step 4 we only consider eigenmodes with halflives larger than εβ_0, with ε < 1, because this typically eliminates the need to compute the sensitivities for many modes with tiny values of β_k and, because of the β_0 term in H(β), it is very rare for eigenvectors with halflives smaller than εβ_0 to produce any sensitivity below the threshold τρ. In step 5 we perform a non-maximal suppression on the sensitivities for the k-th eigenvector. We have observed that at strong borders the computed sensitivities can be below the threshold in a band along the border a few pixels thick. This non-maximal suppression allows us to thin this region. Otherwise, many small isolated fragments can be produced in the neighbourhood of such strong borders. In step 6 we wish to select one particular eigenmode to base the edge cutting on at this iteration. The reason for not considering all the modes simultaneously is that we have found the locations of the cuts can vary by a few pixels for different modes. If nearby edges are cut as a result of different eigenmodes, then small isolated fragments can result in the final clustering. Therefore we wish to select just one eigenmode to base cuts on each iteration.
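As an illustrative check (ours, on a hypothetical toy graph), the eigenvalue derivative underlying the sensitivities of Eq. (5), namely ∂λ/∂a_{ij} = 2 u_i u_j/√(d_i d_j) − λ(u_i²/d_i + u_j²/d_j), can be verified against a finite-difference perturbation of the affinity matrix; the halflife sensitivity then follows by multiplying with ∂H/∂λ:

```python
import numpy as np

def sym_eigs(A):
    """Degrees, eigenvalues (ascending) and eigenvectors of D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=0)
    L = A / np.sqrt(np.outer(d, d))
    lam, U = np.linalg.eigh(L)
    return d, lam, U

def dlam_daij(d, lam, u, i, j):
    """Analytic derivative of an eigenvalue w.r.t. a symmetric bump in a_ij."""
    return (2 * u[i] * u[j] / np.sqrt(d[i] * d[j])
            - lam * (u[i]**2 / d[i] + u[j]**2 / d[j]))

# Hypothetical graph: two tight pairs, bridged by weaker edges (0,2) and (1,3).
A = np.array([[0.0, 1.0, 0.2, 0.0],
              [1.0, 0.0, 0.0, 0.1],
              [0.2, 0.0, 0.0, 0.9],
              [0.0, 0.1, 0.9, 0.0]])
d, lam, U = sym_eigs(A)
k = -2                    # second-largest eigenvalue: a slowly decaying mode
i, j = 0, 2               # a bottleneck edge

analytic = dlam_daij(d, lam[k], U[:, k], i, j)

eps = 1e-6                # symmetric central finite difference
Ap, Am = A.copy(), A.copy()
Ap[i, j] = Ap[j, i] = A[i, j] + eps
Am[i, j] = Am[j, i] = A[i, j] - eps
numeric = (sym_eigs(Ap)[1][k] - sym_eigs(Am)[1][k]) / (2 * eps)
assert abs(analytic - numeric) < 1e-5
```

Since the eigenvector changes sign across the bottleneck, the derivative is negative there: strengthening a bridge edge speeds up mixing and shortens the halflife, which is exactly the signature EIGENCUTS thresholds on.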
The particular eigenmode selected can, of course, vary from one iteration to the next. The selection strategy in step 6 above picks out the mode which produces the largest linearized increment in H(β_k) = log(β_k/β_0). That is, we compute Σ_k = Σ_{(i,j)} |∂H(β_k)/∂a_{ij}| Δa_{ij}, where Δa_{ij} = a_{ij} is the change of affinities for any edge left to be cut, and Δa_{ij} = 0 otherwise. Other techniques for selecting a particular mode were also tried, and they all produced similar results. This iterative cutting process must eventually terminate since, except for the last iteration, edges are cut each iteration and any cut edges are never uncut. When the process does terminate, the selected succession of cuts provides a modified affinity matrix A' which has well separated clusters. For the final clustering result, we can use either a connected components algorithm or the K-means algorithm of [5] with K set to the number of modes having large halflives.

6 Experiments

We compare the quality of EIGENCUTS with two other methods: a K-means based spectral clustering algorithm of [5] and an efficient segmentation algorithm proposed in [1] based on a pairwise region comparison function. Our strategy was to select thresholds that are likely to generate a small number of stable partitions. We then varied these thresholds to test the quality of partitions. To allow for comparison with K-means, we needed to determine the number of clusters K a priori. We therefore set K to be the same as the number of clusters that EIGENCUTS generated. The cluster centers were initialized to be as orthogonal as possible [5]. The first two rows in Fig. 3 show results using EIGENCUTS. A crucial observation with EIGENCUTS is that, although the number of clusters changed slightly with a change in β_0, the regions they defined were qualitatively preserved across the thresholds and corresponded to a naive observer's intuitive segmentation of the image.
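For the final clustering step, a plain connected-components pass over the cut affinity matrix suffices; a minimal sketch (our illustration, assuming cut edges have already been zeroed):

```python
import numpy as np

def connected_components(A, tol=1e-12):
    """Label the connected components of a graph given by affinity matrix A;
    after EIGENCUTS zeroes the bottleneck edges, each component is a cluster."""
    n = len(A)
    labels = np.full(n, -1)
    for seed in range(n):
        if labels[seed] >= 0:
            continue
        c = labels.max() + 1            # next unused component label
        stack = [seed]
        while stack:
            v = stack.pop()
            if labels[v] >= 0:
                continue
            labels[v] = c
            stack.extend(np.nonzero(A[v] > tol)[0].tolist())
    return labels

# Two tight pairs whose weak coupling edge has been cut (set to 0).
A_cut = np.array([[0.0, 1.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0],
                  [0.0, 0.0, 1.0, 0.0]])
labels = connected_components(A_cut)
assert labels[0] == labels[1] and labels[2] == labels[3]
assert labels[0] != labels[2]
```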
Notice in the random images the occluder is found as a cluster clearly separated from the background. The performance on the eye images is also interesting in that the largely uniform regions around the center of the eye remain as part of one cluster. In comparison, both the K-means algorithm and the image segmentation algorithm of [1] (rows 3-6 in Fig. 3) show a tendency to divide uniform regions and give partitions that are neither stable nor intuitive, despite multiple restarts.

7 Discussion

We have demonstrated that the common piecewise constant approximation to eigenvectors arising in spectral clustering problems limits the applicability of previous methods to situations in which the clusters are only relatively weakly coupled. We have proposed a new edge cutting criterion which avoids this piecewise constant approximation. Bottleneck edges between distinct clusters are identified through the observed sensitivity of an eigenflow's halflife to changes in the edges' affinity weights. The basic algorithm we propose is computationally demanding in that the eigenvectors of the Markov matrix must be recomputed after each iteration of edge cutting. However, the point of this algorithm is simply to demonstrate the partitioning that can be achieved through the computation of the sensitivity of eigenflow halflives to changes in edge weights. More efficient updates of the eigenvalue computation, taking advantage of low-rank changes in the matrix L from one iteration to the next, or a multi-scale technique, are important areas for further study.

Figure 3: Each column refers to a different image in the dataset shown in Fig. 1. Pairs of rows correspond to results from applying: EIGENCUTS (rows 1 and 2), K-means spectral clustering where K, the number of clusters, is determined by the results of EIGENCUTS (rows 3 and 4), and Felzenszwalb & Huttenlocher (rows 5 and 6).
Acknowledgements

We have benefited from discussions with Sven Dickinson, Sam Roweis, Sageev Oore and Francisco Estrada.

References
[1] P. Felzenszwalb and D. Huttenlocher. Efficiently computing a good segmentation. International Journal of Computer Vision, 1999.
[2] R. Kannan, S. Vempala and A. Vetta. On clusterings: good, bad and spectral. Proc. 41st Annual Symposium on Foundations of Computer Science, 2000.
[3] J. G. Kemeny and J. L. Snell. Finite Markov Chains. Van Nostrand, New York, 1960.
[4] M. Meila and J. Shi. A random walks view of spectral segmentation. Proc. International Workshop on AI and Statistics, 2001.
[5] A. Ng, M. Jordan and Y. Weiss. On spectral clustering: analysis and an algorithm. NIPS, 2001.
[6] A. Ng, A. Zheng, and M. Jordan. Stable algorithms for link analysis. Proc. 24th Intl. ACM SIGIR Conference, 2001.
[7] A. Ng, A. Zheng, and M. Jordan. Link analysis, eigenvectors and stability. Proc. 17th Intl. IJCAI, 2001.
[8] P. Perona and W. Freeman. A factorization approach to grouping. European Conference on Computer Vision, 1998.
[9] A. Pothen. Graph partitioning algorithms with applications to scientific computing. Parallel Numerical Algorithms, D. E. Keyes et al. (eds.), Kluwer Academic Press, 1996.
[10] G. L. Scott and H. C. Longuet-Higgins. Feature grouping by relocalization of eigenvectors of the proximity matrix. Proc. British Machine Vision Conference, pp. 103-108, 1990.
[11] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000.
[12] N. Tishby and N. Slonim. Data clustering by Markovian relaxation and the information bottleneck method. NIPS, v. 13, MIT Press, 2001.
[13] Y. Weiss. Segmentation using eigenvectors: a unifying view. International Conference on Computer Vision, 1999.

Appendix

We compute the derivative of the log of the half-life β of an eigenvalue λ with respect to an element a_{ij} of the affinity matrix A.
Half-life β is defined as the power to which λ must be raised to reduce the eigenvalue to half, i.e., λ^β = 1/2. What we are interested in is seeing significant changes in those half-lives β which are relatively large compared to some minimum half-life β_0. So eigenvectors with half-lives smaller than β_0 are effectively ignored. It is easy to show that

    ∂ log(β/β_0)/∂a_{ij} = (1/β)(∂β/∂λ)(∂λ/∂a_{ij}) = −[log(1/2) / (β λ log²(λ))] (∂λ/∂a_{ij}),  and  ∂λ/∂a_{ij} = u^T (∂L/∂a_{ij}) u.    (6)

Let u be the corresponding eigenvector such that L u = λ u, where L is the modified affinity matrix (Sec 2). As L = D^{-1/2} A D^{-1/2}, we can write for all i ≠ j:

    ∂L/∂a_{ij} = D^{-1/2} (E_{ij} + E_{ji}) D^{-1/2} − S D^{-1/2} A D^{-1/2} − D^{-1/2} A D^{-1/2} S,    (7)

where E_{ij} is a matrix of all zeros except for a value of 1 at location (i, j); (d_i, d_j) are the degrees of the nodes i and j (stacked as elements on the diagonal matrix D, see Sec 2); and S = (1/(2 d_i)) E_{ii} + (1/(2 d_j)) E_{jj}, having non-zero entries only on the diagonal. Simplifying the expression further, we get

    u^T (∂L/∂a_{ij}) u = u^T D^{-1/2} (E_{ij} + E_{ji}) D^{-1/2} u − u^T S (D^{-1/2} A D^{-1/2}) u − u^T (D^{-1/2} A D^{-1/2}) S u.    (8)

Using the fact that D^{-1/2} A D^{-1/2} u = L u = λ u, and that S is diagonal (hence symmetric), the above equation reduces to:

    u^T (∂L/∂a_{ij}) u = 2 u_i u_j / √(d_i d_j) − 2 λ u^T S u = 2 u_i u_j / √(d_i d_j) − λ (u_i²/d_i + u_j²/d_j).    (9)

The scalar form of this expression is used in Eq. (5).
The RA Scanner: Prediction of Rheumatoid Joint Inflammation Based on Laser Imaging Anton Schwaighofer¹,² ¹ TU Graz, Institute for Theoretical Computer Science, Inffeldgasse 16b, 8010 Graz, Austria http://www.igi.tugraz.at/aschwaig Volker Tresp, Peter Mayer² ² Siemens Corporate Technology, Department of Neural Computation, Otto-Hahn-Ring 6, 81739 Munich, Germany http://www.tresp.org, peter.mayer@mchp.siemens.de Alexander K. Scheel, Gerhard Müller University Göttingen, Department of Medicine, Nephrology and Rheumatology, Robert-Koch-Straße 40, 37075 Göttingen, Germany ascheel@gwdg.de, gmueller@med.uni-goettingen.de

Abstract

We describe the RA scanner, a novel system for the examination of patients suffering from rheumatoid arthritis. The RA scanner is based on a novel laser-based imaging technique which is sensitive to the optical characteristics of finger joint tissue. Based on the laser images, finger joints are classified according to whether the inflammatory status has improved or worsened. To perform the classification task, various linear and kernel-based systems were implemented and their performances were compared. Special emphasis was put on measures to reliably perform parameter tuning and evaluation, since only a very small data set was available. Based on the results presented in this paper, it was concluded that the RA scanner permits a reliable classification of pathological finger joints, thus paving the way for a further development from prototype to product stage.

1 Introduction

Rheumatoid arthritis (RA) is the most common inflammatory arthropathy, with 1-2% of the population being affected. This chronic, mostly progressive disease often leads to early disability and joint deformities. Recent studies have convincingly shown that early treatment and therefore an early diagnosis is mandatory to prevent or at least delay joint destruction [2].
Unfortunately, long-term medication with disease modifying anti-rheumatic drugs (DMARDs) often acts very slowly on clinical parameters of inflammation, making it difficult to find the right drug for a patient within adequate time. Conventional radiology, such as magnetic resonance imaging (MRI) and ultrasound, may provide information on soft tissue changes, yet these techniques are time-consuming and, in the case of MRI, costly. New imaging techniques for RA diagnosis should thus be non-invasive, of low cost, examiner independent and easy to use. Following recent experiments on absorption and scattering coefficients of laser light in joint tissue [6], a prototype laser imaging technique was developed [7]. As part of the prototype development, it became necessary to analyze whether the rheumatic status of a finger joint can be reliably classified on the basis of the laser images. The aim of this article is to provide an overview of this analysis. Employing different linear and kernel-based classifiers, we will investigate the performance of the laser imaging technique to predict the status of the rheumatic joint inflammation. Provided that the accuracy of the overall system is sufficiently high, the imaging technique and the automatic inflammation classification can be combined into a novel device that allows an inexpensive and objective assessment of inflammatory joint changes. The paper is organized as follows. In Sec. 2 we describe the RA scanner in more detail, as well as the process of data acquisition. In Sec. 3 we describe the linear and kernel-based classifiers used in the experiments. In Sec. 4 we describe how the methods were evaluated and compared. We present experimental results in Sec. 5. Conclusions and an outlook are given in Sec. 6.

2 The RA Scanner

The rheumatoid arthritis (RA) scanner provides a new medical imaging technique, developed specifically for the diagnosis of RA in finger joints.
The RA scanner [7] allows the in vivo trans-illumination of finger joints with laser light in the near infrared wavelength range. The scattered light distribution is detected by a camera and is used to assess the inflammatory status of the finger joint. Example images, taken from an inflamed joint and from a healthy control, are shown in Fig. 1. Starting out from the laser images, image pre-processing is used to obtain a description of each laser image by nine numerical features. A brief description of the features is given in Fig. 1. Furthermore, for each finger joint examined, the circumference is measured using a conventional measuring tape. The nine image features plus the joint circumference make up the data that is used in the classification step of the RA scanner to predict the inflammatory status of the joint.

2.1 Data Acquisition

One of the clinically important questions is to know as early as possible whether a prescribed medication improves the state of rheumatoid arthritis. Therefore the goal of the classification step in the RA scanner is to decide, based on features extracted from the laser images, if there was an improvement of arthritis activity or if the joint inflammation remained unchanged or worsened. The data for the development of the RA scanner stems from a study on 22 patients with rheumatoid arthritis. Data from 72 finger joints were used for the study. All of these 72 finger joints were examined at baseline and during a follow-up visit after a mean duration of 42 days. Earlier data from an additional 20 patients had to be discarded since experimental conditions were not controlled properly. Each joint was examined and the clinical arthritis activity was classified from 0 (inactive, not swollen, tender or warm) to 3 (very active) by a rheumatologist. The characteristics of joint tissue were recorded by the above described laser imaging technique. In a preprocessing step nine features were derived from the distribution of the scattered laser light (see Fig. 1). The tenth feature is the circumference of the finger joint.

Figure 1: Two examples of the light distribution captured by the RA scanner: (a) laser image of a healthy finger joint; (b) laser image of an inflamed finger joint. The inflammation changes the joint tissue's absorption coefficient, giving a darker image. A laser beam is sent through the finger joint (the finger tip is to the right, the palm is on the left), and the light distribution below the joint is captured by a CCD element. To calculate the features, first a horizontal line near the vertical center of the finger joint is selected. The distribution of light intensity along that line is bell-shaped. The features used in the classification task are the maximum light intensity, the curvature of the light intensity at the maximum, and seven additional features based on higher moments of the intensity curve.

Since there are high inter-individual variations in optical joint characteristics, it is not possible to tell the inflammatory status of a joint from one single image. Instead, special emphasis was put on the intra-individual comparison of baseline and follow-up data. For every joint examined, data from baseline and follow-up visit were compared and changes in arthritis activity were rated as improvement, unchanged or worsening. This rating divided the data into two classes: class +1 contains the joints where an improvement of arthritis activity was observed (a total of 46 joints), and class −1 contains the joints that remained unchanged or worsened (a total of 26 joints). For all joints, the differences in feature values between baseline and follow-up visit were computed.

3 Classification Methods

In this section, we describe the employed linear and kernel-based classification methods, where we focus on design issues.
3.1 Gaussian Process Classification (GPC)

In Gaussian processes, a function

    f(x) = Σ_{j=1}^M w_j K(x, x_j; Θ)    (1)

is described as a superposition of M kernel functions K(x, x_j; Θ), defined for each of the M training data points x_j, with weights w_j. The kernel functions are parameterized by the vector Θ = (θ_0, ..., θ_d). In two-class Gaussian process classification, the logistic transfer function σ(f(x)) = (1 + e^{−f(x)})^{−1} is applied to the prediction of a Gaussian process to produce an output which can be interpreted as π(x), the probability of the input x belonging to class +1 [10]. In the experiment we chose the Gaussian kernel function

    K(x, x_j; Θ) = θ_0 exp(−(1/2) (x − x_j)^T diag(θ_1², ..., θ_d²)^{−1} (x − x_j))    (2)

with input length scales θ_1, ..., θ_d, where d is the dimension of the input space. diag(θ_1², ..., θ_d²) denotes a diagonal matrix with entries θ_1², ..., θ_d². For training the Gaussian process classifier (that is, determining the posterior probabilities of the parameters w_1, ..., w_M, θ_0, ..., θ_d) we used a full Bayesian approach, implemented with Radford Neal's freely available FBM software.¹

3.2 Gaussian Process Regression (GPR)

In GPR we treat the classification problem as a regression problem with target values {−1, +1}, i.e. we do not apply the logistic transfer function as in the last subsection. Any GP output f(x) < 0 is treated as indicating an example from class −1, any output f(x) ≥ 0 as an indicator for class +1. The disadvantage is that the GPR prediction cannot be treated as a posterior class probability; the advantage is that the fast and non-iterative training algorithms for GPR can be applied. GPR for classification problems can be considered as a special case of Fisher discriminant analysis with kernels [4] and of least squares support vector machines [9]. The parameters Θ = (θ_0, ..., θ_d) of the covariance function Eq. (2) were chosen by maximizing the posterior probability of Θ, P(Θ | t, X) ∝ P(t | X, Θ) P(Θ), via a scaled conjugate gradient method.
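The kernel of Eq. (2) and the logistic link can be written down directly; a minimal sketch (our illustration, not the FBM implementation, with function names of our choosing):

```python
import numpy as np

def ard_kernel(x, xj, theta0, lengths):
    """Gaussian kernel of Eq. (2) with one length scale per input dimension:
    K(x, x_j) = theta0 * exp(-0.5 * (x - x_j)^T diag(lengths^2)^{-1} (x - x_j))."""
    z = (x - xj) / lengths
    return theta0 * np.exp(-0.5 * np.dot(z, z))

def logistic(f):
    """GPC squashes the latent function through sigma(f) = 1 / (1 + e^{-f})."""
    return 1.0 / (1.0 + np.exp(-f))

x = np.array([0.2, 1.0])
xj = np.array([0.2, 1.0])
# Coincident inputs give the full amplitude theta0; a zero latent value
# maps to class probability 0.5.
assert np.isclose(ard_kernel(x, xj, theta0=2.0, lengths=np.array([1.0, 3.0])), 2.0)
assert np.isclose(logistic(0.0), 0.5)
```

Per-dimension length scales let the model effectively switch off uninformative features by inflating the corresponding θ.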
Later on, this method will be referred to as "GPR Bayesian". Results are also given for a simplified covariance function with θ_0 = 1, θ_1 = θ_2 = ... = θ_d = r, where the common length scale r was chosen by cross-validation (later on referred to as "GPR crossval").

3.3 Support Vector Machine (SVM)

The SVM is a maximum margin linear classifier. As in Sec. 3.2, the SVM classifies a pattern according to the sign of f(x) in Eq. (1). The difference is that the weights w = (w_1, ..., w_M)^T in the SVM minimize the particular cost function [8]

    w^T K w + Σ_{i=1}^M C_i (1 − y_i f(x_i))_+,    (3)

where (·)_+ sets all negative arguments to zero. Here, y_i ∈ {+1, −1} is the class label for training point x_i, C_i > 0 is a constant that determines the weight of errors on the training data, and K is an M × M matrix containing the amplitudes of the kernel functions at the training data, i.e. K_{ij} = K(x_i, x_j; Θ). The motivation for this cost function stems from statistical learning theory [8]. Many authors have previously obtained excellent classification results by using the SVM. One particular feature of the SVM is the sparsity of the solution vector w, that is, many elements w_i are zero. In the experiments, we used both an SVM with linear kernel ("SVM linear") and an SVM with a Gaussian kernel ("SVM Gaussian"), equivalent to the Gaussian process kernel Eq. (2), with θ_0 = 1, θ_1 = θ_2 = ... = θ_d = r. The kernel parameter r was chosen by cross-validation.

¹ As a prior distribution for kernel parameter θ_0 we chose a Gamma distribution; θ_1, ..., θ_d are samples of a hierarchical Gamma distribution. In FBM syntax, the prior is 0.05:0.5 x0.2:0.5:1. Sampling from the posterior distribution was done by persistent hybrid Monte Carlo, following the example of a 3-class problem in Neal [5].

To compensate for the unbalanced distribution of classes, the penalty term C_i was chosen to be 0.8 for the examples from the larger class and 1 for the smaller class. This was found empirically to give the best balance of sensitivity and specificity (cf. Sec.
4). A formal treatment of this issue can be found in Lin et al. [3].

3.4 Generalized Linear Model (GLM)

A GLM for binary responses is built up from a linear model for the input data, and the model output f(x) = w^T x is in turn input to the link function. For Bernoulli distributions, the natural link function [1] is the logistic transfer function σ(f(x)) = (1 + e^{-f(x)})^{-1}. The overall output of the GLM, σ(f(x)), computes π(x), the probability of the input x belonging to class 1. Training of the linear model was done by iteratively re-weighted least squares (IRLS).

4 Training and Evaluation

One of the challenges in developing the classification system for the RA scanner is the low number of training examples available. Data was collected through an extensive medical study, but only data from 72 fingers were found to be suitable for further use. Further data can only be acquired in carefully controlled future studies, once the initial prototype method has proven sufficiently successful.

Training
From the currently available 72 training examples, classifiers need to be trained and evaluated reliably. Part of the standard methodology for small data sets is N-fold cross-validation, where the data are partitioned into N equally sized sets and the system is trained on N - 1 of those sets and tested on the one set left out. Since we wish to make use of as much training data as possible, N = 36 seemed the appropriate choice², giving test sets with two examples in each iteration. For some of the methods, model parameters needed to be tuned (for example, choosing the SVM kernel width), where again cross-validation is employed.
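The 36-fold scheme with two test examples per fold can be sketched as follows. The nearest-centroid classifier and the synthetic data below are stand-ins for illustration only, not the methods of Sec. 3:

```python
import numpy as np

def cv_error(X, y, n_folds=36):
    """N-fold cross-validation error: 72 examples and n_folds = 36
    give two test examples per fold (effectively leave-two-out)."""
    folds = np.array_split(np.arange(len(X)), n_folds)
    errors = 0
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(len(X)), test_idx)
        # Stand-in classifier: nearest class centroid on the training folds.
        mu_pos = X[train_idx][y[train_idx] == 1].mean(axis=0)
        mu_neg = X[train_idx][y[train_idx] == -1].mean(axis=0)
        for i in test_idx:
            pred = 1 if np.linalg.norm(X[i] - mu_pos) <= np.linalg.norm(X[i] - mu_neg) else -1
            errors += int(pred != y[i])
    return errors / len(X)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 0.5, size=(36, 4)),
               rng.normal(+1.0, 0.5, size=(36, 4))])   # hypothetical 72-example data set
y = np.array([-1] * 36 + [1] * 36)
err = cv_error(X, y)                                   # 36 folds of 2 examples each
```

For the non-Bayesian methods of the paper, an inner 35-fold loop for parameter tuning would be nested inside each outer fold, as described next.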
The nested cross-validation ensures that in no case any of the test examples is used for training or to tune parameters, leading to the following procedure:

Run 36-fold CV:
  For Bayesian methods or methods without tunable parameters (SVM linear, GPC, GPR Bayesian, GLM):
    use the full training set to tune and train the classifier.
  For non-Bayesian methods (SVM Gaussian, GPR crossval):
    run 35-fold CV on the training set,
    choose parameters to minimise the CV error,
    train the classifier with the chosen parameters.
  Evaluate the classifier on the 2-example test set.

Significance Tests
In order to compare the performance of two given classification methods, one usually employs statistical hypothesis testing. We use here a test that is best suited for small test sets, since it takes into account the outcome on the test examples one by one, thus matching our above described 36-fold cross-validation scheme perfectly. A similar test has been used by Yang and Liu [11] to compare text categorization methods. Basis of the test are two counts: b (how many examples in the test set were correctly classified by method B, but misclassified by method A) and c (the number of examples misclassified by B, but correctly classified by A). We assume that examples misclassified (resp. correctly classified) by both A and B do not contribute to the performance difference. We take the counts b and c as the sufficient statistics of a binomial random variable with parameter θ, where θ is the proportion of cases where method A performs better than method B. The null hypothesis H_0 is that θ = 0.5, that is, both methods A and B have the same performance. Hypothesis H_1 is that θ > 0.5. The test statistic under the null hypothesis is the binomial distribution Bi(i; b + c, θ) with parameter θ = 0.5. We reject the null hypothesis if the probability of observing a count k ≥ c under the null hypothesis, P(k ≥ c) = ∑_{i=c}^{b+c} Bi(i; b + c, θ = 0.5), is sufficiently small.

ROC Curves
In medical diagnosis, biometrics and other areas, the common means of assessing a classification method is the receiver operating characteristic (ROC) curve. An ROC curve plots sensitivity versus 1-specificity³ for different thresholds of the classifier output. Based on the ROC curve it can be decided how many false positives resp. false negatives one is willing to tolerate, thus helping to tune the classifier threshold to best suit a certain application. Acquiring the ROC curve typically requires the classifier output on an independent test set. We instead use the union of all test set outputs in the cross-validation routine. This means that the ROC curve is based on outputs of slightly different models, yet this still seems to be the most suitable solution for such few data. For all classifiers we assess the area under the ROC curve and the cross-validation error rate. Here the above mentioned threshold on the classifier output is chosen such that sensitivity equals specificity.

²Thus, it is equivalent to a leave-one-out scheme, yet with only half the time consumption.

Method                          | Error rate
GLM                             | 20.83%
GLM, reduced feature set        | 16.67%
GPR Bayesian                    | 13.89%
GPR crossval                    | 22.22%
GPC                             | 23.61%
SVM linear                      | 22.22%
SVM linear, reduced feature set | 16.67%
SVM Gaussian                    | 20.83%

Table 1: Error rates of different classification methods on the rheumatoid arthritis prediction problem. All error rates have been computed by 36-fold cross-validation. "Reduced feature set" indicates experiments where a priori feature selection has been done.

5 Results

Tab. 1 lists error rates for all methods listed in Sec. 3. Gaussian process regression (GPR Bayesian) with an error rate of 14% clearly outperforms all other methods, which all achieve comparable error rates in the range of 20-24%.
We attribute the good performance of GPR to its inherent feature relevance detection, which is done by adapting the length scales θ_i in the covariance function Eq. (2), i.e. a large θ_i means that the i-th feature is essentially ignored. Surprisingly, Gaussian process classification implemented with Markov chain Monte Carlo sampling (GPC) showed rather poor performance. We currently have no clear explanation for this fact. We found no indications of convergence problems; furthermore, we achieved similar results with different sampling schemes. In an additional experiment we wanted to find out if classification results could be improved by using only a subset of input features⁴. We found that only the performance of the two linear classifiers (GLM and SVM linear) could be improved by the input feature selection. Both now achieve an error rate of 16.67%, which is slightly worse than GPR on the full feature set (see Tab. 1).

³sensitivity = true positives / (true positives + false negatives); specificity = true negatives / (true negatives + false positives)

[Figure 2 (sensitivity versus 1-specificity, curves for GPR Bayesian; GLM, reduced feature set; SVM linear, reduced feature set): ROC curves of the best classification methods, both on the full data set and on a reduced data set where a priori feature selection was used to retain only the three most relevant features. Integrating the area under the ROC curves gives similar results for all three methods, with an area of 0.86 for SVM linear and GLM, and 0.84 for GPR Bayesian.]

Significance Tests
Using the statistical hypothesis test described in the previous section, we compared all classification methods pairwise. It turned out that the three best methods (GPR Bayesian, and GLM and SVM linear with reduced feature set) perform better than all other methods at a confidence level of 90% or more. Amongst the three best methods, no significant difference could be observed.
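The pairwise test described in Sec. 4 reduces to a one-sided binomial tail probability. A minimal sketch, with hypothetical counts b and c:

```python
from math import comb

def pairwise_test_pvalue(b, c):
    """One-sided p-value P(k >= c) under H0: theta = 0.5, where n = b + c counts
    the examples on which exactly one of the two methods is correct."""
    n = b + c
    return sum(comb(n, i) * 0.5 ** n for i in range(c, n + 1))

# Hypothetical counts: A correct where B fails on 10 examples, the reverse on 2.
p_value = pairwise_test_pvalue(b=2, c=10)
reject_at_90 = p_value < 0.10          # the 90% confidence level used in the paper
```

With b = 2 and c = 10 the tail probability is 79/4096 ≈ 0.019, so the null hypothesis of equal performance would be rejected at the 90% level.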
ROC Curves
For the three best classification methods (GPR Bayesian, and GLM and SVM linear with reduced feature set), we have plotted the receiver operating characteristic (ROC) curve in Fig. 2. According to the ROC curves, a sensitivity of 80% can be achieved at a specificity of around 90%. GPR Bayesian seems to give the best results, both in terms of error rate and shape of the ROC curve.

Summary
To summarize, when the full set of features was used, the best performance was obtained with GPR Bayesian. We attribute this to the inherent input relevance detection mechanism of this approach. Comparable yet slightly worse results could be achieved by performing feature selection a priori and reducing the number of input features to the three most significant ones. In particular, the error rates of the linear classifiers (GLM and linear SVM) improved with this feature selection, whereas the more complex classifiers did not benefit. We can draw the important conclusion that, using the best classifiers, a sensitivity of 80% can be reached at a specificity of approximately 90%.

6 Conclusions

In this paper we have reported results of the analysis of a prototype medical imaging system, the RA scanner. The aim of the RA scanner is to detect soft tissue changes in finger joints, which occur in early stages of rheumatoid arthritis (RA). The basis of the RA scanner is a novel laser imaging technique that is sensitive to inflammatory soft tissue changes. We have analyzed whether the laser images are suitable for an accurate prediction of the inflammatory status of a finger joint, and which classification methods are best suited for this task.

⁴This was done with the input relevance detection algorithm of the neural network tool SENN, a variant of sequential backward elimination where the feature that least affects the neural network output is removed. The feature set was reduced to the three most relevant ones.
Out of a set of linear and kernel-based classification methods, Gaussian process regression performed best, followed closely by generalized linear models and the linear support vector machine, the latter two operating on a reduced feature set. In particular, we have shown how parameter tuning and classifier training can be done on the basis of the scarce available data. For the RA prediction task, we achieved a sensitivity of 80% at a specificity of approximately 90%. These results show that a further development of the RA scanner is desirable. In the present study the inflammatory status is assessed by a rheumatologist, taking into account the patient's subjective degree of pain. Thus we may expect a certain degree of label noise in the data we have trained the classification system on. Further developments of the classification system in the RA scanner will thus incorporate information from established medical imaging systems such as magnetic resonance imaging (MRI). MRI is known to provide accurate information about soft tissue changes in finger joints, yet is too costly to be routinely used for RA diagnosis. By incorporating MRI results into the RA scanner's classification system, we expect to significantly improve the overall accuracy.

Acknowledgments
AS gratefully acknowledges support through an Ernst-von-Siemens scholarship. Thanks go to Radford Neal for making his FBM software available to the public, and to Ian Nabney and Chris Bishop for the Netlab toolbox.

References
[1] Fahrmeir, L. and Tutz, G. Multivariate Statistical Modelling Based on Generalized Linear Models. Springer Verlag, 2nd edn., 2001.
[2] Kim, J. and Weisman, M. When does rheumatoid arthritis begin and why do we need to know? Arthritis and Rheumatism, 43:473-482, 2000.
[3] Lin, Y., Lee, Y., and Wahba, G. Support vector machines for classification in nonstandard situations. Tech. Rep. 1016, Department of Statistics, University of Wisconsin, Madison, WI, USA, 2000.
[4] Mika, S., Rätsch, G., Weston, J., Schölkopf, B., Smola, A. J., and Müller, K.-R. Invariant feature extraction and classification in kernel spaces. In S. A. Solla, T. K. Leen, and K.-R. Müller, eds., Advances in Neural Information Processing Systems 12. MIT Press, 2000.
[5] Neal, R. M. Monte Carlo implementation of Gaussian process models for Bayesian regression and classification. Tech. Rep. 9702, Department of Statistics, University of Toronto, 1997.
[6] Prapavat, V., Runge, W., Krause, A., Beuthan, J., and Müller, G. A. Bestimmung von gewebeoptischen Eigenschaften eines Gelenksystems im Frühstadium der rheumatoiden Arthritis (in vitro) [Determination of tissue-optical properties of a joint system in the early stage of rheumatoid arthritis (in vitro)]. Minimal Invasive Medizin, 8:7-16, 1997.
[7] Scheel, A. K., Krause, A., Mesecke-von Rheinbaben, I., Metzger, G., Rost, H., Tresp, V., Mayer, P., Reuss-Borst, M., and Müller, G. A. Assessment of proximal finger joint inflammation in patients with rheumatoid arthritis, using a novel laser-based imaging technique. Arthritis and Rheumatism, 46(5):1177-1184, 2002.
[8] Schölkopf, B. and Smola, A. J. Learning with Kernels. MIT Press, 2002.
[9] Van Gestel, T., Suykens, J. A., Lanckriet, G., Lambrechts, A., De Moor, B., and Vandewalle, J. Bayesian framework for least-squares support vector machine classifiers, Gaussian processes and kernel Fisher discriminant analysis. Neural Computation, 14(5):1115-1147, 2002.
[10] Williams, C. K. and Barber, D. Bayesian classification with Gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1342-1351, 1998.
[11] Yang, Y. and Liu, X. A re-examination of text categorization methods. In Proceedings of ACM SIGIR 1999. ACM Press, 1999.
Optimality of Reinforcement Learning Algorithms with Linear Function Approximation

Ralf Schoknecht
ILKD, University of Karlsruhe, Germany
ralf.schoknecht@ilkd.uni-karlsruhe.de

Abstract
There are several reinforcement learning algorithms that yield approximate solutions for the problem of policy evaluation when the value function is represented with a linear function approximator. In this paper we show that each of the solutions is optimal with respect to a specific objective function. Moreover, we characterise the different solutions as images of the optimal exact value function under different projection operations. The results presented here will be useful for comparing the algorithms in terms of the error they achieve relative to the error of the optimal approximate solution.

1 Introduction
In large domains the determination of an optimal value function via a tabular representation is no longer feasible with respect to time and memory considerations. Therefore, reinforcement learning (RL) algorithms are combined with linear function approximation schemes. However, the different RL algorithms, which all achieve the same optimal solution in the tabular case, converge to different solutions when combined with function approximation. Up to now it is not clear which of the solutions, i.e. which of the algorithms, should be preferred. One reason is that a characterisation of the different solutions in terms of the objective functions they optimise is partly missing. In this paper we state objective functions for the TD(0) algorithm [9], the LSTD algorithm [4, 3] and the residual gradient algorithm [1] applied to the problem of policy evaluation, i.e. the determination of the value function for a fixed policy. Moreover, we characterise the different solutions as images of the optimal exact value function under different projection operations.
We think that an analysis of the different optimisation criteria and the projection operations will be useful for determining the errors that the different algorithms achieve relative to the error of the theoretically optimal approximate solution. This will yield a criterion for selecting an optimal RL algorithm. For the TD(0) algorithm such error bounds with respect to a specific norm are already known [2, 10], but for the other algorithms there are no comparable results.

2 Exact Policy Evaluation

For a Markov decision process (MDP) with finite state space S (|S| = N), action space A, state transition probabilities p : (S, S, A) → [0, 1] and stochastic reward function r : (S, A) → R, policy evaluation is concerned with solving the Bellman equation

V^μ = γ P^μ V^μ + R^μ    (1)

for a fixed policy μ : S → A. V_i^μ denotes the value of state s_i, P_{ij}^μ = p(s_i, s_j, μ(s_i)), R_i^μ = E{r(s_i, μ(s_i))} and γ is the discount factor. As the policy μ is fixed we will omit it in the following to make notation easier. The fixed point V* of equation (1) can be determined iteratively with an operator T : R^N → R^N by

T V_n = V_{n+1} = γ P V_n + R.    (2)

This iteration converges to a unique fixed point [2], which is given by

V* = (I - γP)^{-1} R,    (3)

where (I - γP) is invertible for every stochastic matrix P.

3 Approximate Policy Evaluation

If the state space S gets too large, the exact solution of equation (1) becomes very costly with respect to both memory and computation time. Therefore, linear feature-based function approximation is often applied. The value function V is represented as a linear combination of basis functions H := {Φ_1, ..., Φ_F}, which can be written as V = Φw, where w ∈ R^F is the parameter vector describing the linear combination and Φ = (Φ_1 | ... | Φ_F) ∈ R^{N×F} is the matrix with the basis functions as columns. The rows of Φ are the feature vectors φ(s_i) ∈ R^F for the states s_i.
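The exact solution (3) and the iteration (2) can be illustrated with a small numerical sketch; the 3-state chain and rewards below are hypothetical:

```python
import numpy as np

gamma = 0.9
# A small 3-state chain under the fixed policy (hypothetical numbers).
P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.2, 0.0, 0.8]])
R = np.array([1.0, 0.0, 2.0])

# Direct solution of Eq. (3): V* = (I - gamma*P)^{-1} R.
V_star = np.linalg.solve(np.eye(3) - gamma * P, R)

# The operator T of Eq. (2) converges to the same fixed point.
V = np.zeros(3)
for _ in range(500):
    V = gamma * P @ V + R
```

After a few hundred applications of T the iterate agrees with the direct solve to machine precision, since the contraction factor per step is γ < 1.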
3.1 The Optimal Approximate Solution

If the transition probability matrix P were known, then the optimal exact solution V* = (I - γP)^{-1} R could be computed directly. The optimal approximation to this solution is obtained by minimising ||Φw - V*|| with respect to w. Therefore, a notion of norm must exist. Generally, a symmetric positive definite matrix D can be used to define a norm according to ||x||_D = √((x, x)_D) with the scalar product (x, y)_D = x^T D y. The optimal solution that can be achieved with the linear function approximator Φw is then the orthogonal projection of V* onto [Φ], i.e. the span of the columns of Φ. Let Φ have full column rank. Then the orthogonal projection onto [Φ] according to the norm ||·||_D is defined as Π_D = Φ(Φ^T D Φ)^{-1} Φ^T D. We denote the optimal approximate solution by V_D^{SL} = Π_D V*. The corresponding parameter vector w_D^{SL} with V_D^{SL} = Φ w_D^{SL} is then given by

w_D^{SL} = (Φ^T D Φ)^{-1} Φ^T D V* = (Φ^T D Φ)^{-1} Φ^T D (I - γP)^{-1} R.    (4)

Here, SL stands for supervised learning, because w_D^{SL} minimises the weighted quadratic error

min_{w ∈ R^F} ½ ||Φw - V*||_D² = ½ (Φ w_D^{SL} - V*)^T D (Φ w_D^{SL} - V*) = ½ ||V_D^{SL} - V*||_D²    (5)

for a given D and V*, which is the objective of a supervised learning method. Note that V* equals the expected discounted accumulated reward along a sampled trajectory under the fixed policy μ, i.e. V*(s_0) = E[∑_{t=0}^{∞} γ^t r(s_t, μ(s_t))] for every s_0 ∈ S. These are exactly the samples obtained by the TD(1) algorithm [9]. Thus, the TD(1) solution is equivalent to the optimal approximate solution.

3.2 The Iterative TD Algorithm

In the approximate case the Bellman equation (1) becomes

Φw = γPΦw + R.    (6)

A popular algorithm for updating the parameter vector w after a single transition x_i → z_i with reward r_i is the stochastic sampling-based TD(0) algorithm [9]

w_{n+1} = w_n + α φ(x_i)[r_i + γ φ(z_i)^T w_n - φ(x_i)^T w_n] = (I_F + α A_i) w_n + α b_i,    (7)

where α is the learning rate, A_i = φ(x_i)[γ φ(z_i) - φ(x_i)]^T, b_i = φ(x_i) r_i and I_F is the F × F identity matrix.
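A minimal sketch of the stochastic TD(0) update (7), using a hypothetical 2-state chain with tabular features (Φ = I) so that the algorithm must recover the exact value function; the step-size schedule and iteration count are illustrative:

```python
import numpy as np

gamma = 0.9
Phi = np.eye(2)                        # rows are the feature vectors phi(s_i)
P = np.array([[0.2, 0.8],
              [0.7, 0.3]])
R = np.array([1.0, 0.0])
V_star = np.linalg.solve(np.eye(2) - gamma * P, R)

rng = np.random.default_rng(0)
w = np.zeros(2)
s = 0
for t in range(100_000):
    z = rng.choice(2, p=P[s])                        # sample transition x -> z
    alpha = 0.5 / (1.0 + t / 500.0)                  # decaying rate satisfying Eq. (8)
    delta = R[s] + gamma * Phi[z] @ w - Phi[s] @ w   # temporal-difference error
    w = w + alpha * Phi[s] * delta                   # stochastic TD(0) update, Eq. (7)
    s = z
```

In the tabular case the states are sampled from the chain's own steady-state distribution, so by the convergence result discussed below w approaches V*.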
Let p be a probability distribution on the state space S. Furthermore, let x_i be sampled according to p, z_i be sampled according to P(x_i, ·) and r_i be sampled according to r(x_i). We will use E_p[·] to denote the expectation with respect to the distribution p. Let A^{TD}_{D_p} = E_p[A_i] and b^{TD}_{D_p} = E_p[b_i]. If the learning rate decays according to

∑_t α_t = ∞,  ∑_t α_t² < ∞,    (8)

then, in the average sense, the stochastic TD(0) algorithm (7) behaves like the deterministic iteration

w_{n+1} = (I_F + α A^{TD}_{D_p}) w_n + α b^{TD}_{D_p}    (9)

with

A^{TD}_{D_p} = -Φ^T D_p (I - γP) Φ,  b^{TD}_{D_p} = Φ^T D_p R,    (10)

where D_p = diag(p) is the diagonal matrix with the elements of p on its diagonal and R is the vector of expected rewards [2] (Lemma 6.5, Lemma 6.7). In particular, the stochastic TD(0) algorithm converges if and only if the deterministic algorithm (9) converges. Furthermore, if both algorithms converge, they converge to the same fixed point. An iteration of the form (9) converges if all eigenvalues of the matrix I + α A^{TD}_{D_p} lie within the unit circle [5]. For a matrix A^{TD}_{D_p} that has only eigenvalues with negative real part and a learning rate α_t that decays according to (8), there is a t* such that the eigenvalues of I + α_t A^{TD}_{D_p} lie inside the unit circle for all t > t*. Hence, for a decaying learning rate the deterministic TD(0) algorithm converges if all eigenvalues of A^{TD}_{D_p} have a negative real part. Since this requirement is not always fulfilled, the TD algorithm possibly diverges, as shown in [1]. This divergence is due to the positive eigenvalues of A^{TD}_{D_p} [8]. However, under special assumptions convergence of the TD(0) algorithm can be shown [2]. Let the feature matrix Φ ∈ R^{N×F} have full rank, where F ≤ N (i.e. there are not more parameters than states). This results in no loss of generality because the linearly dependent columns of Φ can be eliminated without changing the power of the approximation architecture. The most important assumption concerns the sampling of the states, which is reflected in the matrix D.
Let the Markov chain be aperiodic and recurrent. Besides the aperiodicity requirement, this assumption results in no loss of generality because transient states can be eliminated. Then a steady-state distribution π of the Markov chain exists. When sampling the states according to this steady-state distribution, i.e. D = D_π = diag(π), it can be shown that A^{TD}_{D_π} is negative definite [2] (Lemma 6.6). This immediately yields that all eigenvalues are negative, which in turn yields convergence of the TD(0) algorithm with decaying learning rate. In the next section we will characterise the limit value V^{TD}_{D_π} as the projection of V* in a more general setting. However, for the sampling distribution π there is another interesting interpretation of V^{TD}_{D_π} as the fixed point of Π_{D_π} T, where Π_{D_π} is the orthogonal projection with respect to D_π onto [Φ], as defined in section 3.1, and T is the update operator defined in (2) [2, 10]. In the following we use this fact to deduce a new formula for V^{TD}_{D_π} that has a form similar to V* in (3). Before we proceed, we need the following lemma.

Lemma 1  The matrix I - γ Π_{D_π} P is regular.

Proof: The matrix I - γ Π_{D_π} P is regular if and only if it does not have eigenvalue zero. An equivalent condition is that one is not an eigenvalue of γ Π_{D_π} P. Therefore, it is sufficient to show that the spectral radius satisfies ρ(γ Π_{D_π} P) < 1. For any matrix norm ||·|| it holds that ρ(A) ≤ ||A|| [5]. Therefore, we know that ρ(γ Π_{D_π} P) ≤ ||γ Π_{D_π} P||_{D_π}, where the vector norm ||·||_{D_π} induces the matrix norm ||·||_{D_π} by the standard definition ||A||_{D_π} = sup_{||x||_{D_π} = 1} {||Ax||_{D_π}}. With this definition and with the fact that ||Px||_{D_π} ≤ ||x||_{D_π} for all x [2] (Lemma 6.4), we obtain ||P||_{D_π} = sup_{||x||_{D_π} = 1} {||Px||_{D_π}} ≤ sup_{||x||_{D_π} = 1} {||x||_{D_π}} = 1.
Moreover, we have ||Π_{D_π}||_{D_π} = sup_{||x||_{D_π} = 1} {||Π_{D_π} x||_{D_π}} ≤ sup_{||x||_{D_π} = 1} {||x||_{D_π}} = 1, where we used the well known fact that an orthogonal projection Π_{D_π} is a non-expansion with respect to the vector norm ||·||_{D_π}. Putting it all together, we obtain ρ(γ Π_{D_π} P) ≤ ||γ Π_{D_π} P||_{D_π} ≤ γ ||Π_{D_π}||_{D_π} · ||P||_{D_π} ≤ γ < 1. □

We can now solve the fixed point equation V^{TD}_{D_π} = Π_{D_π} T V^{TD}_{D_π} and obtain

V^{TD}_{D_π} = (I - γ P̃)^{-1} R̃    (11)

with P̃ = Π_{D_π} P and R̃ = Π_{D_π} R. This resembles equation (3) for the exact solution of the policy evaluation problem. The TD(0) solution with sampling distribution π can thus be interpreted as the exact solution of the "projected" policy evaluation problem with P̃ and R̃. Note that, compared to the TD(1) solution of the approximate policy evaluation problem V^{SL}_{D_π} = Π_{D_π} (I - γP)^{-1} R with weighting matrix D_π, equation (11) only differs in the position of the projection operator. This leads to an interesting comparison of TD(0) and TD(1): while TD(0) yields the exact solution of the projected problem, TD(1) yields the projected solution of the exact problem.

3.3 The Least-Squares TD Algorithm

Besides the iterative solution of (6), a direct solution by matrix inversion is often computed using equation (9) in the fixed point form A^{TD}_{D_p} w^{TD}_{D_p} + b^{TD}_{D_p} = 0. This approach is known as least-squares TD (LSTD) [4, 3]. It is only required that A^{TD}_{D_p} be invertible, i.e. that its eigenvalues be unequal to zero. In contrast to the iterative TD algorithm, the eigenvalues need not have negative real parts. Therefore, LSTD offers the possibility of using sampling distributions p other than the steady-state distribution π [6, 7]. Thus, parts of the state space that would be rarely visited under the steady-state distribution can now be visited more frequently, which makes the approximation of the value function more reliable. This is necessary if the result of policy evaluation is to be used in a policy improvement step, because otherwise the action choice in rarely visited states may be bad [6].
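The interpretation of the TD(0) fixed point as the exact solution of the projected problem, Eq. (11), can be checked numerically. A sketch with a hypothetical 3-state chain and 2-dimensional features:

```python
import numpy as np

gamma = 0.9
P = np.array([[0.2, 0.8, 0.0],
              [0.0, 0.3, 0.7],
              [0.5, 0.0, 0.5]])
R = np.array([1.0, 0.0, 2.0])
Phi = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0]])                      # 3 states, 2 features, full column rank

# Steady-state distribution pi: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
v = np.real(evecs[:, np.argmax(np.real(evals))])
pi = v / v.sum()
D = np.diag(pi)

# TD(0) fixed point of Eqs. (9), (10): solve A w + b = 0.
A = -Phi.T @ D @ (np.eye(3) - gamma * P) @ Phi
b = Phi.T @ D @ R
w_td = np.linalg.solve(-A, b)

# Eq. (11): the same value function solves the projected problem.
Proj = Phi @ np.linalg.solve(Phi.T @ D @ Phi, Phi.T @ D)   # projection w.r.t. D_pi
V_td = np.linalg.solve(np.eye(3) - gamma * Proj @ P, Proj @ R)
```

Both routes, the fixed point Φ w_td and the "projected Bellman equation" solve, produce the same value function, as the algebra in this section predicts.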
For the following, let the feature matrix have full column rank. As described above, this results in no loss of generality. LSTD allows sampling the states with an arbitrary sampling distribution p. If there are states s that are not visited under p, i.e. p(s) = 0, then these states can be eliminated from the Markov chain. Hence, without loss of generality we assume that the matrix D_p = diag(p) is invertible. These conditions ensure the invertibility of A^{TD}_{D_p}, and according to [4, 3] the LSTD solution is given by

w^{TD}_{D_p} = -(A^{TD}_{D_p})^{-1} b^{TD}_{D_p}.    (12)

Note that the matrix A^{TD}_{D_p} and the vector b^{TD}_{D_p} can be computed from samples, such that the model P does not need to be known. Note also that in general w^{TD}_{D_p} ≠ w^{SL}_{D_p}, as discussed in [3]. This means that the TD(0) solution w^{TD}_{D_p} and the TD(1) solution w^{SL}_{D_p} may differ when function approximation is used. Depending on the sampling distribution p, the LSTD approach may be the only way of computing the fixed point of (9), because the corresponding iterative TD(0) algorithm may diverge due to positive eigenvalues. However, if the TD(0) algorithm converges, the limit coincides with the LSTD solution w^{TD}_{D_p}. For the value function V^{TD}_{D_p} achieved by the LSTD algorithm the following holds:

V^{TD}_{D_p} = Φ w^{TD}_{D_p} = Φ (-A^{TD}_{D_p})^{-1} b^{TD}_{D_p} = Φ [(-A^{TD}_{D_p})^T (-A^{TD}_{D_p})]^{-1} (-A^{TD}_{D_p})^T b^{TD}_{D_p} = Π_{D^{TD}_p} V*,    (13)

where the last step uses (3) and (10), and we define D^{TD}_p = (I - γP)^T D_p Φ Φ^T D_p (I - γP). As Φ Φ^T is singular in general, the matrix D^{TD}_p is symmetric and positive semi-definite. Hence, it defines a semi-norm ||·||_{D^{TD}_p}. Thus, the LSTD solution is obtained by projecting V* onto [Φ] with respect to ||·||_{D^{TD}_p}. After having deduced this new relation between the optimal solution V* and V^{TD}_{D_p}, we can characterise w^{TD}_{D_p} as minimising the corresponding quadratic objective function

min_{w ∈ R^F} ½ ||Φw - V*||²_{D^{TD}_p} = ½ (Φ w^{TD}_{D_p} - V*)^T D^{TD}_p (Φ w^{TD}_{D_p} - V*) = ½ ||V^{TD}_{D_p} - V*||²_{D^{TD}_p}.    (14)

It can be shown that the value of the objective function for the LSTD solution is zero, i.e. ||V^{TD}_{D_p} - V*||²_{D^{TD}_p} = 0. With equation (14) we have shown that the LSTD solution minimises a certain error metric. The form of this error metric is similar to (5); the only difference lies in the norm that is used. This unifies the characterisation of the solutions that are achieved by the different algorithms.

3.4 The Residual Gradient Algorithm

There is a third approach to solving equation (6). The residual gradient algorithm [1] directly minimises the weighted Bellman error

½ ||(I - γP) Φw - R||²_{D_p}    (15)

by gradient descent. The resulting update rule of the deterministic algorithm has a form similar to (9),

w_{n+1} = (I_F + α A^{RG}_{D_p}) w_n + α b^{RG}_{D_p},    (16)

with

A^{RG}_{D_p} = -Φ^T (I - γP)^T D_p (I - γP) Φ,  b^{RG}_{D_p} = Φ^T (I - γP)^T D_p R,    (17)

where D_p is again the diagonal matrix with the visitation probabilities p_i on its diagonal. As all entries on the diagonal are nonnegative, D_p can be decomposed into √D_p^T √D_p. Hence, we can write A^{RG}_{D_p} = -(√D_p (I - γP) Φ)^T √D_p (I - γP) Φ. Therefore, A^{RG}_{D_p} is negative semidefinite. If Φ has full column rank and D_p is regular, i.e. the visitation probability for every state is positive, then A^{RG}_{D_p} is negative definite. Therefore, all eigenvalues of A^{RG}_{D_p} are negative, which yields convergence of the residual gradient algorithm (16) for a decaying learning rate, independently of the weighting D_p, the function approximator Φ and the transition probabilities P. The equivalence of the limit value of the deterministic and the stochastic version of the residual gradient algorithm can be proven with an argument similar to that in [2] for the equivalence of the deterministic and the stochastic version of the TD(0) algorithm in equations (7) and (9), respectively. Note also that the matrix A^{RG}_{D_p} and the vector b^{RG}_{D_p} can be computed from samples, so that the model P does not need to be known for the deterministic residual gradient algorithm.
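A sketch of the sample-based LSTD estimate of Eq. (12), assuming a hypothetical 2-state chain and a uniform sampling distribution p; the Monte Carlo estimates of A and b are compared against the model-based fixed point of Eq. (10):

```python
import numpy as np

gamma = 0.9
P = np.array([[0.2, 0.8],
              [0.6, 0.4]])
R = np.array([1.0, -1.0])
Phi = np.array([[1.0, 0.5],
                [0.2, 1.0]])                      # full column rank
p = np.array([0.5, 0.5])                          # arbitrary sampling distribution

rng = np.random.default_rng(0)
M = 500_000
xs = rng.choice(2, size=M, p=p)                   # x_i ~ p
zs = (rng.random(M) < P[xs, 1]).astype(int)       # z_i ~ P(x_i, .)

# Sample estimates of A and b of Eq. (10); the model P is not needed here.
A_hat = np.einsum("ni,nj->ij", Phi[xs], gamma * Phi[zs] - Phi[xs]) / M
b_hat = (Phi[xs] * R[xs, None]).mean(axis=0)
w_lstd = np.linalg.solve(-A_hat, b_hat)           # LSTD solution, Eq. (12)

# Model-based fixed point for comparison.
D = np.diag(p)
A = -Phi.T @ D @ (np.eye(2) - gamma * P) @ Phi
w_exact = np.linalg.solve(-A, Phi.T @ D @ R)
```

With enough samples the estimates Â and b̂ converge to their expectations, so the sample-based LSTD weight vector approaches the model-based fixed point.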
If A^{RG}_{D_p} is invertible, a unique limit of the iteration (16) exists. It can be directly computed via the fixed point form, which yields the new identity

w^{RG}_{D_p} = (-A^{RG}_{D_p})^{-1} b^{RG}_{D_p} = (Φ^T (I - γP)^T D_p (I - γP) Φ)^{-1} Φ^T (I - γP)^T D_p R.    (18)

This solution of the residual gradient algorithm is related to the optimal solution (4) of the approximate Bellman equation (6) as described in the following lemma.

Lemma 2  The solution w^{RG}_{D_p} of the residual gradient algorithm with weighting matrix D_p is equivalent to the optimal supervised learning solution w^{SL}_{D^{RG}_p} of the approximate Bellman equation (6) with weighting matrix D^{RG}_p = (I - γP)^T D_p (I - γP).

Proof:
w^{RG}_{D_p} = (Φ^T (I - γP)^T D_p (I - γP) Φ)^{-1} Φ^T (I - γP)^T D_p R
            = (Φ^T D^{RG}_p Φ)^{-1} Φ^T (I - γP)^T D_p (I - γP) (I - γP)^{-1} R
            = (Φ^T D^{RG}_p Φ)^{-1} Φ^T D^{RG}_p V* = w^{SL}_{D^{RG}_p},
where we used the fact that V* = (I - γP)^{-1} R. □

Therefore, w^{RG}_{D_p} can be interpreted as the orthogonal projection of the optimal solution V* onto [Φ] with respect to the scalar product defined by D^{RG}_p. This yields a new equivalent formula for the Bellman error (15):

½ ||(I - γP) Φw - R||²_{D_p} = ½ ((I - γP) Φw - R)^T D_p ((I - γP) Φw - R)
  = ½ (Φw - V*)^T (I - γP)^T D_p (I - γP) (Φw - V*) = ½ ||Φw - V*||²_{D^{RG}_p}.    (19)

The Bellman error is the objective function that is minimised by the residual gradient algorithm. As we have just shown, this objective function can be expressed in a form similar to (5), where the only difference lies in the norm that is used. Thus, we have shown that the solution of the residual gradient algorithm can also be characterised in the general framework of quadratic error metrics ||Φw - V*||_D. As a direct consequence, we can represent the solution as an orthogonal projection V^{RG}_{D_p} = Φ w^{RG}_{D_p} = Π_{D^{RG}_p} V*. According to section 3.2, an iteration of the form (16) generally converges for matrices A with eigenvalues that have negative real parts. However, the fact that A^{RG}_{D_p} is symmetric assures convergence even for singular A^{RG}_{D_p} [8] (Proposition 1).
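Lemma 2 can be verified numerically. The sketch below (with hypothetical P, R, Φ and D_p) computes the residual gradient solution (18) and the supervised-learning projection under the modified weighting D^{RG}_p; they agree up to numerical precision:

```python
import numpy as np

gamma = 0.9
P = np.array([[0.1, 0.9, 0.0],
              [0.4, 0.2, 0.4],
              [0.3, 0.3, 0.4]])
R = np.array([0.0, 1.0, -1.0])
Phi = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [0.0, 1.0]])
D = np.diag([0.2, 0.5, 0.3])                      # any regular weighting D_p
B = np.eye(3) - gamma * P                         # shorthand for (I - gamma*P)

# Residual gradient solution, Eq. (18).
w_rg = np.linalg.solve(Phi.T @ B.T @ D @ B @ Phi, Phi.T @ B.T @ D @ R)

# Lemma 2: the same w is the supervised-learning projection of V*
# under the modified weighting D_rg = (I - gamma*P)^T D_p (I - gamma*P).
V_star = np.linalg.solve(B, R)
D_rg = B.T @ D @ B
w_sl = np.linalg.solve(Phi.T @ D_rg @ Phi, Phi.T @ D_rg @ V_star)
```

The two linear systems are algebraically identical once V* = B^{-1}R is substituted, which is exactly the step used in the proof of Lemma 2.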
Thus, p Table 1: Overview over the solutions of different RL algorithms. The supervised learning (SL) approach, the TD(O) algorithm, the LSTD algorithm and the residual gradient (RG) algorithm are analysed in terms of the conditions of solvability. Moreover, we summarise the optimisation criteria that the different algorithms minimise and characterise the different solutions in terms of the projection of the optimal solution V* onto [<1>]. If the visitation distribution is arbitrary, we write 'r:/p. SL TD LSTD RG solvability: condition for Ai Re(Ai) < 0 Ai :;i 0 Re(Ai) ::::: 0 condition for p 'r:/p p=7f p(s) :;i 0 'r:/p optimisation criterion eq. (5) eq. (14) eq. (14) eq. (19) characterisation as projection IIDp V* IID;D V* IIDTD V* p IIDRG V* p the residual gradient algorithm (16) converges for any matrix A15G that is of the p form (17) and in case A15G is regular the limit is given by (18). Note that a matrix p <I> which does not have full column rank leads to ambiguous solutions w15G that p depend on the initial value wo. However, the corresponding V j}G = <l>w15G are the p p same. For singular Dp the matrix D:G = (I - ,P)T Dp(J - IP) is also singular. Thus, the limit Vj}G may not be unique but may depend itself on the initial value p wo. The reason is that there may be a whole subspace of [<I>] with dimension larger than zero that minimises IIVj}G - V*IIDRG because II·IIDRG is now only a semi-norm. p p p But for all minimising Vj}G the Bellman error is the same, i.e. with respect to the p Bellman error all the solutions Vj}G are equivalent [8] (Proposition 1). p 3.5 Synopsis of the Different Solutions In Table 1 we give a brief overview of the solutions that the different RL algorithms yield. An SL solution can be computed for arbitrary weighting matrices D p induced by a sampling distribution p. 
For the three RL algorithms (TD, LSTD, RG), solvability conditions can be formulated either in terms of the eigenvalues of the iteration matrix $A$ or in terms of the sampling distribution $\rho$. The iterative TD(0) algorithm has the most restrictive conditions for solvability, both for the eigenvalues of the iteration matrix $A$, whose real parts must be smaller than zero, and for the sampling distribution $\rho$, which must equal the steady-state distribution $\pi$. The LSTD method only requires invertibility of $A^{TD}_\rho$. This is satisfied if $\Phi$ has full column rank and if the visitation distribution $\rho$ samples every state $s$ infinitely often, i.e. $\rho(s) \neq 0$ for all $s \in S$. In contrast, the residual gradient algorithm converges independently of $\rho$ and the concrete $A^{RG}_\rho$, because all these matrices have eigenvalues with nonpositive real parts. All solutions can be characterised as minimising a quadratic optimisation criterion $\|\Phi w - V^*\|_D$ with a corresponding matrix $D$. The SL solution optimises the weighted quadratic error (5), RG optimises the weighted Bellman error (19), and both TD and LSTD optimise the quadratic function (14) with weighting matrices $D^{TD}_\pi$ and $D^{TD}_\rho$, respectively. Under the assumption of regular $D_\rho$, i.e. $\rho(s) \neq 0$ for all $s \in S$, the solutions $V$ can be characterised as images of the optimal solution $V^*$ under different orthogonal projections (optimal, RG) and projections that minimise a semi-norm (TD, LSTD). For singular $D_\rho$ see the remarks on ambiguous solutions in section 3.4. Let us finally discuss the case of a quasi-tabular representation of the value function, which is obtained for regular $\Phi$ when all states are visited infinitely often, i.e. when $D_\rho$ is regular. Due to the invertibility of $\Phi$ we have $[\Phi] = \mathbb{R}^N$. Thus, the optimal solution $V^*$ is exactly representable because $V^* \in [\Phi]$. Moreover, every projection operator $\Pi : \mathbb{R}^N \to [\Phi]$ reduces to the identity.
Therefore, all the projection operators for the different algorithms are equivalent to the identity. Hence, with a quasi-tabular representation all the algorithms converge to the optimal solution $V^*$.

4 Conclusions

We have presented an analysis of the solutions that are achieved by different reinforcement learning algorithms combined with linear function approximation. The solutions of all the examined algorithms, TD(0), LSTD and the residual gradient algorithm, can be characterised as minimising different corresponding quadratic objective functions. As a consequence, each of the value functions that one of the above algorithms converges to can be interpreted as the image of the optimal exact value function under a corresponding orthogonal projection. In this general framework we have given the first characterisation of the approximate TD(0) solution in terms of the minimisation of a quadratic objective function. This approach allows us to view the TD(0) solution as the exact solution of a projected learning problem. Moreover, we have shown that the residual gradient solution and the optimal approximate solution differ only in the weighting of the error between the exact and the approximate solution. In future research we intend to use the results presented here to determine the errors of the different solutions relative to the optimal approximate solution with respect to a given norm. This will yield a criterion for selecting reinforcement learning algorithms that achieve optimal solution quality.

References

[1] L. C. Baird. Residual algorithms: Reinforcement learning with function approximation. In Proc. of the Twelfth International Conference on Machine Learning, 1995.
[2] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, Massachusetts, 1996.
[3] J. A. Boyan. Least-squares temporal difference learning. In Proceedings of the Sixteenth International Conference on Machine Learning, pages 49–56, 1999.
[4] S. J. Bradtke and A. G. Barto.
Linear least-squares algorithms for temporal difference learning. Machine Learning, 22:33–57, 1996.
[5] A. Greenbaum. Iterative Methods for Solving Linear Systems. SIAM, 1997.
[6] D. Koller and R. Parr. Policy iteration for factored MDPs. In Proc. of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI), pages 326–334, 2000.
[7] M. G. Lagoudakis and R. Parr. Model-free least-squares policy iteration. In Advances in Neural Information Processing Systems, volume 14, 2002.
[8] R. Schoknecht and A. Merke. Convergent combinations of reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, volume 15, 2003.
[9] R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3:9–44, 1988.
[10] J. N. Tsitsiklis and B. Van Roy. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 1997.
Evidence Optimization Techniques for Estimating Stimulus-Response Functions

Maneesh Sahani, Gatsby Unit, UCL, 17 Queen Sq., London, WC1N 3AR, UK. maneesh@gatsby.ucl.ac.uk
Jennifer F. Linden, Keck Center, UCSF, San Francisco, CA 94143-0732, USA. linden@phy.ucsf.edu

Abstract

An essential step in understanding the function of sensory nervous systems is to characterize as accurately as possible the stimulus-response function (SRF) of the neurons that relay and process sensory information. One increasingly common experimental approach is to present a rapidly varying complex stimulus to the animal while recording the responses of one or more neurons, and then to directly estimate a functional transformation of the input that accounts for the neuronal firing. The estimation techniques usually employed, such as Wiener filtering or other correlation-based estimation of the Wiener or Volterra kernels, are equivalent to maximum likelihood estimation in a Gaussian-output-noise regression model. We explore the use of Bayesian evidence-optimization techniques to condition these estimates. We show that by learning hyperparameters that control the smoothness and sparsity of the transfer function it is possible to improve dramatically the quality of SRF estimates, as measured by their success in predicting responses to novel input.

1 Introduction

A common experimental approach to the measurement of the stimulus-response function (SRF) of sensory neurons, particularly in the visual and auditory modalities, is "reverse correlation" and its related non-linear extensions [1]. The neural response $r(t)$ to a continuous, rapidly varying stimulus $s(t)$ is measured and used in an attempt to reconstruct the functional mapping from $s(t)$ to $r(t)$. In the simplest case, the functional is taken to be a finite impulse response (FIR) linear filter; if the input is white, the filter is identified by the spike-triggered average of the stimulus, and otherwise by the Wiener filter.
Such linear filter estimates are often called STRFs, for spatio-temporal (in the visual case) or spectro-temporal (in the auditory case) receptive fields. More generally, the SRF may also be parameterized on the basis of known or guessed non-linear properties of the neurons, or may be expanded in terms of the Volterra or Wiener integral power series. In the case of the Wiener expansion, the integral kernels are usually estimated by measuring various cross-moments of the stimulus and response. In practice, the stimulus is often a discrete-time process. In visual experiments, the discretization may correspond to the frame rate of the display. In the auditory experiments that will be considered below, it is set by the rate of the component tone pulses in a random chord stimulus. On time-scales finer than that set by this discretization rate, the stimulus is strongly autocorrelated. This makes estimation of the SRF at a finer time-scale extremely non-robust. We therefore lose very little generality by discretizing the response with the same time-step, obtaining a response histogram $r_t$. In this discrete-time framework, the estimation of FIR Wiener-Volterra kernels (of any order) corresponds to linear regression. To estimate the first-order kernel up to a given maximum time lag, we construct a set of input lag-vectors $s_t$, each formed from the stimulus frames in the window of that length preceding time $t$. If a single stimulus frame is itself a vector (representing, say, pixels in an image or power in different frequency bands), then the lag vectors are formed by concatenating successive stimulus frames together into a single longer vector. The Wiener filter is then obtained by least-squares linear regression from the lag vectors $s_t$ to the corresponding observed activities $r_t$. Higher-order kernels can also be found by linear regression, using augmented versions of the stimulus lag vectors.
For example, the second-order kernel is obtained by regression using input vectors formed by all quadratic combinations of the elements of the lag vector (or, equivalently, by support-vector-like kernel regression using a homogeneous second-order polynomial kernel). The present paper will be confined to a treatment of the linear case. It should be clear, however, that the basic techniques can be extended to higher orders at the expense of additional computational load, provided only that a sensible definition of smoothness in these higher-order kernels is available. The least-squares solution to a regression problem is identical to the maximum likelihood (ML) value of the weight vector $w$ for the probabilistic regression model with Gaussian output noise of constant variance $\sigma^2$:

$$r_t = w \cdot s_t + \epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(0, \sigma^2). \quad (1)$$

As is common with ML learning, weight vectors obtained in this way are often overfit to the training data, and so give poor estimates of the true underlying stimulus-response function. This is the case even for linear models. If the stimulus is uncorrelated, the ML-estimated weight along some input dimension is proportional to the observed correlation between that dimension of the stimulus and the output response. Noise in the output can introduce spurious input-output correlations and thus result in erroneous weight values. Furthermore, if the true relationship between stimulus and response is non-linear, limited sampling of the input space may also lead to observed correlations that would have been absent given unlimited data. The statistics and machine learning literatures provide a number of techniques for the containment of overfitting in probabilistic models. Many of these approaches are equivalent to the maximum a posteriori (MAP) estimation of parameters under a suitable prior distribution.
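The lag-vector construction and the ML (Wiener) estimate can be made concrete with a short NumPy sketch; the dimensions, the simulated white stimulus, and the variable names here are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
F, T, lag = 12, 2000, 15            # frequency channels, time bins, maximum time lag

stim = rng.standard_normal((F, T))  # white discrete-time stimulus frames
w_true = rng.standard_normal(F * lag)

# Lag vectors: concatenate the `lag` most recent stimulus frames at each time bin
S = np.stack([stim[:, t - lag + 1:t + 1].ravel() for t in range(lag - 1, T)], axis=1)
r = w_true @ S + 0.5 * rng.standard_normal(S.shape[1])  # noisy simulated responses

# ML estimate = ordinary least squares from lag vectors to responses (Wiener filter)
w_ml, *_ = np.linalg.lstsq(S.T, r, rcond=None)
```

With a white stimulus and ample data, `w_ml` approaches `w_true`; with correlated stimuli or short, noisy recordings it overfits, which is the motivation for the evidence-optimized priors that follow.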
Here, we investigate an approach in which these prior distributions are optimized with reference to the data; as such, they cease to be "prior" in a strict sense, and instead become part of a hierarchical probabilistic model. A distribution on the regression parameters is first specified up to the unknown values of some hyperparameters. These hyperparameters are then adjusted so as to maximize the marginal likelihood or "evidence", that is, the probability of the data given the hyperparameters, with the parameters themselves integrated out. Finally, the estimate of the parameters is given by the MAP weight vector under the optimized "prior". Such evidence optimization schemes have previously been used in the context of linear, kernel and Gaussian-process regression. We show that, with realistic data volumes, such techniques provide considerably better estimates of the stimulus-response function than do the unregularized (ML) Wiener estimates.

2 Test data and methods

A diagnostic of overfitting, and therefore of divergence from the true stimulus-response relationship, is that the resultant model generalizes poorly; that is, it does not predict actual responses to novel stimuli well. We assessed the generalization ability of parameters chosen by maximum likelihood and by various evidence optimization schemes on a set of responses collected from the auditory cortex of rodents. As will be seen, evidence optimization yielded estimates that generalized far better than those obtained by the more elementary ML techniques, and so provided a more accurate picture of the underlying stimulus-response function. A total of 205 recordings were collected extracellularly from 68 recording sites in the thalamo-recipient layers of the left primary auditory cortex of anaesthetized rodents (6 CBA/CaJ mice and 4 Long-Evans rats) while a dynamic random chord stimulus (described below) was presented to the right ear.
Recordings often reflected the activity of a number of neurons; single neurons were identified by Bayesian spike-sorting techniques [2, 3] whenever possible. The stimulus consisted of 20 ms tone pulses (ramped up and down with a 5 ms cosine gate) presented at random center frequencies, maximal intensities, and times, such that pulses at more than one frequency might be played simultaneously. This stimulus resembled that used in a previous study [4], except in the variation of pulse intensity. The times, frequencies and sound intensities of all tone pulses were chosen independently within the discretizations of those variables (20 ms bins in time, 1/12 octave bins covering either 2–32 or 25–100 kHz in frequency, and 5 dB SPL bins covering 25–70 dB SPL in level). At any time point, the stimulus averaged two tone pulses per octave, with an expected loudness of approximately 73 dB SPL for the 2–32 kHz stimulus and 70 dB SPL for the 25–100 kHz stimulus. The total duration of each stimulus was 60 s. At each recording site, the 2–32 kHz stimulus was repeated for 20 trials, and the 25–100 kHz stimulus for 10 trials. Neural responses from all 10 or 20 trials were histogrammed in 20 ms bins aligned with stimulus pulse durations. Thus, in the regression framework, the instantaneous input vector comprised the sound amplitudes at each possible frequency at time $t$, and the output $r_t$ was the number of spikes per trial collected into the $t$th bin. The repetition of the same stimulus made it possible to partition the recorded response power into a stimulus-related (signal) component and a noise component. (For derivation, see Sahani and Linden, "How Linear are Auditory Cortical Responses?", this volume.) Only those 92 recordings in which the signal power was significantly greater than zero were used in this study. Tests of generalization were performed by cross-validation.
The total duration of the stimulus was divided 10 times into a training data segment (9/10 of the total) and a test data segment (1/10), such that all 10 test segments were disjoint. Performance was assessed by the predictive power, that is, the test data variance minus the average squared prediction error. The 10 estimates of the predictive power were averaged, and normalized by the estimated signal power to give a number less than 1. Note that the predictive power could be negative in cases where the mean was a better description of the test data than was the model prediction. In graphs of the predictive power as a function of noise level, the estimate of the noise power is also shown after normalization by the estimated signal power.

3 Evidence optimization for linear regression

As is common in regression problems, it is convenient to collect all the stimulus vectors and observed responses into matrices. Thus, we describe the input by a matrix $S$, the $t$th column of which is the input lag-vector $s_t$. Similarly, we collect the outputs into a row vector $r$, the $t$th element of which is $r_t$. The first few time-steps, whose lag-vectors would be incomplete, are dropped. Then, assuming independent noise in each time bin, we combine the individual probabilities to give:

$$P(r \mid S, w, \sigma^2) = \prod_t (2\pi\sigma^2)^{-1/2} \exp\left(-\frac{(r_t - w \cdot s_t)^2}{2\sigma^2}\right). \quad (2)$$

We now choose the prior distribution on $w$ to be normal with zero mean (having no prior reason to favour either positive or negative weights) and covariance matrix $C$. Then the joint density of $r$ and $w$ is

$$P(r, w \mid S, \sigma^2, C) = \frac{1}{Z} \exp\left(-\frac{1}{2\sigma^2}(r - w^T S)(r - w^T S)^T - \frac{1}{2} w^T C^{-1} w\right), \quad (3)$$

where the normalizer $Z = (2\pi\sigma^2)^{T/2} (2\pi)^{D/2} |C|^{1/2}$ (with $T$ time bins and $D$ weights). Fixing $r$ to the observed values, this implies a normal posterior on $w$ with variance $\Sigma = (C^{-1} + S S^T/\sigma^2)^{-1}$ and mean $\mu = \Sigma S r^T/\sigma^2$. By integrating this normal density in $w$ we obtain an expression for the evidence:

$$E(r) = P(r \mid S, \sigma^2, C) = (2\pi\sigma^2)^{-T/2} \frac{|\Sigma|^{1/2}}{|C|^{1/2}} \exp\left(-\frac{1}{2\sigma^2} r r^T + \frac{1}{2}\mu^T \Sigma^{-1} \mu\right). \quad (4)$$

We seek to optimize this evidence with respect to the hyperparameters in $C$, and the noise variance $\sigma^2$. To do this we need the respective gradients.
If the covariance matrix contains a parameter $\theta$, then the derivative of the log-evidence with respect to $\theta$ is given by

$$\frac{\partial}{\partial \theta} \log E = \frac{1}{2} \mathrm{Tr}\left[\left(C^{-1}(\Sigma + \mu\mu^T)C^{-1} - C^{-1}\right) \frac{\partial C}{\partial \theta}\right], \quad (5)$$

while the gradient in the noise variance is

$$\frac{\partial}{\partial \sigma^2} \log E = \frac{1}{2\sigma^4}(r - \mu^T S)(r - \mu^T S)^T - \frac{1}{2\sigma^2}\left(T - \mathrm{Tr}\left[I - \Sigma C^{-1}\right]\right), \quad (6)$$

where $T$ is the number of training data points.

4 Automatic relevance determination (ARD)

The most common evidence optimization scheme for regression is known as automatic relevance determination (ARD). Originally proposed by MacKay and Neal, it has been used extensively in the literature, notably by MacKay [5] and, in a recent application to kernel regression, by Tipping [6]. The prior covariance on the weights is taken to be of the form $C = A^{-1}$ with $A = \mathrm{diag}(\alpha_i)$. That is, the weights are taken to be independent with potentially different prior precisions $\alpha_i$. Substitution into (5) yields

$$\frac{\partial}{\partial \alpha_i} \log E = \frac{1}{2}\left(\frac{1}{\alpha_i} - \Sigma_{ii} - \mu_i^2\right). \quad (7)$$

Previous authors have noted that, in comparison to simple gradient methods, iteration of fixed point equations derived from this and from (6) converges more rapidly:

$$\alpha_i^{\mathrm{new}} = \frac{1 - \alpha_i \Sigma_{ii}}{\mu_i^2} \quad (8)$$

and

$$(\sigma^2)^{\mathrm{new}} = \frac{(r - \mu^T S)(r - \mu^T S)^T}{T - \sum_i (1 - \alpha_i \Sigma_{ii})}. \quad (9)$$

Figure 1: Comparison of various STRF estimates for the same recording. [Panels show the ML, ARD, ASD and ASD/RD estimates as time-frequency matrices; axes are time lag (ms) and frequency (kHz).]

A pronounced general feature of the maxima discovered by this approach is that many of the optimal precisions are infinite (that is, the variances are zero). Since the prior distribution is centered on zero, this forces the corresponding weight to vanish. In practice, as the iterated value of a precision crosses some pre-determined threshold, the corresponding input dimension is eliminated from the regression problem.
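The fixed-point iteration (8)-(9) is short to implement. The sketch below uses our own names and a simple precision cap in place of a pruning threshold; on data simulated from a sparse filter, the precisions of the irrelevant inputs diverge and the corresponding weights vanish:

```python
import numpy as np

def ard(S, r, n_iter=200, alpha_max=1e6):
    """ARD evidence optimization via the fixed-point updates (8) and (9)."""
    D, T = S.shape
    alpha = np.ones(D)       # prior precisions
    sigma2 = 1.0             # output-noise variance
    for _ in range(n_iter):
        Sigma = np.linalg.inv(np.diag(alpha) + S @ S.T / sigma2)
        mu = Sigma @ S @ r / sigma2
        gamma = 1.0 - alpha * np.diag(Sigma)     # how well-determined each weight is
        alpha = np.minimum(gamma / np.maximum(mu**2, 1e-12), alpha_max)  # update (8)
        sigma2 = np.sum((r - mu @ S) ** 2) / (T - gamma.sum())           # update (9)
    return mu, alpha, sigma2

# Simulated data with only two relevant input dimensions (illustrative values)
rng = np.random.default_rng(2)
S = rng.standard_normal((10, 500))
w_true = np.zeros(10)
w_true[:2] = [2.0, -3.0]
r = w_true @ S + 0.1 * rng.standard_normal(500)
mu, alpha, sigma2 = ard(S, r)
```

After fitting, the two relevant weights are recovered while the others are driven toward zero by their large precisions, the sparsity described in the text.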
The results of evidence optimization suggest that such inputs are irrelevant to predicting the output; hence the name given to this technique. The resulting MAP estimates obtained under the optimized ARD prior thus tend to be sparse, with only a small number of non-zero weights, often appearing as isolated spots in the STRF. The estimated STRFs for one example recording using ML and ARD are shown in the two left-most panels of figure 1 (the other panels show smoothed estimates which will be described below), with the estimated weight vectors rearranged into time-frequency matrices. The sparsity of the ARD solution is evident in the reduction of apparent estimation noise at higher frequencies and longer time lags. This reduction improves the ability of the estimated model to predict novel data by more than a factor of 2 in this case. Assessed by cross-validation, as described above, the ARD estimate accurately predicted 26% of the signal power in test data, whereas the ML estimate (or Wiener kernel) predicted only 12%. This improvement in predictive quality was evident in every one of the 92 recordings with significant signal power, indicating that the optimized prior does improve estimation accuracy. The left-most panel of figure 2 compares the normalized cross-validation predictive power of the two STRF estimates. The other two panels show the difference in predictive powers as a function of noise (in the center) and as a histogram (right). The advantage of the evidence-optimization approach is clearly most pronounced at higher noise levels.

Figure 2: Comparison of ARD and ML predictions. [Left: normalized ARD vs. ML predictive power; center: prediction difference vs. normalized noise power; right: histogram of the prediction difference (ARD − ML).]
5 Automatic smoothness determination (ASD)

In many regression problems, such as those for which ARD was developed, the different input dimensions are often unrelated; indeed, they may be measured in different units. In such contexts, an independent prior on the weights, as in ARD, is reasonable. By contrast, the weights of an STRF are dimensionally and semantically similar. Furthermore, we might expect weights that are nearby in either time or frequency (or space) to be similar in value; that is, the STRF is likely to be smooth on the scale at which we model it. Here we introduce a new evidence optimization scheme, in which the prior covariance matrix is used to favour smoothing of the STRF weights. The appropriate scale (along either the time or the frequency/space axis) cannot be known a priori. Instead, we introduce hyperparameters $\delta_f$ and $\delta_t$ that set the scale of smoothness in the spectral (or spatial) and temporal dimensions respectively, and then, for each recording, optimize the evidence to determine their appropriate values. The new parameterized covariance matrix, $C$, depends on two $D \times D$ matrices $\Delta_f$ and $\Delta_t$. The $(i,j)$ element of each of these gives the squared distance between the weights $w_i$ and $w_j$ in terms of center frequency (or space) and time respectively. We take

$$C = \exp\left(\rho - \frac{\Delta_f}{2\delta_f^2} - \frac{\Delta_t}{2\delta_t^2}\right), \quad (10)$$

where the exponent is taken element by element. In this scheme, the hyperparameters $\delta_f$ and $\delta_t$ set the correlation distances for the weights along the spectral (spatial) and temporal dimensions, while the additional hyperparameter $\rho$ sets their overall scale. Substitution of (10) into the general hyperparameter derivative expression (5) gives

$$\frac{\partial}{\partial \rho} \log E = \frac{1}{2} \mathrm{Tr}\left[\left(C^{-1}(\Sigma + \mu\mu^T)C^{-1} - C^{-1}\right) C\right] \quad (11)$$

and

$$\frac{\partial}{\partial \delta_f} \log E = \frac{1}{2\delta_f^3} \mathrm{Tr}\left[\left(C^{-1}(\Sigma + \mu\mu^T)C^{-1} - C^{-1}\right) \left(C \circ \Delta_f\right)\right] \quad (12)$$

(where $\circ$ denotes the Hadamard or Schur product; i.e., the matrices are multiplied element by element), along with a similar expression for $\partial \log E / \partial \delta_t$. In this case, optimization is performed by simple gradient methods.
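Constructing the prior (10) is a one-liner once the squared-distance matrices are in hand. A sketch under our own naming (the grid sizes and helper name are assumptions for illustration):

```python
import numpy as np

def asd_prior(freqs, times, rho, delta_f, delta_t):
    """ASD prior covariance of eq. (10), with an elementwise exponential.

    freqs, times: 1-D arrays giving the center-frequency and time-lag
    coordinates of each of the D STRF weights.
    """
    Df = (freqs[:, None] - freqs[None, :]) ** 2   # squared spectral distances
    Dt = (times[:, None] - times[None, :]) ** 2   # squared temporal distances
    return np.exp(rho - Df / (2 * delta_f**2) - Dt / (2 * delta_t**2))

# Coordinates for a small 6-frequency x 8-lag STRF grid
ff, tt = np.meshgrid(np.arange(6.0), np.arange(8.0), indexing="ij")
C = asd_prior(ff.ravel(), tt.ravel(), rho=0.0, delta_f=2.57, delta_t=0.96)
```

Nearby weights receive covariance near $e^\rho$ while distant ones decorrelate, and the resulting matrix is symmetric positive semi-definite, so it is a valid Gaussian prior whose hyperparameters can be followed up the gradients (11)-(12).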
The third panel of figure 1 shows the ASD-optimized MAP estimate of the STRF for the same example recording discussed previously. Optimization yielded smoothing width estimates of 0.96 (20 ms) bins in time and 2.57 (1/12 octave) bins in frequency; the effect of this smoothing of the STRF estimate is evident. ASD further improved the ability of the linear kernel to predict test data, accounting for 27.5% of the signal power in this example. In the population of 92 recordings (figure 3, upper panels), MAP estimates based on the ASD-optimized prior again outperformed ML (Wiener kernel) estimates substantially on every single recording considered, particularly on those with poorer signal-to-noise ratios. They also tended to predict more accurately than the ARD-based estimates (figure 3, lower panels). The improvement over ARD was not quite so pronounced (although it was frequently greater than in the example of figure 1).

6 ARD in an ASD-defined basis

The two evidence optimization frameworks presented above appear inconsistent. ARD yields a sparse, independent prior, and often leads to isolated non-zero weights in the estimated STRF. By contrast, ASD is explicitly designed to recover smooth STRF estimates.

Figure 3: Comparison of ASD predictions to ML and ARD. [Upper panels: ASD vs. ML; lower panels: ASD vs. ARD; each comparison shown as normalized predictive power, as a function of normalized noise power, and as a histogram of the difference.]

Nonetheless, both frameworks appear to improve the ability of estimated models to generalize to novel data. We are thus led to consider ways in which features of both methods may be combined.
By decomposing the prior covariance as $C = B B^T$, it is possible to rewrite the joint density of (3) as

$$P(r, w \mid S, \sigma^2, C) = \frac{1}{Z} \exp\left(-\frac{1}{2\sigma^2}(r - w^T S)(r - w^T S)^T - \frac{1}{2}(B^{-1} w)^T (B^{-1} w)\right). \quad (13)$$

Making the substitutions $\tilde{w} = B^{-1} w$ and $\tilde{S} = B^T S$, this expression may be recognized as the joint density for a transformed regression problem with unit prior covariance (the normalizing constant, not shown, is appropriately transformed by the Jacobian associated with the change in variables). If now we introduce and optimize a diagonal prior covariance of the ARD form $A^{-1}$ in this transformed problem, we are indirectly optimizing a covariance matrix of the form $B A^{-1} B^T$ in the original basis. Intuitively, the sparseness driven by ARD is applied to basis vectors drawn from the rows of the transformation matrix $B^T$, rather than to individual weights. If this basis reflects the smoothness prior obtained from ASD then the resulting prior will combine the smoothness and sparseness of the two approaches. We choose $B$ to be the (positive branch) matrix square root of the optimal prior matrix $C$ (see (10)) obtained from ASD. If the eigenvector decomposition of $C$ is $U \Lambda U^T$, then $B = U \Lambda^{1/2} U^T$, where the diagonal elements of $\Lambda^{1/2}$ are the positive square roots of the eigenvalues of $C$. The components of $B$, defined in this way, are Gaussian basis vectors slightly narrower than those in $C$ (this is easily seen by noting that the eigenvalue spectrum for the Toeplitz matrix $C$ is given by the Fourier transform, and that the square root of the Gaussian function in the Fourier space is a Gaussian of larger width, corresponding to a smaller width in the original space). Thus, weight vectors obtained through ARD in this basis will be formed by a superposition of Gaussian components, each of which individually matches the ASD prior on its covariance.

Figure 4: Comparison of ARD in the ASD basis and simple ASD. [Left: normalized ASD/RD vs. ASD predictive power; center: prediction difference vs. normalized noise power; right: histogram of the difference (ASD/RD − ASD).]

The results of this procedure (labelled ASD/RD) on our example recording are shown in the rightmost panel of figure 1. The combined prior shows a similar degree of smoothing to the ASD-optimized prior alone; in addition, like the ARD prior, it suppresses the apparent background estimation noise at higher frequencies and longer time lags. Predictions made with this estimate are yet more accurate, capturing 30% of the signal power. This improvement over estimates derived from ASD alone is borne out in the whole population (figure 4), although the gain is smaller than in the previous cases.

7 Conclusions

We have demonstrated a succession of evidence-optimization techniques which appear to improve the accuracy of STRF estimates from noisy data. The mean improvement in prediction of the ASD/RD method over the Wiener kernel is 40% of the stimulus-related signal power. Considering that the best linear predictor would on average capture no more than 40% of the signal power in these data even in the absence of noise (Sahani and Linden, "How Linear are Auditory Cortical Responses?", this volume), this is a dramatic improvement. These results apply to the case of linear models; our current work is directed toward extensions to non-linear SRFs within an augmented linear regression framework.

References

[1] Marmarelis, P. Z. & Marmarelis, V. Z. (1978) Analysis of Physiological Systems. Plenum Press, New York.
[2] Lewicki, M. S. (1994) Neural Comp 6, 1005–1030.
[3] Sahani, M. (1999) Ph.D. thesis, California Institute of Technology, Pasadena, CA.
[4] deCharms, R. C., Blake, D. T., & Merzenich, M. M. (1998) Science 280, 1439–1443.
[5] MacKay, D. J. C. (1994) ASHRAE Transactions 100, 1053–1062.
[6] Tipping, M. E.
(2001) J Machine Learning Res 1, 211–244.
Binary Coding in Auditory Cortex Michael R. DeWeese and Anthony M. Zador Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724 deweese@cshl.edu, zador@cshl.edu Abstract Cortical neurons have been reported to use both rate and temporal codes. Here we describe a novel mode in which each neuron generates exactly 0 or 1 action potentials, but not more, in response to a stimulus. We used cell-attached recording, which ensured single-unit isolation, to record responses in rat auditory cortex to brief tone pips. Surprisingly, the majority of neurons exhibited binary behavior with few multi-spike responses; several dramatic examples consisted of exactly one spike on 100% of trials, with no trial-to-trial variability in spike count. Many neurons were tuned to stimulus frequency. Since individual trials yielded at most one spike for most neurons, the information about stimulus frequency was encoded in the population, and would not have been accessible to later stages of processing that only had access to the activity of a single unit. These binary units allow a more efficient population code than is possible with conventional rate coding units, and are consistent with a model of cortical processing in which synchronous packets of spikes propagate stably from one neuronal population to the next. 1 Binary coding in auditory cortex We recorded responses of neurons in the auditory cortex of anesthetized rats to pure-tone pips of different frequencies [1, 2]. Each pip was presented repeatedly, allowing us to assess the variability of the neural response to multiple presentations of each stimulus. We first recorded multi-unit activity with conventional tungsten electrodes (Fig. 1a). The number of spikes in response to each pip fluctuated markedly from one trial to the next (Fig. 1e), as though governed by a random mechanism such as that generating the ticks of a Geiger counter. 
Highly variable responses such as these, which are at least as variable as a Poisson process, are the norm in the cortex [3-7], and have contributed to the widely held view that cortical spike trains are so noisy that only the average firing rate can be used to encode stimuli. Because we were recording the activity of an unknown number of neurons, we could not be sure whether the strong trial-to-trial fluctuations reflected the underlying variability of the single units. We therefore used an alternative technique, cell-attached recording with a patch pipette [8, 9], in order to ensure single unit isolation (Fig. 1b).

Figure 1: Multi-unit spiking activity was highly variable, but single units obeyed binomial statistics. a Multi-unit spike rasters from a conventional tungsten electrode recording showed high trial-to-trial variability in response to ten repetitions of the same 50 msec pure tone stimulus (bottom). Darker hash marks indicate spike times within the response period, which were used in the variability analysis. b Spikes recorded in cell-attached mode were easily identified from the raw voltage trace (top) by applying a high-pass filter (bottom) and thresholding (dark gray line). Spike times (black squares) were assigned to the peaks of suprathreshold segments. c Spike rasters from a cell-attached recording of single-unit responses to 25 repetitions of the same tone consisted of exactly one well-timed spike per trial (latency standard deviation = 1.0 msec), unlike the multi-unit responses (Fig. 1a). Under the Poisson assumption, this would have been highly unlikely (P ~ 10^-11). d The same neuron as in Fig. 1c responds with lower probability to repeated presentations of a different tone, but there are still no multi-spike responses. e We quantified response variability for each tone by dividing the variance in spike count by the mean spike count across all trials for that tone. Response variability for multi-unit tungsten recording (open triangles) was high for each of the 29 tones (out of 32) that elicited at least one spike on one trial. All but one point lie above one (horizontal gray line), which is the value produced by a Poisson process with any constant or time-varying event rate. Single unit responses recorded in cell-attached mode were far less variable (filled circles). Ninety-one percent (10/11) of the tones that elicited at least one spike from this neuron produced no multi-spike responses in 25 trials; the corresponding points fall on the diagonal line between (0,1) and (1,0), which provides a strict lower bound on the variability for any response set with a mean between 0 and 1. No point lies above one.

This recording mode minimizes both of the main sources of error in spike detection: failure to detect a spike in the unit under observation (false negatives), and contamination by spikes from nearby neurons (false positives). It also differs from conventional extracellular recording methods in its selection bias: with cell-attached recording, neurons are selected solely on the basis of the experimenter's ability to form a seal, rather than on the basis of neuronal activity and responsiveness to stimuli as in conventional methods. Surprisingly, single unit responses were far more orderly than suggested by the multi-unit recordings; responses typically consisted of either 0 or 1 spikes per trial, and not more (Fig. 1c-e). In the most dramatic examples, each presentation of the same tone pip elicited exactly one spike (Fig. 1c). In most cases, however, some presentations failed to elicit a spike (Fig. 1d).
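Two quantitative claims in Figure 1 follow from elementary spike-count statistics: the P ~ 10^-11 chance of 25 consecutive one-spike trials under a Poisson model matched to the observed mean, and the diagonal line in Fig. 1e, which arises because a 0-or-1 spike count with success probability p has variance p(1-p), so (variance/mean) + mean = 1. A quick numerical check (our own sketch, not from the paper):

```python
import numpy as np

# A Poisson process matched to the observed mean of 1 spike/trial gives
# P(exactly one spike) = e^-1 per trial, hence e^-25 ~ 1.4e-11 for all 25 trials
p_one_spike_all_25 = np.exp(-1.0) ** 25
assert p_one_spike_all_25 < 1e-10

# Binary responses: for any success probability p, variance/mean = 1 - p,
# i.e. the points lie on the diagonal from (0, 1) to (1, 0) in Fig. 1e
rng = np.random.default_rng(0)
for p in np.linspace(0.05, 0.95, 10):
    counts = (rng.random(200_000) < p).astype(float)   # simulated 0-or-1 spike counts
    ratio = counts.var() / counts.mean()
    assert abs(ratio + counts.mean() - 1.0) < 0.01
```

The first assertion reproduces the order of magnitude quoted in the caption of Fig. 1c; the second confirms that binary spiking, unlike a Poisson process, can never place a point above the variance/mean = 1 line.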
Although low-variability responses have recently been observed in the cortex [10, 11] and elsewhere [12, 13], the binary behavior described here has not previously been reported for cortical neurons. The majority of the neurons (59%) in our study for which statistical significance could be assessed (at the p<0.001 significance level; see Fig. 2, caption) showed noisy binary behavior—“binary” because neurons produced either 0 or 1 spikes, and “noisy” because some stimuli elicited both single spikes and failures. In a substantial fraction of neurons, however, the responses showed more variability. We found no correlation between neuronal variability and cortical layer (inferred from the depth of the recording electrode), cortical area (inside vs. outside of area A1) or depth of anesthesia. Moreover, the binary mode of spiking was not due to the brevity (25 msec) of the stimuli; responses that were binary for short tones were comparably binary when longer (100 msec) tones were used (Fig. 2b).

Figure 2: Half of the neuronal population exhibited binary firing behavior. a Of the 3055 sets of responses to 25 msec tones, 2588 (gray points) could not be assessed for significance at the p<0.001 level, 225 (open circles) were not significantly binary, and 242 were significantly binary (black points; see Identification methods for group statistics below). All points were jittered slightly so that overlying points could be seen in the figure. 2165 response sets contained no multi-spike responses; the corresponding points fell on the line from [0,1] to [1,0]. b The binary nature of single unit responses was insensitive to tone duration, even for frequencies that elicited the largest responses. Twenty additional spike rasters from the same neuron (and tone frequency) as in Fig. 1c contain no multi-spike responses whether in response to 100 msec tones (above) or 25 msec tones (below). Across the population, binary responses were as prevalent for 100 msec tones as for 25 msec tones (see Identification methods for group statistics).

In many neurons, binary responses showed high temporal precision, with latencies sometimes exhibiting standard deviations as low as 1 msec (Fig. 3; see also Fig. 1c), comparable to previous observations in the auditory cortex [14], and only slightly more precise than in monkey visual area MT [5]. High temporal precision was positively correlated with high response probability (Fig. 3).

Figure 3: Trial-to-trial variability in latency of response to repeated presentations of the same tone decreased with increasing response probability. a Scatter plot of standard deviation of latency vs. mean response for 25 presentations each of 32 tones for a different neuron as in Figs. 1 and 2 (gray line is best linear fit). Rasters from 25 repeated presentations of a low response tone (upper left inset, which corresponds to left-most data point) display much more variable latencies than rasters from a high response tone (lower right inset; corresponds to right-most data point). b The negative correlation between latency variability and response size was present on average across the population of 44 neurons described in Identification methods for group statistics (linear fit, gray).

The low trial-to-trial variability ruled out the possibility that the firing statistics could be accounted for by a simple rate-modulated Poisson process (Fig. 4a1,a2).
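The variance-to-mean comparison underlying Figs. 1e and 2a is easy to reproduce numerically. The sketch below (illustrative parameters, not the recorded data) simulates a noisy binary neuron and a rate-matched Poisson neuron and computes the variance/mean ratio of their spike counts; the binary neuron lands near 1 − p, on the diagonal lower bound, while the Poisson neuron lands near 1.

```python
import math
import random

rng = random.Random(0)

def sample_poisson(lam: float) -> int:
    """Knuth's method: a Poisson-distributed spike count with mean lam."""
    threshold = math.exp(-lam)
    k, prod = 0, 1.0
    while prod > threshold:
        k += 1
        prod *= rng.random()
    return k - 1

def var_over_mean(counts) -> float:
    """Variance-to-mean ratio of a list of spike counts."""
    m = sum(counts) / len(counts)
    v = sum((c - m) ** 2 for c in counts) / len(counts)
    return v / m

trials, p = 20000, 0.6   # hypothetical response probability
binary_counts = [1 if rng.random() < p else 0 for _ in range(trials)]
poisson_counts = [sample_poisson(p) for _ in range(trials)]

print(var_over_mean(binary_counts))   # near 1 - p = 0.4 (on the diagonal bound)
print(var_over_mean(poisson_counts))  # near 1.0 (the Poisson line)
```

For a 0-or-1 response with probability p the count variance is p(1 − p), so variance/mean = 1 − p, which is exactly the diagonal line from (0,1) to (1,0) in the figures.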
In other systems, low variability has sometimes been modeled as a Poisson process followed by a post-spike refractory period [10, 12]. In our system, however, the range in latencies of evoked binary responses was often much greater than the refractory period, which could not have been longer than the 2 msec inter-spike intervals observed during epochs of spontaneous spiking, indicating that binary spiking did not result from any intrinsic property of the spike generating mechanism (Fig. 4a3). Moreover, a single stimulus-evoked spike could suppress subsequent spikes for as long as hundreds of milliseconds (e.g. Figs. 1d, 4d), supporting the idea that binary spiking arises through a circuit-level, rather than a single-neuron, mechanism. Indeed, the fact that this suppression is observed even in the cortex of awake animals [15] suggests that binary spiking is not a special property of the anesthetized state. It seems surprising that binary spiking in the cortex has not previously been remarked upon. In the auditory cortex the explanation may be in part technical: because firing rates in the auditory cortex tend to be low, multi-unit recording is often used to maximize the total amount of data collected. Moreover, our use of cell-attached recording minimizes the usual bias toward responsive or active neurons. Such explanations are not, however, likely to account for the failure to observe binary spiking in the visual cortex, where spike count statistics have been scrutinized more closely [3-7]. One possibility is that this reflects a fundamental difference between the auditory and visual systems. An alternative interpretation—and one that we favor—is that the difference rests not in the sensory modality, but instead in the difference between the stimuli used.

Figure 4: a The lack of multi-spike responses elicited by the neuron shown in Fig. 3a was not due to an absolute refractory period, since the range of latencies for many tones, like that shown here, was much greater than any reasonable estimate for the neuron's refractory period. (a1) Experimentally recorded responses. (a2) Using the smoothed post-stimulus time histogram (PSTH; bottom) from the set of responses in Fig. 4a, we generated rasters under the assumption of Poisson firing. In this representative example, four double-spike responses (arrows at left) were produced in 25 trials. (a3) We then generated rasters assuming that the neuron fired according to a Poisson process subject to a hard refractory period of 2 msec. Even with a refractory period, this representative example includes one triple- and three double-spike responses. The minimum inter-spike interval during spontaneous firing events was less than two msec for five of our neurons, so 2 msec is a conservative upper bound for the refractory period. b Spontaneous activity is reduced following high-probability responses. The PSTH (top; 0.25 msec bins) of the combined responses from the 25% (8/32) of tones that elicited the largest responses from the same neuron as in Figs. 3a and 4a illustrates a preclusion of spontaneous and evoked activity for over 200 msec following stimulation. The PSTHs from progressively less responsive groups of tones show progressively less preclusion following stimulation. c Fewer noisy binary neurons need to be pooled to achieve the same “signal-to-noise ratio” (SNR; see ref. [24]) as a collection of Poisson neurons. The ratio of the number of Poisson to binary neurons required to achieve the same SNR is plotted against the mean number of spikes elicited per neuron following stimulation; here we have defined the SNR to be the ratio of the mean spike count to the standard deviation of the spike count. d Spike probability tuning curve for the same neuron as in Figs. 1c-e and 2b fit to a Gaussian in tone frequency.
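The null models of Fig. 4a2-a3 can be sketched as follows. The bin width, rates, and toy PSTH below are placeholders rather than the recorded data; the point is the mechanics of drawing Poisson rasters from a PSTH with and without a hard 2 msec refractory period.

```python
import random

rng = random.Random(1)

def simulate_trial(psth_hz, dt_ms=0.25, refractory_ms=0.0):
    """One raster row: Bernoulli approximation to inhomogeneous Poisson
    firing at the PSTH rate, optionally with a hard refractory period."""
    spikes, last = [], -1e9
    for i, rate_hz in enumerate(psth_hz):
        t = i * dt_ms
        if t - last < refractory_ms:
            continue  # spike generation suppressed during refractoriness
        if rng.random() < rate_hz * dt_ms / 1000.0:
            spikes.append(t)
            last = t
    return spikes

# Toy PSTH: a 25 Hz burst from 10-50 ms on a 1 Hz background, 0.25 ms bins,
# giving roughly one spike per trial on average.
psth = [25.0 if 10.0 <= i * 0.25 < 50.0 else 1.0 for i in range(800)]
plain = [simulate_trial(psth) for _ in range(25)]
refrac = [simulate_trial(psth, refractory_ms=2.0) for _ in range(25)]

# As in Fig. 4a3, refractoriness alone still permits multi-spike trials.
multi_plain = sum(len(s) > 1 for s in plain)
multi_refrac = sum(len(s) > 1 for s in refrac)
```

With `refractory_ms=2.0`, consecutive spike times within a trial are guaranteed to be at least 2 ms apart, yet multi-spike trials still occur whenever the latency spread exceeds the refractory period, which is the paper's argument against a purely refractory explanation.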
In this view, the binary responses may not be limited to the auditory cortex; neurons in visual and other sensory cortices might exhibit similar responses to the appropriate stimuli. For example, the tone pips we used might be the auditory analog of a brief flash of light, rather than the oriented moving edges or gratings usually used to probe the primary visual cortex. Conversely, auditory stimuli analogous to edges or gratings [16, 17] may be more likely to elicit conventional, rate-modulated Poisson responses in the auditory cortex. Indeed, there may be a continuum between binary and Poisson modes. Thus, even in conventional rate-modulated responses, the first spike is often privileged in that it carries most of the information in the spike train [5, 14, 18]. The first spike may be particularly important as a means of rapidly signaling stimulus transients. Binary responses suggest a mode that complements conventional rate coding. In the simplest rate-coding model, a stimulus parameter (such as the frequency of a tone) governs only the rate at which a neuron generates spikes, but not the detailed positions of the spikes; the actual spike train itself is an instantiation of a random process (such as a Poisson process). By contrast, in the binomial model, the stimulus parameter (frequency) is encoded as the probability of firing (Fig. 4d). Binary coding has implications for cortical computation. In the rate coding model, stimulus encoding is “ergodic”: a stimulus parameter can be read out either by observing the activity of one neuron for a long time, or a population for a short time.
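The pooling comparison of Fig. 4c follows from the stated SNR definition (mean spike count divided by its standard deviation). For a pool of N independent neurons each firing with mean count m, the pooled SNR grows as √N, so equating the SNRs of a Poisson pool (variance m) and a binary pool (variance m(1 − m)) gives a required pool-size ratio of m / (m(1 − m)) = 1/(1 − m) for m < 1. This closed form is our derivation from that definition, not a formula quoted from the paper.

```python
def pool_size_ratio(mean_count: float) -> float:
    """Poisson-to-binary pool-size ratio for equal SNR.

    Binary neuron: count variance m(1 - m); Poisson neuron: variance m.
    Pooling N independent neurons multiplies the SNR by sqrt(N), so equal
    SNR requires N_poisson / N_binary = m / (m * (1 - m)) = 1 / (1 - m).
    """
    assert 0.0 < mean_count < 1.0, "a binary neuron fires at most one spike"
    return 1.0 / (1.0 - mean_count)

for m in (0.2, 0.5, 0.8, 0.95):
    print(m, pool_size_ratio(m))
```

At m = 0.95 the ratio is 20, comparable to the top of the range plotted in Fig. 4c; at low firing probabilities the two codes are nearly equivalent, consistent with the continuum suggested above.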
By contrast, in the binary model the stimulus value can be decoded only by observing a neuronal population, so that there is no benefit to integrating over long time periods (cf. ref. [19]). One advantage of binary encoding is that it allows the population to signal quickly; the most compact message a neuron can send is one spike [20]. Binary coding is also more efficient in the context of population coding, as quantified by the signal-to-noise ratio (Fig. 4c). The precise organization of both spike number and time we have observed suggests that cortical activity consists, at least under some conditions, of packets of spikes synchronized across populations of neurons. Theoretical work [21-23] has shown how such packets can propagate stably from one population to the next, but only if neurons within each population fire at most one spike per packet; otherwise, the number of spikes per packet—and hence the width of each packet—grows at each propagation step. Interestingly, one prediction of stable propagation models is that spike probability should be related to timing precision, a prediction borne out by our observations (Fig. 3). The role of these packets in computation remains an open question.

2 Identification methods for group statistics

We recorded responses to 32 different 25 msec tones from each of 175 neurons from the auditory cortices of 16 Sprague-Dawley rats; each tone was repeated between 5 and 75 times (mean = 19). Thus our ensemble consisted of 32x175=5600 response sets, with between 5 and 75 samples in each set. Of these, 3055 response sets contained at least one spike on at least one trial. For each response set, we tested the hypothesis that the observed variability was significantly lower than expected from the null hypothesis of a Poisson process. The ability to assess significance depended on two parameters: the sample size (5-75) and the firing probability.
Intuitively, the dependence on firing probability arises because at low firing rates most responses produce only trials with 0 or 1 spikes under both the Poisson and binary models; only at high firing rates do the two models make different predictions, since in that case the Poisson model includes many trials with 2 or even 3 spikes while the binary model generates only solitary spikes (see Fig. 4a1,a2). Using a stringent significance criterion of p<0.001, 467 response sets had a sufficient number of repeats to assess significance, given the observed firing probability. Of these, half (242/467=52%) were significantly less variable than expected by chance, five hundred-fold higher than the 467/1000=0.467 response sets expected, based on the 0.001 significance criterion, to yield a binary response set. Seventy-two neurons had at least one response set for which significance could be assessed, and of these, 49 neurons (49/72=68%) had at least one significantly sub-Poisson response set. Of this population of 49 neurons, five achieved low variability through repeatable bursty behavior (e.g., every spike count was either 0 or 3, but not 1 or 2) and were excluded from further analysis. The remaining 44 neurons formed the basis for the group statistics analyses shown in Figs. 2a and 3b. Nine of these neurons were subjected to an additional protocol consisting of at least 10 presentations each of 100 msec tones and 25 msec tones of all 32 frequencies. Of the 100 msec stimulation response sets, 44 were found to be significantly sub-Poisson at the p<0.05 level, in good agreement with the 43 found to be significant among the responses to 25 msec tones.

3 Bibliography

1. Kilgard, M.P. and M.M. Merzenich, Cortical map reorganization enabled by nucleus basalis activity. Science, 1998. 279(5357): p. 1714-8.
2. Sally, S.L. and J.B. Kelly, Organization of auditory cortex in the albino rat: sound frequency. J Neurophysiol, 1988. 59(5): p. 1627-38.
3. Softky, W.R. and C. Koch, The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J Neurosci, 1993. 13(1): p. 334-50.
4. Stevens, C.F. and A.M. Zador, Input synchrony and the irregular firing of cortical neurons. Nat Neurosci, 1998. 1(3): p. 210-7.
5. Buracas, G.T., A.M. Zador, M.R. DeWeese, and T.D. Albright, Efficient discrimination of temporal patterns by motion-sensitive neurons in primate visual cortex. Neuron, 1998. 20(5): p. 959-69.
6. Shadlen, M.N. and W.T. Newsome, The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. J Neurosci, 1998. 18(10): p. 3870-96.
7. Tolhurst, D.J., J.A. Movshon, and A.F. Dean, The statistical reliability of signals in single neurons in cat and monkey visual cortex. Vision Res, 1983. 23(8): p. 775-85.
8. Otmakhov, N., A.M. Shirke, and R. Malinow, Measuring the impact of probabilistic transmission on neuronal output. Neuron, 1993. 10(6): p. 1101-11.
9. Friedrich, R.W. and G. Laurent, Dynamic optimization of odor representations by slow temporal patterning of mitral cell activity. Science, 2001. 291(5505): p. 889-94.
10. Kara, P., P. Reinagel, and R.C. Reid, Low response variability in simultaneously recorded retinal, thalamic, and cortical neurons. Neuron, 2000. 27(3): p. 635-46.
11. Gur, M., A. Beylin, and D.M. Snodderly, Response variability of neurons in primary visual cortex (V1) of alert monkeys. J Neurosci, 1997. 17(8): p. 2914-20.
12. Berry, M.J., D.K. Warland, and M. Meister, The structure and precision of retinal spike trains. Proc Natl Acad Sci U S A, 1997. 94(10): p. 5411-6.
13. de Ruyter van Steveninck, R.R., G.D. Lewen, S.P. Strong, R. Koberle, and W. Bialek, Reproducibility and variability in neural spike trains. Science, 1997. 275(5307): p. 1805-8.
14. Heil, P., Auditory cortical onset responses revisited. I. First-spike timing. J Neurophysiol, 1997. 77(5): p. 2616-41.
15. Lu, T., L. Liang, and X. Wang, Temporal and rate representations of time-varying signals in the auditory cortex of awake primates. Nat Neurosci, 2001. 4(11): p. 1131-8.
16. Kowalski, N., D.A. Depireux, and S.A. Shamma, Analysis of dynamic spectra in ferret primary auditory cortex. I. Characteristics of single-unit responses to moving ripple spectra. J Neurophysiol, 1996. 76(5): p. 3503-23.
17. deCharms, R.C., D.T. Blake, and M.M. Merzenich, Optimizing sound features for cortical neurons. Science, 1998. 280(5368): p. 1439-43.
18. Panzeri, S., R.S. Petersen, S.R. Schultz, M. Lebedev, and M.E. Diamond, The role of spike timing in the coding of stimulus location in rat somatosensory cortex. Neuron, 2001. 29(3): p. 769-77.
19. Britten, K.H., M.N. Shadlen, W.T. Newsome, and J.A. Movshon, The analysis of visual motion: a comparison of neuronal and psychophysical performance. J Neurosci, 1992. 12(12): p. 4745-65.
20. Delorme, A. and S.J. Thorpe, Face identification using one spike per neuron: resistance to image degradations. Neural Netw, 2001. 14(6-7): p. 795-803.
21. Diesmann, M., M.O. Gewaltig, and A. Aertsen, Stable propagation of synchronous spiking in cortical neural networks. Nature, 1999. 402(6761): p. 529-33.
22. Marsalek, P., C. Koch, and J. Maunsell, On the relationship between synaptic input and spike output jitter in individual neurons. Proc Natl Acad Sci U S A, 1997. 94(2): p. 735-40.
23. Kistler, W.M. and W. Gerstner, Stable propagation of activity pulses in populations of spiking neurons. Neural Comp., 2002. 14: p. 987-997.
24. Zohary, E., M.N. Shadlen, and W.T. Newsome, Correlated neuronal discharge rate and its implications for psychophysical performance. Nature, 1994. 370(6485): p. 140-3.
25. Abbott, L.F. and P. Dayan, The effect of correlated variability on the accuracy of a population code. Neural Comput, 1999. 11(1): p. 91-101.
Learning Attractor Landscapes for Learning Motor Primitives

Auke Jan Ijspeert1,3∗, Jun Nakanishi2, and Stefan Schaal1,2
1University of Southern California, Los Angeles, CA 90089-2520, USA
2ATR Human Information Science Laboratories, Kyoto 619-0288, Japan
3EPFL, Swiss Federal Institute of Technology, Lausanne, Switzerland
ijspeert@usc.edu, jun@his.atr.co.jp, sschaal@usc.edu

Abstract

Many control problems take place in continuous state-action spaces, e.g., as in manipulator robotics, where the control objective is often defined as finding a desired trajectory that reaches a particular goal state. While reinforcement learning offers a theoretical framework to learn such control policies from scratch, its applicability to higher dimensional continuous state-action spaces remains rather limited to date. Instead of learning from scratch, in this paper we suggest to learn a desired complex control policy by transforming an existing simple canonical control policy. For this purpose, we represent canonical policies in terms of differential equations with well-defined attractor properties. By nonlinearly transforming the canonical attractor dynamics using techniques from nonparametric regression, almost arbitrary new nonlinear policies can be generated without losing the stability properties of the canonical system. We demonstrate our techniques in the context of learning a set of movement skills for a humanoid robot from demonstrations of a human teacher. Policies are acquired rapidly, and, due to the properties of well formulated differential equations, can be re-used and modified on-line under dynamic changes of the environment. The linear parameterization of nonparametric regression moreover lends itself to recognize and classify previously learned movement skills. Evaluations in simulations and on an actual 30 degree-of-freedom humanoid robot exemplify the feasibility and robustness of our approach.
1 Introduction

Learning control is formulated in one of the most general forms as learning a control policy u = π(x, t, w) that maps a state x, possibly in a time t dependent way, to an action u; the vector w denotes the adjustable parameters that can be used to optimize the policy. Since learning control policies (CPs) based on atomic state-action representations is rather time consuming and faces problems in higher dimensional and/or continuous state-action spaces, a current topic in learning control is to use higher level representations to achieve faster and more robust learning [1, 2]. In this paper we suggest a novel encoding for such higher level representations based on the analogy between CPs and differential equations: both formulations suggest a change of state given the current state of the system, and both usually encode a desired goal in form of an attractor state. Thus, instead of shaping the attractor landscape of a policy tediously from scratch by traditional methods of reinforcement learning, we suggest to start out with a differential equation that already encodes a rough form of an attractor landscape and to only adapt this landscape to become more suitable to the current movement goal. If such a representation can keep the policy linear in the parameters w, rapid learning can be accomplished, and, moreover, the parameter vector may serve to classify a particular policy. In the following sections, we will first develop our learning approach of shaping attractor landscapes by means of statistical learning, building on preliminary previous work [3, 4].1 Second, we will present a particular form of canonical CPs suitable for manipulator robotics, and finally, we will demonstrate how our methods can be used to classify movement and equip an actual humanoid robot with a variety of movement skills through imitation learning.

∗http://lslwww.epfl.ch/˜ijspeert/
2 Learning Attractor Landscapes We consider a learning scenario where the goal of control is to attain a particular attractor state, either formulated as a point attractor (for discrete movements) or as a limit cycle (for rhythmic movements). For point attractors, we require that the CP will reach the goal state with a particular trajectory shape, irrespective of the initial conditions — a tennis swing toward a ball would be a typical example of such a movement. For limit cycles, the goal is given as the trajectory shape of the limit cycle and needs to be realized from any start state, as for example, in a complex drumming beat hitting multiple drums during one period. We will assume that, as the seed of learning, we obtain one or multiple example trajectories, defined by positions and velocities over time. Using these samples, an asymptotically stable CP is to be generated, prescribing a desired velocity given a particular state2. Various methods have been suggested to solve such control problems in the literature. As the simplest approach, one could just use one of the demonstrated trajectories and track it as a desired trajectory. While this would mimic this one particular trajectory, and scaling laws could account for different start positions [5], the resultant control policy would require time as an explicit variable and thus become highly sensitive toward unforeseen perturbations in the environment that would disrupt the normal time flow. Spline-based approaches [6] have a similar problem. Recurrent neural networks were suggested as a possible alternative that can avoid explicit time indexing — the complexity of training these networks to obtain stable attractor landscapes, however, has prevented a widespread application so far. 
Finally, it is also possible to prime a reinforcement learning system with sample trajectories and pursue one of the established continuous state-action learning algorithms; investigations of such an approach, however, demonstrated rather limited efficiency [7]. In the next sections, we present an alternative and surprisingly simple solution to learning the control problem above.

1Portions of the work presented in this paper have been published in [3, 4]. We here extend these preliminary studies with an improvement and simplification of the rhythmic system, an integrated view of the interpretation of both the discrete and rhythmic CPs, the fitting of a complete alphabet of Graffiti characters, and an implementation of automatic allocation of centers of kernel functions for locally weighted learning.

2Note that we restrict our approach to purely kinematic CPs, assuming that the movement system is equipped with an appropriate feedback and feedforward controller that can accurately track the kinematic plans generated by our policies.

Table 1: Discrete and Rhythmic control policies. αz, βz, αv, βv, µ, σi and ci are positive constants. x0 is the start state of the discrete system in order to allow nonzero initial conditions. The design parameters of the discrete system are τ, the temporal scaling factor, and g, the goal position. The design parameters of the rhythmic system are ym, the baseline of the oscillation, τ, the period divided by 2π, and r0, the amplitude of oscillations. The parameters wi are fitted to a demonstrated trajectory using Locally Weighted Learning.
Discrete system:
  τẏ = z + (Σ_{i=1}^N Ψi wi^T ṽ) / (Σ_{i=1}^N Ψi)
  τż = αz(βz(g − y) − z)
  ṽ = [v]
  τv̇ = αv(βv(g − x) − v)
  τẋ = v
  Ψi = exp(−hi((x − x0)/(g − x0) − ci)²),  ci ∈ [0, 1]

Rhythmic system:
  τẏ = z + (Σ_{i=1}^N Ψi wi^T ṽ) / (Σ_{i=1}^N Ψi)
  τż = αz(βz(ym − y) − z)
  ṽ = [r cos φ, r sin φ]^T
  τφ̇ = 1
  τṙ = −µ(r − r0)
  Ψi = exp(−hi(mod(φ, 2π) − ci)²),  ci ∈ [0, 2π]

2.1 Dynamical systems for Discrete Movements

Assume we have a basic control policy (CP), for instance, instantiated by the second order attractor dynamics

  τż = αz(βz(g − y) − z),    τẏ = z + f    (1)

where g is a known goal state, αz, βz are time constants, τ is a temporal scaling factor (see below) and y, ẏ correspond to the desired position and velocity generated by the policy as a movement plan. For appropriate parameter settings and f = 0, these equations form a globally stable linear dynamical system with g as a unique point attractor. Could we insert a nonlinear function f in Eq.1 to change the rather trivial exponential convergence of y to allow more complex trajectories on the way to the goal? As such a change of Eq.1 enters the domain of nonlinear dynamics, an arbitrary complexity of the resulting equations can be expected. To the best of our knowledge, this has prevented research from employing generic learning in nonlinear dynamical systems so far. However, the introduction of an additional canonical dynamical system (x, v)

  τv̇ = αv(βv(g − x) − v),    τẋ = v    (2)

and the nonlinear function f

  f(x, v, g) = (Σ_{i=1}^N Ψi wi v) / (Σ_{i=1}^N Ψi),    Ψi = exp(−hi(x/g − ci)²)    (3)

can alleviate this problem. Eq.2 is a second order dynamical system similar to Eq.1, however, it is linear and not modulated by a nonlinear function, and, thus, its monotonic global convergence to g can be guaranteed with a proper choice of αv and βv.
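As a concrete check of the attractor property, the sketch below integrates Eqs. 1-3 with forward Euler. The gains, time step, and kernel parameters are illustrative assumptions (chosen for critical damping), not the authors' settings; with bounded weights the output y still converges to the goal g because the gating term v vanishes at the end of the movement.

```python
import math

def simulate_discrete_cp(g, weights, centers, widths,
                         tau=1.0, alpha_z=25.0, beta_z=6.25,
                         alpha_v=25.0, beta_v=6.25, dt=0.001, T=2.0):
    """Forward-Euler integration of the discrete CP (Eqs. 1-3).
    Returns the trajectory of the desired position y."""
    y = z = x = v = 0.0
    trajectory = []
    for _ in range(int(T / dt)):
        # Gaussian basis functions anchored in the phase variable x/g (Eq. 3)
        psi = [math.exp(-h * (x / g - c) ** 2)
               for h, c in zip(widths, centers)]
        f = v * sum(p * w for p, w in zip(psi, weights)) / (sum(psi) + 1e-12)
        z += dt / tau * alpha_z * (beta_z * (g - y) - z)   # Eq. 1
        y += dt / tau * (z + f)
        v += dt / tau * alpha_v * (beta_v * (g - x) - v)   # Eq. 2 (canonical)
        x += dt / tau * v
        trajectory.append(y)
    return trajectory

# Ten kernels spread over the phase interval [0, 1]; arbitrary bounded weights.
centers = [i / 9 for i in range(10)]
widths = [50.0] * 10
ys = simulate_discrete_cp(g=1.0,
                          weights=[20.0 * math.sin(3 * c) for c in centers],
                          centers=centers, widths=widths)
```

Even with the nonzero forcing weights, which bend the trajectory well away from simple exponential convergence, the final value of `ys` sits at the goal g = 1.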
Assuming that all initial conditions of the state variables x, v, y, z are initially zero, the quotient x/g ∈ [0, 1] can serve as a phase variable to anchor the Gaussian basis functions Ψi (characterized by a center ci and bandwidth hi), and v can act as a “gating term” in the nonlinear function (3) such that the influence of this function vanishes at the end of the movement. Assuming boundedness of the weights wi in Eq.3, it can be shown that the combined dynamical system (Eqs.1–3) asymptotically converges to the unique point attractor g. Given that f is a normalized basis function representation with linear parameterization, it is obvious that this choice of a nonlinearity allows applying a variety of learning algorithms to find the wi. For learning from a given sample trajectory, characterized by a trajectory ydemo(t), ẏdemo(t) and duration T, a supervised learning problem can be formulated with the target trajectory ftarget = τẏdemo − zdemo for Eq.1 (right), where zdemo is obtained by integrating Eq.1 (left) with ydemo instead of y. The corresponding goal state is g = ydemo(T) − ydemo(t = 0), i.e., the sample trajectory was translated to start at y = 0.

Figure 1: Examples of time evolution of the discrete CPs (left) and rhythmic CPs (right). The parameters wi have been adjusted to fit ẏdemo(t) = 10 sin(2πt) exp(−t²) for the discrete CPs and ẏdemo(t) = 2π cos(2πt) − 6π sin(6πt) for the rhythmic CPs.
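One minimal way to solve this supervised problem is a per-kernel weighted least squares, assuming a single weight per kernel; this is a simplification of the locally weighted learning the authors use (which also fits local slopes and allocates kernels automatically), not their actual algorithm.

```python
import math

def canonical_rollout(g, tau=1.0, alpha_v=25.0, beta_v=6.25, dt=0.001, T=2.0):
    """Phase x(t) and gating signal v(t) from the canonical system (Eq. 2)."""
    x = v = 0.0
    xs, vs = [], []
    for _ in range(int(T / dt)):
        v += dt / tau * alpha_v * (beta_v * (g - x) - v)
        x += dt / tau * v
        xs.append(x)
        vs.append(v)
    return xs, vs

def fit_weights(f_target, xs, vs, g, centers, widths):
    """Per-kernel weighted least squares: each w_i minimizes
    sum_t psi_i(t) * (f_target(t) - w_i * v(t))**2."""
    weights = []
    for c, h in zip(centers, widths):
        psi = [math.exp(-h * (x / g - c) ** 2) for x in xs]
        num = sum(p * v * f for p, v, f in zip(psi, vs, f_target))
        den = sum(p * v * v for p, v in zip(psi, vs)) + 1e-12
        weights.append(num / den)
    return weights

xs, vs = canonical_rollout(g=1.0)
centers = [i / 9 for i in range(10)]
widths = [50.0] * 10
f_target = [2.5 * v for v in vs]   # target exactly proportional to the gating signal
w = fit_weights(f_target, xs, vs, 1.0, centers, widths)
```

As a sanity check, when ftarget is exactly proportional to the gating signal, ftarget(t) = 2.5 v(t), every fitted weight comes out as 2.5 and the representation reproduces the target exactly.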
In order to make the nominal (i.e., assuming f = 0) dynamics of Eqs.1 and 2 span the duration T of the sample trajectory, the temporal scaling factor τ is adjusted such that the nominal dynamics achieves 95% convergence at t = T. For solving the function approximation problem, we chose a nonparametric regression technique from locally weighted learning (LWL) [8] as it allows us to determine the necessary number of basis functions, their centers ci, and bandwidths hi automatically — in essence, for every basis function Ψi, LWL performs a locally weighted regression of the training data to obtain an approximation of the tangent of the function to be approximated within the scope of the kernel, and a prediction for a query point is achieved by a Ψi-weighted average of the predictions of all local models. Moreover, as will be explained later, the parameters wi learned by LWL are also independent of the number of basis functions, such that they can be used robustly for categorization of different learned CPs. In summary, by anchoring a linear learning system with nonlinear basis functions in the phase space of a canonical dynamical system with guaranteed attractor properties, we are able to learn complex attractor landscapes of nonlinear differential equations without losing the asymptotic convergence to the goal state.

2.2 Extension to Limit Cycle Dynamics

The system above can be extended to limit cycle dynamics by replacing the canonical system (x, v) with, for instance, the following rhythmic system which has a stable limit cycle in terms of polar coordinates (φ, r):

  τφ̇ = 1,    τṙ = −µ(r − r0)    (4)

Similar to the discrete system, the rhythmic canonical system serves to provide both an amplitude signal ṽ = [r cos φ, r sin φ]^T and a phase variable mod(φ, 2π) to the basis functions Ψi of the control policy (z, y):

  τż = αz(βz(ym − y) − z),    τẏ = z + (Σ_{i=1}^N Ψi wi^T ṽ) / (Σ_{i=1}^N Ψi)    (5)

where ym is an anchor point for the oscillatory trajectory.
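A forward-Euler sketch of the rhythmic system (Eqs. 4-5) follows; all gains are illustrative assumptions, and the 2-vector weights wi are split into cosine and sine components to match ṽ = [r cos φ, r sin φ]^T. With the forcing switched off (zero weights), r relaxes to r0 and y relaxes to the baseline ym, confirming the stability of the unforced policy; with nonzero weights, y settles into a sustained oscillation around ym.

```python
import math

def simulate_rhythmic_cp(weights_cos, weights_sin, centers, widths,
                         y_m=0.5, r0=1.0, tau=0.25, mu=1.0,
                         alpha_z=25.0, beta_z=6.25, dt=0.001, T=3.0):
    """Forward-Euler integration of the rhythmic CP (Eqs. 4-5)."""
    phi, r, y, z = 0.0, 0.0, 0.0, 0.0
    ys = []
    for _ in range(int(T / dt)):
        phi += dt / tau                      # Eq. 4: tau * dphi/dt = 1
        r += dt / tau * (-mu * (r - r0))     # Eq. 4: amplitude relaxes to r0
        v1, v2 = r * math.cos(phi), r * math.sin(phi)
        psi = [math.exp(-h * ((phi % (2 * math.pi)) - c) ** 2)
               for h, c in zip(widths, centers)]
        f = sum(p * (wc * v1 + ws * v2)
                for p, wc, ws in zip(psi, weights_cos, weights_sin))
        f /= sum(psi) + 1e-12
        z += dt / tau * alpha_z * (beta_z * (y_m - y) - z)   # Eq. 5
        y += dt / tau * (z + f)
        ys.append(y)
    return ys, r

centers = [2 * math.pi * i / 10 for i in range(10)]
widths = [2.0] * 10
ys, r_final = simulate_rhythmic_cp([0.0] * 10, [0.0] * 10, centers, widths)
```

With zero weights the final amplitude `r_final` sits at r0 and `ys` ends at the baseline y_m; the period of any forced oscillation is 2πτ, set entirely by the canonical phase equation.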
Table 1 summarizes the proposed discrete and rhythmic CPs, and Figure 1 shows exemplary time evolutions of the complete systems.

2.3 Special Properties of Control Policies based on Dynamical Systems

Spatial and Temporal Invariance. An interesting property of both discrete and rhythmic CPs is that they are spatially and temporally invariant. Scaling of the goal g for the discrete CP and of the amplitude r0 for the rhythmic CP does not affect the topology of the attractor landscape. Similarly, the period (for the rhythmic system) and duration (for the discrete system) of the trajectory y is directly determined by the parameter τ. This means that the amplitude and durations/periods of learned patterns can be independently modified without affecting the qualitative shape of trajectory y. In section 3, we will exploit these properties to reuse a learned movement (such as a tennis swing, for instance) in novel conditions (e.g., toward new ball positions).

Robustness against Perturbations. When considering applications of our approach to physical systems, e.g., robots and humanoids, interactions with the environment may require an on-line modification of the policy. An obstacle can, for instance, block the trajectory of the robot, in which case large discrepancies between desired positions generated by the control policy and actual positions of the robot will occur. As outlined in [3], the dynamical system formulation allows feeding back an error term between actual and desired positions into the CPs, such that the time evolution of the policy is smoothly paused during a perturbation, i.e., the desired position y is modified to remain close to the actual position ỹ. As soon as the perturbation stops, the CP rapidly resumes performing the (time-delayed) planned trajectory. Note that other (task-specific) ways to cope with perturbations can be designed. Such on-line adaptations are one of the most interesting properties of using autonomous differential equations for CPs.
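The spatial-invariance claim can be verified numerically for the discrete CP: because Ψi depends only on the ratio x/g and the rest of the system is linear in g, scaling the goal scales the entire trajectory y(t) proportionally while preserving its shape. The simulator below is a minimal Euler sketch with assumed gains, not the authors' code.

```python
import math

def discrete_cp(g, weights, centers, widths, tau=1.0,
                alpha_z=25.0, beta_z=6.25, alpha_v=25.0, beta_v=6.25,
                dt=0.001, T=2.0):
    """Minimal Euler integration of Eqs. 1-3 (illustrative gains)."""
    y = z = x = v = 0.0
    out = []
    for _ in range(int(T / dt)):
        psi = [math.exp(-h * (x / g - c) ** 2) for h, c in zip(widths, centers)]
        f = v * sum(p * w for p, w in zip(psi, weights)) / (sum(psi) + 1e-12)
        z += dt / tau * alpha_z * (beta_z * (g - y) - z)
        y += dt / tau * (z + f)
        v += dt / tau * alpha_v * (beta_v * (g - x) - v)
        x += dt / tau * v
        out.append(y)
    return out

centers = [i / 9 for i in range(10)]
widths = [50.0] * 10
w = [10.0 * math.cos(2 * c) for c in centers]
y1 = discrete_cp(1.0, w, centers, widths)
y2 = discrete_cp(2.0, w, centers, widths)   # same weights, doubled goal
# y2(t) tracks 2 * y1(t) up to numerical error: same shape, rescaled.
```

Temporal invariance works the same way: halving τ in this sketch replays the identical shape in half the time, which is the basis for the speed modulations used in section 3.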
Movement Recognition. Given the temporal and spatial invariance of our policy representation, trajectories that are topologically similar tend to be fit by similar parameters wi, i.e., similar trajectories at different speeds and/or different amplitudes will result in similar wi. In section 3.3, we will use this property to demonstrate the potential of using the CPs for movement recognition.

3 Experimental Evaluations

3.1 Learning of Rhythmic Control Policies by Imitation

We tested the proposed CPs in a learning-by-demonstration task with a humanoid robot. The robot is a 1.9-meter-tall, 30-DOF hydraulic anthropomorphic robot with legs, arms, a jointed torso, and a head [9]. We recorded trajectories performed by a human subject using a joint-angle recording system, the Sarcos Sensuit (see Figure 2, top). The joint-angle trajectories are fitted by the CPs, with one CP per degree of freedom (DOF). The CPs are then used to replay the movement on the humanoid robot, using an inverse dynamics controller to track the desired trajectories generated by the CPs. The actual positions ỹ of each DOF are fed back into the CPs in order to take perturbations into account. Using the joint-angle recording system, we recorded a set of rhythmic movements such as tracing a figure 8 in the air, or a drumming sequence on a bongo (i.e., without drumming sticks). Six DOFs were recorded for both arms (three at the shoulder, one at the elbow, and two at the wrist). An exemplary movement and its replication by the robot are shown in Figure 2 (top). Figure 2 (left) shows the joint trajectories over one period of an exemplary drumming beat. Demonstrated and learned trajectories are superposed. For the learning, the base frequency was extracted by hand, so as to provide the parameter τ to the rhythmic CP. Once a rhythmic movement has been learned by the CP, it can be modulated in several ways.
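The fitting of a demonstrated trajectory reduces to one weighted linear regression per kernel. The sketch below is a simplified version under our own assumptions (fixed kernel placement and bandwidth, scalar input signal v); the full LWL algorithm of [8] additionally adapts the number of kernels.

```python
import numpy as np

def fit_lwr_weights(x, v, f_target, centers, bandwidth):
    """One locally weighted regression per kernel: each w_i solves
    min sum_t Psi_i(x_t) * (f_t - w_i v_t)^2 independently, so adding or
    removing kernels leaves the other w_i unchanged."""
    w = np.zeros(len(centers))
    for i, c in enumerate(centers):
        psi = np.exp(-bandwidth * (x - c) ** 2)
        w[i] = np.sum(psi * v * f_target) / (np.sum(psi * v * v) + 1e-10)
    return w

def predict_forcing(x, v, w, centers, bandwidth):
    """Psi-weighted average of the local models' predictions."""
    psi = np.exp(-bandwidth * np.subtract.outer(x, centers) ** 2)
    return (psi * v[:, None] * w[None, :]).sum(axis=1) / (psi.sum(axis=1) + 1e-10)
```

Because each kernel carries its own cost function, the learned weights are directly comparable across policies, a property exploited for recognition in section 3.3.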
Manipulating r0 and τ for all DOFs amounts to simultaneously modulating the amplitude and period of all DOFs, while keeping the same phase relation between DOFs. This might be particularly useful for a drumming task, in order to replay the same beat pattern at different speeds and/or amplitudes. Alternatively, the r0 and τ parameters can be modulated independently for the DOFs of each arm, in order to change the beat pattern (doubling the frequency of one arm, for instance). Figure 2 (right) illustrates different modulations which can be generated by the rhythmic CPs. For reasons of clarity, only one DOF is shown.

Figure 2: Top: Humanoid robot learning a figure-8 movement from a human demonstration. Left: Recorded drumming movement performed with both arms (6 DOFs per arm). The dotted lines and continuous lines correspond to one period of the demonstrated and learned trajectories, respectively. Right: Modification of the learned rhythmic pattern (flexion/extension of the right elbow, R_EB). A: trajectory learned by the rhythmic CP; B: temporary modification with r̃0 = 2r0; C: τ̃ = τ/2; D: ỹm = ym + 1 (dotted line), where r̃0, τ̃, and ỹm correspond to modified parameters between t = 3s and t = 7s. Movies of the human subject and the humanoid robot can be found at http://lslwww.epfl.ch/˜ijspeert/humanoid.html.
The rhythmic CP can smoothly modulate the amplitude, frequency, and baseline of the oscillations.

3.2 Learning of Discrete Control Policies by Imitation

In this experiment, the task for the robot was to learn tennis forehand and backhand swings demonstrated by a human wearing the joint-angle recording system. Once a particular swing has been learned, the robot is able to repeat the swing motion toward different Cartesian targets, by providing new goal positions g to the CPs for the different DOFs. Using a two-camera system, the position of the ball is given to an inverse kinematics algorithm which computes these new goals in joint space. When the new ball positions are not too distant from the original Cartesian target, the modified trajectories reach the ball with swing motions very similar to those used for the demonstration.

Figure 3: Humanoid robot learning a forehand swing from a human demonstration.

3.3 Movement Recognition using the Discrete Control Policies

Our learning algorithm, locally weighted learning [8], automatically sets the number of kernel functions and their centers ci and widths hi depending on the complexity of the function to be approximated, with more kernel functions for highly nonlinear details of the movement. An interesting aspect of locally weighted regression is that the regression parameters wi of each kernel i do not depend on the other kernels, since regression is based on a separate cost function for each kernel. This means that kernel functions can be added or removed without affecting the parameters wi of the other kernels. Here we use this feature to perform movement recognition within a large variety of trajectories, based on a small subset of kernels at fixed locations ci in phase space. These fixed kernels are common to the fits of all the trajectories, in addition to the kernels automatically added by the LWL algorithm. The stability of their parameters wi w.r.t.
other kernels generated by LWL makes them well-suited for comparing qualitative trajectory shapes. To illustrate the possibility of using the CPs for movement recognition (i.e., recognition of spatiotemporal patterns, not just spatial patterns as in traditional character recognition), we carried out a simple task of fitting trajectories performed by a human user when drawing two-dimensional, single-stroke patterns. The 26 letters of the Graffiti alphabet used in hand-held computers were chosen. These characters are drawn in a single stroke and are fed as a two-dimensional trajectory (x(t), y(t)) to be fitted by our system. Five examples of each character were presented (see Figure 4 for four examples). Fixed sets of five kernels per DOF were set aside for movement recognition. The correlation wa^T wb / (|wa||wb|) between the parameter vectors wa and wb of characters a and b can be used to classify movements with similar velocity profiles (Figure 4, right). For instance, for the 5 instances each of the N, I, P, and S characters, the correlation is systematically higher with the four other examples of the same character. These similarities in weight space can therefore serve as a basis for recognizing demonstrated movements by fitting them and comparing the fitted parameters wi with those of previously learned policies in memory. In this example, a simple one-nearest-neighbor classifier in weight space would serve the purpose. Using such a classifier within the whole alphabet (5 instances of each letter), we obtained an 84% recognition rate (i.e., 110 out of the 130 instances had their highest correlation with an instance of the same letter). Further studies are required to evaluate the quality of recognition on larger training and test sets; what we wanted to demonstrate is the ability to perform recognition without any specific system tuning or sophisticated classification algorithm.
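The correlation measure and the one-nearest-neighbor classification in weight space take only a few lines; the function names and the toy weight vectors below are our own.

```python
import numpy as np

def weight_correlation(wa, wb):
    """Cosine correlation wa^T wb / (|wa| |wb|) between two fitted
    parameter vectors (Section 3.3)."""
    wa = np.asarray(wa, float).ravel()
    wb = np.asarray(wb, float).ravel()
    return float(wa @ wb / (np.linalg.norm(wa) * np.linalg.norm(wb)))

def nearest_neighbor_label(w_query, library):
    """1-NN classification in weight space: return the label of the stored
    policy whose weights correlate most strongly with the query."""
    return max(library, key=lambda lbl: weight_correlation(w_query, library[lbl]))
```

A demonstrated movement is thus recognized by fitting it with the CP and looking up the stored policy with the highest weight-space correlation.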
4 Conclusion

Based on the analogy between autonomous differential equations and control policies, we presented a novel approach to learning control policies for basic movement skills by shaping the attractor landscape of nonlinear differential equations with statistical learning techniques. To the best of our knowledge, the presented approach is the first realization of a generic learning system for nonlinear dynamical systems that can guarantee basic stability and convergence properties of the learned nonlinear systems. We demonstrated the applicability of the suggested techniques by learning various movement skills for a complex humanoid robot by imitation learning, and illustrated the usefulness of the learned parameterization for recognition and classification of movement skills.

Figure 4: Left: Examples of two-dimensional trajectories fitted by the CPs. The demonstrated and fitted trajectories are shown with dotted and continuous lines, respectively. Right: Correlation between the weight vectors of the 20 characters (5 of each letter) fitted by the system. The gray scale is proportional to the correlation, with black corresponding to a correlation of +1 (max. correlation) and white to a correlation of 0 or smaller.
Future work will consider (1) learning of multidimensional control policies without assuming independence between the individual dimensions, and (2) the suitability of the linear parameterization of the control policies for reinforcement learning. Acknowledgments This work was made possible by support from the US National Science Foundation (Awards 9710312 and 0082995), the ERATO Kawato Dynamic Brain Project funded by the Japan Science and Technology Corporation, the ATR Human Information Science Laboratories, and Communications Research Laboratory (CRL). References [1] R. Sutton and A.G. Barto. Reinforcement learning: an introduction. MIT Press, 1998. [2] F.A. Mussa-Ivaldi. Nonlinear force fields: a distributed system of control primitives for representing and learning movements. In IEEE International Symposium on Computational Intelligence in Robotics and Automation, pages 84–90. IEEE, Computer Society, Los Alamitos, 1997. [3] A.J. Ijspeert, J. Nakanishi, and S. Schaal. Movement imitation with nonlinear dynamical systems in humanoid robots. In IEEE International Conference on Robotics and Automation (ICRA2002), pages 1398–1403. 2002. [4] A.J. Ijspeert, J. Nakanishi, and S. Schaal. Learning rhythmic movements by demonstration using nonlinear oscillators. In Proceedings of the IEEE/RSJ Int. Conference on Intelligent Robots and Systems (IROS2002), pages 958–963. 2002. [5] S. Kawamura and N. Fukao. Interpolation for input torque patterns obtained through learning control. In Proceedings of The Third International Conference on Automation, Robotics and Computer Vision (ICARCV’94). 1994. [6] H. Miyamoto, S. Schaal, F. Gandolfo, Y. Koike, R. Osu, E. Nakano, Y. Wada, and M. Kawato. A kendama learning robot based on bi-directional theory. Neural Networks, 9:1281–1302, 1996. [7] S. Schaal. Learning from demonstration. In M. C. Mozer, M. Jordan, and T. Petsche, editors, Advances in Neural Information Processing Systems 9, pages 1040–1046. Cambridge, MA, MIT Press, 1997. 
[8] S. Schaal and C.G. Atkeson. Constructive incremental learning from only local information. Neural Computation, 10(8):2047–2084, 1998. [9] C. G. Atkeson, J. Hale, M. Kawato, S. Kotosaka, F. Pollick, M. Riley, S. Schaal, S. Shibata, G. Tevatia, A. Ude, and S. Vijayakumar. Using humanoid robots to study human behavior. IEEE Intelligent Systems, 15:46–56, 2000.
2002
“Name That Song!”: A Probabilistic Approach to Querying on Music and Text Eric Brochu Department of Computer Science University of British Columbia Vancouver, BC, Canada ebrochu@cs.ubc.ca Nando de Freitas Department of Computer Science University of British Columbia Vancouver, BC, Canada nando@cs.ubc.ca Abstract We present a novel, flexible statistical approach for modelling music and text jointly. The approach is based on multi-modal mixture models and maximum a posteriori estimation using EM. The learned models can be used to browse databases with documents containing music and text, to search for music using queries consisting of music and text (lyrics and other contextual information), to annotate text documents with music, and to automatically recommend or identify similar songs. 1 Introduction Variations on “name that song”-types of games are popular on radio programs. DJs play a short excerpt from a song and listeners phone in to guess the name of the song. Of course, callers often get it right when DJs provide extra contextual clues (such as lyrics, or a piece of trivia about the song or band). We are attempting to reproduce this ability in the context of information retrieval (IR). In this paper, we present a method for querying with words and/or music. We focus on monophonic and polyphonic musical pieces of known structure (MIDI files, full music notation, etc.). Retrieving these pieces in multimedia databases, such as the Web, is a problem of growing interest [1, 2]. A significant step was taken by Downie [3], who applied standard text IR techniques to retrieve music by, initially, converting music to text format. Most research (including [3]) has, however, focused on plain music retrieval. To the best of our knowledge, there has been no attempt to model text and music jointly. We propose a joint probabilistic model for documents with music and/or text. This model is simple, easily extensible, flexible and powerful. 
It allows users to query multimedia databases using text and/or music as input. It is well-suited for browsing applications as it organizes the documents into “soft” clusters. The document of highest probability in each cluster serves as a music thumbnail for automated music summarisation. The model allows one to query with an entire text document to automatically annotate the document with musical pieces. It can be used to automatically recommend or identify similar songs. Finally, it allows for the inclusion of different types of text, including website content, lyrics, and meta-data such as hyper-text links. The interested reader may further wish to consult [4], in which we discuss an application of our model to the problem of jointly modelling music, as well as text and images.

2 Model specification

The training data consists of documents with text (lyrics or information about the song) and musical scores in GUIDO notation [5]. (GUIDO is a powerful language for representing musical scores in an HTML-like notation. MIDI files, plentiful on the World Wide Web, can be easily converted to this format.) We model the data with a Bayesian multi-modal mixture model. Words and scores are assumed to be conditionally independent given the mixture component label. We model musical scores with first-order Markov chains, in which each state corresponds to a note, rest, or the start of a new voice. Notes’ pitches are represented by the interval change (in semitones) from the previous note, rather than by absolute pitch, so that a score or query transposed to a different key will still have the same Markov chain. Rhythm is similarly represented as a duration relative to that of the previous note. Rest states are represented similarly, save that pitch is not represented. See Figure 1 for an example. Polyphonic scores are represented by chaining the beginning of a new voice to the end of a previous one.
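The transposition-invariant interval encoding can be illustrated for a monophonic pitch sequence. The helper names below are our own, and durations, rests, and the new-voice state are omitted for brevity.

```python
from collections import Counter

def interval_states(pitches):
    """Map a monophonic pitch sequence (MIDI note numbers) to the
    interval-based states of the paper's Markov chain: the first note gets a
    distinguished 'firstnote' state, and every later note is represented by
    its semitone change from the previous note, so the encoding is invariant
    to transposition."""
    states = ["firstnote"]
    states += [p2 - p1 for p1, p2 in zip(pitches, pitches[1:])]
    return states

def transition_counts(states):
    """Sparse transition frequency table: counts[(a, b)] is the number of
    observed transitions from state a to state b."""
    return Counter(zip(states, states[1:]))
```

A score and the same score transposed to a different key produce identical state sequences, and hence identical transition tables.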
In order to ensure that the first note in each voice appears in both the row and column of the Markov transition matrix, a special “new voice” state with no interval or rhythm serves as a dummy state marking the beginning of a new voice. The first note of a voice has a distinguishing “first note” interval value, and the first note or rest has a duration value of one.

[ *3/4 b&1*3/16 b1/16 c#2*11/16 b&1/16 a&1*3/16 b&1/16 f#1/2 ]

Figure 1: Sample melody – the opening notes to “The Yellow Submarine” by The Beatles – in different notations. From top: GUIDO notation, standard musical notation (generated automatically from GUIDO notation), and as a series of states in a first-order Markov chain (also generated automatically from GUIDO notation). The Markov-chain states are, in order: 0 new voice, 1 rest, 2 first note, 3 interval +1, 4 interval +2, 5 interval −2, 6 interval −2, 7 interval +3, 8 interval −5, each state paired with a duration value.

The Markov chain representation of a piece of music is then mapped to a sparse transition frequency table M_d, where M_d(a, b) denotes the number of times we observe the transition from state a to state b in document d. We use m_{d,1} to denote the initial state of the Markov chain. The associated text is modeled using a standard sparse term frequency vector W_d, where W_d(t) denotes the number of times word t appears in document d. For notational simplicity, we group the music and text variables as x_d = {M_d, W_d}. In essence, this Markovian approach is akin to a text bigram model, save that the states are transitions between musical notes and rests rather than words. Our multi-modal mixture model is as follows:

p(x_d | θ) = Σ_{c=1..C} p(c) [Π_a p(a | c)^{δ_a(m_{d,1})}] [Π_{a,b} p(b | a, c)^{M_d(a,b)}] [Π_t p(t | c)^{W_d(t)}]    (1)

where θ = {p(c), p(a | c), p(b | a, c), p(t | c)} encompasses all the model parameters, and where δ_a(m_{d,1}) = 1 if the first entry of M_d belongs to state a and is 0 otherwise. The three-dimensional matrix p(b | a, c) denotes the estimated probability of transitioning from state a to state b in cluster c, and the matrix p(a | c) denotes the initial probabilities of being in state a, given membership in cluster c.
The vector p(c) denotes the probability of each cluster. The matrix p(t | c) denotes the probability of word t in cluster c. The mixture model is defined on the standard probability simplex {p(c) ≥ 0 for all c and Σ_{c=1..C} p(c) = 1}. We introduce the latent allocation variables z_d ∈ {1, ..., C} to indicate that a particular sequence x_d belongs to a specific cluster c. These indicator variables {z_d; d = 1, ..., D} correspond to an i.i.d. sample from the distribution p(z_d = c) = p(c).

This simple model is easy to extend. For browsing applications, we might prefer a hierarchical structure with levels ℓ:

p(x_d | θ) = Σ_ℓ Σ_c p(ℓ) p(c | ℓ) p(x_d | c, ℓ)    (2)

This is still a multinomial model, but by applying appropriate parameter constraints we can produce a tree-like browsing structure [6]. It is also easy to formulate the model in terms of aspects and clusters as suggested in [7, 8].

2.1 Prior specification

We follow a hierarchical Bayesian strategy, where the unknown parameters θ and the allocation variables z are regarded as being drawn from appropriate prior distributions. We acknowledge our uncertainty about the exact form of the prior by specifying it in terms of some unknown parameters (hyperparameters). The allocation variables z_d are assumed to be drawn from a multinomial distribution with parameters p(1), ..., p(C). We place a conjugate Dirichlet prior on the mixing coefficients, Dir(α). Similarly, we place Dirichlet prior distributions Dir(β) on each initial-state vector p(· | c), Dir(γ) on each transition row p(· | a, c), and Dir(ε) on each word vector p(· | c), and assume that these priors are independent. The posterior for the allocation variables will be required. It can be obtained easily using Bayes’ rule:

p(z_d = c | x_d, θ) = p(c) [Π_a p(a | c)^{δ_a(m_{d,1})}] [Π_{a,b} p(b | a, c)^{M_d(a,b)}] [Π_t p(t | c)^{W_d(t)}] / Σ_{c′=1..C} p(c′) [Π_a p(a | c′)^{δ_a(m_{d,1})}] [Π_{a,b} p(b | a, c′)^{M_d(a,b)}] [Π_t p(t | c′)^{W_d(t)}]    (3)

3 Computation

The parameters of the mixture model cannot be computed analytically unless one knows the mixture indicator variables.
We have to resort to numerical methods. One can implement a Gibbs sampler to compute the parameters and allocation variables. This is done by sampling the parameters from their Dirichlet posteriors and the allocation variables from their multinomial posterior. However, this algorithm is too computationally intensive for the applications we have in mind. Instead we opt for expectation maximization (EM) algorithms to compute the maximum likelihood (ML) and maximum a posteriori (MAP) point estimates of the mixture model.

3.1 Maximum likelihood estimation with the EM algorithm

After initialization, the EM algorithm for ML estimation iterates between the following two steps:

1. E step: Compute the expectation of the complete log-likelihood with respect to the distribution of the allocation variables, Q_ML(θ) = E[log p(x, z | θ) | x, θ_old], where θ_old represents the value of the parameters at the previous time step.

2. M step: Maximize over the parameters: θ_new = arg max_θ Q_ML(θ).

The Q_ML function expands to

Q_ML(θ) = Σ_d Σ_c ξ_{d,c} log ( p(c) [Π_a p(a | c)^{δ_a(m_{d,1})}] [Π_{a,b} p(b | a, c)^{M_d(a,b)}] [Π_t p(t | c)^{W_d(t)}] ),

where ξ_{d,c} denotes p(z_d = c | x_d, θ_old). In the E step, we have to compute ξ_{d,c} using equation (3). The corresponding M step requires that we maximize Q_ML subject to the constraints that all probabilities for the parameters sum up to 1. This constrained maximization can be carried out by introducing Lagrange multipliers. The resulting parameter estimates are:

p(c) = (1/D) Σ_d ξ_{d,c}    (5)

p(a | c) = Σ_d δ_a(m_{d,1}) ξ_{d,c} / Σ_d ξ_{d,c}    (6)

p(b | a, c) = Σ_d M_d(a, b) ξ_{d,c} / Σ_{b′} Σ_d M_d(a, b′) ξ_{d,c}    (7)

p(t | c) = Σ_d W_d(t) ξ_{d,c} / Σ_{t′} Σ_d W_d(t′) ξ_{d,c}    (8)

3.2 Maximum a posteriori estimation with the EM algorithm

The EM formulation for MAP estimation is straightforward. One simply has to augment the objective function in the M step, Q_ML, by adding to it the log prior densities. That is, the MAP objective function is Q_MAP(θ) = Q_ML(θ) + log p(θ | α, β, γ, ε).
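The EM iteration can be sketched for the text component alone; the Markov-chain factors contribute analogous per-row multinomial updates driven by the same responsibilities. The implementation details below, including the small smoothing constants that guard against empty clusters, are our own.

```python
import numpy as np

def em_multinomial_mixture(W, C, n_iter=100, seed=0):
    """EM for a mixture of multinomials over sparse count vectors
    (here the term-frequency part W, shape D x V). Returns the cluster
    priors p(c), word probabilities p(t|c), and responsibilities."""
    rng = np.random.default_rng(seed)
    D, V = W.shape
    pc = np.full(C, 1.0 / C)
    pt = rng.dirichlet(np.ones(V), size=C)          # p(t|c), shape C x V
    for _ in range(n_iter):
        # E step: responsibilities xi_{d,c} = p(z_d = c | x_d), up to
        # normalization, computed in log space for stability
        log_r = np.log(pc)[None, :] + W @ np.log(pt).T
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M step: normalized expected counts
        pc = r.sum(axis=0) / D
        pc = (pc + 1e-12) / (1.0 + C * 1e-12)       # keep clusters alive
        pt = (r.T @ W) + 1e-10
        pt /= pt.sum(axis=1, keepdims=True)
    return pc, pt, r
```

The MAP variant only changes the M step, adding the Dirichlet pseudo-counts from the log prior to the expected counts before normalizing.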
" old %  ' Q "$# "    6 ML %     The MAP parameter estimates are:   6 & (' 8 % <   '3   <   e  & e ' C % CIH (9)      6 )  ' 8 % <  ' 5     ,'3   <    e  )  e  ' C+* % <  '    (10)      "  6 ,     ' 8 % <           <    e  ,    e  ' C * % <     <  '    '3   (11)  ]    6  ' 8 % <    .  3   < . e   e  ' C  % <    '3   (12) CLUSTER SONG    2 Moby – Porcelain 1 2 Nine Inch Nails – Terrible Lie 1 2 other – ’Addams Family’ theme 1 ... ... ... 4 J. S. Bach – Invention #1 1 4 J. S. Bach – Invention #8 1 4 J. S. Bach – Invention #15 1 4 The Beatles – Yellow Submarine 0.9975 ... ... ... 6 other – ’Wheel of Fortune’ theme 1 ... ... ... 7 The Beatles – Taxman 1 7 The Beatles – Got to Get You Into My Life 0.7247 7 The Cure – Saturday Night 1 ... ... ... 9 R.E.M – Man on the Moon 1 9 Soft Cell – Tainted Love 1 9 The Beatles – Got to Get You Into My Life 0.2753 Figure 2: Representative probabilistic cluster allocations using MAP estimation. These expressions can also be derived by considering the posterior modes and by replacing the cluster indicator variable with its posterior estimate    . This observation opens up room for various stochastic and deterministic ways of improving EM. 4 Experiments To test the model with text and music, we clustered a database of musical scores with associated text documents. The database is composed of various types of musical scores – jazz, classical, television theme songs, and contemporary pop music – as well as associated text files. The scores are represented in GUIDO notation. The associated text files are a song’s lyrics, where applicable, or textual commentary on the score for instrumental pieces, all of which were extracted from the World Wide Web. The experimental database contains 100 scores, each with a single associated text document. There is nothing in the model, however, that requires this one-to-one association of text documents and scores – this was done solely for testing simplicity and efficiency. 
In a deployment such as the World Wide Web, one would routinely expect one-to-many or many-to-many mappings between the scores and text. We carried out ML and MAP estimation with EM. The Dirichlet hyper-parameters were set to α = 1, β = 10, and γ = 10. The MAP approach resulted in sparser (regularised), more coherent clusters. Figure 2 shows some representative cluster probability assignments obtained with MAP estimation. By and large, the MAP clusters are intuitive. The 15 pieces by J. S. Bach each have very high probabilities of membership in the same cluster. A few curious anomalies exist. The Beatles’ song The Yellow Submarine is included in the same cluster as the Bach pieces, though all the other Beatles songs in the database are assigned to other clusters.
4.2 Precision and recall

We evaluated our retrieval system with randomly generated queries. A query is composed of a random series of 1 to 5 note transitions and 1 to 5 words. We then determine the actual number of matches N in the database, where a match is defined as a song in which every queried transition and word occurs with a frequency of 1 or greater. In order to avoid skewing the results unduly, we reject any query that has no matches or an excessively large number of matches. To perform a query, we simply sample probabilistically without replacement from the clusters. The probability of sampling from each cluster, p(c | query), is computed using equation 3. If a cluster contains no items or later becomes empty, it is assigned a sampling probability of zero, and the probabilities of the remaining clusters are re-normalized. In each iteration i, a cluster is selected, and the matching criteria are applied against each piece of music that has been assigned to that cluster until a match is found. If no match is found, an arbitrary piece is selected. The selected piece is returned as the rank-i result. Once all the matches have been returned, we compute the standard precision-recall curve [9], as shown in Figure 4. Our querying method enjoys high precision up to a moderately high level of recall, and experiences a relatively modest deterioration of precision thereafter.

Figure 4: Precision-recall curve showing average results, over 1000 randomly-generated queries, combining music and text matching criteria.

By choosing clusters before matching, we overcome the polysemy problem. For example, river banks and money banks appear in separate clusters. We also deal with synonymy, since automobiles and cars have a high probability of belonging to the same clusters.

4.3 Association

The probabilistic nature of our approach allows us the flexibility to use our techniques and database for tasks beyond traditional querying. One of the more promising avenues of exploration is associating documents with each other probabilistically.
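The standard precision-recall curve [9] used in the evaluation above follows directly from the ranked result list; a minimal sketch, with set membership as the match criterion and names of our own choosing:

```python
def precision_recall_curve(ranked, relevant):
    """After each retrieved item, record (recall, precision):
    precision = hits / items retrieved so far,
    recall    = hits / total number of relevant items."""
    hits, points = 0, []
    for k, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
        points.append((hits / len(relevant), hits / k))
    return points
```

Averaging such curves over many random queries yields a plot like Figure 4.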
This could be used, for example, to find suitable songs for web sites or presentations (matching on text), or for recommending songs similar to one a user enjoys (matching on scores). Given an input document x_d, we first cluster it by finding the most likely cluster as determined by computing p(c | x_d) (equation 3). Input documents containing text or music only can be clustered using only those components of the database. Input documents that combine text and music are clustered using all the data. We can then find the closest association by computing the distance from the input document to the other document vectors in the cluster, using a similarity metric such as Euclidean distance, or cosine measures after carrying out latent semantic indexing [10]. A few selected examples of associations we found are shown in Figure 5. The results are often reasonable, though unexpected behavior occasionally occurs.

Figure 5: The results of associating songs in the database with other text and/or musical input. The input is clustered probabilistically and then associated with the existing song that has the least Euclidean distance in that cluster. The association of The Waste Land with The Cure’s thematically similar One Hundred Years is likely due to the high co-occurrence of relatively uncommon words such as water, death, and year(s).

INPUT                                               CLOSEST MATCH
J. S. Bach – Toccata and Fugue in D Minor (score)   J. S. Bach – Invention #5
Nine Inch Nails – Closer (score & lyrics)           Nine Inch Nails – I Do Not Want This
T. S. Eliot – The Waste Land (text poem)            The Cure – One Hundred Years

5 Conclusions

We feel that the probabilistic approach to querying on music and text presented here is powerful, flexible, and novel, and suggests many interesting areas of future research. In the future, we should be able to incorporate audio by extracting suitable features from the signals. This will permit querying by singing, humming, or via recorded music.
There are a number of ways of combining our method with images [6, 4], opening up room for novel applications in multimedia [11].

Acknowledgments

We would like to thank Kobus Barnard, J. Stephen Downie, Holger Hoos and Peter Carbonetto for their advice and expertise in preparing this paper.

References

[1] D Huron and B Aarden. Cognitive issues and approaches in music information retrieval. In S Downie and D Byrd, editors, Music Information Retrieval. 2002.

[2] J Pickens. A comparison of language modeling and probabilistic text information retrieval approaches to monophonic music retrieval. In International Symposium on Music Information Retrieval, 2000.

[3] J S Downie. Evaluating a Simple Approach to Music Information Retrieval: Conceiving Melodic N-Grams as Text. PhD thesis, University of Western Ontario, 1999.

[4] E Brochu, N de Freitas, and K Bao. The sound of an album cover: Probabilistic multimedia and IR. In C M Bishop and B J Frey, editors, Ninth International Workshop on Artificial Intelligence and Statistics, Key West, Florida, 2003. To appear.

[5] H H Hoos, K A Hamel, K Renz, and J Kilian. Representing score-level music using the GUIDO music-notation format. Computing in Musicology, 12, 2001.

[6] K Barnard and D Forsyth. Learning the semantics of words and pictures. In International Conference on Computer Vision, volume 2, pages 408–415, 2001.

[7] T Hofmann. Probabilistic latent semantic analysis. In Uncertainty in Artificial Intelligence, 1999.

[8] D M Blei, A Y Ng, and M I Jordan. Latent Dirichlet allocation. In T G Dietterich, S Becker, and Z Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press.

[9] R Baeza-Yates and B Ribeiro-Neto. Modern Information Retrieval. Addison-Wesley, 1999.

[10] S Deerwester, S T Dumais, G W Furnas, T K Landauer, and R Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391–407, 1990.
[11] P Duygulu, K Barnard, N de Freitas, and D Forsyth. Object recognition as machine translation: Learning a lexicon for a fixed image vocabulary. In ECCV, 2002.
2002
Combining Dimensions and Features in Similarity-Based Representations Daniel J. Navarro Department of Psychology Ohio State University navarro.20@osu.edu Michael D. Lee Department of Psychology University of Adelaide michael.lee@psychology.adelaide.edu.au

Abstract

This paper develops a new representational model of similarity data that combines continuous dimensions with discrete features. An algorithm capable of learning these representations is described, and a Bayesian model selection approach for choosing the appropriate number of dimensions and features is developed. The approach is demonstrated on a classic data set that considers the similarities between the numbers 0 through 9.

1 Introduction

A central problem for cognitive science is to understand the way people mentally represent stimuli. One widely used approach for deriving representations from data is to base them on measures of stimulus similarity (see Shepard 1974). Similarity is naturally understood as a measure of the degree to which the consequences of one stimulus generalize to another, and may be measured using a number of experimental methodologies, including ratings scales, confusion probabilities, or grouping or sorting tasks. For a domain with n stimuli, similarity data take the form of an n × n matrix, S = [sij], where sij is the similarity of the ith and jth stimuli. The goal of similarity-based representation is then to find structured and interpretable descriptions of the stimuli that capture the pattern of similarities. Modeling the similarities between stimuli requires making assumptions about both the representational structures used to describe stimuli, and the processes used to assess the similarities across these structures. The two best developed representational approaches in cognitive modeling are the ‘dimensional’ and ‘featural’ approaches (Goldstone, 1999).
In the dimensional approach, stimuli are represented by continuous values along a number of dimensions, so that each stimulus corresponds to a point in a multi-dimensional space, and the similarity between two stimuli is measured according to the distance between their representative points. In the featural approach, stimuli are represented in terms of the presence or absence of a set of discrete (usually binary) features or properties, and the similarity between two stimuli is measured according to their common and distinctive features. The dimensional and featural approaches have different strengths and weaknesses. Dimensional representations are constrained by the metric axioms, such as the triangle inequality, that are violated by some empirical data. Featural representations are inefficient when representing inherently continuous aspects of the variation between stimuli. It has been argued that spatial representations are most appropriate for low-level perceptual stimuli, whereas featural representations are better suited to high-level conceptual domains (e.g., Carroll 1976, Tenenbaum 1996, Tversky 1977). In general, though, stimuli convey both perceptual and conceptual information. As Carroll (1976) concludes: "Since what is going on inside the head is likely to be complex, and is equally likely to have both discrete and continuous aspects, I believe the models we pursue must also be complex, and have both discrete and continuous components" (p. 462). This paper develops a new model of similarity that combines dimensions with features in the obvious way, allowing a stimulus to take continuous values on a number of dimensions, as well as potentially having a number of discrete features. We describe an algorithm capable of learning these representations from similarity data, and develop a Bayesian model selection approach for choosing the appropriate number of dimensions and features.
Finally, we demonstrate the approach on a classic data set that considers the similarities between the numbers 0 through 9. 2 Dimensional, Featural and Combined Representations 2.1 Dimensional Representation In a dimensional representation, the ith stimulus is represented by a point p_i = (p_{i1}, . . . , p_{iv}) in a v-dimensional coordinate space. The dissimilarity between the ith and jth stimuli is then usually modeled as the distance between their points according to one of the family of Minkowskian metrics

\hat{d}_{ij} = \left( \sum_{k=1}^{v} | p_{ik} - p_{jk} |^r \right)^{1/r} + c,   (1)

where c is a non-negative constant. Dimensional representations can be learned using a variety of multidimensional scaling algorithms (e.g., Cox & Cox, 1994), which have placed particular emphasis on the r = 1 (City-Block) and r = 2 (Euclidean) cases because of their relationship, respectively, to so-called 'separable' and 'integral' stimulus dimensions (Garner 1974). Pairs of separable dimensions are those, like shape and size, that can be attended to separately. Integral dimensions, in contrast, are those rarer cases like hue and saturation that are not easily separated. 2.2 Featural Representation In a featural representation, the ith stimulus is represented by a vector of m binary variables f_i = (f_{i1}, . . . , f_{im}), where f_{ik} = 1 if the ith stimulus possesses the kth feature, and f_{ik} = 0 if it does not. Each feature is also usually associated with a positive weight, w_k, denoting its importance or salience. No constraints are placed on the way features may be assigned to stimuli. Rather than requiring features partition stimuli, as in many clustering methods, or that features nest within one another, as in many tree-fitting methods, the flexible nature of human mental representation demands that features are allowed to overlap in arbitrary ways.
Although a number of models have been proposed for measuring the similarity between featurally represented stimuli (Navarro & Lee, 2002), the most widely used is the Contrast Model (Tversky, 1977). The Contrast Model assumes the similarity between two stimuli increases according to the weights of the (common) features they share, decreases according to the weights of the (distinctive) features that one has but the other does not, and these common and distinctive sources of information are themselves weighted in arriving at a final similarity value. Particular emphasis (e.g., Shepard & Arabie, 1979; Tenenbaum, 1996) has been given to the special case of the Contrast Model where only common features are used, and feature weights are additive, so that the similarity of the ith and jth stimuli is given by

\hat{s}_{ij} = \sum_{k=1}^{m} w_k f_{ik} f_{jk} + c.   (2)

Although learning common feature representations is a difficult combinatorial optimization problem, several successful additive clustering algorithms have been developed (e.g., Lee, 2002; Ruml, 2001; Tenenbaum, 1996). 2.3 Combined Representation The obvious generalization of dimensional and featural approaches is to represent stimuli in terms of continuous values along a set of dimensions and the presence or absence of a number of discrete features. If there are v dimensions and m features, the ith stimulus is defined by a point p_i, a feature vector f_i, and the feature weights w = (w_1, . . . , w_m). With this representational structure in place, we assume the similarity between the ith and jth stimuli is then simply the sum of the similarity arising from their common features (Eq. 2), minus the dissimilarity arising from their dimensional differences (Eq. 1), as follows

\hat{s}_{ij} = \left( \sum_{k=1}^{m} w_k f_{ik} f_{jk} \right) - \left( \sum_{k=1}^{v} | p_{ik} - p_{jk} |^r \right)^{1/r} + c.

3 Model Fitting and Selection Proposing the combined representational approach immediately presents two challenges.
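As a concrete sketch, the combined similarity just defined can be computed directly from two stimuli's coordinates and feature vectors. This is our own illustrative implementation (the function name and argument layout are not from the paper):

```python
def combined_similarity(p_i, p_j, f_i, f_j, w, r=1.0, c=0.0):
    """Combined model: summed weights of shared features (Eq. 2)
    minus the Minkowski-r distance between coordinates (Eq. 1),
    plus an additive constant c."""
    # Common-feature term: weights of features both stimuli possess.
    common = sum(wk for wk, fik, fjk in zip(w, f_i, f_j) if fik and fjk)
    # Dimensional term: Minkowski distance between coordinate points.
    dist = sum(abs(a - b) ** r for a, b in zip(p_i, p_j)) ** (1.0 / r)
    return common - dist + c
```

With r = 1 this uses the City-Block metric emphasized in the text; r = 2 gives the Euclidean case.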
The first, model fitting, problem is to develop a method for learning representations that fit the similarity data well using a given number of dimensions and features. The second, model selection, problem is to choose between alternative combined representations of the same data that use different numbers of features and dimensions. Formally, we conceive of the representational model as specifying the number of dimensions and features and the nature of the distance metric, and being parameterized by the feature variables and weights, coordinate locations and the additive constant. This means a particular representation is given by R_\lambda(\theta), where \lambda = (v, m, r) and \theta = (p_1, . . . , p_n, f_1, . . . , f_n, w, c). Following Tenenbaum (1996), we assume that the observed similarities come from independent Gaussian distributions with means \hat{s}_{ij} and common variance \sigma^2. The variance corresponds to the precision of the data which, for empirical similarity data averaged across information sources (such as individual participants), is easily estimated (Lee 2001), and otherwise must be specified by assumption. Under these assumptions, the likelihood of a similarity matrix given a particular representation is

p(S \mid R_\lambda, \theta) = \prod_{i<j} \frac{1}{\sigma\sqrt{2\pi}} \exp\left( -\frac{1}{2\sigma^2} (s_{ij} - \hat{s}_{ij})^2 \right) = \frac{1}{(\sigma\sqrt{2\pi})^{n(n-1)/2}} \exp\left( -\frac{1}{2\sigma^2} \sum_{i<j} (s_{ij} - \hat{s}_{ij})^2 \right),

giving the log-likelihood function

\ln p(S \mid R_\lambda, \theta) = -\frac{1}{2\sigma^2} \sum_{i<j} (s_{ij} - \hat{s}_{ij})^2 - \frac{n(n-1)}{2} \ln\left( \sigma\sqrt{2\pi} \right).

Within this framework, we solve the model fitting problem by finding the maximum likelihood parameter values \theta^*. Measures of data fit like maximum likelihood, however, are clearly not appropriate for choosing between representations with different numbers of dimensions and features, because of differences in model complexity. For this reason, we tackle the model selection problem using a Bayesian approach.
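The log-likelihood above can be written out directly; only the upper triangle (i < j) of the similarity matrix contributes. A minimal sketch in our own notation (the helper name and full-matrix layout are our choices):

```python
import math

def similarity_log_likelihood(S, S_hat, sigma):
    """Gaussian log-likelihood of observed similarities S given model
    similarities S_hat, with common standard deviation sigma; S and
    S_hat are full symmetric n x n matrices (lists of lists)."""
    n = len(S)
    # Sum of squared residuals over the n(n-1)/2 pairs with i < j.
    sse = sum((S[i][j] - S_hat[i][j]) ** 2
              for i in range(n) for j in range(i + 1, n))
    n_pairs = n * (n - 1) / 2
    return -sse / (2 * sigma ** 2) - n_pairs * math.log(sigma * math.sqrt(2 * math.pi))
```

Maximizing this over the continuous parameters (for fixed feature assignments) is the model-fitting step described next.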
3.1 Fitting Algorithm Our learning algorithm for the combined model relies on the observation (Tenenbaum, 1996) that it is relatively easy to find the maximum likelihood values of the continuous parameters (the coordinate locations, feature weights, and additive constant) given values for the discrete feature assignments. If \theta is partitioned into \theta_C = (p_1, . . . , p_n, w, c) and a fixed \theta_D = (f_1, . . . , f_n), then we solve the optimization problem

\arg\max_{\theta_C} \ln p(S \mid R_\lambda, \theta_D, \theta_C) where w, c \geq 0,   (3)

using the Levenberg-Marquardt approach (More, 1977). Since distances are preserved under translation for the Minkowskian family of metrics, we assume without loss of generality that p_1 is the origin. With this optimization capability in place, our learning algorithm may be described by the following five-stage process:

Step 1: Choose a maximum number of dimensions v_max and features m_max. Start with v = 1 and m = 1, making the lone feature the current feature to be optimized.

Step 2: Find a starting (seed) value for the current feature by considering all possibilities that have exactly one pair of stimuli with the feature, choosing the possibility with the best data fit using Eq. 3.

Step 3: Consider all possible representations arising from changing the assignment of one stimulus in relation to the current feature. If any of these changes improve the fit of the representation as a whole, update the representation to be the one with the best fit. Repeat this process until no change is found that improves the representation. The current representation at this point is recorded as the best-fitting representation with v dimensions and m features.

Step 4: If there are fewer than m_max features, then add a new feature, make it the current feature, and return to Step 2.

Step 5: If there are fewer than v_max dimensions, then add a new dimension, reset the number of features to m = 1, and again make the lone feature the current feature to be optimized. Return to Step 2.
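Step 3's discrete search can be sketched as a greedy loop over single-stimulus flips. In this toy sketch (our own code, not the authors'), `score(f)` stands in for the maximized log-likelihood of Eq. 3 after re-optimizing the continuous parameters:

```python
def improve_feature(score, f, k):
    """Step 3 sketch: evaluate every single-stimulus flip of feature k,
    commit the best flip if it improves the overall fit, and stop when
    no flip helps. `f` is a list of per-stimulus 0/1 feature vectors;
    `score(f)` returns the fit of the representation as a whole."""
    best = score(f)
    while True:
        trials = []
        for i in range(len(f)):
            f[i][k] ^= 1                  # tentatively flip stimulus i
            trials.append((score(f), i))
            f[i][k] ^= 1                  # undo the flip
        top_score, top_i = max(trials)
        if top_score <= best:
            return f, best                # local optimum reached
        f[top_i][k] ^= 1                  # commit the best flip
        best = top_score
```

Each `score` call hides a continuous optimization, so in practice the dominant cost of the search is the repeated Levenberg-Marquardt fits.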
The output of this algorithm is a grid of v_max × m_max representations, one for each possible combination of number of dimensions and number of features. 3.2 Model Selection Given representational models with different numbers of dimensions and features, the Bayesian approach is to select the one with the maximum posterior probability

p(R_\lambda \mid S) = \frac{p(R_\lambda)}{p(S)} \int p(S \mid R_\lambda, \theta) \, p(\theta \mid R_\lambda) \, d\theta.

Since all models relate to the same similarity data, p(S) is a constant. If we assume that all representations are a priori equally likely, the posterior becomes

p(R_\lambda \mid S) \propto \sum_{\theta_D} \int p(S \mid R_\lambda, \theta) \, p(\theta \mid R_\lambda) \, d\theta_C.   (4)

This Bayesian approach embodies an automatic form of Ockham's Razor, balancing data fit against model complexity, because it considers the model at all of its parameterizations. Complicated models that use many parameters (i.e., have high parametric complexity), or parameters that interact in complicated ways (i.e., have high functional form complexity), to achieve good levels of data fit at their optimal values will typically fit data poorly at other parameter values, and so will have smaller posteriors. For the combined model, the posterior in Eq. 4 is not well approximated by simple measures such as the Bayesian Information Criterion (BIC: Schwarz, 1978) that have previously been applied to dimensional and featural representations (Lee & Navarro, 2002). This is because the BIC measures only parametric complexity, and treats each additional parameter as having an equal effect on model complexity. Binary feature membership parameters and continuous coordinate location parameters, however, will clearly have different effects on model complexity. In addition, because the BIC does not measure functional form complexity, it is not sensitive to the change in representational model complexity arising from different distance metrics. There are also difficulties approximating the posterior by a multivariate Gaussian with \theta^*
as the mode, as in the Laplacian approximation (see Kass & Raftery, 1995, p. 778), because the featural component of the combined model makes the posterior multimodal. For these reasons, we employed Monte Carlo methods with importance sampling (e.g., Oh & Berger, 1993), in which the posterior is numerically approximated by

p(R_\lambda \mid S) \approx \frac{1}{N} \sum_{i=1}^{N} \frac{p(S \mid R_\lambda, \theta_i) \, p(\theta_i \mid R_\lambda)}{g(\theta_i \mid R_\lambda)},

where each of the N \theta_i values is independently sampled from g(\cdot). In the following evaluation, we assumed that p(\theta \mid R_\lambda) is uniform over \theta, and specified an importance distribution g(\cdot) that was Gaussian over \theta_C and multinomial over \theta_D. As the posterior may be multimodal and non-standard, g(\cdot) was heavy tailed, and we sampled extensively (N = 5 × 10^6) to ensure convergence.

[Figure 1 here. Panel (b) lists the eight features and their weights: {2,4,8} 0.444; {0,1,2} 0.345; {3,6,9} 0.331; {6,7,8,9} 0.291; {2,3,4,5,6} 0.255; {1,3,5,7,9} 0.216; {1,2,3,4} 0.214; {4,5,6,7,8} 0.172; additive constant 0.148.] Figure 1: Representations of the numbers similarity data using the (a) dimensional and (b) featural approaches.

4 An Illustrative Example Shepard, Kilpatric and Cunningham (1975) collected data measuring the "abstract conceptual similarity" of the numbers 0 through 9. Figure 1(a) displays a two-dimensional representation of the numbers, using the City-Block metric. This representation explains only 78.6% of the variance, and fails to capture important regularities evident in the raw data, such as the fact that the number 7 is more similar to 8 than it is to 9, or that 3 is much more similar to 0 than it is to 8, and so on. Figure 1(b) shows an eight-feature representation of the numbers using the same data, as reported by Tenenbaum (1996). This representation explains 90.9% of the variance, with features corresponding to arithmetic concepts (e.g., {2, 4, 8} and {3, 6, 9}) and to numerical magnitude (e.g., {1, 2, 3, 4} and {6, 7, 8, 9}).
We note in passing that the representations displayed in Figure 1 are also recovered when our algorithm is restricted to purely dimensional or purely featural representations. Figure 1 suggests that the numbers data is a candidate for combined representation. Features are appropriate for representing the arithmetic concepts, but a 'magnitude' dimension seems to offer a more efficient and meaningful representation of this regularity than the five features used in Figure 1(b). We fitted combined models with between one and three dimensions and one and eight features to the same similarity data, and calculated the log posterior for each. Because the raw data needed to estimate the precision of these averaged data are unavailable, we followed the arguments presented in Lee (2002) to make a conservative choice of \sigma = 0.15. The results are shown in Figure 2. All of the representations using one dimension are more likely than those using two or three dimensions. Of the one-dimensional representations, the four-feature version is preferred, although the likelihoods of representations with other numbers of features are close enough to warrant consideration in choosing a 'best' representation, particularly given the assumptions made about data precision. For the sake of concreteness, however, Figure 3 describes the representation with one dimension and four features, which explains 90.0% of the variance. The one dimension almost orders the numbers according to their magnitude, with the violations being very small. The four features all capture meaningful arithmetic concepts, corresponding to "powers of two", "multiples of three", "multiples of two" (or "even

[Figure 2 here: log posterior plotted against number of features (one through eight) for 1D, 2D and 3D representations.] Figure 2: Log posteriors for combined representations with between one and three dimensions, and one and eight features.
[Figure 3 here: the four features and their weights: {2,4,8} 0.286; {3,6,9} 0.282; {2,4,6,8} 0.224; {1,3,9} 0.157; additive constant 0.568.] Figure 3: Representation of the numbers similarity data using one dimension (shown on the left) and four features (shown on the right).

numbers") and "powers of three". Encouragingly, these features are close to those in Figure 1(b) that do not deal with numerical magnitude. 5 Conclusion Future work will examine the use of other featural similarity models besides the purely common features approach, and will also look to develop learning algorithms that do not rely on maximum likelihood estimation, but instead consider the posterior probability of a representation. Reliable analytic approximations to the posterior will be required for this purpose. Most importantly, however, the combined representation of a wide range of similarity data needs to be examined. Although the numbers data is a promising start, it is just a first test of the combined approach to similarity-based representation. Demonstrating the generality and usefulness of the ability to represent stimuli in terms of both dimensions and features remains a challenge for future research. Acknowledgments This research was supported by Australian Research Council Grant DP0211406. We thank Tom Griffiths and two anonymous reviewers for helpful comments and discussions. References [1] Carroll, J. D. (1976). Spatial, non-spatial and hybrid models for scaling. Psychometrika, 41, 439-463. [2] Cox, T. F. & Cox, M. A. A. (1994). Multidimensional Scaling. London: Chapman and Hall. [3] Garner, W. R. (1974). The Processing of Information and Structure. Potomac, MD: Erlbaum. [4] Goldstone, R. L. (1999). Similarity. In R. A. Wilson and F. C. Keil (eds.), MIT Encyclopedia of the Cognitive Sciences, pp. 763-765. Cambridge, MA: MIT Press. [5] Lee, M. D. (2001). Determining the dimensionality of multidimensional scaling representations for cognitive modeling. Journal of Mathematical Psychology, 45(1), 149-166. [6] Lee, M.
D. (2002). Generating additive clustering models with limited stochastic complexity. Journal of Classification, 19(1), 69-85. [7] Lee, M. D. & Navarro, D. J. (2002). Extending the ALCOVE model of category learning to featural stimulus domains. Psychonomic Bulletin & Review, 9(1), 43-58. [8] Kass, R. E. & Raftery, A. E. (1995). Bayes Factors. Journal of the American Statistical Association, 90(430), 773-795. [9] More, J. J. (1977). The Levenberg-Marquardt algorithm: Implementation and theory. In G. A. Watson (ed.), Lecture Notes in Mathematics, 630, pp. 105-116. New York: Springer-Verlag. [10] Navarro, D. J. & Lee, M. D. (2002). Commonalities and distinctions in featural stimulus representations. In W. G. Gray and C. D. Schunn (Eds.), Proceedings of the 24th Annual Conference of the Cognitive Science Society, pp. 685-690. Mahwah, NJ: Lawrence Erlbaum. [11] Oh, M. & Berger, J. O. (1993). Integration of multimodal functions by Monte Carlo importance sampling. Journal of the American Statistical Association, 88, 450-456. [12] Ruml, W. (2001). Constructing distributed representations using additive clustering. In T. G. Dietterich, S. Becker, and Z. Ghahramani (Eds.), Advances in Neural Information Processing Systems 14. Cambridge, MA: MIT Press. [13] Schwarz, G. (1978). Estimating the dimension of a model. Annals of Statistics, 6(2), 461-464. [14] Shepard, R. N. (1974). Representation of structure in similarity data: Problems and prospects. Psychometrika, 39(4), 373-422. [15] Shepard, R. N. & Arabie, P. (1979). Additive clustering representations of similarities as combinations of discrete overlapping properties. Psychological Review, 86(2), 87-123. [16] Shepard, R. N., Kilpatric, D. W. & Cunningham, J. P. (1975). The internal representation of numbers. Cognitive Psychology, 7, 82-138. [17] Tenenbaum, J. B. (1996). Learning the structure of similarity. In D. S. Touretzky, M. C. Mozer and M. E. Hasselmo (Eds.), Advances in Neural Information Processing Systems, pp.
3-9, Cambridge, MA: MIT Press. [18] Tversky, A. (1977). Features of similarity. Psychological Review, 84(4), 327-352.
Bayesian Monte Carlo Carl Edward Rasmussen and Zoubin Ghahramani Gatsby Computational Neuroscience Unit University College London 17 Queen Square, London WC1N 3AR, England edward,zoubin@gatsby.ucl.ac.uk http://www.gatsby.ucl.ac.uk Abstract We investigate Bayesian alternatives to classical Monte Carlo methods for evaluating integrals. Bayesian Monte Carlo (BMC) allows the incorporation of prior knowledge, such as smoothness of the integrand, into the estimation. In a simple problem we show that this outperforms any classical importance sampling method. We also attempt more challenging multidimensional integrals involved in computing marginal likelihoods of statistical models (a.k.a. partition functions and model evidences). We find that Bayesian Monte Carlo outperformed Annealed Importance Sampling, although for very high dimensional problems or problems with massive multimodality BMC may be less adequate. One advantage of the Bayesian approach to Monte Carlo is that samples can be drawn from any distribution. This allows for the possibility of active design of sample points so as to maximise information gain. 1 Introduction Inference in most interesting machine learning algorithms is not computationally tractable, and is solved using approximations. This is particularly true for Bayesian models which require evaluation of complex multidimensional integrals. Both analytical approximations, such as the Laplace approximation and variational methods, and Monte Carlo methods have recently been used widely for Bayesian machine learning problems. It is interesting to note that Monte Carlo itself is a purely frequentist procedure [O’Hagan, 1987; MacKay, 1999]. This leads to several inconsistencies which we review below, outlined in a paper by O’Hagan [1987] with the title “Monte Carlo is Fundamentally Unsound”. We then investigate Bayesian counterparts to the classical Monte Carlo. 
Consider the evaluation of the integral:

\bar{Z} = \int f(x) \, p(x) \, dx,   (1)

where p(x) is a probability (density), and f(x) is the function we wish to integrate. For example, p(x) could be the posterior distribution and f(x) the predictions made by a model with parameters x, or p(x) could be the parameter prior and f(x) the likelihood, so that equation (1) evaluates the marginal likelihood (evidence) for a model. Classical Monte Carlo makes the approximation:

\bar{Z} \simeq \frac{1}{N} \sum_{i=1}^{N} f(x^{(i)}),   (2)

where x^{(i)} are random (not necessarily independent) draws from p(x), which converges to the right answer in the limit of large numbers of samples, N. If sampling directly from p(x) is hard, or if high density regions in p(x) do not match up with areas where f(x) has large magnitude, it is also possible to draw samples from some importance sampling distribution q(x) to obtain the estimate:

\bar{Z} = \int \frac{f(x) p(x)}{q(x)} q(x) \, dx \simeq \frac{1}{N} \sum_{i=1}^{N} \frac{f(x^{(i)}) p(x^{(i)})}{q(x^{(i)})}.   (3)

As O'Hagan [1987] points out, there are two important objections to these procedures. First, the estimator not only depends on the values of f(x^{(i)}) p(x^{(i)}) but also on the entirely arbitrary choice of the sampling distribution q(x). Thus, if the same set of samples, conveying exactly the same information about \bar{Z}, were obtained from two different sampling distributions, two different estimates of \bar{Z} would be obtained. This dependence on irrelevant (ancillary) information is unreasonable and violates the Likelihood Principle. The second objection is that classical Monte Carlo procedures entirely ignore the values of the x^{(i)} when forming the estimate. Consider the simple example of three points that are sampled from p(x) and the third happens to fall on the same point as the second, x^{(3)} = x^{(2)}, conveying no extra information about the integrand. Simply averaging the integrand at these three points, which is the classical Monte Carlo estimate, is clearly inappropriate; it would make much more sense to average the first two (or the first and third).
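The two classical estimators just described are easy to demonstrate numerically. A minimal, stdlib-only sketch (function names and the toy integrand are our own choices for the demo):

```python
import math
import random

def simple_mc(f, sample_p, n):
    """Classical Monte Carlo, Eq. (2): average f over draws from p."""
    return sum(f(sample_p()) for _ in range(n)) / n

def importance_mc(f, p_pdf, q_pdf, sample_q, n):
    """Importance sampling, Eq. (3): reweight draws from q by p/q."""
    total = 0.0
    for _ in range(n):
        x = sample_q()
        total += f(x) * p_pdf(x) / q_pdf(x)
    return total / n

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Toy check: integrate f(x) = x^2 against p = N(0, 1); the exact value is 1.
random.seed(0)
est1 = simple_mc(lambda x: x * x, lambda: random.gauss(0, 1), 20000)
est2 = importance_mc(lambda x: x * x,
                     lambda x: normal_pdf(x, 0, 1),   # target density p
                     lambda x: normal_pdf(x, 0, 2),   # proposal density q
                     lambda: random.gauss(0, 2), 20000)
```

Both estimates converge to the same value, but with different variances; this is the O(1/N) behaviour that the Bayesian treatment below tries to beat.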
In practice points are unlikely to fall on top of each other in continuous spaces; however, a procedure that weights points equally regardless of their spatial distribution is ignoring relevant information. To summarize the objections, classical Monte Carlo bases its estimate on irrelevant information and throws away relevant information. We seek to turn the problem of evaluating the integral (1) into a Bayesian inference problem which, as we will see, avoids the inconsistencies of classical Monte Carlo and can result in better estimates. To do this, we think of the unknown desired quantity \bar{Z} as being random. Although this interpretation is not the most usual one, it is entirely consistent with the Bayesian view that all forms of uncertainty are represented using probabilities: in this case uncertainty arises because we cannot afford to compute f(x) at every location. Since the desired \bar{Z} is a function of f (which is unknown until we evaluate it), we proceed by putting a prior on f, combining it with the observations to obtain the posterior over f, which in turn implies a distribution over the desired \bar{Z}. A very convenient way of putting priors over functions is through Gaussian Processes (GP). Under a GP prior the joint distribution of any (finite) number of function values (indexed by the inputs, x) is Gaussian:

(f(x^{(1)}), . . . , f(x^{(N)})) \sim \mathcal{N}(0, K),   (4)

where here we take the mean to be zero. The covariance matrix K is given by the covariance function, a convenient choice being (footnote 1):

\mathrm{Cov}\big(f(x^{(p)}), f(x^{(q)})\big) = w_0 \exp\left[ -\frac{1}{2} \sum_{d=1}^{D} \frac{(x_d^{(p)} - x_d^{(q)})^2}{w_d^2} \right],   (5)

where the w parameters are hyperparameters. Gaussian processes, including optimization of hyperparameters, are discussed in detail in [Williams and Rasmussen, 1996]. (Footnote 1: Although the function values obtained are assumed to be noise-free, we added a tiny constant to the diagonal of the covariance matrix to improve numerical conditioning.)
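The covariance function of Eq. (5) is a one-liner in code. This stdlib-only sketch is ours (w0 is the signal variance, w the per-dimension length scales):

```python
import math

def gp_cov(x, x2, w0, w):
    """Eq. (5): Cov[f(x), f(x')] = w0 * exp(-0.5 * sum_d (x_d - x'_d)^2 / w_d^2)."""
    s = sum((a - b) ** 2 / wd ** 2 for a, b, wd in zip(x, x2, w))
    return w0 * math.exp(-0.5 * s)
```

Evaluating `gp_cov` on every pair of sample inputs (plus the tiny diagonal constant mentioned in the footnote) builds the covariance matrix K used below.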
2 The Bayesian Monte Carlo Method The Bayesian Monte Carlo method starts with a prior over the function, p(f), and makes inferences about f from a set of samples D = \{(x^{(i)}, f(x^{(i)})) : i = 1, . . . , N\}, giving the posterior distribution p(f \mid D). Under a GP prior the posterior is (an infinite dimensional joint) Gaussian; since the integral eq. (1) is just a linear projection (on the direction defined by p(x)), the posterior distribution of \bar{Z} is also Gaussian, and fully characterized by its mean and variance. The average over functions of eq. (1) is the expectation of the average function:

E_{f|D}[\bar{Z}] = \int \bar{f}_D(x) \, p(x) \, dx,   (6)

where \bar{f}_D is the posterior mean function. Similarly, for the variance:

V_{f|D}[\bar{Z}] = \int\!\!\int \mathrm{Cov}_D\big(f(x), f(x')\big) \, p(x) \, p(x') \, dx \, dx',   (7)

where \mathrm{Cov}_D is the posterior covariance. The standard results for the GP model for the posterior mean and covariance are:

\bar{f}_D(x) = k(x, X) K^{-1} f and \mathrm{Cov}_D\big(f(x), f(x')\big) = k(x, x') - k(x, X) K^{-1} k(X, x'),   (8)

where X and f are the observed inputs and function values respectively. In general combining eq. (8) with eqs. (6)-(7) may lead to expressions which are difficult to evaluate, but there are several interesting special cases. If the density p(x) and the covariance function eq. (5) are both Gaussian, we obtain analytical results. In detail, if p(x) = \mathcal{N}(b, B) and the Gaussian kernels on the data points have widths A = \mathrm{diag}(w_1^2, . . . , w_D^2), then the expectation evaluates to:

E_{f|D}[\bar{Z}] = z^\top K^{-1} f, with z_i = w_0 \, |A^{-1}B + I|^{-1/2} \exp\left[ -\tfrac{1}{2} (x^{(i)} - b)^\top (A + B)^{-1} (x^{(i)} - b) \right],   (9)

a result which has previously been derived under the name of Bayes-Hermite Quadrature [O'Hagan, 1991]. For the variance, we get:

V_{f|D}[\bar{Z}] = w_0 \, |2 A^{-1} B + I|^{-1/2} - z^\top K^{-1} z,   (10)

with z as defined in eq. (9). Other choices that lead to analytical results include polynomial kernels and mixtures of Gaussians for p(x). 2.1 A Simple Example To illustrate the method we evaluated the integral of a one-dimensional function under a Gaussian density (figure 1, left).
We generated samples x^{(i)} independently from p(x), evaluated f(x) at those points, and optimised the hyperparameters of our Gaussian process fit to the function. Figure 1 (middle) compares the error in the Bayesian Monte Carlo (BMC) estimate of the integral (1) to the Simple Monte Carlo (SMC) estimate using the same samples. As we would expect, the squared error in the Simple Monte Carlo estimate decreases as 1/N, where N is the sample size. In contrast, for more than about 10 samples, the BMC estimate improves at a much higher rate. This is achieved because the prior on f allows the method to interpolate between sample points. Moreover, whereas the SMC estimate is invariant to permutations of the values on the x axis, BMC makes use of the smoothness of the function. Therefore, a point in a sparse region is far more informative about the shape of the function for BMC than points in already densely sampled areas. In SMC, if two samples happen to fall close to each other the function value there will be counted with double weight. This effect means that large numbers of samples are needed to adequately represent p(x). BMC circumvents this problem by analytically integrating its mean function w.r.t. p(x). In figure 1 (right), the negative log density of the true value of the integral under the predictive distribution is compared for BMC and SMC. For not too small sample sizes, BMC outperforms SMC. Notice however, that for very small sample sizes BMC occasionally has very bad performance. This is due to examples where the random draws of x lead to function values f(x) that are consistent with a much longer length scale than the true function; the mean prediction becomes somewhat inaccurate, but worse still, the inferred variance becomes very small (because a very slowly varying function is inferred), leading to very poor performance compared to SMC.
This problem is to a large extent caused by the optimization of the length scale hyperparameters of the covariance function; we ought instead to have integrated over all possible length scales. This integration would effectively "blend in" distributions with much larger variance (since the data is also consistent with a shorter length scale), thus alleviating the problem, but unfortunately this is not possible in closed form. The problem disappears for sample sizes of around 16 or greater. In the previous example, we chose p(x) to be Gaussian. If you wish to use BMC to integrate w.r.t. non-Gaussian densities then an importance re-weighting trick becomes necessary:

\bar{Z} = \int \frac{f(x) p(x)}{q(x)} q(x) \, dx,   (11)

where the Gaussian process models f(x) p(x) / q(x), q(x) is a Gaussian, and p(x) is an arbitrary density which can be evaluated. See Kennedy [1998] for extension to non-Gaussian q(x). 2.2 Optimal Importance Sampler For the simple example discussed above, it is also interesting to ask whether the efficiency of SMC could be improved by generating independent samples from more cleverly designed distributions. As we have seen in equation (3), importance sampling gives an unbiased estimate of \bar{Z} by sampling from q(x) and computing:

\hat{\bar{Z}} = \frac{1}{N} \sum_{i=1}^{N} \frac{f(x^{(i)}) p(x^{(i)})}{q(x^{(i)})},   (12)

where q(x) > 0 wherever f(x) p(x) \neq 0. The variance of this estimator is given by:

V_q[\hat{\bar{Z}}] = \frac{1}{N} \left[ \int \frac{f(x)^2 p(x)^2}{q(x)} \, dx - \bar{Z}^2 \right].   (13)

Using calculus of variations it is simple to show that the optimal (minimum variance) importance sampling distribution is:

q^*(x) = \frac{|f(x)| \, p(x)}{\int |f(x')| \, p(x') \, dx'},   (14)

which we can substitute into equation (13) to get the minimum variance, V^*. If f(x) is always non-negative or non-positive then V^* = 0, which is unsurprising given that we needed to know \bar{Z} in advance to normalise q^*.
For functions that take on both positive and negative values,

V^* = \frac{1}{N} \left[ \left( \int |f(x)| \, p(x) \, dx \right)^2 - \bar{Z}^2 \right],

which is a constant times the variance of a Bernoulli random variable (the sign of f(x)). The lower bound from this optimal importance sampler as a function of the number of samples is shown in figure 1, middle. As we can see, Bayesian Monte Carlo improves on the optimal importance sampler considerably. We stress that the optimal importance sampler is not practically achievable since it requires knowledge of the quantity we are trying to estimate.

Figure 1: Left: a simple one-dimensional function f(x) (full) and Gaussian density p(x) (dashed) with respect to which we wish to integrate f. Middle: average squared error for simple Monte Carlo sampling from p(x) (dashed), the optimal achievable bound for importance sampling (dot-dashed), and the Bayesian Monte Carlo estimates. The values plotted are averages over up to 2048 repetitions. Right: Minus the log of the Gaussian predictive density with mean eq. (6) and variance eq. (7), evaluated at the true value of the integral (found by numerical integration), 'x'. Similarly for the Simple Monte Carlo procedure, where the mean and variance of the predictive distribution are computed from the samples, 'o'.

3 Computing Marginal Likelihoods We now consider the problem of estimating the marginal likelihood of a statistical model. This problem is notoriously difficult and very important, since it allows for comparison of different models. In the physics literature it is known as free-energy estimation.
Here we compare the Bayesian Monte Carlo method to two other techniques: Simple Monte Carlo sampling (SMC) and Annealed Importance Sampling (AIS). Simple Monte Carlo, sampling from the prior, is generally considered inadequate for this problem, because the likelihood is typically sharply peaked and samples from the prior are unlikely to fall in these confined areas, leading to huge variance in the estimates (although they are unbiased). A family of promising “thermodynamic integration” techniques for computing marginal likelihoods are discussed under the name of Bridge and Path sampling in [Gelman and Meng, 1998] and Annealed Importance Sampling (AIS) in [Neal, 2001]. The central idea is to divide one difficult integral into a series of easier ones, parameterised by (inverse) temperature, τ. In detail:

Z_K / Z_0 = ∏_{k=1}^{K} Z_k / Z_{k−1},   where   Z_k = ∫ p(y|θ)^{τ(k)} p(θ) dθ   and   Z_0 = ∫ p(θ) dθ = 1,   (15)

where τ(k) is the k-th inverse temperature of the annealing schedule and τ(K) = 1. To compute each fraction we sample from equilibrium from the distribution p_{k−1}(θ) ∝ p(y|θ)^{τ(k−1)} p(θ) and compute importance weights:

Z_k / Z_{k−1} ≈ (1/n) Σ_{i=1}^n p(y|θ_i)^{τ(k) − τ(k−1)},   θ_i ∼ p_{k−1}(θ).   (16)

In practice n can be set to 1, to allow very slow reduction in temperature. Each of the intermediate ratios is much easier to compute than the original ratio, since the likelihood function raised to the power of a small number is much better behaved than the likelihood itself. Often elaborate non-linear cooling schedules are used, but for simplicity we will just take a linear schedule for the inverse temperature. The samples at each temperature are drawn using a single Metropolis proposal, where the proposal width is chosen to get a fairly high fraction of acceptances. The model for which we attempt to compute the marginal likelihood was itself a Gaussian process regression fit to an artificial dataset suggested by [Friedman, 1988].² We had 5 length scale hyperparameters, a signal variance and an explicit noise variance parameter.
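The annealing recipe of eqs. (15)–(16) can be sketched on a toy conjugate model where the marginal likelihood is known exactly: one observation y ~ N(θ, 1) with prior θ ~ N(0, 1), so p(y) = N(y; 0, 2). This is our own illustrative setup, not the Gaussian-process experiment of the paper; the schedule is linear and each temperature uses a single Metropolis step, as described in the text:

```python
import numpy as np

rng = np.random.default_rng(1)
y = 1.5                                           # one toy observation
loglik = lambda t: -0.5*(y - t)**2 - 0.5*np.log(2*np.pi)
logprior = lambda t: -0.5*t**2 - 0.5*np.log(2*np.pi)
true_logZ = -0.25*y**2 - 0.5*np.log(4*np.pi)      # log N(y; 0, 2)

K, runs = 200, 400
tau = np.linspace(0.0, 1.0, K + 1)                # linear inverse-temperature schedule
logw = np.zeros(runs)
for c in range(runs):
    t = rng.normal()                              # equilibrium sample for tau = 0 (the prior)
    for k in range(1, K + 1):
        logw[c] += (tau[k] - tau[k-1]) * loglik(t)    # accumulate the eq. (16) weights
        prop = t + rng.normal(0.0, 1.0)               # one Metropolis step at tau_k
        loga = (logprior(prop) + tau[k]*loglik(prop)) - (logprior(t) + tau[k]*loglik(t))
        if np.log(rng.uniform()) < loga:
            t = prop
logZ_ais = np.log(np.mean(np.exp(logw)))
print(logZ_ais, true_logZ)
```

AIS remains unbiased for any amount of Metropolis mixing, because each kernel leaves its intermediate distribution invariant; poor mixing only inflates the variance.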
Thus the marginal likelihood is an integral over a 7-dimensional hyperparameter space. The logs of the hyperparameters are given Gaussian priors. Figure 2 shows a comparison of the three methods. Perhaps surprisingly, AIS and SMC are seen to be very comparable, which may be due to several reasons: 1) whereas the SMC samples are drawn independently, the AIS samples have considerable auto-correlation because of the Metropolis generation mechanism, which hampers performance for low sample sizes; 2) the annealing schedule was not optimized, nor was the proposal width adjusted with temperature, which might possibly have sped up convergence. Further, the difference between AIS and SMC would be more dramatic in higher dimensions and for more highly peaked likelihood functions (i.e. more data). The Bayesian Monte Carlo method was run on the same samples as were generated by the AIS procedure. Note that BMC can use samples from any distribution, as long as p(θ) can be evaluated. Another obvious choice for generating samples for BMC would be to use an MCMC method to draw samples from the posterior. Because BMC needs to model the integrand using a GP, we need to limit the number of samples, since computation (for fitting hyperparameters and computing the weights) scales as O(n³). Thus beyond a certain sample size we limit the number of samples used by BMC, chosen equally spaced from the AIS Markov chain. Despite this thinning of the samples we see a generally superior performance of BMC, especially for smaller sample sizes. In fact, BMC seems to perform equally well for almost any of the investigated sample sizes. Even for this fairly large number of samples, the generation of points from the AIS still dominates compute time.

4 Discussion

An important aspect which we have not explored in this paper is the idea that the GP model used to fit the integrand gives error bars (uncertainties) on the integrand.
²The data was 100 samples generated from the 5-dimensional function f(x₁,...,x₅) = 10 sin(π x₁ x₂) + 20 (x₃ − 0.5)² + 10 x₄ + 5 x₅ + ε, where ε is zero mean unit variance Gaussian noise and the inputs are sampled independently from a uniform [0, 1] distribution.

Figure 2: Estimates of the marginal likelihood for different sample sizes using Simple Monte Carlo sampling (SMC; circles, dotted line), Annealed Importance Sampling (AIS; dashed line), and Bayesian Monte Carlo (BMC; triangles, solid line). The true value (solid straight line) is estimated from a single long run of AIS. For comparison, the maximum log likelihood is an upper bound on the true value.

These error bars could be used to conduct an experimental design, i.e. active learning. A simple approach would be to evaluate the function at points where the GP has large uncertainty and p(x) is not too small, since the expected contribution to the uncertainty in the estimate of the integral grows with both. For a fixed Gaussian process covariance function these design points can often be pre-computed, see e.g. [Minka, 2000]. However, as we are adapting the covariance function depending on the observed function values, active learning would have to be an integral part of the procedure. Classical Monte Carlo approaches cannot make use of active learning since the samples need to be drawn from a given distribution. When using BMC to compute marginal likelihoods, the Gaussian covariance function used here (equation 5) is not ideally suited to modeling the likelihood. Firstly, likelihoods are non-negative whereas the prior is not restricted in the values the function can take. Secondly, the likelihood tends to have some regions of high magnitude and variability and other regions which are low and flat; this is not well modelled by a stationary covariance function.
In practice this misfit between the GP prior and the function modelled has even occasionally led to negative values for the estimate of the marginal likelihood! There could be several approaches to improving the appropriateness of the prior. An importance distribution such as one computed from a Laplace approximation or a mixture of Gaussians can be used to dampen the variability in the integrand [Kennedy, 1998]. The GP could be used to model the log of the likelihood [Rasmussen, 2003]; however, this makes integration more difficult. The BMC method outlined in this paper can be extended in several ways. Although the choice of Gaussian process priors is computationally convenient in certain circumstances, in general other function approximation priors can be used to model the integrand. For discrete (or mixed) input variables the GP model could still be used with an appropriate choice of covariance function; however, the resulting sum (analogous to equation 1) may be difficult to evaluate. For discrete function values f, GPs are not directly applicable. Although BMC has proven successful on the problems presented here, there are several limitations to the approach. High dimensional integrands can prove difficult to model; in such cases a large number of samples may be required to obtain good estimates of the function. Inference using a Gaussian process prior is at present limited computationally to a few thousand samples. Further, models such as neural networks and mixture models exhibit an exponentially large number of symmetrical modes in the posterior; again, modelling this with a GP prior would typically be difficult. Finally, the BMC method requires that the distribution p(x) can be evaluated. This contrasts with classical MC, where many methods only require that samples can be drawn from some distribution q(x) for which the normalising constant is not necessarily known (such as in equation 16).
Unfortunately, this limitation makes it difficult, for example, to design a Bayesian analogue to Annealed Importance Sampling. We believe that the problem of computing an integral using a limited number of function evaluations should be treated as an inference problem, and that all prior knowledge about the function being integrated should be incorporated into the inference. Despite the limitations outlined above, Bayesian Monte Carlo makes it possible to do this inference and can achieve performance equivalent to state-of-the-art classical methods using only a fraction of the sample evaluations, sometimes even exceeding the theoretically optimal performance of some classical methods.

Acknowledgments

We would like to thank Radford Neal for inspiring discussions.

References

Friedman, J. (1988). Multivariate Adaptive Regression Splines. Technical Report No. 102, November 1988, Laboratory for Computational Statistics, Department of Statistics, Stanford University.

Kennedy, M. (1998). Bayesian quadrature with non-normal approximating functions, Statistics and Computing, 8, pp. 365–375.

MacKay, D. J. C. (1999). Introduction to Monte Carlo methods. In Learning in Graphical Models, M. I. Jordan (ed), MIT Press, 1999.

Gelman, A. and Meng, X.-L. (1998). Simulating normalizing constants: From importance sampling to bridge sampling to path sampling, Statistical Science, vol. 13, pp. 163–185.

Minka, T. P. (2000). Deriving quadrature rules from Gaussian processes, Technical Report, Statistics Department, Carnegie Mellon University.

Neal, R. M. (2001). Annealed Importance Sampling, Statistics and Computing, 11, pp. 125–139.

O’Hagan, A. (1987). Monte Carlo is fundamentally unsound, The Statistician, 36, pp. 247–249.

O’Hagan, A. (1991). Bayes–Hermite Quadrature, Journal of Statistical Planning and Inference, 29, pp. 245–260.

O’Hagan, A. (1992). Some Bayesian Numerical Analysis. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M.
Smith, eds), Oxford University Press, pp. 345–365 (with discussion).

Rasmussen, C. E. (2003). Gaussian Processes to Speed up Hybrid Monte Carlo for Expensive Bayesian Integrals, Bayesian Statistics 7 (J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West, eds), Oxford University Press.

Williams, C. K. I. and Rasmussen, C. E. (1996). Gaussian Processes for Regression, in D. S. Touretzky, M. C. Mozer and M. E. Hasselmo (editors), NIPS 8, MIT Press.
2002
41
2,245
A Model for Learning Variance Components of Natural Images Yan Karklin yan+@cs.cmu.edu Michael S. Lewicki∗ lewicki@cnbc.cmu.edu Computer Science Department & Center for the Neural Basis of Cognition Carnegie Mellon University

Abstract

We present a hierarchical Bayesian model for learning efficient codes of higher-order structure in natural images. The model, a non-linear generalization of independent component analysis, replaces the standard assumption of independence for the joint distribution of coefficients with a distribution that is adapted to the variance structure of the coefficients of an efficient image basis. This offers a novel description of higher-order image structure and provides a way to learn coarse-coded, sparse-distributed representations of abstract image properties such as object location, scale, and texture.

1 Introduction

One of the major challenges in vision is how to derive from the retinal representation higher-order representations that describe properties of surfaces, objects, and scenes. Physiological studies of the visual system have characterized a wide range of response properties, beginning with, for example, simple cells and complex cells. These, however, offer only limited insight into how higher-order properties of images might be represented or even what the higher-order properties might be. Computational approaches to vision often derive algorithms by inverse graphics, i.e. by inverting models of the physics of light propagation and surface reflectance properties to recover object and scene properties. A drawback of this approach is that, because of the complexity of modeling, only the simplest and most approximate models are computationally feasible to invert, and these often break down for realistic images.
A more fundamental limitation, however, is that this formulation of the problem does not explain the adaptive nature of the visual system or how it can learn highly abstract and general representations of objects and surfaces. An alternative approach is to derive representations from the statistics of the images themselves. This information theoretic view, called efficient coding, starts with the observation that there is an equivalence between the degree of structure represented and the efficiency of the code [1]. The hypothesis is that the primary goal of early sensory coding is to encode information efficiently. This theory has been applied to derive efficient codes for natural images and to explain a wide range of response properties of neurons in the visual cortex [2–7].

∗To whom correspondence should be addressed.

Most algorithms for learning efficient representations assume either simply that the data are generated by a linear superposition of basis functions, as in independent component analysis (ICA), or, as in sparse coding, that the basis function coefficients are 'sparsified' by lateral inhibition. Clearly, these simple models are insufficient to capture the rich structure of natural images, and although they capture higher-order statistics of natural images (correlations beyond second order), it remains unclear how to go beyond this to discover higher-order image structure. One approach is to learn image classes by embedding the statistical density assumed by ICA in a mixture model [8]. This provides a method for modeling classes of images and for performing automatic scene segmentation, but it assumes a fundamentally local representation and therefore is not suitable for compactly describing the large degree of structure variation across images. Another approach is to construct a specific model of non-linear features, e.g. the responses of complex cells, and learn an efficient code of their outputs [9].
With this, one is limited by the choice of the non-linearity and the range of image regularities that can be modeled. In this paper, we take as a starting point the observation by Schwartz and Simoncelli [10] that, for natural images, there are significant statistical dependencies among the variances of filter outputs. By factoring out these dependencies with divisive normalization, Schwartz and Simoncelli showed that the model could account for a wide range of non-linearities observed in neurons in the auditory nerve and primary visual cortex. Here, we propose a statistical model for higher-order structure that learns a basis on the variance regularities in natural images. This higher-order, non-orthogonal basis describes how, for a particular visual image patch, image basis function coefficient variances deviate from the default assumption of independence. This view offers a novel description of higher-order image structure and provides a way to learn sparse distributed representations of abstract image properties such as object location, scale, and surface texture.

Efficient coding of natural images

The computational goal of efficient coding is to derive from the statistics of the pattern ensemble a compact code that maximally reduces the redundancy in the patterns with minimal loss of information. The standard model assumes that the data is generated using a set of basis functions A and coefficients u:

x = Au.   (1)

Because coding efficiency is being optimized, it is necessary, either implicitly or explicitly, for the model to capture the probability distribution of the pattern ensemble. For the linear model, the data likelihood is [11,12]

p(x|A) = p(u)/|det A|.   (2)

The coefficients u_i are assumed to be statistically independent,

p(u) = ∏_i p(u_i).
(3) ICA learns efficient codes of natural scenes by adapting the basis vectors to maximize the likelihood of the ensemble of image patterns, p(x_1,...,x_N) = ∏_n p(x_n|A), which maximizes the independence of the coefficients and optimizes coding efficiency within the limits of the linear model.

Figure 1: Statistical dependencies among natural image independent component basis coefficients. The scatter plots show, for the two basis functions in the same row and column, the joint distributions of basis function coefficients. Each point represents the encoding of a 20×20 image patch centered at a random location in the image. (a) For complex natural scenes, the joint distributions appear to be independent, because the joint distribution can be approximated by the product of the marginals. (b) Closer inspection of particular image regions (the image in (b) is contained in the lower middle part of the image in (a)) reveals complex statistical dependencies for the same set of basis functions. (c) Images such as texture can also show complex statistical dependencies.

Statistical dependencies among 'independent' components

A linear model can only achieve limited statistical independence among the basis function coefficients and thus can only capture a limited degree of visual structure. Deviations from independence among the coefficients reflect particular kinds of visual structure (fig. 1). If the coefficients were independent it would be possible to describe the joint distribution as the product of two marginal densities, p(u_i, u_j) = p(u_i)p(u_j). This is approximately true for natural scenes (fig. 1a), but for particular images the joint distributions of coefficients show complex statistical dependencies that reflect the higher-order structure (figs. 1b and 1c). The challenge for developing more general models of efficient coding is formulating a description of these higher-order correlations in a way that captures meaningful higher-order visual structure.
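Equations (1)–(3) make the linear model's likelihood fully explicit, which is easy to check numerically. The sketch below is our own toy setup, not taken from the paper: a random 4×4 basis and a Laplacian prior on the coefficients. It evaluates log p(x|A) = Σ_i log p(u_i) − log |det A| with u = A⁻¹x, and confirms that patches actually generated by the model score higher on average than mismatched data:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
A = rng.normal(size=(d, d))             # a toy basis, not a learned image basis

def log_likelihood(x, A):
    # log p(x|A) = sum_i log p(u_i) - log|det A|, with Laplacian p(u_i) = exp(-|u_i|)/2
    u = np.linalg.solve(A, x)
    return np.sum(-np.abs(u) - np.log(2.0)) - np.log(np.abs(np.linalg.det(A)))

x_model = A @ rng.laplace(size=(d, 1000))          # data generated by the model
x_other = 10.0 * rng.normal(size=(d, 1000))        # mismatched data
ll_model = np.mean([log_likelihood(x, A) for x in x_model.T])
ll_other = np.mean([log_likelihood(x, A) for x in x_other.T])
print(ll_model, ll_other)
```

Maximizing this quantity over A (with natural image patches as x) is exactly the ICA adaptation the text describes.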
2 Modeling higher-order statistical structure

The basic model of standard efficient coding methods has two major limitations. First, the transformation from the pattern to the coefficients is linear, so only a limited class of computations can be achieved. Second, the model can capture statistical relationships among the pixels, but does not provide any means to capture higher-order relationships that cannot be simply described at the pixel level. As a first step toward overcoming these limitations, we extend the basic model by introducing a non-independent prior to model higher-order statistical relationships among the basis function coefficients. Given a representation of natural images in terms of a Gabor-wavelet-like representation learned by ICA, one salient statistical regularity is the covariation of basis function coefficients in different visual contexts. Any specific type of image region, e.g. a particular kind of texture, will tend to yield large values for some coefficients and not others. Different types of image regions will exhibit different statistical regularities among the variances of the coefficients. For a large ensemble of images, the goal is to find a code that describes these higher-order correlations efficiently. In the standard efficient coding model, the coefficients are often assumed to follow a generalized Gaussian distribution

p(u_i) = z exp(−|u_i/λ_i|^q),   (4)

where z = q/(2λ_iΓ[1/q]). The exponent q determines the distribution's shape and the weight of the tails, and can be fixed or estimated from the data for each basis function coefficient. The parameter λ_i determines the scale of variation (usually fixed in linear models, since the basis vectors in A can absorb the scaling). λ_i is a generalized notion of variance; for clarity, we refer to it simply as variance below. Because we want to capture regularities among the variance patterns of the coefficients, we do not want to model the values of u themselves.
Instead, we assume that the relative variances in different visual contexts can be modeled with a linear basis as follows:

λ_i = exp([Bv]_i)   (5)
⇒ log λ = Bv,   (6)

where [Bv]_i refers to the i-th element of the product vector Bv. This formulation is useful because it uses a basis to represent the deviation from the variance assumed by the standard model. If we assume that v_j also follows a zero-centered, sparse distribution (e.g. a generalized Gaussian), then Bv is peaked around zero, which yields a variance of one, as in standard ICA. Because the distribution is sparse, only a few of the basis vectors in B are needed to describe how any particular image deviates from the default assumption of independence. The joint distribution for the prior (eqn. 3) becomes

−log p(u|B,v) ∝ Σ_{i=1}^{L} |u_i / e^{[Bv]_i}|^q.   (7)

Having formulated the problem as a statistical model, the choice of the value of v for a given u is determined by maximizing the posterior distribution

v̂ = argmax_v p(v|u,B) = argmax_v p(u|B,v)p(v).   (8)

Unfortunately, computing the most probable v is not straightforward. Because v specifies the variance of u, there is a range of values that could account for a given pattern – all that changes is the probability of the first order representation, p(u|B,v). For the simulations below, v̂ was estimated by gradient ascent. By maximizing the posterior p(v|u,B), the algorithm is computing the best way to describe how the distribution of v_i's for the current image patch deviates from the default assumption of independence, i.e. v = 0. This aspect of the algorithm makes the transformation from the data to the internal representation fundamentally non-linear. The basis functions in B represent an efficient, sparse, distributed code for commonly observed deviations. In contrast to the first layer, where basis functions in A correspond to specific visual features, higher-order basis functions in B describe the shapes of image distributions.
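The MAP estimation of v in eq. (8) can be sketched as plain gradient ascent on the log posterior. Everything below is an illustrative stand-in, not the authors' code: the dimensions, the random toy basis B, the synthetic coefficients u, and the crude step-halving are our own choices, and the objective is our reconstruction of log p(u|B,v) + log p(v) with a Laplacian prior on v and exponent q = 1:

```python
import numpy as np

rng = np.random.default_rng(3)
L, M, q = 20, 5, 1.0                    # coefficients, higher-order units, exponent
B = rng.normal(scale=0.3, size=(L, M))  # toy higher-order basis (not learned)
v_true = rng.normal(size=M)
u = rng.laplace(size=L) * np.exp(B @ v_true)   # synthetic coefficients

def objective(v):
    a = B @ v                           # log-variances: lambda_i = exp([Bv]_i)
    # log p(u|B,v) + log p(v) up to constants (our reconstruction)
    return np.sum(-a - (np.abs(u) * np.exp(-a))**q) - np.sum(np.abs(v))

v, step = np.zeros(M), 0.01
for _ in range(2000):                   # gradient ascent with crude step halving
    a = B @ v
    grad = B.T @ (q*(np.abs(u)*np.exp(-a))**q - 1.0) - np.sign(v)
    v_new = v + step*grad
    if objective(v_new) <= objective(v):
        step *= 0.5                     # keep the ascent monotone
        continue
    v = v_new
print(objective(v), objective(np.zeros(M)))
```

For q = 1 the objective is concave in v, so this simple ascent converges to the MAP value v̂.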
The parameters are adapted by performing gradient ascent on the data likelihood. Using the generalized prior, the data likelihood is computed by marginalizing over the coefficients. Assuming independence between B and v, the marginal likelihood is

p(x|A,B) = ∫ p(u|B,v) p(v)/|det A| dv.   (9)

This, however, is intractable to compute, so we approximate it by the maximum a posteriori value v̂:

p(x|A,B) ≈ p(u|B, v̂) p(v̂)/|det A|.   (10)

We assume that p(v) = ∏_i p(v_i) and that p(v_i) ∝ exp(−|v_i|). We adapt B by maximizing the likelihood over the data ensemble:

B = argmax_B Σ_n log p(u_n|B, v̂_n) + log p(B).   (11)

For reasons of space, we omit the (straightforward) derivations of the gradients.

Figure 2: A subset of the 400 image basis functions. Each basis function is 20×20 pixels.

3 Results

The algorithm described above was applied to a standard set of ten 512×512 natural images used in [2]. For computational simplicity, prior to the adaptation of the higher-order basis B, a 20×20 ICA image basis was derived using standard methods (e.g. [3]). A subset of these basis functions is shown in fig. 2. Because of the computational complexity of the learning procedure, the number of basis functions in B was limited to 30, although in principle a complete basis of 400 could be learned. The basis B was initialized to small random values and gradient ascent was performed for 4000 iterations, with a fixed step size of 0.05. For each batch of 5000 randomly sampled image patches, v̂ was derived using 50 steps of gradient ascent at a fixed step size of 0.01. Fig. 3 shows three different representations of the basis functions in the matrix B adapted to natural images. The first 10×3 block (fig. 3a) shows the values of the 30 basis functions in B in their original learned order. Each square represents 400 weights B_{i,j} from a particular v_j to all the image basis functions u_i. Black dots represent negative weights; white, positive weights.
In this representation, the weights appear sparse, but otherwise show no apparent structure, simply because basis functions in A are unordered. Figs. 3b and 3c show the weights rearranged in two different ways. In fig. 3b, the dots representing the same weights are arranged according to the spatial location within an image patch (as determined by fitting a 2D Gabor function) of the basis function which the weight affects. Each weight is shown as a dot; white dots represent positive weights, black dots negative weights. In fig. 3c, the same weights are arranged according to the orientation and spatial scale of the Gaussian envelope of the fitted Gabor. Orientation ranges from 0 to π counter-clockwise from the horizontal axis, and spatial scale ranges radially from DC at the bottom center to Nyquist. (Note that the learned basis functions can only be approximately fit by Gabor functions, which limits the precision of the visualizations.) In these arrangements, several types of higher-order regularities emerge. The predominant one is that coefficient variances are spatially correlated, which reflects the fact that a common occurrence is an image patch with a small localized object against a relatively uniform background. For example, the pattern in row 5, column 3 of fig. 3b shows that often the coefficient variances in the top and bottom halves of the image patch are anti-correlated, i.e. either the object or scene is primarily across the top or across the bottom. Because vi can be positive or negative, the higher-order basis functions in B represent contrast in the variance patterns. Other common regularities are variance-contrasts between two orientations for all spatial positions (e.g. row 7, column 1) and between low and high spatial scales for all positions and orientations (e.g. row 9, column 3). Most higher-order basis functions have simple structure in either position, orientation, or scale, but there are some whose organization is less obvious. 
Figure 3: The learned higher-order basis functions. The same weights shown in the original order (a); rearranged according to the spatial location of the corresponding image basis functions (b); rearranged according to frequency and orientation of image basis functions (c). See text for details.

Figure 4: Image patches that yielded the largest coefficients for two basis functions in B. The central block contains nine image patches corresponding to higher-order basis function coefficients with values near zero, i.e. small deviations from independent variance patterns. Positions of the other nine-patch blocks correspond to the associated values of the higher-order coefficients, here v15 and v27 (whose weights to the u_i's are shown at the axes extrema). For example, the upper-left block contains image patches for which v15 was highly negative (contrast localized to the bottom half of the patch) and v27 was highly positive (power predominantly at low spatial scales). This illustrates how different combinations of basis functions in B define distributions of images (in this case, spatial frequency and location).

Another way to get insight into the code learned by the model is to display, for a large ensemble of image patches, the patches that yield the largest values of particular v_i's (and their corresponding basis functions in B). This is shown in fig. 4. As a check to see whether any of the higher-order structure learned by the algorithm was simply due to random variations in the dataset, we generated a dataset by drawing independent samples u_n from a generalized Gaussian to produce the patterns x_n = Au_n. The resulting basis B was composed only of small random values, indicating essentially no deviation from the standard assumption of independence and unit variance. In addition, adapting the model on a synthetic dataset generated from a hand-specified B recovers the original higher-order basis functions.
It is also possible to adapt A and B simultaneously (although with considerably greater computational expense). To check the validity of first deriving B for a fixed A, both matrices were adapted simultaneously for small 8×8 patches on the same natural image data set. The results for both the image basis matrix A and the higher-order basis B were qualitatively similar to those reported above.

4 Discussion

We have presented a model for learning higher-order statistical regularities in natural images by learning an efficient, sparse-distributed code for the basis function coefficient variances. The recognition algorithm is non-linear, but we have not yet tested whether it can account for non-linearities similar to the types reported in [10]. A (cautious) neurobiological interpretation of the higher-order units is that they are analogous to complex cells which pool output over specific first-order feature dimensions. Rather than achieving a simplistic invariance, however, the model presented here has the specific goal of efficiently representing the higher-order structure by adapting to the statistics of natural images, and thus may predict a broader range of response properties than are commonly tested physiologically. One salient type of higher-order structure learned by the model is the position of image structure within the patch. It is interesting that, rather than encoding specific locations, the model learned a coarse code of position using broadly tuned spatial patterns. This could offer novel insights into the function of the broad tuning of higher level visual neurons. By learning higher-order basis functions for different classes of visual images, the model could not only provide insights into other types of visual response properties, but could also provide a way to simplify some of the computations in perceptual organization and other computations in mid-level vision.

References

[1] H. B. Barlow.
Possible principles underlying the transformation of sensory messages. In W. A. Rosenbluth, editor, Sensory Communication, pages 217–234. MIT Press, Cambridge, 1961. [2] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive-field properties by learning a sparse code for natural images. Nature, 381:607–609, 1996. [3] A. J. Bell and T. J. Sejnowski. The ’independent components’ of natural scenes are edge filters. Vision Res., 37(23):3327–3338, 1997. [4] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proc. Royal Soc. Lond. B, 265:359–366, 1998. [5] J. H. van Hateren and D. L. Ruderman. Independent component analysis of natural image sequences yield spatiotemporal filters similar to simple cells in primary visual cortex. Proc. Royal Soc. Lond. B, 265:2315–2320, 1998. [6] P. O. Hoyer and A. Hyvarinen. Independent component analysis applied to feature extraction from colour and stereo images. Network, 11(3):191–210, 2000. [7] E. Simoncelli and B. Olshausen. Natural image statistics and neural representation. Ann. Rev. Neurosci., 24:1193–1216, 2001. [8] T-W. Lee and M. S. Lewicki. Unsupervised classification, segmentation and de-noising of images using ICA mixture models. IEEE Trans. Image Proc., 11(3):270–279, 2002. [9] P. O. Hoyer and A. Hyvarinen. A multi-layer sparse coding network learns contour coding from natural images. Vision Research, 42(12):1593–1605, 2002. [10] O. Schwartz and E. P. Simoncelli. Natural signal statistics and sensory gain control. Nat. Neurosci., 4:819–825, 2001. [11] B. A. Pearlmutter and L. C. Parra. A context-sensitive generalization of ICA. In International Conference on Neural Information Processing, pages 151–157, 1996. [12] J-F. Cardoso. Infomax and maximum likelihood for blind source separation. IEEE Signal Processing Letters, 4:109–111, 1997.
2002
42
2,246
Effective Dimension and Generalization of Kernel Learning Tong Zhang IBM T.J. Watson Research Center Yorktown Heights, NY 10598 tzhang@watson.ibm.com

Abstract

We investigate the generalization performance of some learning problems in Hilbert function spaces. We introduce a concept of scale-sensitive effective data dimension, and show that it characterizes the convergence rate of the underlying learning problem. Using this concept, we can naturally extend results for parametric estimation problems in finite dimensional spaces to non-parametric kernel learning methods. We derive upper bounds on the generalization performance and show that the resulting convergence rates are optimal under various circumstances.

1 Introduction

The goal of supervised learning is to predict an unobserved output value y based on an observed input vector x. This requires us to estimate a functional relationship y ≈ f(x) from a set of training examples. Usually the quality of the predictor f(x) can be measured by a loss function L(f(x), y). In machine learning, we assume that the data (x, y) are drawn from an unknown underlying distribution. Our goal is to find f(x) so that the expected true loss of f given below is as small as possible:

R(f) = E_{x,y} L(f(x), y),

where we use E_{x,y} to denote the expectation with respect to the true (but unknown) underlying distribution. In this paper we focus on smooth convex loss functions that are second order differentiable with respect to the first component. In addition we assume that the second derivative is bounded both above and below (away from zero).¹ For example, our analysis applies to important methods such as least squares regression (aka Gaussian processes) and logistic regression in Hilbert spaces. In order to obtain a good predictor f(x) from training data, it is necessary to start with a model of the functional relationship. In this paper, we consider models that are subsets of some Hilbert function space H. Denote by ‖·‖_H the norm in H.
In particular, we consider models in a bounded convex subset C of H. We would like to find the best model in C, 1This boundedness assumption is not essential. However in this paper, in order to emphasize the main idea, we shall avoid using a more complex derivation that handles more general situations. defined as: f* = arg min_{f in C} E_{X,Y} L(f(X), Y). (1) In supervised learning, we construct an estimator f_hat of f* from a set of n training examples {(X_1, Y_1), ..., (X_n, Y_n)}. Throughout the paper, we use the hat symbol to denote empirical quantities based on the n observed training data. Specifically, we use E_hat to denote the empirical expectation with respect to the training samples, so that R_hat(f) = E_hat L(f(X), Y) = (1/n) sum_{i=1}^n L(f(X_i), Y_i). Assume that the input x belongs to a set X. We make the reasonable assumption that f is point-wise continuous under the ||.||_H topology: for all x in X, lim_{f' -> f} f'(x) = f(x), where f' -> f is in the sense that ||f' - f||_H -> 0. This assumption is equivalent to the condition sup_{||f||_H <= 1} |f(x)| < +infinity for all x in X, implying that each data point x can be regarded as a bounded linear functional l_x on H such that l_x(f) = f(x) for all f in H. Since a Hilbert space H is self-dual, we can represent l_x by an element in H. Therefore for each x we can define u_x in H such that f(x) = (f, u_x) for all f in H, where (.,.) denotes the inner product of H. It is clear that u_x can be regarded as a representing feature vector of x in H. In the literature, the inner product k(x_1, x_2) = (u_{x_1}, u_{x_2}) is often referred to as the kernel of H, and H as the reproducing kernel Hilbert space which is determined by the kernel function k(.,.). The purpose of this paper is to develop bounds on the true risk R(f_hat) of any empirical estimator f_hat compared to the optimal risk R(f*), based on its observed risk R_hat(f_hat). Specifically we seek a bound of the following form: R(f_hat) <= R(f*) + c [R_hat(f_hat) - R_hat(f*)] + epsilon_n(lambda), where c is a positive constant that only depends on the loss function L, and lambda is a parameter that characterizes the effective data dimensionality for the learning problem.
If f_hat is the empirical estimator that minimizes R_hat(f) in C, then the second term on the right hand side is non-positive. We are thus mainly interested in the third term. It will be shown that if H is a finite dimensional space, then the third term is O(d/n), where d = dim(H) is the dimension of H. If H is an infinite dimensional space (or when d is large compared with n), one can adjust lambda appropriately based on the sample size n to get a bound O(d_n/n), where the effective dimension d_n at the optimal scale lambda becomes sample-size dependent. However, the effective dimension will never grow faster than d_n = O(sqrt(n)), and hence even in the worst case epsilon_n(lambda) converges to zero at a rate no worse than O(1/sqrt(n)). A consequence of our analysis is to obtain convergence rates better than O(1/sqrt(n)). For empirical estimators with least squares loss, this issue has been considered in [1, 2, 4] among others. The approach in [1] won't lead to the optimal rate of convergence for nonparametric classes. The L_2-covering number based analyses in [2, 4] use the chaining argument [4] and ratio large deviation inequalities. However, it is known that chaining does not always lead to the optimal convergence rate, and for many problems covering numbers can be rather difficult to estimate. The effective dimension based analysis presented here, while restricted to learning problems in Hilbert spaces (kernel methods), addresses these issues. 2 Decomposition of loss function Consider a convex subset C of H which is closed under the uniform norm topology. Let f* be the optimal predictor in C defined in (1). By differentiating (1) at the optimal solution, and using the convexity of C, we obtain the following first order condition: E_{X,Y} L'_1(f*(X), Y) (f(X) - f*(X)) >= 0 for all f in C, (2) where L'_1(a, y) denotes the derivative of L(a, y) with respect to its first argument a. This inequality will be very important in our analysis.
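The first order condition can be verified numerically on a toy instance (hypothetical numbers: H = R, f(x) = a*x, squared loss, C the interval [-1, 1], with the unconstrained optimum outside C so that f* sits on the boundary):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=5000)
Y = 2.0 * X + 0.1 * rng.normal(size=5000)   # unconstrained optimum near a = 2

# C = [-1, 1]; squared loss L(a*x, y) = (a*x - y)^2, so the constrained
# minimizer of E[(a X - Y)^2] over C sits on the boundary a* = 1.
grid = np.linspace(-1.0, 1.0, 201)
risks = [np.mean((a * X - Y) ** 2) for a in grid]
a_star = grid[int(np.argmin(risks))]

# First order condition: E[ L'_1(a* X, Y) * (a X - a* X) ] >= 0 for a in C,
# with L'_1(u, y) = 2 (u - y).
cond = [np.mean(2.0 * (a_star * X - Y) * (a - a_star) * X) for a in grid]
```

The condition holds with equality at a = a*, and is positive for interior points of C, exactly as convexity of C dictates.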
Definition 2.1 The Bregman distance of L (with respect to its first variable) is defined as: d_L(a, b | y) = L(a, y) - L(b, y) - L'_1(b, y)(a - b). It is well known (and easy to check) that for a convex function, its Bregman divergence is always non-negative. As mentioned in the introduction, we assume for simplicity that there exist positive constants c_1 and c_2 such that 0 < c_1 <= L''_1(a, y)/2 <= c_2, where L''_1 is the second order derivative of L with respect to the first variable. Using a Taylor expansion of L, it is easy to see that we have the following inequality for d_L: c_1 (a - b)^2 <= d_L(a, b | y) <= c_2 (a - b)^2. (3) Now, for all f in C, we consider the following decomposition: L(f(x), y) - L(f*(x), y) = d_L(f(x), f*(x) | y) + L'_1(f*(x), y)(f(x) - f*(x)). Clearly, by the non-negativity of the Bregman divergence and by (2), the expectations of the two terms on the right hand side of the above equality are both non-negative. This fact is very important in our approach. The above decomposition of L gives the following decomposition of the expected loss: R(f) - R(f*) = E d_L(f(X), f*(X) | Y) + E L'_1(f*(X), Y)(f(X) - f*(X)). We thus obtain from (3): c_1 E (f(X) - f*(X))^2 + E L'_1(f*(X), Y)(f(X) - f*(X)) <= R(f) - R(f*) <= c_2 E (f(X) - f*(X))^2 + E L'_1(f*(X), Y)(f(X) - f*(X)). (4) 3 Empirical ratio inequality and generalization bounds Given a positive definite self-adjoint operator S : H -> H, we define an inner product structure on H as (f, g)_S = (f, S g). The corresponding norm is ||f||_S = (f, S f)^{1/2}. Given a positive number lambda, and letting I denote the identity operator, we define the following self-adjoint operator on H: S_lambda = E_X [u_X u_X^T] + lambda I, where we have used the matrix notation u u^T to denote the self-adjoint operator H -> H defined by (u u^T) f = (f, u) u. In addition, we consider the inner product space L_lambda on the set of self-adjoint operators on H, with the inner product defined as (A, B)_{L_lambda} = tr(S_lambda^{-1} A S_lambda^{-1} B), where tr(A) is the trace of a linear operator A (the sum of its eigenvalues). The corresponding norm is denoted ||.||_{L_lambda}. We start our analysis with the following simple lemma: Lemma 3.1 For any function psi(x, y), the following bounds are valid:
sup_{f in H} [E_hat psi(X, Y) f(X) - E psi(X, Y) f(X)]^2 / (E f(X)^2 + lambda ||f||_H^2) <= || E_hat psi(X, Y) u_X - E psi(X, Y) u_X ||_{S_lambda^{-1}}^2 and sup_{f in H} |E_hat f(X)^2 - E f(X)^2| / (E f(X)^2 + lambda ||f||_H^2) <= || E_hat u_X u_X^T - E u_X u_X^T ||_{L_lambda}. Proof Note that E f(X)^2 + lambda ||f||_H^2 = ||f||_{S_lambda}^2. Therefore, letting Delta = E_hat psi(X, Y) u_X - E psi(X, Y) u_X, we obtain from the Cauchy-Schwartz inequality [E_hat psi(X, Y) f(X) - E psi(X, Y) f(X)]^2 = (f, Delta)^2 <= ||f||_{S_lambda}^2 ||Delta||_{S_lambda^{-1}}^2. This proves the first inequality. To show the second inequality, we simply observe that the left hand side is the largest absolute eigenvalue of the operator A = S_lambda^{-1/2} [E_hat u_X u_X^T - E u_X u_X^T] S_lambda^{-1/2}, which is upper bounded by tr(A^2)^{1/2}. Therefore the second inequality follows immediately from the definition of the L_lambda-norm. The importance of Lemma 3.1 is that it bounds the behavior of any estimator f in H (which can be sample dependent) in terms of the norm of the empirical mean of n zero-mean Hilbert-space valued random vectors. The convergence rate of the latter can be easily estimated from the variance of the random vectors, and therefore we have significantly simplified the problem. In order to estimate the variance of the random vectors on the right hand sides of Lemma 3.1, and hence characterize the behavior of the learning problem, we shall introduce the following notion of effective data dimensionality at a scale lambda: D(lambda) = E_X (u_X, S_lambda^{-1} u_X). Some properties of D(lambda) are listed in Appendix A, which can be used to estimate the quantity. In particular, for a finite dimensional space H, D(lambda) is upper bounded by the dimensionality dim(H) of the space. Moreover, the equality can be achieved by letting lambda -> 0 as long as E u_X u_X^T is full rank. Thus this quantity behaves like a (scale-sensitive) data dimension. We also define the following quantities to measure the boundedness of the input data: b = sup_x ||u_x||_H, b_lambda = sup_x ||u_x||_{S_lambda^{-1}}. (5) It is easy to see that b_lambda <= b / sqrt(lambda). Lemma 3.2 Let M = sup_{x,y} |L'_1(f*(x), y)|; then we have E || L'_1(f*(X), Y) u_X - E L'_1(f*(X), Y) u_X ||_{S_lambda^{-1}}^2 <= M^2 D(lambda), E || u_X u_X^T ||_{L_lambda} = D(lambda), and E || u_X u_X^T - E u_X u_X^T ||_{L_lambda}^2 <= D(lambda) b_lambda^2.
Proof Let mu = E L'_1(f*(X), Y) u_X; then we have E || L'_1(f*(X), Y) u_X - mu ||_{S_lambda^{-1}}^2 = E || L'_1(f*(X), Y) u_X ||_{S_lambda^{-1}}^2 - || mu ||_{S_lambda^{-1}}^2 <= M^2 D(lambda), which gives the first inequality. Note that for all u in H, || u u^T ||_{L_lambda} = || u ||_{S_lambda^{-1}}^2. Therefore E || u_X u_X^T ||_{L_lambda} = E || u_X ||_{S_lambda^{-1}}^2 = D(lambda), leading to the second equality. Since || u_X u_X^T ||_{L_lambda} = || u_X ||_{S_lambda^{-1}}^2 <= b_lambda^2, we have E || u_X u_X^T ||_{L_lambda}^2 <= b_lambda^2 E || u_X u_X^T ||_{L_lambda} = D(lambda) b_lambda^2. Similar to the proof of the first inequality, it is easy to check that this implies the third inequality. Next we need to use the following version of the Bernstein inequality in Hilbert spaces. Proposition 3.1 ([5]) Let xi_1, ..., xi_n be zero-mean independent random vectors in a Hilbert space. If there exist B, sigma > 0 such that for all natural numbers m >= 2: (1/n) sum_{i=1}^n E ||xi_i||^m <= (m!/2) sigma^2 B^{m-2}, then for all t > 0: P( || (1/n) sum_{i=1}^n xi_i || >= t ) <= 2 exp( - n t^2 / (2 (sigma^2 + B t)) ). In this paper, we shall use the following variant of the above bound for convenience: P( || (1/n) sum_{i=1}^n xi_i || >= (1 + sqrt(2 t)) sigma / sqrt(n) + 2 B t / n ) <= exp(-t). (6) Lemma 3.3 Under the assumptions of Lemma 3.2, let Delta_n(lambda, t) = (1 + sqrt(2 t)) sqrt(D(lambda)/n) + 4 b_lambda t / n. Then with probability at least 1 - 2 exp(-t): sup_{f in H} [E_hat L'_1(f*(X), Y) f(X) - E L'_1(f*(X), Y) f(X)] / (E f(X)^2 + lambda ||f||_H^2)^{1/2} <= M Delta_n(lambda, t). Similarly, with probability at least 1 - exp(-t), we have: sup_{f in H} |E_hat f(X)^2 - E f(X)^2| / (E f(X)^2 + lambda ||f||_H^2) <= b_lambda Delta_n(lambda, t). Proof The bounds are straightforward applications of (6) and the previous two lemmas. Due to the limitation of space, we skip the details. We are now ready to derive the following main result of the paper: Theorem 3.1 Assume sup_{x,y} |L'_1(f*(x), y)| <= M. Let c = c_2 / c_1, where c_1 and c_2 satisfy (3). Consider any sample dependent estimator f_hat such that f_hat in C; that is, f_hat in C is a function of the training sample. If we choose lambda such that c b_lambda Delta_n(lambda, t) <= 0.5, then with probability at least 1 - 3 exp(-t), the generalization error is bounded as: R(f_hat) <= R(f*) + 4 c [ R_hat(f_hat) - R_hat(f*) ] + 4 c c_2 lambda || f_hat - f* ||_H^2 + 32 c^2 M^2 Delta_n(lambda, t)^2 / c_1. Proof We introduce the following notations for convenience: A_hat(f) = E_hat L'_1(f*(X), Y)(f(X) - f*(X)), A(f) = E L'_1(f*(X), Y)(f(X) - f*(X)), B_hat(f) = E_hat (f(X) - f*(X))^2,
B(f) = E (f(X) - f*(X))^2, and s(f) = B(f) + lambda ||f - f*||_H^2. We obtain from Lemma 3.3 that with probability at least 1 - 3 exp(-t), both A_hat(f_hat) - A(f_hat) <= M Delta_n(lambda, t) s(f_hat)^{1/2} and B(f_hat) - B_hat(f_hat) <= b_lambda Delta_n(lambda, t) s(f_hat) hold. Combining the above two inequalities, using (4) and recalling (2), we obtain: R(f_hat) - R(f*) <= c [ R_hat(f_hat) - R_hat(f*) ] + 2 c M Delta_n(lambda, t) s(f_hat)^{1/2} + c_2 b_lambda Delta_n(lambda, t) s(f_hat). (7) Let r_1(f) = [ R(f) - R(f*) ] + lambda c_1 ||f - f*||_H^2 and r_2(f) = [ R_hat(f) - R_hat(f*) ] + lambda c_2 ||f - f*||_H^2; then (2) and (4) imply that c_1 s(f) <= r_1(f). We can derive from (7): r_1(f_hat) <= c r_2(f_hat) + 2 c M Delta_n(lambda, t) ( r_1(f_hat) / c_1 )^{1/2} + c_2 b_lambda Delta_n(lambda, t) r_1(f_hat) / c_1. Using the assumption that c b_lambda Delta_n(lambda, t) <= 0.5, the last term is at most r_1(f_hat)/2, so that r_1(f_hat) <= 2 c r_2(f_hat) + 4 c M Delta_n(lambda, t) ( r_1(f_hat) / c_1 )^{1/2}, which can be regarded as a quadratic inequality in r_1(f_hat)^{1/2}. Solving the inequality using elementary algebra, we obtain: r_1(f_hat) <= 4 c r_2(f_hat) + 32 c^2 M^2 Delta_n(lambda, t)^2 / c_1, which immediately implies the theorem. Note that both b_lambda and Delta_n(lambda, t) go to zero as lambda -> +infinity; therefore the assumption c b_lambda Delta_n(lambda, t) <= 0.5 can be satisfied as long as we pick a lambda that is larger than a critical value lambda_0. Using the bound b_lambda <= b / sqrt(lambda), we easily obtain the following result. Corollary 3.1 Under the assumptions of Theorem 3.1, assume also that the diameter of C is bounded by Delta_C: Delta_C = sup_{f_1, f_2 in C} || f_1 - f_2 ||_H. Then for all lambda satisfying the condition of Theorem 3.1 and any upper bound D_bar(lambda) of D(lambda), we have with probability at least 1 - 3 exp(-t): R(f_hat) <= R(f*) + 4 c [ R_hat(f_hat) - R_hat(f*) ] + 4 c c_2 lambda Delta_C^2 + (32 c^2 M^2 / c_1) ( (1 + sqrt(2 t)) sqrt( D_bar(lambda) / n ) + 4 b t / ( sqrt(lambda) n ) )^2. 4 Examples We will only consider empirical estimators f_hat that minimize R_hat(f) in C. In this case, [ R_hat(f_hat) - R_hat(f*) ] <= 0 in Corollary 3.1. We shall thus only focus on the remaining terms. Worst case effective dimensionality and generalization In the worst case, we have D(lambda) <= b^2 / lambda. Therefore if t <= n, we can always let lambda be proportional to (M b / Delta_C) sqrt( c (1 + t) / (c_1 c_2 n) ) in Corollary 3.1 and obtain with probability at least 1 - 3 exp(-t):
R(f_hat) <= R(f*) + O( c^2 M b Delta_C sqrt( (1 + t) / n ) ), i.e. the worst-case excess risk converges at a rate of order 1/sqrt(n). Finite dimensional problems We can use the bound D(lambda) <= dim(H). Therefore we can let lambda = c M^2 dim(H) / ( c_2 Delta_C^2 n ) in Corollary 3.1 and obtain: R(f_hat) <= R(f*) + O( c^2 M^2 ( dim(H) + t ) / ( c_1 n ) ). It is well known that a rate of the order O( dim(H) / n ) is optimal in this case. Smoothing splines For simplicity, we only consider 1-dimensional problems. For smoothing splines, the corresponding Hilbert space consists of functions f satisfying the smoothness condition that the integral of ( f^{(s)} )^2 is bounded ( f^{(s)} is the s-th derivative of f and s > 1/2 ). We may consider periodic functions (or their restrictions to an interval), and the condition corresponds to a decaying Fourier coefficient condition. Specifically, the space can be regarded as the reproducing kernel Hilbert space with kernel k(x_1, x_2) = 1 + sum_{q >= 1} q^{-2s} [ cos(2 pi q x_1) cos(2 pi q x_2) + sin(2 pi q x_1) sin(2 pi q x_2) ]. Now, using Proposition A.3, we have D(lambda) <= 2 ( m + 2 m^{1-2s} / ( (2s - 1) lambda ) ) for every integer m >= 1. Taking m of order lambda^{-1/(2s)} therefore gives D(lambda) = O( lambda^{-1/(2s)} ). Choosing lambda of order n^{-2s/(2s+1)} in Corollary 3.1, this gives the following bound (with probability at least 1 - 3 exp(-t)): R(f_hat) <= R(f*) + O( (1 + t) n^{-2s/(2s+1)} ). This rate matches the best possible convergence rate for any data-dependent estimator.2 Exponential kernel The exponential kernel has recently been popularized by Vapnik. Again for simplicity we consider 1-dimensional problems where x in [-1, 1]. The kernel function is given by k(x_1, x_2) = exp( x_1 x_2 ) = sum_{q >= 0} x_1^q x_2^q / q!. Therefore D(lambda) <= 2 ( m + e / ( lambda m! ) ) for every m, which is O( log(1/lambda) / log log(1/lambda) ). We obtain an upper bound R(f_hat) <= R(f*) + O( log n / ( n log log n ) ), implying that the effective dimension is at most O( log n / log log n ) for exponential kernels. 5 Conclusion In this paper, we introduced a concept of scale-sensitive effective data dimension, and used it to derive generalization bounds for some kernel learning problems.
The resulting convergence rates are optimal for various learning problems. We have also shown that the 2The lower bound is well-known in the non-parametric statistical literature (for example, see [3]). effective dimension at the appropriately chosen optimal scale can be sample-size dependent and behaves like sqrt(n) in the worst case. This shows that despite the claim that a kernel method learns a predictor from an infinite dimensional Hilbert space, for a fixed sample size, the effective dimension is rather small. This in fact indicates that kernel methods are not any more powerful than learning in an appropriately chosen finite dimensional space. This observation also raises the following computational question: given n samples, kernel methods use n parameters in the computation, but as we have shown, the effective number of parameters (effective dimension) is not more than O(sqrt(n)). Therefore it could be possible to significantly reduce the computational cost of kernel methods by explicitly parameterizing the effective dimensions. A Properties of scale-sensitive effective data dimension We list some properties of the scale-sensitive data dimension D(lambda). Due to the limitation of space, we shall skip the proofs. The following proposition implies that the quantity D(lambda) behaves like a dimension if the underlying space H is finite dimensional. Proposition A.1 If H is a finite dimensional space, then D(lambda) <= dim(H). Moreover, for all Hilbert spaces H, we have the following bound: D(lambda) <= b^2 / lambda, where b is defined in (5). Proposition A.2 Consider the complete set of ortho-normal eigen-pairs (lambda_j, psi_j), j = 1, 2, ..., of the operator E u_X u_X^T, where lambda_j >= 0 and (psi_i, psi_j) = delta_{ij}. This gives the decomposition E u_X u_X^T = sum_j lambda_j psi_j psi_j^T, where lambda_j = E psi_j(X)^2. We have the identity: D(lambda) = sum_j lambda_j / (lambda_j + lambda). In many cases, we can find a so-called feature representation of the kernel function k(x_1, x_2) = (u_{x_1}, u_{x_2}). In such cases the eigenvalues lambda_j can be easily bounded.
Proposition A.3 Consider the following feature space decomposition of the kernel: k(x_1, x_2) = (u_{x_1}, u_{x_2}) = sum_q psi_q(x_1) psi_q(x_2), where each psi_q is a real valued function. If the eigenvalues lambda_q are arranged in non-increasing order, then we have the following bound: sum_{q > m} lambda_q <= sum_{q > m} E psi_q(X)^2. This implies D(lambda) <= 2 ( m + (1/lambda) sum_{q > m} sup_x psi_q(x)^2 ) for every m. References [1] W.S. Lee, P.L. Bartlett, and R.C. Williamson. The importance of convexity in learning with squared loss. IEEE Trans. Inform. Theory, 44(5):1974-1980, 1998. [2] Shahar Mendelson. Learning relatively small classes. In COLT 01, pages 273-288, 2001. [3] Charles J. Stone. Optimal global rates of convergence for nonparametric regression. Annals of Statistics, 10:1040-1053, 1982. [4] S.A. van de Geer. Empirical Processes in M-Estimation. Cambridge University Press, 2000. [5] Vadim Yurinsky. Sums and Gaussian Vectors. Springer-Verlag, Berlin, 1995.
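The identity of Proposition A.2 makes the scale-sensitive dimension easy to evaluate once the eigenvalues of E u_X u_X^T are known; the sketch below uses a hypothetical spline-like spectrum lambda_q = q^(-4) (i.e. s = 2), truncated at 2000 terms:

```python
import numpy as np

def effective_dimension(eigvals, lam):
    # Proposition A.2: D(lam) = sum_j lam_j / (lam_j + lam)
    eigvals = np.asarray(eigvals, dtype=float)
    return float(np.sum(eigvals / (eigvals + lam)))

# Hypothetical spline-like spectrum lam_q = q^(-2s) with s = 2.
q = np.arange(1, 2001)
spectrum = q.astype(float) ** -4.0

d_small = effective_dimension(spectrum, 1e-6)   # fine scale
d_mid = effective_dimension(spectrum, 1e-2)
d_large = effective_dimension(spectrum, 1e-1)   # coarse scale
```

Consistent with the smoothing-spline example, the computed dimension grows roughly like lambda^(-1/4) as the scale shrinks, while always remaining below the number of eigenvalues.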
2002
43
2,247
Nash Propagation for Loopy Graphical Games Luis E. Ortiz Michael Kearns Department of Computer and Information Science University of Pennsylvania {leortiz,mkearns}@cis.upenn.edu Abstract We introduce NashProp, an iterative and local message-passing algorithm for computing Nash equilibria in multi-player games represented by arbitrary undirected graphs. We provide a formal analysis and experimental evidence demonstrating that NashProp performs well on large graphical games with many loops, often converging in just a dozen iterations on graphs with hundreds of nodes. NashProp generalizes the tree algorithm of (Kearns et al. 2001), and can be viewed as similar in spirit to belief propagation in probabilistic inference, and thus complements the recent work of (Vickrey and Koller 2002), who explored a junction tree approach. Thus, as for probabilistic inference, we have at least two promising general-purpose approaches to equilibria computation in graphs. 1 Introduction There has been considerable recent interest in representational and algorithmic issues arising in multi-player game theory. One example is the recent work on graphical games (Kearns et al. 2001) (abbreviated KLS in the sequel). Here a multi-player game is represented by an undirected graph. The interpretation is that while the global equilibria of the game depend on the actions of all players, individual payoffs for a player are determined solely by his own action and the actions of his immediate neighbors in the graph. Like graphical models in probabilistic inference, graphical games may provide an exponentially more succinct representation than the standard "tabular" or normal form of the game. Also as for probabilistic inference, the problem of computing equilibria on arbitrary graphs is intractable in general, and so it is of interest to identify both natural special topologies permitting fast Nash computations, and good heuristics for general graphs.
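The succinctness claim can be made concrete with a quick count (a hypothetical calculation): a normal-form two-action game stores n payoff matrices of 2^n entries each, while a graphical game whose neighborhoods have size k stores n local matrices of 2^(k+1) entries each (one index for the player and one per neighbor):

```python
def normal_form_entries(n):
    # each of the n payoff matrices M_i has 2**n entries
    return n * 2 ** n

def graphical_entries(n, k):
    # each local matrix has an index for the player and its k neighbors
    return n * 2 ** (k + 1)

# e.g. 30 players, each with 3 neighbors
ratio = normal_form_entries(30) // graphical_entries(30, 3)
```

For 30 players of degree 3 the saving is already a factor of 2^26.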
KLS gave a dynamic programming algorithm for computing Nash equilibria in graphical games in which the underlying graph is a tree, and drew analogies to the polytree algorithm for probabilistic inference (Pearl 1988). A natural question following from this work is whether there are generalizations of the basic tree algorithm analogous to those for probabilistic inference. In probabilistic inference, there are two main approaches to generalizing the polytree algorithm. Roughly speaking, the first approach is to take an arbitrary graph and “turn it into a tree” via triangulation, and subsequently run the tree-based algorithm on the resulting junction tree (Lauritzen and Spiegelhalter 1988). This approach has the merit of being guaranteed to perform inference correctly, but the drawback of requiring the computation to be done on the junction tree. On highly loopy graphs, junction tree computations may require exponential time. The other broad approach is to simply run (an appropriate generalization of) the polytree algorithm on the original loopy graph. This method garnered considerable interest when it was discovered that it sometimes performed quite well empirically, and was closely connected to the problem of decoding in Turbo Codes. Belief propagation has the merit of each iteration being quite efficient, but the drawback of having no guarantee of convergence in general (though recent theoretical work has established convergence for certain special cases (Weiss 2000)). In recent work, (Vickrey and Koller 2002) proposed a number of heuristics for equilibria computation in graphical games, including a constraint satisfaction generalization of KLS that essentially provides a junction tree approach for arbitrary graphical games. They also gave promising experimental results for this heuristic on certain loopy graphs that result in manageable junction trees. 
In this work, we introduce the NashProp algorithm, a different KLS generalization which provides an approach analogous to loopy belief propagation for graphical games. Like belief propagation, NashProp is a local message-passing algorithm that operates directly on the original graph of the game, requiring no triangulation or moralization1 operations. NashProp is a two-phase algorithm. In the first phase, nodes exchange messages in the form of two-dimensional tables. The table player V sends to neighboring player W in the graph indicates the values V "believes" he can play given a setting of W and the information V has received in tables from his other neighbors, a kind of conditional Nash equilibrium. In the second phase of NashProp, the players attempt to incrementally construct an equilibrium obeying constraints imposed by the tables computed in the first phase. Interestingly, we can provide rather strong theory for the first phase, proving that the tables must always converge, and result in a reduced search space that can never eliminate an equilibrium. When run using a discretization scheme introduced by KLS, the first phase of NashProp will actually converge in time polynomial in the size of the game representation. We also report on a number of controlled experiments with NashProp on loopy graphs, including some that would be difficult via the junction tree approach due to the graph topology. The results appear to be quite encouraging, thus growing the body of heuristics available for computing equilibria in compactly represented games. 2 Preliminaries The normal or tabular form of an n-player, two-action2 game is defined by a set of n matrices M_i (1 <= i <= n), each with n indices. The entry M_i(x_1, ..., x_n) specifies the payoff to player i when the joint action of the n players is (x_1, ..., x_n). Thus, each M_i has 2^n entries. The actions 0 and 1 are the pure strategies of each player, while a mixed strategy for player i is given by the probability p_i in [0, 1] that the player will play 0.
For any joint mixed strategy, given by a product distribution p = (p_1, ..., p_n), we define the expected payoff to player i as M_i(p) = E_{x ~ p}[M_i(x_1, ..., x_n)], where x ~ p indicates that each x_j is 0 with probability p_j and 1 with probability 1 - p_j. We use p[i : p'_i] to denote the vector which is the same as p except in the i-th component, where the value has been changed to p'_i. A (Nash) equilibrium for the game is a mixed strategy p such that for any player i, and for any p'_i in [0, 1], M_i(p) >= M_i(p[i : p'_i]). (We say that p_i is a best response to the rest of p.) In other words, no player can improve their expected payoff by deviating unilaterally from a Nash equilibrium. The classic theorem of (Nash 1951) states that for any game, there exists a Nash equilibrium in the space of joint mixed strategies. We will also use a straightforward definition for approximate Nash equilibria. An epsilon-Nash equilibrium is a mixed strategy p such that for any player i, and for any value p'_i in [0, 1], M_i(p) + epsilon >= M_i(p[i : p'_i]). (We say that p_i is an epsilon-best response to the rest of p.) Thus, no player can improve their expected payoff by more than epsilon by 1Unlike for inference, moralization may be required for games even on undirected graphs. 2For simplicity, we describe our results for two actions, but they generalize to multi-action games. deviating unilaterally from an approximate Nash equilibrium. The following definitions are due to KLS. An n-player graphical game is a pair (G, M), where G is an undirected graph on n vertices and M is a set of n matrices M_i called the local game matrices. Each player i is represented by a vertex in G, and the interpretation is that each player's payoff is determined solely by the actions in their local neighborhood in G. Thus the matrix M_i has an index for each of the k neighbors of i, and an index for i itself, and for any joint action x of i and its neighbors, M_i(x) denotes the payoff to i when he and his k neighbors play x. The expected payoff under a mixed strategy p in [0, 1]^n is defined analogously.
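The definitions above can be exercised by brute force on a tiny normal-form game; the code below (a hypothetical two-player example, matching pennies) computes expected payoffs under a product distribution and tests the approximate equilibrium condition on a grid of unilateral deviations:

```python
import itertools
import numpy as np

def expected_payoff(M, p):
    # M_i(p) = E_{x ~ p}[M_i(x)], where x_j = 0 with probability p_j.
    total = 0.0
    for x in itertools.product([0, 1], repeat=len(p)):
        prob = 1.0
        for xj, pj in zip(x, p):
            prob *= pj if xj == 0 else 1.0 - pj
        total += prob * M[x]
    return total

def is_approx_nash(Ms, p, grid=np.linspace(0.0, 1.0, 101), eps=1e-9):
    # p is an eps-Nash equilibrium if no unilateral deviation on the
    # grid improves any player's expected payoff by more than eps.
    for i, M in enumerate(Ms):
        base = expected_payoff(M, p)
        for q in grid:
            dev = list(p)
            dev[i] = q
            if expected_payoff(M, dev) > base + eps:
                return False
    return True

# Matching pennies: player 1 wants to match, player 2 wants to mismatch.
M1 = np.array([[1.0, 0.0], [0.0, 1.0]])
M2 = 1.0 - M1
```

At p = (0.5, 0.5) neither player can gain by deviating, while a deterministic profile such as (1.0, 1.0) fails the test.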
Note that in the two-action case, M_i has 2^{k+1} entries, which may be considerably smaller than 2^n. Note that any game can be trivially represented as a graphical game by choosing G to be the complete graph, and letting the local game matrices be the original tabular form matrices. However, any time the local neighborhoods in G can be bounded by k << n, the graphical representation is exponentially smaller than the normal form. We are interested in heuristics that can exploit this succinctness computationally. 3 NashProp: Table-Passing Phase The table-passing phase of NashProp proceeds in a series of rounds. In each round, every node will send a different binary-valued table to each of its neighbors in the graph. Thus, if vertices V and W are neighbors, the table sent from V to W in round r shall be denoted T^r_{V->W}(w, v). Since the vertices are always clear from the lower-case table indices, we shall drop the subscript and simply write T^r(w, v). This table is indexed by the continuum of possible mixed strategies w, v in [0, 1] for players W and V, respectively. Intuitively, the binary value T^r(w, v) indicates player V's (possibly incorrect) "belief" that there exists a (global) Nash equilibrium in which W = w and V = v. As these tables are indexed by continuous values, it is not clear how they can be finitely represented. However, as in KLS, we shall shortly introduce a finite discretization of these tables whose resolution is dependent only on local neighborhood size, yet is sufficient to compute global (approximate) equilibria. For the sake of generality we shall work with the exact tables in the ensuing formal analysis, which will immediately apply to the approximation algorithm as well. For every edge (W, V), the table-passing phase initialization is T^0(w, v) = 1 for all (w, v). Let us denote the neighbors of V other than W (if any) by U_1, ..., U_{k-1}.
For each round r >= 1, the table entry T^r(w, v) is assigned the value 1 if and only if there exists a vector of mixed strategies u = (u_1, ..., u_{k-1}) in [0, 1]^{k-1} for the U_i such that 1. T^{r-1}(v, u_i) = 1 for all 1 <= i <= k-1; and 2. v is a best response to (w, u_1, ..., u_{k-1}). We shall call such a u a witness to T^r(w, v) = 1. If V has no neighbors other than W, we define Condition 1 above to hold vacuously. If either condition is violated, we set T^r(w, v) = 0. Lemma 1 For all edges (W, V) and all r >= 1, the table sent from V to W can only contract or remain the same: T^{r+1}(w, v) = 1 implies T^r(w, v) = 1. Proof: By induction on r. The base case r = 1 holds trivially due to the table initialization to contain all 1 entries. For the induction, assume for contradiction that for some r > 1, there exists a pair of neighboring players (W, V) and a strategy pair (w, v) such that T^r(w, v) = 1 yet T^{r-1}(w, v) = 0. Since T^r(w, v) = 1, the definition of the table-passing phase implies that there exists a witness u for the neighbors of V other than W meeting Conditions 1 and 2 above. By induction, the fact that T^{r-1}(v, u_i) = 1 in Condition 1 implies that T^{r-2}(v, u_i) = 1 for all i = 1, ..., k-1. Since T^{r-1}(w, v) = 0 it must be that v is not a best response to (w, u_1, ..., u_{k-1}). But then u cannot be a witness to T^r(w, v) = 1, a contradiction. Since all tables begin filled with 1 entries, and Lemma 1 states entries can only change from 1 to 0, the table-passing phase must converge:
# $  be any mixed strategy for the entire population of players, and let us use  !    to denote the mixed strategy assigned to player  by  ! . Lemma 3 Let  !    $  be a Nash equilibrium. Then for all rounds  @  of the tablepassing phase, and every edge     ,  ' !      !    )(  . Proof: By induction on  . The base case  (C holds trivially by the table initialization. By induction, for every  and neighbor of  ,  &( 1 ? !      !  A  (  , satisfying Condition 1 for   ? !      !     (  . Condition 2 is immediately satisfied since  ! is a Nash equilibrium. We can now establish a strong sense in which the set of balanced limit tables      characterizes the Nash equilibria of the global game. We say that  ! is consistent with the      if for every vertex  with neighbors    we have  ? !      !  9 9(  , and  !   9 is a witness to this value. In other words, every edge assignment made in  ! is “allowed” by the      , and furthermore the neighborhood assignments made by  ! are witnesses. Theorem 4 Let  !   $  be any global mixed strategy. Then  ! is consistent with the balanced limit tables      if and only if it is a Nash equilibrium. Proof: The forward direction is easy. If  ! is consistent with the      , then by definition, for all  ,  (  !    is a best response to the local neighborhood  (  !      (  !   A . Hence,  ! is a Nash equilibrium. For the other direction, if  ! is a Nash equilibrium, then for all  ,  (  !    is certainly a best response to the strategy of its neighbors  (  !      (  !   9 . So for consistency with the      , it remains to show that for every player  and its neighbors    ,  ' !      !     (  and  ' !      !  )   (  for all . This has already been established in Lemma 3. Theorem 4 is important because it establishes that the table-passing phase provides us with an alternative — and hopefully vastly reduced — seach space for Nash equilibria. 
Rather than search for equilibria in the space of all mixed strategies, Theorem 4 asserts that we can limit our search to the space of  ! that are consistent with the balanced limit tables      , with no fear of missing equilibria. The demand for consistency with the limit tables is a locally stronger demand than merely asking for a player to be playing a best response to its neighborhood. Heuristics for searching this constrained space are the topic of Section 5. But first let us ask in what ways the search space defined by the      might constitute a significant reduction. The most obvious case is that in which many of the tables contain a large fraction of 0 entries, since every such entry eliminates all mixed strategies in which the corresponding pair of vertices plays the corresponding pair of values. As we shall see in the discussion of experimental results, such behavior seems to occur in many — but certainly not all — interesting cases. We shall also see that even when such reduction does not occur, the underlying graphical structure of the game may still yield significant computational benefits in the search for a consistent mixed strategy. 4 Approximate Tables Thus far we have assumed that the binary-valuedtables      have continuous indices  and  , and thus it is not clear how they can be finitely represented 3. Here we briefly address this issue by asserting that it can be handled using the discretization scheme of KLS. More precisely, in that work it was established that if we restrict all table indices to only assume discrete values that are multiples of , and we relax Condition 2 in the definition of the table-passing phase to ask that  (  be only an B -best response to  (    ( ) , then the choice ( B      > suffices to preserve B -Nash equilibria in the tables. Here  is the maximum degree of any node in the graph. 
The total number of entries in each table will be    2 and thus exponential in  , but the payoff matrices for the players are already exponential in  , so our tables remain polynomial in the size of the graphical game representation. The crucial point established in KLS is that the required resolution is independent of the total number of players. It is easily verified that none of the key results establishing this fact (specifically, Lemmas 2, 3 and 4 of KLS) depend on the underlying graph being a tree, but hold for all graphical games. Precise analogues of all the results of the preceding section can thus be established for the discretized instantiation of the table-passing phase (details omitted). In particular, the tablepassing phase will now converge to finite balanced limit tables, and consistency with these tables characterizes B -Nash equilibria. Furthermore, since every round prior to convergence must change at least one entry in one table, the table-passing phase must thus converge in at most    2 rounds, which is again polynomial in the size of the game representation. Each round of the table-passing phase takes at most on the order of    ( computational steps in the worst case (though possibly considerably less), giving a total running time to the table-passing phase that scales polynomially with the size of the game. We note that the discretization of each player’s space of mixed strategies allows one to formulate the problem of computing an approximate NE in a graphical game as a CSP(Vickrey and Koller 2002), and there is a precise connection between NashProp and constraint propagation algorithms for (generalized) arc consistency in constraint networks 4. 5 NashProp: Assignment-Passing Phase We have already suggested that the tables      represent a solution space that may be considerably smaller than the set of all mixed strategies. We now describe heuristics for searching this space for a Nash equilibrium. 
For this it will be convenient to define, for each vertex V, its projection set P_V(v), which is indexed by the possible values v (or by their allowed values in the aforementioned discretization scheme). The purpose of P_V is simply to consolidate the information sent to V by all of its neighbors. Thus, if U_1, ..., U_k are all the neighbors of V, we define P_V(v) to be 1 if and only if there exists a joint setting u_1, ..., u_k (again called a witness to P_V(v) = 1) such that T(u_i, v) = 1 for all i, and v is a best response to u_1, ..., u_k; otherwise we define P_V(v) to be 0. If p is any global mixed strategy, it is easily verified that p is consistent with the T(w, v) if and only if P_V(p_V) = 1 for all nodes V, with the assignment of the neighbors of V in p as a witness. [3] We note that the KLS proof that the exact tables must admit a rectilinear representation holds generally, but we cannot bound their complexity here. [4] We are grateful to Michael Littman for helping us establish this connection. The first step of the assignment-passing phase of NashProp is thus the computation of the P_V at each vertex V, which is again a local computation in the graph. Neighboring nodes U and V also exchange their projections P_U and P_V. Let us begin by noting that the search space for a Nash equilibrium is immediately reduced to the cross-product of the projection sets by Theorem 4, so if the table-passing phase has resulted in many 0 values in the projections, even an exhaustive search across this (discretized) cross-product space may sometimes quickly yield a solution. However, we would obviously prefer a solution that exploits the local topology of the solution space given by the graph. At a high level, such a local search algorithm is straightforward: 1. Initialization: Choose any node V and any values v, u_1, ..., u_k such that P_V(v) = 1 with witness u_1, ..., u_k, and T(u_i, v) = 1 for all i. V assigns itself value v, and assigns each of its neighbors U_i the value u_i. 2. Pick the next node V (in some fixed ordering) that has already been assigned some value v.
If there is a partial assignment to the neighbors of V, attempt to extend it to a witness u_1, ..., u_k for P_V(v) = 1 such that T(u_i, v) = 1 for all i, and assign any previously unassigned neighbors their values in this witness. If all the neighbors of V have been assigned, make sure v is a best response. Thus, the first vertex chosen assigns both itself and all of its neighbors, but afterwards vertices assign only (some of) their neighbors, and receive their own values from a neighbor. It is easily verified that if this process succeeds in assigning all vertices, the resulting mixed strategy is consistent with the T(w, v) and is thus a Nash equilibrium (or an approximate equilibrium in the discretized case). The difficulty, of course, is that the inductive step of the assignment-passing phase may fail due to cycles in the graph — we may reach a node V whose neighbor partial assignment cannot be extended, or whose assigned value v is not a best response to its complete neighborhood assignment. In this case, as with any structured local search, we have reached a failure point and must backtrack. The overall NashProp algorithm thus consists of the (always converging) table-passing phase followed by the backtracking local assignment-passing phase. NashProp directly generalizes the algorithm of KLS, and as such, on certain special topologies such as trees, may provably yield efficient computation of equilibria. Here we have shown that NashProp enjoys several natural and desirable properties even on arbitrary graphs. We now turn to some experimental investigation of NashProp on graphs containing cycles. 6 Experimental Results We have implemented the NashProp algorithm (with distinct table-passing and assignment-passing phases [5]) as described, and run a series of controlled experiments on loopy graphs of varying size and topology.
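The backtracking assignment idea can be illustrated with a deliberately simplified sketch. The 3-cycle, the table predicate, and the best-response test below are made-up stand-ins (and best responses are only checked once the assignment is complete), so this is far simpler than the witness-driven scheme described above:

```python
# Deliberately simplified backtracking assignment search over a
# table-constrained space; the 3-cycle, table predicate and best-response
# test are made-up stand-ins, not the paper's actual balanced tables.
grid = [0.0, 0.5, 1.0]
players = [0, 1, 2]
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

def table(U, V, u, v):
    """Pretend balanced limit table: allow (u, v) iff the values are close."""
    return abs(u - v) <= 0.5

def best_response(i, v, neighborhood):
    """Toy stand-in for the best-response check against assigned neighbors."""
    mean = sum(neighborhood.values()) / len(neighborhood)
    return abs(v - mean) <= 0.5

def extend(assign):
    """Extend a partial assignment; backtrack on dead ends."""
    if len(assign) == len(players):
        ok = all(best_response(i, assign[i],
                               {u: assign[u] for u in nbrs[i]})
                 for i in players)
        return dict(assign) if ok else None
    i = next(p for p in players if p not in assign)
    for v in grid:
        # Candidate values must be consistent with tables shared with
        # already-assigned neighbors.
        if all(table(u, i, assign[u], v) for u in nbrs[i] if u in assign):
            result = extend({**assign, i: v})
            if result is not None:
                return result
    return None  # dead end: caller backtracks

sol = extend({})
print("consistent assignment:", sol)
```

The structure mirrors the algorithm above: each newly reached node filters its candidate values through the tables shared with already-assigned neighbors, and a failed extension triggers backtracking.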
As discussed in Section 4, there is a relationship suggested by the KLS analysis between the table resolution τ and the global approximation quality ε, but in practice this relationship may be pessimistic (Vickrey and Koller 2002). Our implementation thus takes both τ and ε as inputs, and attempts to find an ε-Nash equilibrium by running NashProp on tables of resolution τ. We first draw attention to Figure 1, in which we provide a visual display of the evolution of the tables computed by the NashProp table-passing phase for a small (3 by 3) grid game. Note that for this game, the table-passing phase constrains the search space tremendously — so much so that the projection sets entirely determine the unique equilibrium, and the assignment-passing phase is superfluous. This is of course ideal behavior. The main results of our controlled experiments are summarized in Figure 2. [5] We did not implement backtracking, but this caused an overall rate of failure of only 3% across all 3000 runs described here. Figure 1: Visual display of the NashProp table-passing phase after rounds r = 1, 2, 3 and 8 (where convergence occurs). Each row shows first the projection set, then the four outbound tables, for each of the 9 players in a 3 by 3 grid. For the reward functions, each player has a distinct preference for one of his two actions. For 15 of the 16 possible settings of his 4 neighbors, this preference is the same, but for the remaining setting it is reversed. It is easily verified that every player’s payoff depends on all of his neighbors. (Settings used: ….) One of our primary interests is how the number of rounds in each of the two phases — and therefore the overall running time — scales with the size and complexity of the graph.
More detail is provided in the caption, but we created graphs varying in size from 5 to 100 nodes with a number of different topologies: single cycles; single cycles to which a varying number of chords were added, which generates considerably more cycles in the graph; grids; and “ring of rings” (Vickrey and Koller 2002). We also experimented with local payoff matrices in which each entry was chosen randomly from [0, 1], and with “biased” rewards, in which for some fixed number t of the settings of its neighbors, each node has a strong preference for one of its actions, and in the remaining settings, a strong preference for the other. The t settings were chosen randomly, subject to the constraint that no neighbor is marginalized (thus no simplification of the graph is possible). These classes of graphs seem to generate a nice variability in the relative speed of the table-passing and assignment-passing phases of NashProp, which is why we chose them. We now make a number of remarks regarding the NashProp experiments. First, and most basically, these preliminary results indicate that the algorithm performs well across a range of loopy topologies, including some (such as grids and cycles with many chords) that might pose computational challenges for junction tree approaches as the number of players becomes large. Excluding the small fraction of trials in which the assignment-passing phase failed to find a solution, even on grid and loopy chord graphs with 100 nodes, we find convergence of both the table-passing and assignment-passing phases in less than a dozen rounds. We next note that there is considerable variation across topologies (and little within) in the amount of work done by the table-passing phase, both in terms of the expected number of rounds to convergence, and the fraction of 0 entries that have been computed at completion. For example, for cycles the amount of work in both senses is at its highest, while for grids with random rewards it is lowest.
For grids and chordal cycles, decreasing the value of t (and thus increasing the bias of the payoff matrices) generally causes more to be accomplished by the table-passing phase. Intuitively, when rewards are entirely random and unbiased, nodes with large degrees will tend to rarely or never compute 0s in their
Figure 2: Plots showing the number of rounds taken by the NashProp table-passing (left) and assignment-passing (right) phases in computing an equilibrium, for a variety of different graph topologies. The x-axis shows the total number of vertices in the graph. Topologies and rewards examined included cycles, grids and “ring of rings” (Vickrey and Koller 2002) with random rewards (denoted cycle, grid and ringofrings in the legend); cycles with a fraction q of random chords added, and with biased rewards in which nodes of degree 2 have t = t1, degree 3 have t = t2, and degree 4 have t = t3 (see text for the definition of t), denoted chordal(q, t1, t2, t3) (for example, chordal(0.25, 1, 2, 3)); and grids with biased rewards with bias t, denoted grid(t). Each data point represents averages over 50 trials for the given topology and number of vertices. In the table-passing plot, each curve is also annotated with the average fraction of 1 values in the converged tables. For cycles, settings used were …; for ring of rings, …; for all other classes, ….
outbound tables — they have too many neighbors whose combined settings can act as a witness for a 1 in an outbound table. However, as suggested by the theory, greater progress (and computation) in the table-passing phase pays dividends in the assignment-passing phase, since the search space may have been dramatically reduced. For example, for chordal and grid graphs with biased rewards, the ordering of plots by convergence time is essentially reversed from the table-passing to the assignment-passing phase. This suggests that, when it occurs, the additional convergence time in the table-passing phase is worth the investment. However, we again note that even for the least useful table-passing phase (for grids with random rewards), the assignment-passing phase (which thus exploits the graph structure alone) still manages to find an equilibrium rapidly. References M. Kearns, M. Littman, and S. Singh. Graphical models for game theory. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, pages 253–260, 2001. S. Lauritzen and D. Spiegelhalter. Local computations with probabilities on graphical structures and their application to expert systems. J. Royal Stat. Soc. B, 50(2):157–224, 1988. J. F. Nash. Non-cooperative games. Annals of Mathematics, 54:286–295, 1951. J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988. D. Vickrey and D. Koller. Multi-agent algorithms for solving graphical games. In Proceedings of the National Conference on Artificial Intelligence (AAAI), 2002. To appear. Y. Weiss. Correctness of local probability propagation in graphical models with loops. Neural Computation, 12(1):1–41, 2000.
2002
Neuromorphic Bistable VLSI Synapses with Spike-Timing-Dependent Plasticity Giacomo Indiveri Institute of Neuroinformatics University/ETH Zurich CH-8057 Zurich, Switzerland giacomo@ini.phys.ethz.ch Abstract We present analog neuromorphic circuits for implementing bistable synapses with spike-timing-dependent plasticity (STDP) properties. In these types of synapses, the short-term dynamics of the synaptic efficacies are governed by the relative timing of the pre- and post-synaptic spikes, while on long time scales the efficacies tend asymptotically to either a potentiated state or to a depressed one. We fabricated a prototype VLSI chip containing a network of integrate and fire neurons interconnected via bistable STDP synapses. Test results from this chip demonstrate the synapse’s STDP learning properties and its long-term bistable characteristics. 1 Introduction Most artificial neural network algorithms based on Hebbian learning use correlations of mean rate signals to increase the synaptic efficacies between connected neurons. To prevent uncontrolled growth of synaptic efficacies, these algorithms usually also incorporate weight normalization constraints, which are often not biophysically realistic. Recently an alternative class of competitive Hebbian learning algorithms has been proposed, based on a spike-timing-dependent plasticity (STDP) mechanism [1]. It has been argued that the STDP mechanism can automatically, and in a biologically plausible way, balance the strengths of synaptic efficacies, thus preserving the benefits of both weight normalization and correlation-based learning rules [16]. In STDP the precise timing of the spikes generated by the neurons plays an important role. If a pre-synaptic spike arrives at the synaptic terminal before a post-synaptic spike is emitted, within a critical time window, the synaptic efficacy is increased. Conversely, if the post-synaptic spike is emitted shortly before the pre-synaptic one arrives, the synaptic efficacy is decreased.
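The timing rule just described is commonly modeled with exponential windows (in the spirit of the spike-timing rules of [1, 16]); a minimal sketch, with illustrative amplitudes and time constants:

```python
import math

# Minimal sketch of an exponential STDP window; the amplitudes and time
# constants below are illustrative, not taken from the chip described here.
A_PLUS, A_MINUS = 0.05, 0.055      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # window time constants (ms)

def stdp_dw(t_pre, t_post):
    """Synaptic weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post, within the window: potentiate
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    if dt < 0:   # post before pre: depress
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

print(stdp_dw(10.0, 15.0))   # positive change: pre led post by 5 ms
print(stdp_dw(15.0, 10.0))   # negative change: post led pre by 5 ms
```

Pairs with small |Δt| produce the largest changes, and the sign depends only on the temporal order, as described above.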
While mean-rate Hebbian learning algorithms are difficult to implement using analog circuits, spike-based learning rules map directly onto VLSI [4, 6, 7]. In this paper we present compact analog circuits that, combined with neuromorphic integrate and fire (I&F) neurons and synaptic circuits with realistic dynamics [8, 12, 11], implement STDP learning on short time scales and asymptotically tend to one of two possible states on long time scales. The circuits required to implement STDP are described in Section 2. The circuits that implement bistability are described in Section 3. The network of I&F neurons used to measure the properties of the bistable STDP synapse is described in Section 4. Long term storage of synaptic efficacies The circuits that drive the synaptic efficacy to one of two possible states on long time scales were implemented in order to cope with the problem of long term storage of analog values in CMOS technology. Conventional VLSI capacitors, the devices typically used as memory elements, are not ideal, in that they slowly lose the charge they are supposed to store, due to leakage currents. Several solutions have been proposed for long term storage of synaptic efficacies in analog VLSI neural networks. One of the first suggestions was to use the same method used for dynamic RAM: to periodically refresh the stored value. However, this involves discretization of the analog value into N discrete levels, a method for comparing the measured voltage to the N levels, and a clocked circuit to periodically refresh the value on the capacitor. An alternative solution is to use analog-to-digital converters (ADC), an off-chip RAM and digital-to-analog converters (DAC), but this approach requires, in addition to discretization of the value into N states, bulky ADC and DAC circuits. A more recent suggestion is to use floating gate devices [5].
These devices can store very precise analog values for an indefinite amount of time using standard CMOS technology [13], but for spike-based learning rules they would require a control circuit (and thus large area) per synapse. To implement dense arrays of neurons with large numbers of dendritic inputs, the synaptic circuits should be as compact as possible. Bistable synapses An alternative approach that uses a very small amount of area per synapse is to use bistable synapses. These types of synapses contain minimum feature-size circuits that locally compare the value of the synaptic efficacy stored on the capacitor with a fixed threshold voltage and slowly drive that value either toward a high analog voltage or toward a low one, depending on the output of the comparator (see Section 3). The assumption that on long time scales the synaptic efficacy can only assume two values is not too severe for networks of neurons with large numbers of synapses. It has been argued that biological synapses too may indeed be discrete on long time scales. These assumptions are compatible with experimental data [3] and are supported by experimental evidence [15]. Also from a theoretical perspective, it has been shown that the performance of associative networks is not necessarily degraded if the dynamic range of the synaptic efficacy is reduced even to the extreme (two stable states), provided that the transitions between stable states are stochastic [2]. Related work Bistable VLSI synapses in networks of I&F neurons have already been proposed in [6], but in those circuits the synaptic efficacy is always clamped to either a high value or a low one, even for short-term dynamics, as opposed to our case, in which the synaptic efficacy can assume any analog value between the two.
In [7] the authors propose a spike-based learning circuit, based on a modified version of Riccati’s equation [10], in which the synaptic efficacy is a continuous analog voltage; but their synapses require many more transistors than the solution we propose, and do not incorporate long-term bistability. More recently, Bofill and Murray proposed circuits for implementing STDP within a framework of pulse-based neural network circuits [4]. However, besides lacking the long-term bistability properties, their synaptic circuits require digital control signals that cannot be easily generated within the framework of neuromorphic networks of I&F neurons [8, 12]. Figure 1: Synaptic efficacy STDP circuit. 2 The STDP circuits The circuit required to implement STDP in a network of I&F neurons is shown in Fig. 1. This circuit increases or decreases the analog voltage Vw0, depending on the relative timing of the pulses pre and /post. The voltage Vw0 is then used to set the strength of synaptic circuits with realistic dynamics, of the type described in [11]. The pre- and post-synaptic pulses pre and /post are generated by compact, low power I&F neurons, of the type described in [9]. The circuit of Fig. 1 is fully symmetric: upon the arrival of a pre-synaptic pulse pre, a waveform Vpot(t) (for potentiating Vw0) is generated. Similarly, upon the arrival of a post-synaptic pulse /post, a complementary waveform Vdep(t) (for depotentiating Vw0) is generated. Both waveforms have a sharp onset and decay linearly with time, at a rate set respectively by Vtp and Vtd. The pre- and post-synaptic pulses are also used to switch on two gates (M8 and M5) that allow the currents Idep and Ipot to flow, as long as the pulses are high, either increasing or decreasing the weight.
The bias voltages Vp on transistor M6 and Vd on M7 set an upper bound on the maximum amount of current that can be injected into or removed from the capacitor Cw. If transistors M4–M9 operate in the subthreshold regime [13], we can compute the analytical expressions of Ipot(t) and Idep(t):

Ipot(t) = I0 / (e^(−κVpot(t−tpre)/UT) + e^(−κVp/UT))   (1)
Idep(t) = I0 / (e^(−κVdep(t−tpost)/UT) + e^(−κVd/UT))   (2)

where tpre and tpost are the times at which the pre-synaptic and post-synaptic spikes are emitted, UT is the thermal voltage, and κ is the subthreshold slope factor [13]. The change in synaptic efficacy is then:

∆Vw0 = (Ipot(tpost)/Cp) ∆tspk   if tpre < tpost
∆Vw0 = −(Idep(tpre)/Cd) ∆tspk   if tpost < tpre   (3)

where ∆tspk is the pre- and post-synaptic spike width, Cp is the parasitic capacitance of node Vpot and Cd that of node Vdep (not shown in Fig. 1). In Fig. 2(a) we plot experimental data showing how ∆Vw0 changes as a function of ∆t = tpre − tpost for different values of Vtd and Vtp. Similarly, in Fig. 2(b) we show plots of ∆Vw0 versus ∆t for three different values of Vp and three different values of Vd.
Figure 2: Changes in synaptic efficacy, as a function of the difference between pre- and post-synaptic spike emission times ∆t = tpre − tpost. (a) Curves obtained for four different values of Vpot (in the left quadrant) and four different values of Vdep (in the right quadrant). (b) Typical STDP plot, obtained by setting Vp to 4.0V and Vd to 0.6V.
Figure 3: Changes in Vw0, in response to a sequence of pre-synaptic spikes (top trace). The middle trace shows how the signal Vdep, triggered by the post-synaptic neuron, decreases linearly with time. The bottom trace shows the series of digital pulses pre, generated with every pre-synaptic spike.
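A numerical sketch of this behavior plugs linearly decaying gating waveforms into the series-transistor current expression of Eqs. (1) and (2) and the update of Eq. (3). All parameter values below (I0, κ, UT, capacitances, spike width, decay slope, biases) are illustrative assumptions, not measured chip values:

```python
import math

# Numerical sketch of Eqs. (1)-(3): a linearly decaying waveform gates a
# series-transistor subthreshold current. All parameter values below are
# illustrative assumptions, not measured chip values.
I0, KAPPA, UT = 1e-15, 0.7, 0.025   # dark current (A), slope factor, thermal voltage (V)
CP = CD = 1e-12                     # parasitic capacitances of Vpot/Vdep nodes (F)
DT_SPK = 1e-5                       # spike width (s)
VP, VD = 0.6, 0.6                   # current-limiting bias voltages (V)
SLOPE = 100.0                       # linear decay rate of Vpot/Vdep (V/s)
V0 = 1.0                            # waveform amplitude at spike onset (V)

def i_series(v_gate, v_bias):
    """Current through two series subthreshold transistors, Eq. (1)/(2) form."""
    return I0 / (math.exp(-KAPPA * v_gate / UT) + math.exp(-KAPPA * v_bias / UT))

def delta_vw0(dt):
    """Eq. (3): change in Vw0 for spike-time difference dt = t_pre - t_post (s)."""
    if dt < 0:   # pre before post: potentiation, gated by the decayed Vpot
        v_pot = max(V0 - SLOPE * (-dt), 0.0)
        return i_series(v_pot, VP) * DT_SPK / CP
    if dt > 0:   # post before pre: depression, gated by the decayed Vdep
        v_dep = max(V0 - SLOPE * dt, 0.0)
        return -i_series(v_dep, VD) * DT_SPK / CD
    return 0.0

for dt in (-8e-3, -2e-3, 2e-3, 8e-3):
    print(f"dt = {dt * 1e3:+.0f} ms -> dVw0 = {delta_vw0(dt):+.3e} V")
```

The magnitude of the change shrinks as |∆t| grows, because the gating waveform has decayed further by the time the second spike arrives, which qualitatively reproduces the shape of the measured curves in Fig. 2.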
As there are four independent control biases, it is possible to set the maximum amplitude and temporal window of influence independently for positive and negative changes in Vw0. The data of Fig. 2 was obtained using a paired-pulse protocol similar to the one used in physiological experiments [14]: a single pair of pre- and post-synaptic spikes was used to measure each ∆Vw0 data point, by systematically changing the delay tpre − tpost and by separating each stimulation session by a few hundred milliseconds (to allow the signals to return to their resting steady state). Unlike the biological experiments, in our VLSI setup it is possible to evaluate the effect of multiple pulses on the synaptic efficacy, for very long successive stimulation sessions, monitoring all the internal state variables and signals involved in the process. In Fig. 3 we show the effect of multiple pre-synaptic spikes succeeding a post-synaptic one, plotting a trace of the voltage Vw0, together with the “internal” signal Vdep, generated by the post-synaptic spike, and the pulses pre, generated by the pre-synaptic neuron.
Figure 4: Bistability circuit. Depending on Vw0 − Vthr, the comparator drives Vw0 to either Vhigh or Vlow. The rate at which the circuit drives Vw0 toward the asymptote is controlled by Vleak and imposed by transistors M2 and M4.
Note how the change in Vw0 is positive when the post-synaptic spike follows a pre-synaptic one, at t = 0.5 ms, and negative when a series of pre-synaptic spikes follows the post-synaptic one. The effect of subsequent pre pulses following the first post-/pre-synaptic pair is additive, and decreases with time as in Fig. 2. As expected, the anti-causal relationship between pre- and post-synaptic neurons has the net effect of decreasing the synaptic efficacy. 3 The bistability circuit The bistability circuit, shown in Fig.
4, drives the voltage Vw0 toward one of two possible states: Vhigh (if Vw0 > Vthr), or Vlow (if Vw0 < Vthr). The signal Vthr is a threshold voltage that can be set externally. The circuit comprises a comparator and a mixed-mode analog-digital leakage circuit. The comparator is a five-transistor transconductance amplifier [13] that can be designed using minimum feature-size transistors. The leakage circuit contains two gates that act as digital switches (M5, M6) and four transistors that set the two stable-state asymptotes Vhigh and Vlow and that, together with the bias voltage Vleak, determine the rate at which Vw0 approaches the asymptotes. The bistability circuit drives Vw0 in two different ways, depending on how large the distance is between the value of Vw0 itself and the asymptote. If |Vw0 − Vas| > 4UT, the bistability circuit drives Vw0 toward Vas linearly, where Vas represents either Vlow or Vhigh, depending on the sign of (Vw0 − Vthr):

Vw0(t) = Vw0(0) + (Ileak/Cw) t   if Vw0 > Vthr
Vw0(t) = Vw0(0) − (Ileak/Cw) t   if Vw0 < Vthr   (4)

where Cw is the capacitor of Fig. 1 and Ileak = I0 e^((κVleak − Vlow)/UT). As Vw0 gets close to the asymptote and |Vw0 − Vas| < 4UT, transistors M2 or M4 of Fig. 4 go out of saturation and Vw0 begins to approach the asymptote exponentially:

Vw0(t) = Vhigh − Vw0(0) e^(−(Ileak/(Cw UT)) t)   if Vw0 > Vthr
Vw0(t) = Vlow + Vw0(0) e^(−(Ileak/(Cw UT)) t)   if Vw0 < Vthr   (5)

On long time scales the dynamics of Vw0 are governed by the bistability circuit, while on short time scales they are governed by the STDP circuits and the precise timing of pre- and
Figure 5: Synaptic efficacy bistability. Transition of Vw0 from below threshold to above threshold (Vthr = 1.52V), with leakage rate set by Vleak = 0.25V and pre- and post-synaptic neurons stimulated so as to increase Vw0.
Figure 6: Network of leaky I&F neurons with bistable STDP excitatory synapses and inhibitory synapses.
The large circles symbolize I&F neurons, the small empty ones bistable STDP excitatory synapses, and the small bars non-plastic inhibitory synapses. The arrows in the circles indicate the possibility to inject current from an external source, to stimulate the neurons. post-synaptic spikes. If the STDP short-term dynamics drive Vw0 above threshold, we say that long-term potentiation (LTP) has been induced; and if the short-term dynamics drive Vw0 below threshold, we say that long-term depression (LTD) has been induced. In Fig. 5 we show how the synaptic efficacy Vw0 changes upon induction of LTP, while stimulating the pre- and post-synaptic neurons with uniformly distributed spike trains. The asymptote Vlow was set to zero, and Vhigh to 2.75V. The pre- and post-synaptic neurons were injected with constant DC currents so as to increase Vw0 on average. As shown, the two asymptotes Vlow and Vhigh act as two attractors, or stable equilibrium points, whereas the threshold voltage Vthr acts as an unstable equilibrium point. If the synaptic efficacy is below threshold, the short-term dynamics have to fight against the long-term bistability effect to increase Vw0. But as soon as Vw0 crosses the threshold, the bistability circuit switches, the effects of the short-term dynamics are reinforced by the asymptotic drive, and Vw0 is quickly driven toward Vhigh. 4 A network of integrate and fire neurons The prototype chip that we used to test the bistable STDP circuits presented in this paper contains a symmetric network of leaky I&F neurons [9] (see Fig. 6). The experimental data
Figure 7: Membrane potentials of pre- and post-synaptic neurons (bottom and middle traces respectively) and synaptic efficacy values (top traces).
(a) Changes in Vw0 for low synaptic efficacy values (Vhigh = 2.1V) and no bistability leakage currents (Vleak = 0). (b) Changes in Vw0 for high synaptic efficacy values (Vhigh = 3.6V) and with bistability asymptotic drive (Vleak = 0.25V). of Figs. 2, 3, and 5 was obtained by injecting currents into the neurons labeled I1 and O1 and by measuring the signals from the excitatory synapse on O1. In Fig. 7 we show the membrane potentials of I1 and O1, and the synaptic efficacy Vw0 of the corresponding synapse, in two different conditions. Figure 7(a) shows the changes in Vw0 when both neurons are stimulated but no asymptotic drive is used. As shown, Vw0 strongly depends on the spike patterns of the pre- and post-synaptic neurons. Figure 7(b) shows a scenario in which only neuron I1 is stimulated, but in which the weight Vw0 is close to its high asymptote (Vhigh = 3.6V) and in which there is a long-term asymptotic drive (Vleak = 0.25V). Even though the synaptic weight always stays in its potentiated state, the firing rate of O1 is not as regular as that of its afferent neuron. This is mainly due to the small variations of Vw0 induced by the STDP circuit. 5 Discussion and future work The STDP circuits presented here introduce a source of variability in the spike timing of the I&F neurons that could be exploited for creating VLSI networks of neurons with stochastic dynamics and for implementing spike-based stochastic learning mechanisms [2]. These mechanisms rely on the variability of the input signals (e.g. of Poisson distributed spike trains) and on their precise spike timing in order to induce LTP or LTD in only a small specific subset of the stimulated synapses. In future experiments we will characterize the properties of the bistable STDP synapse in response to Poisson distributed spike trains, and measure transition probabilities as functions of input statistics and circuit parameters.
We presented compact neuromorphic circuits for implementing bistable STDP synapses in VLSI networks of I&F neurons, and showed data from a prototype chip. We demonstrated how these types of synapses can either store their LTP or LTD state for the long term, or switch state depending on the precise timing of the pre- and post-synaptic spikes. In the near future, we plan to use the simple network of I&F neurons of Fig. 6, present on the prototype chip, to analyze the effect of bistable STDP plasticity at the network level. In the long term, we plan to design a larger chip with these circuits to implement a re-configurable network of O(100) I&F neurons and O(1000) synapses, and use it as a real-time tool for investigating the computational properties of competitive networks and selective attention models. Acknowledgments I am grateful to Rodney Douglas and Kevan Martin for their support, and to Shih-Chii Liu and Stefano Fusi for constructive comments on the manuscript. Some of the ideas that led to the design and implementation of the circuits presented were inspired by the Telluride Workshop on Neuromorphic Engineering (http://www.ini.unizh.ch/telluride). References [1] L. F. Abbott and S. Song. Asymmetric Hebbian learning, spike timing and neural response variability. In Advances in Neural Information Processing Systems, volume 11, pages 69–75, 1998. [2] D. J. Amit and S. Fusi. Dynamic learning in neural networks with material synapses. Neural Computation, 6:957, 1994. [3] T. V. P. Bliss and G. L. Collingridge. A synaptic model of memory: Long-term potentiation in the hippocampus. Nature, 361:31, 1993. [4] A. Bofill and A. F. Murray. Circuits for VLSI implementation of temporally asymmetric Hebbian learning. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems, volume 14. MIT Press, Cambridge, MA, 2001. [5] C. Diorio, P. Hasler, B. A. Minch, and C. Mead. A single-transistor silicon synapse. IEEE Trans.
Electron Devices, 43(11):1972–1980, 1996. [6] S. Fusi, M. Annunziato, D. Badoni, A. Salamon, and D. J. Amit. Spike-driven synaptic plasticity: theory, simulation, VLSI implementation. Neural Computation, 12:2227–2258, 2000. [7] P. Häfliger, M. Mahowald, and L. Watts. A spike based learning neuron in analog VLSI. In M. C. Mozer, M. I. Jordan, and T. Petsche, editors, Advances in Neural Information Processing Systems, volume 9, pages 692–698. MIT Press, 1997. [8] G. Indiveri. Modeling selective attention using a neuromorphic analog VLSI device. Neural Computation, 12(12):2857–2880, December 2000. [9] G. Indiveri. A low-power adaptive integrate-and-fire neuron circuit. In ISCAS 2003: The 2003 IEEE International Symposium on Circuits and Systems. IEEE, 2003. [10] T. Kohonen. Self-Organization and Associative Memory. Springer Series in Information Sciences. Springer Verlag, 2nd edition, 1988. [11] S.-C. Liu, M. Boegerhausen, and S. Pascal. Circuit model of short-term synaptic dynamics. In Advances in Neural Information Processing Systems, volume 15, Cambridge, MA, December 2002. MIT Press. [12] S.-C. Liu, J. Kramer, G. Indiveri, T. Delbruck, T. Burg, and R. Douglas. Orientation-selective aVLSI spiking neurons. Neural Networks, 14(6/7):629–643, 2001. Special Issue on Spiking Neurons in Neuroscience and Technology. [13] S.-C. Liu, J. Kramer, G. Indiveri, T. Delbruck, and R. Douglas. Analog VLSI: Circuits and Principles. MIT Press, 2002. [14] H. Markram, J. Lübke, M. Frotscher, and B. Sakmann. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275:213–215, 1997. [15] C. C. H. Petersen, R. C. Malenka, R. A. Nicoll, and J. J. Hopfield. All-or-none potentiation at CA3-CA1 synapses. Proc. Natl. Acad. Sci., 95:4732, 1998. [16] S. Song, K. D. Miller, and L. F. Abbott. Competitive Hebbian learning through spike-timing-dependent plasticity. Nature Neuroscience, 3(9):919–926, 2000.
Source Separation with a Sensor Array Using Graphical Models and Subband Filtering Hagai Attias Microsoft Research Redmond, WA 98052 hagaia@microsoft.com Abstract Source separation is an important problem at the intersection of several fields, including machine learning, signal processing, and speech technology. Here we describe new separation algorithms which are based on probabilistic graphical models with latent variables. In contrast with existing methods, these algorithms exploit detailed models to describe source properties. They also use subband filtering ideas to model the reverberant environment, and employ an explicit model for background and sensor noise. We leverage variational techniques to keep the computational complexity per EM iteration linear in the number of frames. 1 The Source Separation Problem Fig. 1 illustrates the problem of source separation with a sensor array. In this problem, signals from K independent sources are received by each of L ≥ K sensors. The task is to extract the sources from the sensor signals. It is a difficult task, partly because the received signals are distorted versions of the originals. There are two types of distortions. The first type arises from propagation through a medium, and is approximately linear but also history dependent; it is usually termed reverberation. The second type arises from background noise and sensor noise, which are assumed additive. Hence, the actual task is to obtain an optimal estimate of the sources from the data. The task is difficult for another reason: lack of advance knowledge of the properties of the sources, the propagation medium, and the noises. This difficulty gave rise to adaptive source separation algorithms, in which parameters related to those properties are adjusted to optimize a chosen cost function. Unfortunately, the intense activity this problem has attracted over the last several years [1–9] has not yet produced a satisfactory solution.
In our opinion, the reason is that existing techniques fail to address three major factors. The first is noise robustness: algorithms typically ignore background and sensor noise, sometimes assuming they may be treated as additional sources. It seems plausible that to produce a noise robust algorithm, noise signals and their properties must be modeled explicitly, and these models should be exploited to compute optimal source estimators. The second factor is mixing filters: algorithms typically seek, and directly optimize, a transformation that would unmix the sources. However, in many situations, the filters describing medium propagation are non-invertible, or have an unstable inverse, or have a stable inverse that is extremely long. It may hence be advantageous to estimate the mixing filters themselves, then use them to estimate the sources. The third factor is source properties: algorithms typically use a very simple source model (e.g., a one time point histogram). But in many cases one may easily obtain detailed models of the source signals. This is particularly true for speech sources, where large datasets exist and much modeling expertise has developed over decades of research. Separation of speakers is also one of the major potential commercial applications of source separation algorithms. It seems plausible that incorporating strong source models could improve performance. Such models may potentially have two more advantages: first, they could help limit the range of possible mixing filters by constraining the optimization problem.

Figure 1: The source separation problem. Signals from K = 2 speakers propagate toward L = 2 sensors. Each sensor receives a linear mixture of the speaker signals, distorted by multipath propagation, medium response, and background and sensor noise. The task is to infer the original signals from sensor data.
Second, they could help avoid whitening the extracted signals by effectively limiting their spectral range to the range characteristic of the source model. This paper makes several contributions to the problem of real world source separation. In the following, we present new separation algorithms that are the first to address all three factors. We work in the framework of probabilistic graphical models. This framework allows us to construct models for sources and for noise, combine them with the reverberant mixing transformation in a principled manner, and compute parameter and source estimates from data which are Bayes optimal. We identify three technical ideas that are key to our approach: (1) a strong speech model, (2) subband filtering, and (3) variational EM. 2 Frames, Subband Signals, and Subband Filtering We start with the concept of subband filtering. This is also a good point to define our notation. Let $x_m$ denote a time domain signal, e.g., the value of a sound pressure waveform at time point $m = 0, 1, 2, \dots$. Let $X_n[k]$ denote the corresponding subband signal at time frame $n$ and subband frequency $k$. The subband signals are obtained from the time domain signal by imposing an $N$-point window $w_m$, $m = 0 : N-1$ on that signal at equally spaced points $nJ$, $n = 0, 1, 2, \dots$, and FFT-ing the windowed signal,
$$X_n[k] = \sum_{m=0}^{N-1} e^{-i\omega_k m}\, w_m\, x_{nJ+m} , \qquad (1)$$
where $\omega_k = 2\pi k/N$ and $k = 0 : N-1$. The subband signals are also termed frames. Notice the difference in time scale between the time frame index $n$ in $X_n[k]$ and the time point index $n$ in $x_n$. The chosen value of the spacing $J$ depends on the window length $N$. For $J \le N$ the original signal $x_m$ can be synthesized exactly from the subband signals (synthesis formula omitted). An important consideration for selecting $J$, as well as the window shape, is behavior under filtering. Consider a filter $h_m$ applied to $x_m$, and denote by $y_m$ the filtered signal.
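As a concrete illustration, the analysis step in Eq. (1) is just a windowed FFT taken at hops of $J$ samples. The sketch below (in NumPy; the Hann window and the sizes are assumptions, since the paper does not specify them) computes the frames $X_n[k]$:

```python
import numpy as np

def subband_frames(x, N=256, J=128):
    """Eq. (1): X_n[k] = sum_{m=0}^{N-1} exp(-i w_k m) w_m x_{nJ+m}.
    The Hann window is an assumption; the paper leaves w_m unspecified."""
    w = np.hanning(N)                               # window shape w_m
    n_frames = (len(x) - N) // J + 1
    X = np.empty((n_frames, N), dtype=complex)
    for n in range(n_frames):
        # FFT of the windowed segment starting at sample nJ
        X[n] = np.fft.fft(w * x[n * J : n * J + N])
    return X
```

With $J \le N$ (here $J = N/2$) the frames overlap, consistent with the remark in the text that the original signal can then be resynthesized exactly.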
In the simple case $h_m = h\delta_{m,0}$ (no filtering), the subband signals keep the same dependence as the time domain ones, $y_n = h x_n \longrightarrow Y_n[k] = h X_n[k]$. For an arbitrary filter $h_m$, we use the relation
$$y_n = \sum_m h_m x_{n-m} \;\longrightarrow\; Y_n[k] = \sum_m H_m[k] X_{n-m}[k] , \qquad (2)$$
with complex coefficients $H_m[k]$ for each $k$. This relation between the subband signals is termed subband filtering, and the $H_m[k]$ are termed subband filters. Unlike the simple case of non-filtering, the relation (2) holds approximately, but quite accurately using an appropriate choice of $J$ and $w_m$; see [13] for details on accuracy. Throughout this paper, we will assume that an arbitrary filter $h_m$ can be modeled by the subband filters $H_m[k]$ to a sufficient accuracy for our purposes. One advantage of subband filtering is that it replaces a long filter $h_m$ by a set of short independent filters $H_m[k]$, one per frequency. This will turn out to decompose the source separation problem into a set of small (albeit coupled) problems, one per frequency. Another advantage is that this representation allows using a detailed speech model on the same footing with the filter model. This is because a speech model is defined on the time scale of a single frame, whereas the original filter $h_m$, in contrast with $H_m[k]$, is typically as long as 10 or more frames. As a final point on notation, we define a Gaussian distribution over a complex number $Z$ by $p(Z) = \mathcal{N}(Z \mid \mu, \nu) = \frac{\nu}{\pi} \exp(-\nu |Z - \mu|^2)$. Notice that this is a joint distribution over the real and imaginary parts of $Z$. The mean is $\mu = \langle Z \rangle$ and the precision (inverse variance) $\nu$ satisfies $\nu^{-1} = \langle |Z|^2 \rangle - |\mu|^2$. 3 A Model for Speech Signals We assume independent sources, and model the distribution of source $j$ by a mixture model over its subband signals $X_{jn}$,
$$p(X_{jn} \mid S_{jn} = s) = \prod_{k=1}^{N/2-1} \mathcal{N}(X_{jn}[k] \mid 0, A_{js}[k]) , \quad p(S_{jn} = s) = \pi_{js} , \quad p(X, S) = \prod_{jn} p(X_{jn} \mid S_{jn})\, p(S_{jn}) , \qquad (3)$$
where the components are labeled by $S_{jn}$. Component $s$ of source $j$ is a zero mean Gaussian with precision $A_{js}$.
The mixing proportions of source $j$ are $\pi_{js}$. The DAG representing this model is shown in Fig. 2. A similar model was used in [10] for one microphone speech enhancement for recognition (see also [11]). Here are several things to note about this model. (1) Each component has a characteristic spectrum, which may describe a particular part of a speech phoneme. This is because the precision corresponds to the inverse spectrum: the mean energy (w.r.t. the above distribution) of source $j$ at frequency $k$, conditioned on label $s$, is $\langle |X_{jn}[k]|^2 \rangle = A_{js}[k]^{-1}$. (2) A zero mean model is appropriate given the physics of the problem, since the mean of a sound pressure waveform is zero. (3) $k$ runs from 1 to $N/2-1$, since for $k > N/2$, $X_{jn}[k] = X_{jn}[N-k]^\star$; the subbands $k = 0, N/2$ are real and are omitted from the model, a common practice in speech recognition engines. (4) Perhaps most importantly, for each source the subband signals are correlated via the component label $s$, as $p(X_{jn}) = \sum_s p(X_{jn}, S_{jn} = s) \ne \prod_k p(X_{jn}[k])$. Hence, when the source separation problem decomposes into one problem per frequency, these problems turn out to be coupled (see below), and independent frequency permutations are avoided. (5) To increase model accuracy, a state transition matrix $p(S_{jn} = s \mid S_{j,n-1} = s')$ may be added for each source. The resulting HMM models are straightforward to incorporate without increasing the algorithm complexity.

Figure 2: Graphical model describing speech signals in the subband domain. The model assumes i.i.d. frames; only the frame at time $n$ is shown. The node $X_n$ represents a complex $N/2-1$-dimensional vector $X_n[k]$, $k = 1 : N/2-1$.

There are several modes of using the speech model in the algorithms below. In one mode, the sources are trained online using the sensor data. In a second mode, source models are trained offline using available data on each source in the problem. A third mode corresponds to separation of sources known to be speech but whose speakers are unknown.
In this case, all sources have the same model, which is trained offline on a large dataset of speech signals, including 150 male and female speakers reading sentences from the Wall Street Journal (see [10] for details). This is the case presented in this paper. The training algorithm used was standard EM (omitted) using 256 clusters, initialized by vector quantization. 4 Separation of Non-Reverberant Mixtures We now present a source separation algorithm for the case of non-reverberant (or instantaneous) mixing. Whereas many algorithms exist for this case, our contribution here is an algorithm that is significantly more robust to noise. Its robustness results, as indicated in the introduction, from three factors: (1) explicitly modeling the noise in the problem, (2) using a strong source model, in particular modeling the temporal statistics (over $N$ time points) of the sources, rather than one time point statistics, and (3) extracting each source signal from data by a Bayes optimal estimator obtained from $p(X \mid Y)$. A more minor point is handling the case of fewer sources than sensors in a principled way. The mixing situation is described by $y_{in} = \sum_j h_{ij} x_{jn} + u_{in}$, where $x_{jn}$ is source signal $j$ at time point $n$, $y_{in}$ is sensor signal $i$, $h_{ij}$ is the instantaneous mixing matrix, and $u_{in}$ is the noise corrupting sensor $i$'s signal. The corresponding subband signals satisfy $Y_{in}[k] = \sum_j h_{ij} X_{jn}[k] + U_{in}[k]$. To turn the last equation into a probabilistic graphical model, we assume that noise $i$ has precision (inverse spectrum) $B_i[k]$, and that noises at different sensors are independent (the latter assumption is often inaccurate but can be easily relaxed). This yields
$$p(Y_{in} \mid X) = \prod_k \mathcal{N}\Big(Y_{in}[k] \,\Big|\, \sum_j h_{ij} X_{jn}[k],\, B_i[k]\Big) , \quad p(Y \mid X) = \prod_{in} p(Y_{in} \mid X) , \qquad (4)$$
which together with the speech model (3) forms a complete model $p(Y, X, S)$ for this problem. The DAG representing this model for the case $K = L = 2$ is shown in Fig. 3. Notice that this model generalizes [4] to the subband domain.
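To make the generative model concrete, the following sketch draws synthetic data from (3) and (4): component labels $S_{jn}$, zero-mean complex Gaussian frames $X_{jn}[k]$, an instantaneous mix $h$, and additive sensor noise. All parameter values below are made-up placeholders, not trained ones.

```python
import numpy as np

rng = np.random.default_rng(1)
K, L, Nf, S, Kfreq = 2, 2, 100, 4, 31   # sources, sensors, frames, components, subbands

# Toy model parameters (assumptions, not fitted values)
A = rng.uniform(0.5, 2.0, size=(K, S, Kfreq))   # per-component precisions A_js[k]
pi = np.full((K, S), 1.0 / S)                   # mixing proportions pi_js
h = rng.standard_normal((L, K))                 # instantaneous mixing matrix h_ij
B = np.full((L, Kfreq), 10.0)                   # sensor-noise precisions B_i[k]

def complex_gauss(prec, size, rng):
    # N(Z | 0, nu) as defined in the text: Re and Im each have variance 1/(2 nu)
    std = np.sqrt(1.0 / (2.0 * prec))
    return std * (rng.standard_normal(size) + 1j * rng.standard_normal(size))

# Draw labels S_jn and source frames X_jn[k] from Eq. (3)
s = np.stack([rng.choice(S, size=Nf, p=pi[j]) for j in range(K)])
X = complex_gauss(A[np.arange(K)[:, None], s], (K, Nf, Kfreq), rng)

# Sensor frames per Eq. (4): Y_in[k] = sum_j h_ij X_jn[k] + U_in[k]
U = complex_gauss(B[:, None, :], (L, Nf, Kfreq), rng)
Y = np.einsum('ij,jnk->ink', h, X) + U
```

Note how each frame's subbands share one label $s$, which is exactly the cross-frequency coupling point (4) in the text.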
Figure 3: Graphical model for noisy, non-reverberant 2×2 mixing, showing a 3 frame-long sequence. All nodes $Y_{in}$ and $X_{jn}$ represent complex $N/2-1$-dimensional vectors (see Fig. 2). While $Y_{1n}$ and $Y_{2n}$ have the same parents, $X_{1n}$ and $X_{2n}$, the arcs from the parents to $Y_{2n}$ are omitted for clarity.

The model parameters $\theta = \{h_{ij}, B_i[k], A_{js}[k], \pi_{js}\}$ are estimated from data by an EM algorithm. However, as the number of speech components $M$ or the number of sources $K$ increases, the E-step becomes computationally intractable, as it requires summing over all $O(M^K)$ configurations of $(S_{1n}, \dots, S_{Kn})$ at each frame. We approximate the E-step using a variational technique: focusing on the posterior distribution $p(X, S \mid Y)$, we compute an optimal tractable approximation $q(X, S \mid Y) \approx p(X, S \mid Y)$, which we use to compute the sufficient statistics (SS). We choose
$$q(X, S \mid Y) = \prod_{jn} q(X_{jn} \mid S_{jn}, Y)\, q(S_{jn} \mid Y) , \qquad (5)$$
where the hidden variables are factorized over the sources, and also over the frames (the latter factorization is exact in this model, but is an approximation for reverberant mixing). This posterior maintains the dependence of $X$ on $S$, and thus the correlations between different subbands $X_{jn}[k]$. Notice also that this posterior implies a multimodal $q(X_{jn})$ (i.e., a mixture distribution), which is more accurate than unimodal posteriors often employed in variational approximations (e.g., [12]), but is also harder to compute. A slightly more general form which allows inter-frame correlations by employing $q(S \mid Y) = \prod_{jn} q(S_{jn} \mid S_{j,n-1}, Y)$ may also be used, without increasing complexity. By optimizing in the usual way (see [12,13]) a lower bound on the likelihood w.r.t.
$q$, we obtain
$$q(X_{jn}, S_{jn} = s \mid Y) = \prod_k q(X_{jn}[k] \mid S_{jn} = s, Y)\, q(S_{jn} = s \mid Y) , \qquad (6)$$
where $q(X_{jn}[k] \mid S_{jn} = s, Y) = \mathcal{N}(X_{jn}[k] \mid \rho_{jns}[k], \nu_{js}[k])$ and $q(S_{jn} = s \mid Y) = \gamma_{jns}$. Both the factorization over $k$ of $q(X_{jn} \mid S_{jn})$ and its Gaussian functional form fall out from the optimization under the structural restriction (5) and need not be specified in advance. The variational parameters $\{\rho_{jns}[k], \nu_{js}[k], \gamma_{jns}\}$, which depend on the data $Y$, constitute the SS and are computed in the E-step. The DAG representing this posterior is shown in Fig. 4.

Figure 4: Graphical model describing the variational posterior distribution applied to the model of Fig. 3. In the non-reverberant case, the components of this posterior at time frame $n$ are conditioned only on the data $Y_{in}$ at that frame; in the reverberant case, the components at frame $n$ are conditioned on the data $Y_{im}$ at all frames $m$. For clarity and space reasons, this distinction is not made in the figure.

After learning, the sources are extracted from data by a variational approximation of the minimum mean squared error estimator,
$$\hat{X}_{jn}[k] = E(X_{jn}[k] \mid Y) = \int dX\, q(X \mid Y)\, X_{jn}[k] , \qquad (7)$$
i.e., the posterior mean, where $q(X \mid Y) = \sum_S q(X, S \mid Y)$. The time domain waveform $\hat{x}_{jm}$ is then obtained by appropriately patching together the subband signals. M-step. The update rule for the mixing matrix $h_{ij}$ is obtained by solving the linear equation
$$\sum_k B_i[k]\, \eta_{ij,0}[k] = \sum_{j'} h_{ij'} \sum_k B_i[k]\, \lambda_{j'j,0}[k] . \qquad (8)$$
The update rule for the noise precisions $B_i[k]$ is omitted. The quantities $\eta_{ij,m}[k]$ and $\lambda_{j'j,m}[k]$ are computed from the SS; see [13] for details. E-step. The posterior means of the sources (7) are obtained by solving
$$\hat{X}_{jn}[k] = \hat{\nu}_{jn}[k]^{-1} \sum_i B_i[k]\, h_{ij} \Big( Y_{in}[k] - \sum_{j' \ne j} h_{ij'} \hat{X}_{j'n}[k] \Big) \qquad (9)$$
for $\hat{X}_{jn}[k]$, which is a $K \times K$ linear system for each frequency $k$ and frame $n$.
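Rearranged, Eq. (9) puts $\hat{\nu}_{jn}$ on the diagonal and the cross terms $\sum_i B_i h_{ij} h_{ij'}$ off the diagonal, so each (frequency, frame) pair needs one small solve. A sketch (the posterior precisions $\hat{\nu}$ are taken as given here; their exact expression is deferred to [13] in the text):

```python
import numpy as np

def posterior_means(Y, h, B, nu_hat):
    """Solve Eq. (9) at one (frequency, frame) pair.
    Y: (L,) complex sensor values; h: (L, K) mixing matrix; B: (L,) noise
    precisions; nu_hat: (K,) posterior precisions (assumed precomputed)."""
    G = (h * B[:, None]).T @ h                      # G[j,j'] = sum_i B_i h_ij h_ij'
    M = np.diag(nu_hat) + G - np.diag(np.diag(G))   # nu_hat on diagonal, G off-diagonal
    b = (h * B[:, None]).T @ Y                      # b[j] = sum_i B_i h_ij Y_in
    return np.linalg.solve(M, b)                    # the K x K solve from the text
```

A direct solve is exact here; an alternative would be to iterate (9) as a fixed point, at the cost of a convergence condition.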
The equations for the SS are given in [13], which also describes experimental results. 5 Separation of Reverberant Mixtures In this section we extend the algorithm to the case of reverberant mixing. In that case, due to signal propagation in the medium, each sensor signal at time frame $n$ depends on the source signals not just at the same time but also at previous times. To describe this mathematically, the mixing matrix $h_{ij}$ must become a matrix of filters $h_{ij,m}$, and $y_{in} = \sum_{jm} h_{ij,m} x_{j,n-m} + u_{in}$. It may seem straightforward to extend the algorithm derived above to the present case. However, this appearance is misleading, because we have a time scale problem. Whereas our speech model $p(X, S)$ is frame based, the filters $h_{ij,m}$ are generally longer than the frame length $N$, typically 10 frames long and sometimes longer. It is unclear how one can work with both $X_{jn}$ and $h_{ij,m}$ on the same footing (and it is easy to see that a straightforward windowed FFT cannot solve this problem). This is where the idea of subband filtering becomes very useful. Using (2) we have $Y_{in}[k] = \sum_{jm} H_{ij,m}[k] X_{j,n-m}[k] + U_{in}[k]$, which yields the probabilistic model
$$p(Y_{in} \mid X) = \prod_k \mathcal{N}\Big(Y_{in}[k] \,\Big|\, \sum_{jm} H_{ij,m}[k] X_{j,n-m}[k],\, B_i[k]\Big) . \qquad (10)$$
Hence, both $X$ and $Y$ are now frame based. Combining this equation with the speech model (3), we now have a complete model $p(Y, X, S)$ for the reverberant mixing problem. The DAG describing this model is shown in Fig. 5.

Figure 5: Graphical model for noisy, reverberant 2×2 mixing, showing a 3 frame-long sequence. Here we assume 2 frame-long filters, i.e., $m = 0, 1$ in Eq. (10), where the solid arcs from $X$ to $Y$ correspond to $m = 0$ (as in Fig. 3) and the dashed arcs to $m = 1$. While $Y_{1n}$ and $Y_{2n}$ have the same parents, $X_{1n}$ and $X_{2n}$, the arcs from the parents to $Y_{2n}$ are omitted for clarity.
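The observation model (10) is, per subband, a short convolution along the frame axis. A sketch of the forward (data-generating) direction, with arbitrary placeholder filters and sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
K, L, Nf, Kfreq, P = 2, 2, 50, 31, 10   # P-frame-long subband filters (an assumption)

X = rng.standard_normal((K, Nf, Kfreq)) + 1j * rng.standard_normal((K, Nf, Kfreq))
H = (rng.standard_normal((L, K, P, Kfreq)) + 1j * rng.standard_normal((L, K, P, Kfreq))) / P
U = 0.1 * (rng.standard_normal((L, Nf, Kfreq)) + 1j * rng.standard_normal((L, Nf, Kfreq)))

# Y_in[k] = sum_{j,m} H_ij,m[k] X_j,n-m[k] + U_in[k]: an independent short
# convolution along the frame axis for every subband k.
Y = U.copy()
for m in range(P):
    # frames n >= m receive the contribution of X at frame n - m
    Y[:, m:, :] += np.einsum('ijk,jnk->ink', H[:, :, m, :], X[:, : Nf - m, :])
```

This is the decomposition advantage claimed in Section 2: one length-$P$ filter per frequency instead of one very long time-domain filter.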
The model parameters $\theta = \{H_{ij,m}[k], B_i[k], A_{js}[k], \pi_{js}\}$ are estimated from data by a variational EM algorithm, whose derivation generally follows the one outlined in the previous section. Notice that the exact E-step here is even more intractable, due to the history dependence introduced by the filters. M-step. The update rule for $H_{ij,m}$ is obtained by solving the Toeplitz system
$$\sum_{j'm'} H_{ij',m'}[k]\, \lambda_{j'j,m-m'}[k] = \eta_{ij,m}[k] , \qquad (11)$$
where the quantities $\lambda_{j'j,m}[k]$, $\eta_{ij,m}[k]$ are computed from the SS (see [12]). The update rule for the $B_i[k]$ is omitted. E-step. The posterior means of the sources (7) are obtained by solving
$$\hat{X}_{jn}[k] = \hat{\nu}_{jn}[k]^{-1} \sum_{im} B_i[k]\, H_{ij,m-n}[k]^\star \Big( Y_{im}[k] - \sum_{j'm' \ne jm} H_{ij',m-m'}[k] \hat{X}_{j'm'}[k] \Big) \qquad (12)$$
for $\hat{X}_{jn}[k]$. Assuming $P$ frames long filters $H_{ij,m}$, $m = 0 : P-1$, this is a $KP \times KP$ linear system for each frequency $k$. The equations for the SS are given in [13], which also describes experimental results. 6 Extensions An alternative technique we have been pursuing for approximating EM in our models is sequential Rao-Blackwellized Monte Carlo. There, we sample state sequences $S$ from the posterior $p(S \mid Y)$ and, for a given sequence, perform exact inference on the source signals $X$ conditioned on that sequence (observe that given $S$, the posterior $p(X \mid S, Y)$ is Gaussian and can be computed exactly). In addition, we are extending our speech model to include features such as pitch [7] in order to improve separation performance, especially in cases with fewer sensors than sources [7–9]. Yet another extension is applying model selection techniques to infer the number of sources from data in a dynamic manner. Acknowledgments I thank Te-Won Lee for extremely valuable discussions. References [1] A.J. Bell, T.J. Sejnowski (1995). An information maximisation approach to blind separation and blind deconvolution. Neural Computation 7, 1129-1159. [2] B.A. Pearlmutter, L.C. Parra (1997). Maximum likelihood blind source separation: A context-sensitive generalization of ICA. Proc.
NIPS-96. [3] A. Cichocki, S.-I. Amari (2002). Adaptive Blind Signal and Image Processing. Wiley. [4] H. Attias (1999). Independent Factor Analysis. Neural Computation 11, 803-851. [5] T.-W. Lee et al. (Eds.) (2001). Proc. ICA 2001. [6] S. Griebel, M. Brandstein (2001). Microphone array speech dereverberation using coarse channel modeling. Proc. ICASSP 2001. [7] J. Hershey, M. Casey (2002). Audiovisual source separation via hidden Markov models. Proc. NIPS 2001. [8] S. Roweis (2001). One Microphone Source Separation. Proc. NIPS-00, 793-799. [9] G.-J. Jang, T.-W. Lee, Y.-H. Oh (2003). A probabilistic approach to single channel blind signal separation. Proc. NIPS 2002. [10] H. Attias, L. Deng, A. Acero, J.C. Platt (2001). A new method for speech denoising using probabilistic models for clean speech and for noise. Proc. Eurospeech 2001. [11] Y. Ephraim (1992). Statistical model based speech enhancement systems. Proc. IEEE 80(10), 1526-1555. [12] M.I. Jordan, Z. Ghahramani, T.S. Jaakkola, L.K. Saul (1999). An introduction to variational methods in graphical models. Machine Learning 37, 183-233. [13] H. Attias (2003). New EM algorithms for source separation and deconvolution with a microphone array. Proc. ICASSP 2003.
Dopamine Induced Bistability Enhances Signal Processing in Spiny Neurons Aaron J. Gruber$^{1,2}$, Sara A. Solla$^{2,3}$, and James C. Houk$^{2,1}$ Departments of Biomedical Engineering$^1$, Physiology$^2$, and Physics and Astronomy$^3$, Northwestern University, Chicago, IL 60201 {a-gruber1, solla, j-houk}@northwestern.edu Abstract Single unit activity in the striatum of awake monkeys shows a marked dependence on the expected reward that a behavior will elicit. We present a computational model of spiny neurons, the principal neurons of the striatum, to assess the hypothesis that direct neuromodulatory effects of dopamine through the activation of D1 receptors mediate the reward dependency of spiny neuron activity. Dopamine release results in the amplification of key ion currents, leading to the emergence of bistability, which not only modulates the peak firing rate but also introduces a temporal and state dependence of the model's response, thus improving the detectability of temporally correlated inputs. 1 Introduction The classic notion of the basal ganglia as being involved in purely motor processing has expanded over the years to include sensory and cognitive functions. A surprising new finding is that much of this activity shows a motivational component. For instance, striatal activity related to visual stimuli is dependent on the type of reinforcement (primary vs. secondary) that a behavior will elicit [1]. Task-related activity can be enhanced or suppressed when a reward is anticipated for correct performance, relative to activity when no reward is expected. Although the origin of this reward dependence has not been experimentally verified, dopamine modulation is likely to play a role. Spiny neurons in the striatum, the input to the basal ganglia, receive a prominent neuromodulatory input from dopamine neurons in the substantia nigra pars compacta.
These dopamine neurons discharge in a reward-dependent manner [2]; they respond to the delivery of unexpected rewards and to sensory cues that reliably precede the delivery of expected rewards. Activation of dopamine receptors alters the response characteristics of spiny neurons by modulating the properties of voltage-gated ion channels, as opposed to exerting simple excitatory or inhibitory effects [3]. Activation of the D1 type dopamine receptor alone can either enhance or suppress neural responses depending on the prior state of the spiny neuron [4]. Here, we use a computational approach to assess the hypothesis that the modulation of specific ion channels through the activation of D1 receptors is sufficient to explain both the enhanced and suppressed single unit responses of medium spiny neurons to reward-predicting stimuli. We have constructed a biophysically grounded model of a spiny neuron and used it to investigate whether dopamine neuromodulation accounts for the observed reward-dependence of striatal single-unit responses to visual targets in the memory guided saccade task described by [1]. These authors used an asymmetric reward schedule and compared the response to a given target in rewarded as opposed to unrewarded cases. They report a substantial reward-dependent difference; the majority of these neurons showed a reward-related enhancement of the intensity and duration of discharge, and a smaller number exhibited a reward-related depression. The authors speculated that D1 receptor activation might account for enhanced responses, whereas D2 receptor activation might explain the depressed responses. The model presented here demonstrates that neuromodulatory actions of dopamine through D1 receptors suffice to account for both effects, with interesting consequences for information processing.
2 Model description The membrane properties of the model neuron result from an accurate representation of a minimal set of currents needed to reproduce the characteristic behavior of spiny neurons. In low dopamine conditions, these cells exhibit quasi two-state behavior; they spend most of their time either in a hyperpolarized 'down' state around -85 mV, or in a depolarized 'up' state around -55 mV [5]. This bimodal character of the response to cortical input is attributed to a combination of inward rectifying (IRK) and outward rectifying (ORK) potassium currents [5]. IRK contributes a small outward current at hyperpolarized membrane potentials, thus providing resistance to depolarization and stabilizing the down state. ORK is a major hyperpolarizing current that becomes activated at depolarized potentials and opposes the depolarizing influences of excitatory synaptic and inward ionic currents; it is their balance that determines the membrane potential of the up state. In addition to IRK and ORK currents, the model incorporates the L-type calcium (L-Ca) current that starts to provide an inward current at subthreshold membrane potentials, thus determining the voltage range of the up state. This current has the ability to increase the firing rate of spiny neurons and is critical to the enhancement of spiny neuron responses in the presence of D1 agonists [4]. Our goal is to design a model that provides a consistent description of membrane properties in the 100 - 1000 ms time range. This is the characteristic range of duration for up and down state episodes; it also spans the time course of short term modulatory effects of dopamine. 
The model is constructed according to the principle of separation of time scales: processes that operate in the 100-1000 ms range are modeled as accurately as possible, those that vary on a much shorter time scale are assumed to instantaneously achieve their steady-state values, and those that occur over longer time scales, such as slow inactivation, are assumed constant. Thus, the model does not incorporate currents which inactivate on a short time scale, and cannot provide a good description of rapid events such as the transitions between up and down states or the generation of action potentials. The membrane of a spiny neuron is modeled here as a single compartment with steady-state voltage-gated ion currents. A first order differential equation relates the temporal change in membrane potential ($V_m$) to the membrane currents ($I_i$),
$$C \frac{dV_m}{dt} = -\left[ \gamma \left( I_{IRK} + I_{L\text{-}Ca} \right) + I_{ORK} + I_{leak} + I_s \right] . \qquad (1)$$
The right hand side of the equation includes active ionic, leakage, and synaptic currents. The multiplicative factor $\gamma$ models the modulatory effects of D1 receptor activation by dopamine, to be described in more detail later. Ionic currents are modeled using a standard formulation; the parameters are as reported in the biophysical literature, except for adjustments that compensate for specific experimental conditions so as to more closely match in vivo realizations. All currents except for L-Ca are modeled by the product of a voltage gated conductance and a linear driving force, $I_i = g_i (V_m - E_i)$, where $E_i$ is the reversal potential of ion species $i$ and $g_i$ is the corresponding conductance. The leakage conductance is constant; the conductances for IRK and ORK are voltage gated, $g_i = \bar{g}_i L_i(V_m)$, where $\bar{g}_i$ is the maximum conductance and $L_i(V_m)$ is a logistic function of the membrane potential.
Calcium currents are not well represented by a linear driving force model; extremely low intracellular calcium concentrations result in a nonlinear driving force well accounted for by the Goldman-Hodgkin-Katz equation [6],
$$I_{L\text{-}Ca} = \bar{P}_{L\text{-}Ca}\, L_{L\text{-}Ca}(V_m)\, \frac{z^2 V_m F^2}{RT}\, \frac{[Ca]_i - [Ca]_o\, e^{-zV_mF/RT}}{1 - e^{-zV_mF/RT}} , \qquad (2)$$
where $\bar{P}_{L\text{-}Ca}$ is the maximum permeability. The resulting ionic currents are shown in Fig 1A. The synaptic current is modeled as the product of a conductance and a linear driving force, $I_s = g_s (V_m - E_s)$, with $E_s = 0$. The synaptic conductance includes two types of cortical input: a phasic sensory-related component $g_p$, and a tonic context-related component $g_t$, which are added to determine the total synaptic input: $g_s = \xi (g_p + g_t)$. The factor $\xi$ is a random variable that simulates the noisy character of synaptic input. Dopamine modulates the properties of ion currents through the activation of specific receptors. Agonists for the D1 type receptor enhance the IRK and L-Ca currents observed in spiny neurons [7, 8]. This effect is modeled by the factor $\gamma$ in Eq 1. An upper bound of $\gamma = 1.4$ is derived from physiological experiments [7, 8]. The lower bound at $\gamma = 1.0$ corresponds to low dopamine levels; this is the experimental condition in which the ion currents have been characterized. 3 Static and dynamic properties Stationary solutions to Eq 1 correspond to equilibrium values of the membrane potential $V_m$ consistent with specific values of the dopamine controlled conductance gain parameter $\gamma$ and the total synaptic conductance $g_s$; fluctuations of $g_s$ around its mean value are ignored in this section: the noise parameter is set to $\xi = 1$.
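A sketch of the steady-state gating and the GHK current of Eq. (2) follows. The gating midpoint and slope, the permeability, and the calcium concentrations below are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

F, R, T, z = 96485.0, 8.314, 298.0, 2   # Faraday const, gas const, temperature (K), Ca valence
Ca_i, Ca_o = 1e-4, 2.0                  # intra/extracellular [Ca] in mM (assumed values)

def logistic(Vm, Vh, k):
    """Steady-state voltage gating L_i(V_m); Vm, Vh, k in mV."""
    return 1.0 / (1.0 + np.exp(-(Vm - Vh) / k))

def I_L_Ca(Vm, P_max, Vh=-35.0, k=6.0):
    """Eq. (2): GHK current for the L-type Ca channel.
    Vm in mV (avoid Vm == 0 exactly: removable singularity); negative = inward."""
    u = z * (Vm * 1e-3) * F / (R * T)   # dimensionless zVF/RT
    drive = (z * z * (Vm * 1e-3) * F * F / (R * T)
             * (Ca_i - Ca_o * np.exp(-u)) / (1.0 - np.exp(-u)))
    return P_max * logistic(Vm, Vh, k) * drive
```

Because $[Ca]_i \ll [Ca]_o$, the current is inward (negative) over the subthreshold voltage range, which is what lets it oppose the ORK current and stabilize the up state in the text.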
Stationary solutions satisfy $dV_m/dt = 0$; it follows from Eq 1 that they result from
$$\gamma \left( I_{IRK} + I_{L\text{-}Ca} \right) + I_{ORK} + I_{leak} = -I_s = -g_s (V_m - E_s) . \qquad (3)$$
Intersections between a curve representing the total ionic current (left hand side of Eq 3) as a function of $V_m$ and a straight line representing the negative of the synaptic current (right hand side of Eq 3) determine the stationary values of the membrane potential. Solutions to Eq 3 can be followed as a function of $g_s$ for fixed $\gamma$ by varying the slope of the straight line. For $\gamma = 1$ there is only one such intersection for any value of $g_s$. At low dopamine levels, $V_m$ is a single-valued monotonically increasing function of $g_s$, shown in Fig 1B (dotted line). This operational curve describes a gradual, smooth transition from hyperpolarized values of $V_m$ corresponding to the down state to depolarized values of $V_m$ corresponding to the up state.

Figure 1: Model characterization in low ($\gamma = 1.0$, dotted lines) and high ($\gamma = 1.4$, solid lines) dopamine conditions. (A) Voltage-gated ion currents. (B) Operational curves: stationary solutions to Eq 1.

At high dopamine levels ($\gamma = 1.4$), the membrane potential is a single-valued monotonically increasing function of the synaptic conductance for either $g_s < 9.74\ \mu S/cm^2$ or $g_s > 14.17\ \mu S/cm^2$. In the intermediate regime $9.74\ \mu S/cm^2 < g_s < 14.17\ \mu S/cm^2$, there are three solutions to Eq 3 for each value of $g_s$. The resulting operational curve, shown in Fig 1B (solid line), consists of three branches: two stable and one unstable. The two stable branches (dark solid lines) correspond to a hyperpolarized down state (lower branch) and a depolarized up state (upper branch). The unstable branch (solid gray line) corresponds to intermediate values of $V_m$ that are not spontaneously sustainable.
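Operational curves of this kind can be traced numerically: for each $g_s$, scan the net current of Eq. (3) for sign changes and bisect. The toy currents and parameters below are illustrative stand-ins (the paper's fitted conductances are not listed in the text), so this sketch demonstrates the procedure rather than reproducing the quoted 9.74 and 14.17 $\mu S/cm^2$ transition points:

```python
import numpy as np

def L(Vm, Vh, k):                                     # logistic gating
    return 1.0 / (1.0 + np.exp(-(Vm - Vh) / k))

def ionic(Vm, gamma):
    """Toy total ionic current (LHS of Eq. 3); all parameters are placeholders."""
    I_irk  = 2.0 * L(-Vm, 70.0, 10.0) * (Vm + 90.0)   # inward rectifier, open when hyperpolarized
    I_lca  = -4.0 * L(Vm, -45.0, 6.0)                 # inward L-Ca with a constant toy drive
    I_ork  = 6.0 * L(Vm, -40.0, 8.0) * (Vm + 90.0)    # outward rectifier, open when depolarized
    I_leak = 0.1 * (Vm + 70.0)
    return gamma * (I_irk + I_lca) + I_ork + I_leak

def stationary_Vm(gs, gamma, Es=0.0):
    """First root of Eq. (3), ionic(Vm) = -gs*(Vm - Es), by grid scan + bisection."""
    f = lambda v: ionic(v, gamma) + gs * (v - Es)
    grid = np.linspace(-100.0, -20.0, 801)
    for a, b in zip(grid[:-1], grid[1:]):
        if f(a) * f(b) <= 0:
            for _ in range(60):                       # plain bisection
                m = 0.5 * (a + b)
                if f(a) * f(m) <= 0:
                    b = m
                else:
                    a = m
            return 0.5 * (a + b)
    return None
```

Sweeping $g_s$ up and down while tracking all roots in the scan window traces the operational curve; whether a bistable region appears at $\gamma = 1.4$ depends on the placeholder parameter values chosen.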
Bistability arises through a saddle node bifurcation with increasing $\gamma$ and has a drastic effect on the response properties of the model neuron in high dopamine conditions. Consider an experiment in which $\gamma$ is fixed at 1.4 and $g_s$ changes slowly so as to allow $V_m$ to follow its equilibrium value on the operational curve for $\gamma = 1.4$ (see Fig 1B). As $g_s$ increases, the hyperpolarized down state follows the lower stable branch. As $g_s$ reaches $14.17\ \mu S/cm^2$, the synaptic current suddenly overcomes the mostly IRK hyperpolarizing current, and $V_m$ depolarizes abruptly to reach an up state stabilized by the activation of the hyperpolarizing ORK current. This is the down to up (D→U) state transition. As $g_s$ is increased further, the up state follows the upper stable branch, with a small amount of additional depolarization. If $g_s$ is now decreased, the depolarized up state follows the stable upper branch in the downward direction. It is the inward L-Ca current which counteracts the hyperpolarizing effect of the ORK current and stabilizes the up state until $g_s$ reaches $9.74\ \mu S/cm^2$, where a net hyperpolarizing ionic current overtakes the system and $V_m$ hyperpolarizes abruptly to the down state. This is the up to down (U→D) state transition. The emergence of bistability in high dopamine conditions results in a prominent hysteresis effect. The state of the model, as described by the value of $V_m$, depends not only on the current values of $\gamma$ and $g_s$, but also on the particular trajectory followed by these parameters to reach their current values. The appearance of bistability gives a well defined meaning to the notion of a down state and an up state: in this case there is a gap between the two stable branches, while in low dopamine conditions the transition is smooth, with no clear separation between states. We generically refer to hyperpolarized potentials as the down state and depolarized potentials as the up state, for consistency with the electrophysiological terminology.
[Figure 2: Response to a sensory-related phasic input in rewarded and unrewarded trials. (A) g_t + g_p > g_D→U. (B) g_t + g_p < g_c.] An important feature of the model is that operational curves for all values of γ intersect at a unique point, indicated by a circle in Fig 1B, at a critical membrane potential V_c = -55.1 mV and synaptic conductance g_c = 13.2 µS/cm². The appearance of this critical point is due to a perfect cancellation between the IRK and the L-Ca currents; it arises as a solution to the equation I_IRK + I_L-Ca = 0. When this condition is satisfied, solutions to Eq 3 become independent of γ. The existence of a critical point at a slightly more depolarized membrane potential than the firing threshold at V_f = -58 mV is an important aspect of our model; it plays a role in the mechanism that allows dopamine to either enhance or depress the response of the model spiny neuron. The dynamical evolution of Vm due to changes in both g_s and γ follows from Eq 1. Consider a scenario in which a tonic input g_t maintains Vm below V_f; the response to an additional phasic input g_p sufficient to drive Vm above V_f depends on whether it is associated with expected reward and thus triggers dopamine release. The response of the model neuron depends on the combined synaptic input g_s in a manner that is critically dependent on the expectation of reward. We consider two cases: whether g_s exceeds g_D→U (Fig 2A) or remains below g_c (Fig 2B). If the phasic input is not associated with reward, the dopamine level does not increase (left panels in Fig 2).
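The γ-independence of the critical point can be written out explicitly. Assuming, as the surrounding text suggests, that dopamine enters Eq 3 only as a factor γ multiplying the IRK and L-Ca currents (a reconstruction of the equation's structure, not a quote of it), the fixed-point condition reads:

```latex
% Sketch of the critical-point argument (assumed structure of Eq 3):
% the dopamine-modulated currents enter through a common factor gamma.
\gamma\left[I_{\mathrm{IRK}}(V_m)+I_{\mathrm{L\text{-}Ca}}(V_m)\right]
  + I_{\mathrm{other}}(V_m) = -g_s\,(V_m-E_{\mathrm{syn}})
% At any V_c with I_IRK(V_c) + I_L-Ca(V_c) = 0, the left-hand side loses
% all gamma dependence, so the pair (V_c, g_c) solves the equation for
% every gamma: all operational curves pass through this common point.
```

This is why the dotted (γ = 1) and solid (γ = 1.4) curves in Fig 1B cross at a single shared point.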
The square on the operational curve for γ = 1 (dotted line) indicates the equilibrium state corresponding to g_t. A rapid increase from g_s = g_t to g_s = g_t + g_p (rightward solid arrow) is followed by an increase in Vm towards its equilibrium value (upward dotted arrow). When the phasic input is removed (leftward solid arrow), Vm decreases to its initial equilibrium value (downward dotted arrow). [Figure 3: Modulation of response in high dopamine relative to low dopamine conditions as a function of the strength of phasic and tonic inputs.] In unrewarded trials, the only difference between a larger and a smaller phasic input is that the former results in a more depolarized membrane potential and thus a higher firing rate. The firing activity, which ceases when the phasic input disappears, encodes for the strength of the sensory-related stimulus. Rewarded trials (right panels in Fig 2) elicit qualitatively different responses. The phasic input is the conditioned stimulus that triggers dopamine release in the striatum, and the operational curve switches from the γ = 1 (dotted) curve to the bistable γ = 1.4 (solid) curve. The consequences of this switch depend on the strength of the phasic input. If g_s exceeds the value for the D→U transition (Fig 2A), Vm depolarizes towards the upper branch of the bistable operational curve. This additional depolarization results in a noticeably higher firing rate than the one elicited by the same input in an unrewarded trial (Fig 2A, left panel). When the phasic input is removed, the unit hyperpolarizes slightly as it reaches the upper branch of the bistable operational curve. If g_t exceeds g_U→D, the unit remains in the up state until γ decreases towards its baseline level. If this condition is met in a rewarded trial, the response is not only larger in amplitude but also longer in duration.
In contrast to these enhancements, if g_s is not sufficient to exceed g_c (Fig 2B), Vm hyperpolarizes towards the lower branch of the bistable operational curve. The unit remains in the down state until γ decreases towards its baseline level. In this type of rewarded trial, dopamine suppresses the response of the unit. The analysis presented above provides an explanatory mechanism for the observation of either enhanced or suppressed spiny neuron activity in the presence of dopamine. It is the strength of the total synaptic input that selects between these two effects; the generic features of their differentiation are summarized in Fig 3. Enhancement occurs whenever the condition g_s > g_D→U is met, while activity is suppressed if g_s < g_c. The separatrix between enhancement and suppression always lies in a narrow band limited by g_s = g_D→U and g_s = g_c. Its precise location will depend on the details of the temporal evolution of γ as it rises and returns to baseline. But whatever the shape of γ(t) might be, there will be a range of values of g_s for which activity is suppressed, and a range of values of g_s for which activity is enhanced. 4 Information processing Dopamine-induced bistability improves the ability of the model spiny neuron to detect time-correlated sensory-related inputs relative to a context-related background. To illustrate this effect, consider g_s = ξ(g_t + g_p) as a random variable. The multiplicative noise ξ is Gaussian, with ⟨ξ⟩ = 1 and ⟨ξ²⟩ = 1.038. The total probability density function (PDF) shown in Fig 4A for g_t = 9.2 µS/cm² consists of two PDFs corresponding to g_p = 0 (left; black line) and g_p = 5.8 µS/cm² (right; grey line). These two values of g_p occur with equal prior probability; time correlations are introduced through a repeat probability P_r of retaining the current value of g_p in the subsequent time step. The total PDF shown in Fig 4A does not depend on the value of P_r.
Performance at the task of detecting the sensory-related input (g_p ≠ 0) is limited by the overlap of the corresponding PDFs [9]; optimal separation of the two PDFs in Fig 4A results in a Bayesian error of 10.46%. [Figure 4: Probability density functions for (A) synaptic input, (B) membrane potential at γ = 1, (C) membrane potential at γ = 1.4 for uncorrelated inputs (P_r = 0.5), and (D) membrane potential at γ = 1.4 for correlated inputs (P_r = 0.975).] The transformation of g_s into Vm through the γ = 1 operational curve results in the PDFs shown in Fig 4B; here again, the total PDF does not depend on P_r. An increase in the separation of the two peaks indicates an improved signal-to-noise ratio, but an extension in the tails of the PDFs counteracts this effect: the Bayesian error stays at 10.46%, in agreement with theoretical predictions [9] that hold for any strictly monotonic map from g_s into Vm. For the γ = 1.4 operational curve, the PDFs that characterize Vm depend on P_r and are shown in Fig 4C (P_r = 0.5, for which g_p is independently drawn from its prior in each time step) and 4D (P_r = 0.975, which describes phasic input persistence for about 400 ms). The implementation of Bayesian optimal detection of g_p ≠ 0 for γ = 1.4 requires three separating boundaries; the corresponding Bayesian errors stand at 10.46% for Fig 4C and 4.23% for Fig 4D. A single separating boundary in the gap between the two stable branches is suboptimal, but is easily implementable by the bistable neuron. This strategy leads to detection errors of 20.06% for Fig 4C and 4.38% for Fig 4D. Note that the Bayesian error decreases only when time correlations are included, and that in this case, detection based on a single separating boundary is very close to optimal.
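The Bayesian error quoted above is set by the overlap of the prior-weighted densities: the minimum achievable error is the integral of their pointwise minimum. A generic numerical sketch, using illustrative Gaussians rather than the paper's multiplicative-noise PDFs (the means reuse the g_t = 9.2 and g_p = 5.8 values from the text, but the width is arbitrary, so the 10.46% figure is not reproduced):

```python
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def bayes_error(p0, p1, prior0=0.5, x=np.linspace(-15.0, 35.0, 50001)):
    """Minimum (Bayes) error for deciding which of two densities produced a
    sample: integrate the pointwise minimum of the prior-weighted densities."""
    overlap = np.minimum(prior0 * p0(x), (1.0 - prior0) * p1(x))
    return float(np.sum(overlap) * (x[1] - x[0]))  # rectangle-rule integral

# Two overlapping input distributions: g_p absent (mean 9.2) vs present (mean 15.0);
# the width sigma = 2.0 is an arbitrary illustrative choice.
err = bayes_error(lambda x: gauss(x, 9.2, 2.0), lambda x: gauss(x, 15.0, 2.0))
print(err)
```

Any strictly monotonic transformation of the variable leaves this overlap integral, and hence the error floor, unchanged, which is why the γ = 1 curve in Fig 4B cannot improve on Fig 4A.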
The results for γ = 1.4 clearly indicate that ambiguities in the bistable region make it harder to identify temporally uncorrelated instances of g_p ≠ 0 on the basis of a single separating boundary (Fig 4C), while performance improves if instances with g_p ≠ 0 are correlated over time (Fig 4D). Bistability thus provides a mechanism for improved detection of time-correlated input signals. 5 Conclusions The model presented here incorporates the most relevant effects of dopamine neuromodulation of striatal medium spiny neurons via D1 receptor activation. In the absence of dopamine the model reproduces the bimodal character of medium spiny neurons [5]. In the presence of dopamine, the model undergoes a bifurcation and becomes bistable. This qualitative change in character provides a mechanism to account for both enhancement and depression of spiny neuron discharge in response to inputs associated with expectation of reward. There is only limited direct experimental evidence of bistability in the membrane potential of spiny neurons: the sustained depolarization observed in vitro following brief current injection in the presence of D1 agonists [4] is a hallmark of bistable responsiveness. The activity of single striatal spiny neurons recorded in a memory-guided saccade task [1] is strongly modulated by the expectation of reward as reinforcement for correct performance. In these experiments, most units show a more intense response of longer duration to the presentation of visual stimuli indicative of upcoming reward; a few units show instead suppressed activity. These observations are consistent with properties of the model neuron, which is capable of both types of response to such stimuli.
The model identifies the strength of the total excitatory cortical input as the experimental parameter that selects between these two response types, and suggests that enhanced responses can have a range of amplitudes but attenuated responses result in an almost complete suppression of activity, in agreement with experimental data [1]. Bistability provides a gain mechanism that nonlinearly amplifies both the intensity and duration of striatal activity. This amplification, exported through thalamocortical pathways, may provide a mechanism for the preferential cortical encoding of salient information related to reward acquisition. The model indicates that through the activation of D1 receptors, dopamine can temporarily desensitize spiny neurons to weak inputs while simultaneously sensitizing spiny neurons to large inputs. A computational advantage of this mechanism is the potential adaptability of signal modulation: the brain may be able to utilize the demonstrated plasticity of corticostriatal synapses so that dopamine release preferentially enhances salient signals related to reward. This selective enhancement of striatal activity would result in a more informative efferent signal related to achieving reward. At the systems level, dopamine plays a significant role in the normal operation of the brain, as evident in the severe cognitive and motor deficits associated with pathologies of the dopamine system (e.g. Parkinson's disease, schizophrenia). Yet at the cellular level, the effect of dopamine on the physiology of neurons seems modest. In our model, a small increase in the magnitude of both IRK and L-Ca currents elicited by D1 receptor activation suffices to switch the character of spiny neurons from bimodal to truly bistable, which not only modulates the frequency of neural responses but also introduces a state dependence and a temporal effect.
Other models have suggested that dopamine modulates contrast [9], but the temporal effect is a novel aspect that plays an important role in information processing. 6 References [1] Kawagoe R, Takikawa Y, Hikosaka O (1998). Nature Neurosci 1:411-416. [2] Schultz W (1998). J Neurophysiol 80:1-27. [3] Nicola SM, Surmeier DJ, Malenka RC (2000). Annu Rev Neurosci 23:185-215. [4] Hernandez-Lopez S, Bargas J, Surmeier DJ, Reyes A, Galarraga E (1997). J Neurosci 17:3334-42. [5] Wilson CJ, Kawaguchi Y (1996). J Neurosci 7:2397-2410. [6] Hille B (1992). Ionic Channels of Excitable Membranes. Sinauer Associates, Sunderland MA. [7] Pacheco-Cano MT, Bargas J, Hernandez-Lopez S (1996). Exp Brain Res 110:205-211. [8] Surmeier DJ, Bargas J, Hemmings HC, Nairn AC, Greengard P (1995). Neuron 14:385-397. [9] Servan-Schreiber D, Printz H, Cohen JD (1990). Science 249:892-895.
2002
An Information Theoretic Approach to the Functional Classification of Neurons Elad Schneidman,1,2 William Bialek,1 and Michael J. Berry II2 1Department of Physics and 2Department of Molecular Biology Princeton University, Princeton NJ 08544, USA {elads,wbialek,berry}@princeton.edu Abstract A population of neurons typically exhibits a broad diversity of responses to sensory inputs. The intuitive notion of functional classification is that cells can be clustered so that most of the diversity is captured by the identity of the clusters rather than by individuals within clusters. We show how this intuition can be made precise using information theory, without any need to introduce a metric on the space of stimuli or responses. Applied to the retinal ganglion cells of the salamander, this approach recovers classical results, but also provides clear evidence for subclasses beyond those identified previously. Further, we find that each of the ganglion cells is functionally unique, and that even within the same subclass only a few spikes are needed to reliably distinguish between cells. 1 Introduction Neurons exhibit an enormous variety of shapes and molecular compositions. Already in his classical work, Cajal [1] recognized that the shapes of cells can be classified, and he identified many of the cell types that we recognize today. Such classification is fundamentally important, because it implies that instead of having to describe ∼10^12 individual neurons, a mature neuroscience might need to deal only with a few thousand different classes of nominally identical neurons. There are three broad methods of classification: morphological, molecular, and functional. Morphological and molecular classification are appealing because they deal with relatively fixed properties, but ultimately the functional properties of neurons are the most important, and neurons that share the same morphology or molecular markers need not embody the same function.
With attention to arbitrary detail, every neuron will be individual, while a coarser view might overlook an important distinction; a quantitative formulation of the classification problem is essential. The vertebrate retina is an attractive example: its anatomy is well studied and highly ordered, containing repeated micro-circuits that look out at different angles in visual space [1, 2, 3]; its overall function (vision) is clear, giving the experimenter better intuition about relevant stimuli; and responses of many of its output neurons, ganglion cells, can be recorded simultaneously using a multi-electrode array, allowing greater control of experimental variables than possible with serial recordings [4]. Here we exploit this favorable experimental situation to highlight the mathematical questions that must lie behind any attempt at classification. Functional classification of retinal ganglion cells typically has consisted of finding qualitatively different responses to simple stimuli. Classes are defined by whether ganglion cells fire spikes at the onset or offset of a step of light or both (ON, OFF, ON/OFF cells in frog [5]) or whether they fire once or twice per cycle of a drifting grating (X, Y cells in cat [6]). Further elaborations exist. In the frog, the literature reports 1 class of ON-type ganglion cell and 4 or 5 classes of OFF-type [7]. The salamander has been reported to have only 3 of these OFF-type ganglion cells [8]. The classes have been distinguished using stimuli such as diffuse flashes of light, moving bars, and moving spots. The results are similar to earlier work using more exotic stimuli [9]. In some cases, there is very close agreement between anatomical and functional classes, such as the (α,β) and (Y,X) cells in the cat. However, the link between anatomy and function is not always so clear. 
Here we show how information theory allows us to define the problem of classification without any a priori assumptions regarding which features of visual stimulus or neural response are most significant, and without imposing a metric on these variables. All notions of similarity emerge from the joint statistics of neurons in a population as they respond to common stimuli. To the extent that we identify the function of retinal ganglion cells as providing the brain with information about the visual world, then our approach finds exactly the classification which captures this functionality in a maximally efficient manner. Applied to experiments on the tiger salamander retina, this method identifies the major types of ganglion cells in agreement with traditional methods, but on a finer level we find clear structure within a group of 19 fast OFF cells that suggests at least 5 functional subclasses. More profoundly, even cells within a subclass are very different from one another, so that on average the ganglion cell responses to the simplified visual stimuli we have used provide ∼6 bits/sec of information about cell identity within our population of 21 cells. This is sufficient to identify uniquely each neuron in an “elementary patch” of the retina within one second, and a typical pair of cells can be distinguished reliably by observing an average of just two or three spikes. 2 Theory Suppose that we could give a complete characterization, for each neuron i = 1, 2, · · · , N in a population, of the probability P(r|⃗s, i) that a stimulus ⃗s will generate the response r. Traditional approaches to functional classification introduce (implicitly or explicitly) a parametric representation for the distributions P(r|⃗s, i) and then search for clusters in this parameter space. 
For visual neurons we might assume that responses are determined by the projection of the stimulus movie ⃗s onto a single template or receptive field ⃗fi, P(r|⃗s, i) = F(r; ⃗fi · ⃗s); classifying neurons then amounts to clustering the receptive fields. But it is not possible to cluster without specifying what it means for these vectors to be similar; in this case, since the vectors come from the space of stimuli, we need a metric or distortion measure on the stimuli themselves. It seems strange that classifying the responses of visual neurons requires us to say in advance what it means for images or movies to be similar. [Footnote 1: If all cells are selective for a small number of commensurate features, then the set of vectors ⃗fi must lie on a low-dimensional manifold, and we can use this selectivity to guide the clustering. But we still face the problem of defining similarity: even if all the receptive fields in the retina can be summarized meaningfully by the diameters of the center and surround (for example), why should we believe that Euclidean distance in this two-dimensional space is a sensible metric?] Information theory suggests a formulation that does not require us to measure similarity among either stimuli or responses. Imagine that we present a stimulus ⃗s and record the response r from a single neuron in the population, but we don't know which one. This response tells us something about the identity of the cell, and on average this can be quantified as the mutual information between responses and identity (conditional on the stimulus),

I(r; i|⃗s) = (1/N) Σ_{i=1}^{N} Σ_r P(r|⃗s, i) log₂[ P(r|⃗s, i) / P(r|⃗s) ] bits,   (1)

where P(r|⃗s) = (1/N) Σ_{i=1}^{N} P(r|⃗s, i). The mutual information I(r; i|⃗s) measures the extent to which different cells in the population produce reliably distinguishable responses to the same stimulus; from Shannon's classical arguments [10] this is the unique measure of these correlations which is consistent with simple and plausible constraints.
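Eq 1 is straightforward to evaluate for discrete responses. The sketch below is an illustration of the formula with a made-up response table: rows of `cond` are the conditional distributions P(r|⃗s, i) for each cell, and the return value is the identity information in bits.

```python
import numpy as np

def identity_information(cond):
    """Mutual information (bits) between response r and cell identity i for a
    fixed stimulus, with a uniform prior over the N cells (Eq 1).
    cond[i, r] = P(r | s, i); each row must sum to 1."""
    cond = np.asarray(cond, dtype=float)
    n = cond.shape[0]
    marginal = cond.mean(axis=0)  # P(r|s) = (1/N) sum_i P(r|s,i)
    # Convention 0*log(0) = 0: divide only where cond > 0, put 1.0 elsewhere.
    ratio = np.divide(cond, marginal, out=np.ones_like(cond), where=cond > 0)
    return float((cond * np.log2(ratio)).sum() / n)

# Two hypothetical cells whose responses never overlap are worth one full
# bit of identity information; identical cells are worth none.
print(identity_information([[1.0, 0.0], [0.0, 1.0]]))  # 1.0
print(identity_information([[0.5, 0.5], [0.5, 0.5]]))  # 0.0
```

Averaging this quantity over the stimulus ensemble, as in Eq 2, amounts to weighting such per-stimulus evaluations by P(⃗s).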
It is natural to ask this question on average in an ensemble of stimuli P(⃗s) (ideally the natural ensemble),

⟨I(r; i|⃗s)⟩⃗s = (1/N) Σ_{i=1}^{N} ∫ [d⃗s] P(⃗s) Σ_r P(r|⃗s, i) log₂[ P(r|⃗s, i) / P(r|⃗s) ];   (2)

⟨I(r; i|⃗s)⟩⃗s is invariant under all invertible transformations of r or ⃗s. Because information is mutual, we also can think of ⟨I(r; i|⃗s)⟩⃗s as the information that cellular identity provides about the responses we will record. But now it is clear what we mean by classifying the cells: If there are clear classes, then we can predict the responses to a stimulus just by knowing the class to which a neuron belongs rather than knowing its unique identity. Thus we should be able to find a mapping i → C of cells into classes C = 1, 2, · · · , K such that ⟨I(r; C|⃗s)⟩⃗s is almost as large as ⟨I(r; i|⃗s)⟩⃗s, despite the fact that the number of classes K is much less than the number of cells N. Optimal classifications are those which use the K different class labels to capture as much information as possible about the stimulus-response relation, maximizing ⟨I(r; C|⃗s)⟩⃗s at fixed K. More generally we can consider soft classifications, described by probabilities P(C|i) of assigning each cell to a class, in which case we would like to capture as much information as possible about the stimulus-response relation while constraining the amount of information that class labels provide directly about identity, I(C; i). In this case our optimization problem becomes, with λ as a Lagrange multiplier,

max_{P(C|i)} [ ⟨I(r; C|⃗s)⟩⃗s − λ I(C; i) ].   (3)

This is a generalization of the information bottleneck problem [11]. Here we confine ourselves to hard classifications, and use a greedy agglomerative algorithm [12] which starts with K = N and makes mergers which at every step provide the smallest reduction in I(r; C|⃗s).
This information loss on merging cells (or clusters) i and j is given by

D(i, j) ≡ ΔI_ij(r; C|⃗s) = ⟨ D_JS[ P(r|⃗s, i) || P(r|⃗s, j) ] ⟩⃗s,   (4)

where D_JS is the Jensen–Shannon divergence [13] between the two distributions, or equivalently the information that one sample provides about its source distribution in the case of just these two alternatives. The matrix of "distances" ΔI_ij characterizes the similarities among neurons in pairwise fashion. Finally, if cells belong to clear classes, then we ought to be able to replace each cell by a typical or average member of the class without sacrificing function. In this case function is quantified by asking how much information cells provide about the visual scene. There is a strict complementarity of the information measures: information that the stimulus/response relation provides about the identity of the cell is exactly information about the visual scene which will be lost if we don't know the identity of the cells [14]. Our information theoretic approach to classification of neurons thus produces classes such that replacing cells with average class members provides the smallest loss of information about the sensory inputs. 3 The responses of retinal ganglion cells to identical stimuli We recorded simultaneously 21 retinal ganglion cells from the salamander using a multi-electrode array. The visual stimulus consisted of 100 repeats of a 20 s segment of spatially uniform flicker (see fig. 1a), in which light intensity values were randomly selected every 30 ms from a Gaussian distribution having a mean of 4 mW/mm² and an RMS contrast of 18%. Thus, the photoreceptors were presented with exactly the same visual stimulus, and the movie is many correlation times in duration, so we can replace averages over stimuli by averages over time (ergodicity). A 3 s sample of the ganglion cells' responses to the visual stimulus is shown in Fig. 1b.
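The merge cost in Eq 4 is, for each stimulus, the Jensen–Shannon divergence between the two cells' response distributions; a minimal sketch of that quantity:

```python
import numpy as np

def kl_bits(p, q):
    """Kullback-Leibler divergence in bits; terms with p = 0 contribute 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def js_bits(p, q):
    """Equal-weight Jensen-Shannon divergence (the D_JS of Eq 4): the average
    information one sample gives about which of the two distributions it came
    from. Bounded between 0 and 1 bit."""
    m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
    return 0.5 * kl_bits(p, m) + 0.5 * kl_bits(q, m)

print(js_bits([1.0, 0.0], [0.0, 1.0]))      # 1.0: perfectly distinguishable cells
print(js_bits([0.5, 0.5], [0.5, 0.5]))      # 0.0: identical response statistics
```

Averaging `js_bits` over stimulus segments (and dividing by the segment length) yields distances in bits/s of the kind reported in Fig. 2a.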
There are times when many of the cells fire together, while at other times only a subset of these cells is active. Importantly, the same neuron may be part of different active groups at different times. [Figure 1: Responses of salamander ganglion cells to modulated uniform field intensity. a: The retina is presented with a series of uniform intensity "images". The intensity modulation is Gaussian white noise distributed. b: A 3 sec segment of the (concurrent) responses of 21 ganglion cells to repeated presentation of the stimulus. The rasters are ordered from bottom to top according to the average firing rate of the neurons (over the whole movie). c: Firing rate and information rates of the different cells as a function of their rank, ordered by their firing rate. d: The average stimulus pattern preceding a spike for each of the different cells. Traditionally, these would be classified as 1 ON cell, 1 slow-OFF cell and 19 fast-OFF cells.] On a finer time scale than shown here, the latency of the responses of the single neurons and their spiking patterns differ across time. To analyze the responses of the different neurons, we discretize the spike trains into time bins of size ∆t. We examine the response in windows of time having length T, so that an individual neural response r becomes a binary 'word' W with T/∆t 'letters'. [Footnote 2: The retina is isolated from the eye of the larval tiger salamander (Ambystoma tigrinum) and perfused in Ringer's medium. Action potentials were measured extracellularly using a multi-electrode array [4], while light was projected from a computer monitor onto the photoreceptor layer. Because erroneously sorted spikes would strongly affect our results, we were very conservative in our identification of cleanly isolated cells.] Since the cells in Fig.
1b are ordered according to their average firing rate, it is clear that there is no 'simple' grouping of the cells' responses with respect to this response parameter; firing rates range continuously from 1 to 7 spikes per second (Fig. 1c). Similarly, the rate of information (estimated according to [15]) that the cells encode about the same stimulus also ranges continuously from 3 to 20 bits/s. We estimate the average stimulus pattern preceding a spike for each of the cells, the spike-triggered average (STA), shown in Fig. 1d. According to traditional classification based on the STA, one of the cells is an ON cell, one is a slow OFF cell and 19 belong to the fast OFF class [16]. While it may be possible to separate the 19 waveforms of the fast OFF cells into subgroups, this requires assumptions about what stimulus features are important. Furthermore, there is no clear standard for ending such subclassification. 4 Clustering of the ganglion cells responses into functional types To classify these ganglion cells, we solved the information theoretic optimization problem described above. Figure 2a shows the pairwise distances D(i, j) among the 21 cells, ordered by their average firing rates; again, firing rate alone does not cluster the cells. The result of the greedy clustering of the cells is shown by a binary dendrogram in Fig. 2b. Figure 2: Clustering ganglion cell responses. a: Average distances between the cells' responses; cells are ordered by their average firing rate. b: Dendrogram of cell clustering. Cell names correspond to their firing rate rank. The height of a merge reflects the distance between merged elements.
c: The information that the cells' responses convey about the clusters in every stage of the clustering in (b), normalized to the total information that the responses convey about cell identity. Using different response segment parameters or clustering methods (e.g., nearest neighbor) results in very similar behavior. d: Reordering of the distance matrix in (a) according to the tree structure given in (b). The greedy agglomerative approximation [12] starts from every cell as a single cluster. We iteratively merge the clusters ci and cj which have the minimal value of D(ci, cj) and display this distance or information loss as the height of the merger in Fig. 2b. [Footnote 3: As any fixed choice of T and ∆t is arbitrary, we explore a range of these parameters.] We pool their spike trains together as the responses of the new cell class. We now re-estimate the distances between clusters and repeat the procedure, until we get a single cluster that contains all cells. Fig. 2c shows the compression in information achieved by each of the mergers: for each number of clusters, we plot the mutual information between the clusters and the responses, ⟨I(r; C|⃗s)⟩⃗s, normalized by the information that the response conveys about the full set of cells, ⟨I(r; i|⃗s)⟩⃗s. The clustering structure and the information curve in Fig. 2c are robust (up to one cell difference in the final dendrogram) to changes in the word size and bin size used; we even obtain the same results with a nearest neighbor clustering based on D(i, j). This suggests that the top 7 mergers in Fig. 2b (which correspond to the bottom 7 points in panel c) are of significantly different subgroups. Two of these mergers, which correspond to the rightmost branches of the dendrogram, separate out the ON and slow OFF cells. The remaining 5 clusters are subclasses of fast OFF cells. However, Fig.
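The greedy procedure just described can be sketched in a few dozen lines. The version below works on per-cell response distributions for a single stimulus (the paper averages the merge cost over stimuli and time), pools merged clusters by a size-weighted average of their distributions, and merges the cheapest pair until K clusters remain. The size weighting in the merge cost follows the agglomerative-bottleneck style of [12]; it is an assumption about details the text does not spell out, and the four 'cells' are made up.

```python
import numpy as np

def merge_cost(p, q, wp, wq):
    """Information lost by pooling two clusters: size-weighted JS divergence."""
    tp, tq = wp / (wp + wq), wq / (wp + wq)
    m = tp * p + tq * q
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return (wp + wq) * (tp * kl(p, m) + tq * kl(q, m))

def greedy_cluster(dists, k):
    """Agglomerate response distributions down to k clusters (greedy)."""
    reps = [np.asarray(d, float) for d in dists]   # cluster representatives
    sizes = [1.0] * len(reps)
    clusters = [[i] for i in range(len(reps))]
    while len(clusters) > k:
        pairs = [(a, b) for a in range(len(clusters))
                 for b in range(a + 1, len(clusters))]
        a, b = min(pairs, key=lambda ab: merge_cost(
            reps[ab[0]], reps[ab[1]], sizes[ab[0]], sizes[ab[1]]))
        w = sizes[a] + sizes[b]
        reps[a] = (sizes[a] * reps[a] + sizes[b] * reps[b]) / w  # pool responses
        sizes[a] = w
        clusters[a] = clusters[a] + clusters[b]
        del reps[b], sizes[b], clusters[b]
    return clusters

# Four made-up 'cells': two nearly ON-like, two nearly OFF-like.
cells = [[0.9, 0.1], [0.88, 0.12], [0.1, 0.9], [0.12, 0.88]]
print(greedy_cluster(cells, 2))  # [[0, 1], [2, 3]]
```

Recording the cost of each merger as it happens yields exactly the dendrogram heights of Fig. 2b.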
2d, which shows the dissimilarity matrix from panel a reordered by the result of the clustering, demonstrates that while there is clear structure within the cell population, the subclasses are not sharply distinct. How many types are there? While one might be happy with classifying the fast OFF cells into 5 subclasses, we further asked whether the cells within a subclass are reliably distinguishable from one another; that is, are the bottom mergers in Fig. 2b-c significant? To this end we randomly split each of the 21 cells into 2 halves (of 50 repeats each), or 'siblings', and re-clustered. Figure 3a shows the resulting dendrogram of this clustering, indicating that the cells are reliably distinguishable from one another: The nearest neighbor of each new half-cell is its own sibling, and (almost) all of the first layer mergers are of the corresponding siblings (the only mismatch is of a sibling merging with a neighboring full cell and then with the other sibling). Figure 3b shows the very different cumulative probability distributions of pairwise distances among the parent cells and that of the distances between siblings. [Figure 3: Every cell is different from the others. a: Clustering of cell responses after randomly splitting every cell into 2 "siblings". The nearest neighbor of each of the new cells is its sibling and (except for one case) so is the first merge. From the second level upwards, the tree is identical to Fig. 2b (up to symmetry of tree plotting). b: Cumulative distribution of pairwise distances between cells. The distances between siblings are easily discriminated from the continuous distribution of values of all the (real) cells.] How significant are the differences between the cells?
It might be that cells are distinguishable, but only after observing their responses for very long times. Since 1 bit is needed to reliably distinguish between a pair of cells, Fig. 3b shows that more than 90% of the pairs are reliably distinguishable within 2 seconds or less. This result is especially striking given the low mean spike rate of these cells; clearly, at times where none of the cells is spiking, it is impossible to distinguish between them. To place the information about identity on an absolute scale, we compare it to the entropy of the responses, using 10 ms segments of the responses at each time during the stimulus (Fig. 4a). Most of the points lie close to the origin, but many of them reflect discrete times when the responses of the neurons are very different and hence highly informative about cell identity: under the conditions of our experiment, roughly 30% of the response variability among cells is informative about their identity. On average observing a single neural response gives about 6 bits/s about the identity of the cells within this population. We also computed the average number of spikes per cell which we need to observe to distinguish reliably between cells i and j,

n_d(i, j) = (1/2)(r̄_i + r̄_j) / D(i, j),   (5)

where r̄_i is the average spike rate of cell i in the experiment. Figure 4b shows the cumulative probability distribution of the values of n_d. Evidently, more than 80% of the pairs are reliably distinguishable after observing, on average, only 3 spikes from one of the neurons. Since ganglion cells fire in bursts, this suggests that most cells are reliably distinguishable based on a single firing 'event'! We also show that for the 11 most similar cells (those in the left subtree in Fig. 2b) only a few more spikes, or one extra firing event, are required to reliably distinguish them.
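Eq 5 just converts a distance in bits/s into a number of spikes, combining the pair's mean firing rate with the 1-bit criterion for reliable discrimination. A one-function sketch with illustrative numbers (not values from the data set):

```python
def spikes_to_distinguish(rate_i, rate_j, d_bits_per_s):
    """Average spikes observed before cells i and j are reliably
    distinguishable (Eq 5): the time to accumulate 1 bit is 1/D, and
    spikes arrive at the pair's mean rate (r_i + r_j)/2."""
    return 0.5 * (rate_i + rate_j) / d_bits_per_s

# E.g. two cells firing at 4 and 2 spikes/s whose responses differ by
# D = 1.5 bits/s are distinguishable after about two spikes on average.
print(spikes_to_distinguish(4.0, 2.0, 1.5))  # 2.0
```

This makes explicit why low-rate cells with large pairwise D can still be separated within a single firing event.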
Figure 4: High diversity among cells. a: The average information that a response segment conveys about the identity of the cell as a function of the entropy of the responses. Every point stands for a time point along the stimulus. Results shown are for 2-letter words of 5 ms bins; similar behavior is observed for different word sizes and bins. b: Cumulative distribution of the average number of spikes that are needed to distinguish between pairs of cells.

5 Discussion

We have identified a diversity of functional types of retinal ganglion cells by clustering them to preserve information about their identity. Beyond the easy classification of the major types of salamander ganglion cells – fast OFF, slow OFF, and ON – in agreement with traditional methods, we have found clear structure within the fast OFF cells that suggests at least 5 more functional classes. Furthermore, we found evidence that each cell is functionally unique. Even under this relatively simple stimulus, the analysis revealed that the cell responses convey ∼6 bits/s of information about cell identity within this population of 21 cells. Footnote 4: Since the cells receive the same stimulus and often possess shared circuitry, an efficiency as high as 100% is very unlikely. Ganglion cells in the salamander interact with each other and collect information from a ∼250 µm radius; given the density of ganglion cells, the observed rate implies that a single ganglion cell can be discriminated from all the cells in this "elementary patch" within 1 s. This is a surprising degree of diversity, given that 19 cells in our sample would be traditionally viewed as nominally the same. One might wonder if our choice of uniform flicker limits the results of our classification.
However, we found that this stimulus was rich enough to distinguish every ganglion cell in our data set. It is likely that stimuli with spatial structure would reveal further differences. Using a larger collection of cells will enable us to explore the possibility that there is a continuum of unique functional units in the retina. How might the brain make use of this diversity? Several alternatives are conceivable. By comparing the spiking of closely related cells, it might be possible to achieve much finer discrimination among stimuli that tend to activate both cells. Diversity can also improve the robustness of retinal signalling: as the retina is constantly setting its adaptive state in response to statistics of the environment that it cannot estimate without some noise, maintaining functional diversity can guard against adaptation that overshoots its optimum. Finally, great functional diversity opens up additional possibilities for learning strategies, in which downstream neurons select the most useful of their inputs rather than merely summing over identical inputs to reduce their noise. The example of the invertebrate retina demonstrates that nature can construct neural circuits with almost crystalline reproducibility from synapse to synapse. This suggests that the extreme diversity found here in the vertebrate retina may not be the result of some inevitable sloppiness of neural development but rather the result of evolutionary selection of a different strategy for representing the visual world.

References

[1] Cajal, S.R., Histologie du systeme nerveux de l'homme et des vertebres, Paris: Maloine (1911). [2] Dowling, J., The Retina: An Approachable Part of the Brain, Cambridge, MA: Belknap Press (1987). [3] Masland, R.H., Nat. Neurosci., 4: 877-886 (2001). [4] Meister, M., Pine, J. & Baylor, D.A., J. Neurosci. Methods, 51: 95-106 (1994). [5] Hartline, H.K., Am. J. Physiol., 121: 400-415 (1937). [6] Hochstein, S. & Shapley, R.M., J. Physiol., 262: 265-84 (1976).
[7] Grüsser, O.-J. & Grüsser-Cornehls, U., in Frog Neurobiology, eds: R. Llinas & W. Precht: 297-385, Springer-Verlag: New York (1976). [8] Grüsser-Cornehls, U. & Himstedt, W., Brain Behav. Evol., 7: 145-168 (1973). [9] Lettvin, J.Y., Maturana, H.R., McCulloch, W.S. & Pitts, W.H., Proc. I.R.E., 47: 1940-51 (1959). [10] Shannon, C.E. & Weaver, W., The Mathematical Theory of Communication, Univ. of Illinois Press (1949). [11] Tishby, N., Pereira, F. & Bialek, W., in Proceedings of The 37th Allerton Conference on Communication, Control & Computing, Univ. of Illinois (1999); see also arXiv: physics/0004057. [12] Slonim, N. & Tishby, N., NIPS 12, 617-623 (2000). [13] Lin, J., IEEE IT, 37, 145-151 (1991). [14] Schneidman, E., Brenner, N., Tishby, N., de Ruyter van Steveninck, R. & Bialek, W., NIPS 13: 159-165 (2001); see also arXiv: physics/0005043. [15] Strong, S.P., Koberle, R., de Ruyter van Steveninck, R. & Bialek, W., Phys. Rev. Lett., 80, 197-200 (1998); see also arXiv: cond-mat/9603127. [16] Keat, J., Reinagel, P., Reid, R.C. & Meister, M., Neuron, 30, 803-817 (2001).
2002
An Asynchronous Hidden Markov Model for Audio-Visual Speech Recognition Samy Bengio Dalle Molle Institute for Perceptual Artificial Intelligence (IDIAP) CP 592, rue du Simplon 4, 1920 Martigny, Switzerland bengio@idiap.ch, http://www.idiap.ch/~bengio Abstract This paper presents a novel Hidden Markov Model architecture to model the joint probability of pairs of asynchronous sequences describing the same event. It is based on two other Markovian models, namely Asynchronous Input/Output Hidden Markov Models and Pair Hidden Markov Models. An EM algorithm to train the model is presented, as well as a Viterbi decoder that can be used to obtain the optimal state sequence as well as the alignment between the two sequences. The model has been tested on an audio-visual speech recognition task using the M2VTS database and yielded robust performances under various noise conditions. 1 Introduction Hidden Markov Models (HMMs) are statistical tools that have been used successfully in the last 30 years to model difficult tasks such as speech recognition [6] or biological sequence analysis [4]. They are very well suited to handle discrete or continuous sequences of varying sizes. Moreover, an efficient training algorithm (EM) is available, as well as an efficient decoding algorithm (Viterbi), which provides the optimal sequence of states (and the corresponding sequence of high-level events) associated with a given sequence of low-level data. On the other hand, multimodal information processing is currently a very challenging framework of applications including multimodal person authentication, multimodal speech recognition, multimodal event analyzers, etc. In that framework, the same sequence of events is represented not only by a single sequence of data but by a series of sequences of data, each of them possibly coming from a different modality: video streams with various viewpoints, audio stream(s), etc.
One such task, which will be presented in this paper, is multimodal speech recognition using both a microphone and a camera recording a speaker simultaneously while he (she) speaks. It is indeed well known that seeing the speaker's face in addition to hearing his (her) voice can often improve speech intelligibility, particularly in noisy environments [7], mainly thanks to the complementarity of the visual and acoustic signals. Previous solutions proposed for this task can be subdivided into two categories [8]: early integration, where both signals are first modified to reach the same frame rate and are then modeled jointly, or late integration, where the signals are modeled separately and are combined later, during decoding. While in the former solution, the alignment between the two sequences is decided a priori, in the latter, there is no explicit learning of the joint probability of the two sequences. An example of late integration is presented in [3], where the authors present a multistream approach where each stream is modeled by a different HMM, while decoding is done on a combined HMM (with various combination approaches proposed). In this paper, we present a novel Asynchronous Hidden Markov Model (AHMM) that can learn the joint probability of pairs of sequences of data representing the same sequence of events, even when the events are not synchronized between the sequences. In fact, the model is able to desynchronize the streams by temporarily stretching one of them in order to obtain a better match between the corresponding frames. The model can thus be directly applied to the problem of audio-visual speech recognition where, for instance, lips sometimes start to move before any sound is heard. The paper is organized as follows: in the next section, the AHMM model is presented, followed by the corresponding EM training and Viterbi decoding algorithms. Related models are then presented and implementation issues are discussed.
Finally, experiments on an audio-visual speech recognition task based on the M2VTS database are presented, followed by a conclusion.

2 The Asynchronous Hidden Markov Model

For the sake of simplicity, let us present here the case where one is interested in modeling the joint probability of 2 asynchronous sequences, denoted x_1^T and y_1^S with S ≤ T without loss of generality (see footnote 1). We are thus interested in modeling p(x_1^T, y_1^S). As it is intractable if we do it directly by considering all possible combinations, we introduce a hidden variable q which represents the state as in the classical HMM formulation, and which is synchronized with the longest sequence. Let N be the number of states. Moreover, in the model presented here, we always emit x_t at time t and sometimes emit y_s at time t. Let us first define

ε(i, t) = P(τ_t = s | τ_{t-1} = s - 1, q_t = i, x_1^t, y_1^s)

as the probability that the system emits the next observation of sequence y at time t while in state i. The additional hidden variable τ_t = s can be seen as the alignment between y and q (and x, which is aligned with q). Hence, we model p(x_1^T, y_1^S, q_1^T, τ_1^T).

2.1 Likelihood Computation

Using classical HMM independence assumptions, a simple forward procedure can be used to compute the joint likelihood of the two sequences, by introducing the following intermediate variable α for each state and each possible alignment between the sequences x and y:

α(i, s, t) = p(q_t = i, τ_t = s, x_1^t, y_1^s)

α(i, s, t) = ε(i, t) p(x_t, y_s | q_t = i) Σ_{j=1}^N P(q_t = i | q_{t-1} = j) α(j, s - 1, t - 1)
           + (1 - ε(i, t)) p(x_t | q_t = i) Σ_{j=1}^N P(q_t = i | q_{t-1} = j) α(j, s, t - 1)   (1)

which is very similar to the corresponding α variable used in normal HMMs (see footnote 2). It can then be used to compute the joint likelihood of the two sequences as follows:

p(x_1^T, y_1^S) = Σ_{i=1}^N p(q_T = i, τ_T = S, x_1^T, y_1^S) = Σ_{i=1}^N α(i, S, T).   (2)

Footnote 1: In fact, we assume that for all pairs of sequences (x, y), the sequence x is always at least as long as the sequence y. If this is not the case, a straightforward extension of the proposed model is then necessary.
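The forward recursion (1) and the sum (2) translate directly into code; in the minimal sketch below, the transition matrix, the emission values, and ε are illustrative placeholders, not trained distributions:

```python
# Sketch of the AHMM forward pass, eqs. (1)-(2). All model quantities here
# (A, pi, eps, p_xy, p_x) are made-up placeholders, not trained distributions.
N, T, S = 2, 4, 3                         # states, len(x), len(y)
A = [[0.7, 0.3], [0.4, 0.6]]              # A[j][i] = P(q_t = i | q_{t-1} = j)
pi = [0.5, 0.5]                           # initial state distribution
eps = 0.5                                 # epsilon(i, t): prob. of also emitting y_s
p_xy = lambda i, t, s: 0.2                # p(x_t, y_s | q_t = i), placeholder
p_x = lambda i, t: 0.5                    # p(x_t | q_t = i), placeholder

# alpha[t][s][i] = p(q_t = i, tau_t = s, x_1..t, y_1..s); t = s = 0 is the start.
alpha = [[[0.0] * N for _ in range(S + 1)] for _ in range(T + 1)]
alpha[0][0] = pi[:]
for t in range(1, T + 1):
    for s in range(S + 1):
        for i in range(N):
            joint = sum(A[j][i] * alpha[t - 1][s - 1][j] for j in range(N)) if s else 0.0
            solo = sum(A[j][i] * alpha[t - 1][s][j] for j in range(N))
            alpha[t][s][i] = eps * p_xy(i, t, s) * joint + (1 - eps) * p_x(i, t) * solo

likelihood = sum(alpha[T][S])             # eq. (2): p(x_1^T, y_1^S)
print(likelihood)
```

Replacing the two sums over j by max operators, and tracking the argmax, turns this same table-fill into the Viterbi decoder of Section 2.2.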
2.2 Viterbi Decoding

Using the same technique and replacing all the sums by max operators, a Viterbi decoding algorithm can be derived in order to obtain the most probable path along the sequence of states and alignments between x and y:

V(i, s, t) = max_{q_1^{t-1}, τ_1^{t-1}} p(q_1^{t-1}, q_t = i, τ_1^{t-1}, τ_t = s, x_1^t, y_1^s)
           = max( ε(i, t) p(x_t, y_s | q_t = i) max_j P(q_t = i | q_{t-1} = j) V(j, s - 1, t - 1),
                  (1 - ε(i, t)) p(x_t | q_t = i) max_j P(q_t = i | q_{t-1} = j) V(j, s, t - 1) )   (3)

The best path is then obtained after having computed V(i, S, T) for the best final state i and backtracking along the best path that could reach it (see footnote 3).

2.3 An EM Training Algorithm

An EM training algorithm can also be derived in the same fashion as in classical HMMs. We here sketch the resulting algorithm, without going into more details (see footnote 4).

Backward Step: Similarly to the forward step based on the α variable used to compute the joint likelihood, a backward variable β can also be derived as follows:

β(i, s, t) = p(x_{t+1}^T, y_{s+1}^S | q_t = i, τ_t = s)   (4)

β(i, s, t) = Σ_{j=1}^N ε(j, t + 1) p(x_{t+1}, y_{s+1} | q_{t+1} = j) P(q_{t+1} = j | q_t = i) β(j, s + 1, t + 1)
           + Σ_{j=1}^N (1 - ε(j, t + 1)) p(x_{t+1} | q_{t+1} = j) P(q_{t+1} = j | q_t = i) β(j, s, t + 1).

Footnote 2: The full derivations are not given in this paper but can be found in the appendix of [1]. Footnote 3: In the case where one is only interested in the best state sequence (no matter the alignment), the solution is then to marginalize over all the alignments during decoding (essentially keeping the sums on the alignments and the max on the state space). This solution has not yet been tested. Footnote 4: See the appendix of [1] for more details.

E-Step: Using both the forward and backward variables, one can compute the posterior probabilities of the hidden variables of the system, namely the posterior on the state when it emits on both sequences, the posterior on the state when it emits on x only, and the posterior on transitions.
Let α¹(i, s, t) be the part of α(i, s, t) where state i emits on y at time t:

α¹(i, s, t) = ε(i, t) p(x_t, y_s | q_t = i) Σ_{j=1}^N P(q_t = i | q_{t-1} = j) α(j, s - 1, t - 1)   (5)

and similarly, let α⁰(i, s, t) be the part of α(i, s, t) where state i does not emit on y at time t:

α⁰(i, s, t) = (1 - ε(i, t)) p(x_t | q_t = i) Σ_{j=1}^N P(q_t = i | q_{t-1} = j) α(j, s, t - 1).   (6)

Then the posterior on state i when it emits joint observations of sequences x and y is

P(q_t = i, τ_t = s, τ_{t-1} = s - 1 | x_1^T, y_1^S) = α¹(i, s, t) β(i, s, t) / p(x_1^T, y_1^S),   (7)

the posterior on state i when it emits the next observation of sequence x only is

P(q_t = i, τ_t = s, τ_{t-1} = s | x_1^T, y_1^S) = α⁰(i, s, t) β(i, s, t) / p(x_1^T, y_1^S),   (8)

and the posterior on the transition between states i and j is

P(q_t = i, q_{t-1} = j | x_1^T, y_1^S) = [P(q_t = i | q_{t-1} = j) / p(x_1^T, y_1^S)]
  · Σ_{s=0}^S [ α(j, s - 1, t - 1) p(x_t, y_s | q_t = i) ε(i, t) β(i, s, t)
              + α(j, s, t - 1) p(x_t | q_t = i) (1 - ε(i, t)) β(i, s, t) ].   (9)

M-Step: The Maximization step is performed exactly as in normal HMMs: when the distributions are modeled by exponential functions such as Gaussian Mixture Models, then an exact maximization can be performed using the posteriors. Otherwise, a Generalized EM is performed by gradient ascent, back-propagating the posteriors through the parameters of the distributions.

3 Related Models

The present AHMM model is related to the Pair HMM model [4], which was proposed to search for the best alignment between two DNA sequences. It was thus designed and used mainly for discrete sequences. Moreover, the architecture of the Pair HMM model is such that a given state is designed to always emit either one OR two vectors, while in the proposed AHMM model each state can emit either one or two vectors, depending on ε(i, t), which is learned. In fact, when ε(i, t) is deterministic and solely depends on i, we can indeed recover the Pair HMM model by slightly transforming the architecture. It is also very similar to the asynchronous version of Input/Output HMMs [2], which was proposed for speech recognition applications.
The main difference here is that in AHMMs both sequences are considered as output, while in Asynchronous IOHMMs one of the sequences (the shorter one, the output) is conditioned on the other one (the input). The resulting Viterbi decoding algorithm is thus different since in Asynchronous IOHMMs one of the sequences, the input, is known during decoding, which is not the case in AHMMs.

4 Implementation Issues

4.1 Time and Space Complexity

The proposed algorithms (either training or decoding) have a complexity of O(N²ST) where N is the number of states (and assuming the worst case with ergodic connectivity), S is the length of sequence y and T is the length of sequence x. This can become quickly intractable if both x and y are longer than, say, 1000 frames. It can however be shortened when a priori knowledge is available about possible alignments between x and y. For instance, one can force the alignment between x_t and y_s to be such that |t - (T/S)·s| < k where k is a constant representing the maximum stretching allowed between x and y, which should not depend on S nor T. In that case, the complexity (both in time and space) becomes O(N²Tk), which is k times the usual HMM training/decoding complexity.

4.2 Distributions to Model

In order to implement this system, we thus need to model the following distributions:
• P(q_t = i | q_{t-1} = j): the transition distribution, as in normal HMMs;
• p(x_t | q_t = i): the emission distribution in the case where only x is emitted, as in normal HMMs;
• p(x_t, y_s | q_t = i): the emission distribution in the case where both sequences are emitted. This distribution could be implemented in various forms, depending on the assumptions made on the data:
  - x and y are independent given state i: p(x_t, y_s | q_t = i) = p(x_t | q_t = i) p(y_s | q_t = i)   (10)
  - y is conditioned on x: p(x_t, y_s | q_t = i) = p(y_s | x_t, q_t = i) p(x_t | q_t = i)   (11)
  - the joint probability is modeled directly, possibly forcing some common parameters from p(x_t | q_t = i) and p(x_t, y_s | q_t = i) to be shared.
In the experiments described later in the paper, we have chosen the latter implementation, with no sharing except during initialization;
• ε(i, t) = P(τ_t = s | τ_{t-1} = s - 1, q_t = i, x_1^t, y_1^s): the probability to emit on sequence y at time t in state i. Under various assumptions, this probability could be represented as independent of i, of s, or of x_t and y_s. In the experiments described later in the paper, we have chosen the latter implementation.

5 Experiments

Audio-visual speech recognition experiments were performed using the M2VTS database [5], which contains 185 recordings of 37 subjects, each containing acoustic and video signals of the subject pronouncing the French digits from zero to nine. The video consisted of 286x360 pixel color images with a 25 Hz frame rate, while the audio was recorded at 48 kHz using 16 bit PCM coding. Although the M2VTS database is one of the largest databases of its type, it is still relatively small compared to reference audio databases used in speech recognition. Hence, in order to increase the significance level of the experimental results, a 5-fold cross-validation method was used. Note that all the subjects always pronounced the same sequence of words but this information was not used during recognition (see footnote 5). The audio data was down-sampled to 8 kHz and every 10 ms a vector of 16 MFCC coefficients and their first derivatives, as well as the derivative of the log energy, was computed, for a total of 33 features. Each image of the video stream was coded using 12 shape features and 12 intensity features, as described in [3]. The first derivative of each of these features was also computed, for a total of 48 features. The HMM topology was as follows: we used left-to-right HMMs for each word of the vocabulary, which consisted of the following 11 words: zero, un, deux, trois, quatre, cinq, six, sept, huit, neuf, silence. Each model had between 3 and 9 states including non-emitting begin and end states.
In each emitting state, there were 3 distributions: p(x_t | q_t), the emission distribution of audio-only data, which consisted of a Gaussian mixture of 10 Gaussians (of dimension 33), p(x_t, y_s | q_t), the joint emission distribution of audio and video data, which also consisted of a Gaussian mixture of 10 Gaussians (of dimension 33 + 48 = 81), and ε(i, t), the probability that the system should emit on the video sequence, which was implemented for these preliminary experiments as a simple table. Training was done using the EM algorithm described in the paper. However, in order to keep the computational time tractable, a constraint was imposed on the alignment between the audio and video streams: we did not consider alignments where audio and video information were farther than 0.5 second from each other. Comparisons were made between the AHMM (taking into account audio and video) and a normal HMM taking into account either the audio or the video only. We also compared the model with a normal HMM trained on both audio and video streams manually synchronized (each frame of the video stream was repeated in multiple copies in order to reach the same rate as the audio stream). Moreover, in order to show the interest of robust multimodal speech recognition, we injected various levels of noise in the audio stream during decoding (training was always done using clean audio). The noise was taken from the Noisex database [9], and was injected in order to reach signal-to-noise ratios of 10 dB, 5 dB and 0 dB. Note that all the hyper-parameters of these systems, such as the number of Gaussians in the mixtures, the number of EM iterations, or the minimum value of the variances of the Gaussians, were not tuned using the M2VTS dataset. They were taken from a model previously trained on a different task, Numbers'95. Figure 1 and Table 1 present the results.
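The 0.5 s restriction above is an instance of the banded alignment of Section 4.1; a small sketch (using the band form |t - (T/S)·s| < k, our reading of that section, with illustrative lengths) shows how much of the (t, s) lattice such a band keeps:

```python
# Sketch: a band constraint |t - (T/S)*s| < k keeps O(T*k) alignment cells
# instead of the full O(S*T) lattice. Lengths and k below are illustrative.
def lattice_cells(T, S, k=None):
    """Count (t, s) alignment cells; with k set, keep only the band."""
    return sum(1
               for t in range(1, T + 1)
               for s in range(S + 1)
               if k is None or abs(t - (T / S) * s) < k)

T, S = 1000, 500
full = lattice_cells(T, S)           # 1000 * 501 cells
banded = lattice_cells(T, S, k=50)   # roughly k cells of s per value of t
print(full, banded)
```

The same pruning applies identically to the forward, backward, and Viterbi tables, which is why both time and space drop to O(N²Tk).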
As it can be seen, the AHMM yielded better results as soon as the noise level was significant (for clean data, the performance using the audio stream only was almost perfect, hence no enhancement was expected). Moreover, it never deteriorated significantly (using a 95% confidence interval) below the level of the video stream, no matter the level of noise in the audio stream.

Footnote 5: Nevertheless, it can be argued that transitions between words could have been learned using the training data.

Figure 1: Word Error Rates (in percent, the lower the better) of various systems under various noise conditions during decoding (from 15 to 0 dB additive noise). The proposed model is the AHMM using both audio and video streams.

Model             | 15 dB        | 10 dB        | 5 dB         | 0 dB
audio HMM         | 2.9 (± 2.4)  | 11.9 (± 4.7) | 38.7 (± 7.1) | 79.1 (± 5.9)
audio+video HMM   | 21.5 (± 6.0) | 28.1 (± 6.5) | 35.3 (± 6.9) | 45.4 (± 7.2)
audio+video AHMM  | 4.8 (± 3.1)  | 11.4 (± 4.6) | 22.3 (± 6.0) | 41.1 (± 7.1)

Table 1: Word Error Rates (WER, in percent, the lower the better) and corresponding 95% Confidence Intervals (CI, in parentheses) of various systems under various noise conditions during decoding (from 15 to 0 dB additive noise). The proposed model is the AHMM using both audio and video streams. An HMM using the clean video data only obtains 39.6% WER (± 7.1).

An interesting side effect of the model is to provide an optimal alignment between the audio and the video streams. Figure 2 shows the alignment obtained while decoding sequence cd01 on data corrupted with 10 dB Noisex noise. It shows that the rate between video and audio is far from being constant (it would have followed the stepped line) and hence computing the joint probability using the AHMM appears more informative than using a naive alignment and a normal HMM.
6 Conclusion

In this paper, we have presented a novel asynchronous HMM architecture to handle multiple sequences of data representing the same sequence of events. The model was inspired by two other well-known models, namely Pair HMMs and Asynchronous IOHMMs. An EM training algorithm was derived as well as a Viterbi decoding algorithm, and speech recognition experiments were performed on a multimodal database, yielding significant improvements on noisy audio data. Various propositions were made to implement the model but only the simplest ones were tested in this paper. Other solutions should thus be investigated soon. Moreover, other applications of the model should also be investigated, such as multimodal authentication.

Figure 2: Alignment obtained by the model between video and audio streams on sequence cd01 corrupted with a 10 dB Noisex noise. The vertical lines show the obtained segmentation between the words. The stepped line represents a constant alignment.

Acknowledgments This research has been partially carried out in the framework of the European project LAVA, funded by the Swiss OFES project number 01.0412. The Swiss NCCR project IM2 has also partly funded this research. The author would like to thank Stephane Dupont for providing the extracted visual features and the experimental protocol used in the paper.

References [1] S. Bengio. An asynchronous hidden markov model for audio-visual speech recognition. Technical Report IDIAP-RR 02-26, IDIAP, 2002. [2] S. Bengio and Y. Bengio. An EM algorithm for asynchronous input/output hidden markov models. In Proceedings of the International Conference on Neural Information Processing, ICONIP, Hong Kong, 1996. [3] S. Dupont and J. Luettin. Audio-visual speech modelling for continuous speech recognition. IEEE Transactions on Multimedia, 2:141-151, 2000. [4] R. Durbin, S. Eddy, A. Krogh, and G. Mitchison. Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids.
Cambridge University Press, 1998. [5] S. Pigeon and L. Vandendorpe. The M2VTS multimodal face database (release 1.00). In Proceedings of the First International Conference on Audio- and Video-based Biometric Person Authentication, AVBPA, 1997. [6] Lawrence R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-286, 1989. [7] W. H. Sumby and I. Pollack. Visual contributions to speech intelligibility in noise. Journal of the Acoustical Society of America, 26:212-215, 1954. [8] A. Q. Summerfield. Lipreading and audio-visual speech perception. Philosophical Transactions of the Royal Society of London, Series B, 335:71-78, 1992. [9] A. Varga, H.J.M. Steeneken, M. Tomlinson, and D. Jones. The NOISEX-92 study on the effect of additive noise on automatic speech recognition. Technical report, DRA Speech Research Unit, 1992.
2002
Automatic Derivation of Statistical Algorithms: The EM Family and Beyond Alexander G. Gray Carnegie Mellon University agray@cs.cmu.edu Bernd Fischer and Johann Schumann RIACS / NASA Ames {fisch,schumann}@email.arc.nasa.gov Wray Buntine Helsinki Institute for IT buntine@hiit.fi Abstract Machine learning has reached a point where many probabilistic methods can be understood as variations, extensions and combinations of a much smaller set of abstract themes, e.g., as different instances of the EM algorithm. This enables the systematic derivation of algorithms customized for different models. Here, we describe the AUTOBAYES system which takes a high-level statistical model specification, uses powerful symbolic techniques based on schema-based program synthesis and computer algebra to derive an efficient specialized algorithm for learning that model, and generates executable code implementing that algorithm. This capability is far beyond that of code collections such as Matlab toolboxes or even tools for model-independent optimization such as BUGS for Gibbs sampling: complex new algorithms can be generated without new programming, algorithms can be highly specialized and tightly crafted for the exact structure of the model and data, and efficient and commented code can be generated for different languages or systems. We present automatically-derived algorithms ranging from closed-form solutions of Bayesian textbook problems to recently-proposed EM algorithms for clustering, regression, and a multinomial form of PCA. 1 Automatic Derivation of Statistical Algorithms Overview. We describe a symbolic program synthesis system which works as a "statistical algorithm compiler:" it compiles a statistical model specification into a custom algorithm design and from that further down into a working program implementing the algorithm design.
This system, AUTOBAYES, can be loosely thought of as “part theorem prover, part Mathematica, part learning textbook, and part Numerical Recipes.” It provides much more flexibility than a fixed code repository such as a Matlab toolbox, and allows the creation of efficient algorithms which have never before been implemented, or even written down. AUTOBAYES is intended to automate the more routine application of complex methods in novel contexts. For example, recent multinomial extensions to PCA [2, 4] can be derived in this way. The algorithm design problem. Given a dataset and a task, creating a learning method can be characterized by two main questions: 1. What is the model? 2. What algorithm will optimize the model parameters? The statistical algorithm (i.e., a parameter optimization algorithm for the statistical model) can then be implemented manually. The system in this paper answers the algorithm question given that the user has chosen a model for the data,and continues through to implementation. Performing this task at the state-of-the-art level requires an intertwined meld of probability theory, computational mathematics, and software engineering. However, a number of factors unite to allow us to solve the algorithm design problem computationally: 1. The existence of fundamental building blocks (e.g., standardized probability distributions, standard optimization procedures, and generic data structures). 2. The existence of common representations (i.e., graphical models [3, 13] and program schemas). 3. The formalization of schema applicability constraints as guards.1 The challenges of algorithm design. The design problem has an inherently combinatorial nature, since subparts of a function may be optimized recursively and in different ways. It also involves the use of new data structures or approximations to gain performance. 
As the research in statistical algorithms advances, its creative focus should move beyond the ultimately mechanical aspects and towards extending the abstract applicability of already existing schemas (algorithmic principles like EM), improving schemas in ways that generalize across anything they can be applied to, and inventing radically new schemas.

2 Combining Schema-based Synthesis and Bayesian Networks

Statistical Models. Externally, AUTOBAYES has the look and feel of a compiler. Users specify their model of interest in a high-level specification language (as opposed to a programming language). The figure shows the specification of the mixture of Gaussians example used throughout this paper (see footnote 2):

 1 model mog as 'Mixture of Gaussians';
 2 const int n_points as 'nr. of data points'
 3   with 0 < n_points;
 4 const int n_classes := 3 as 'nr. classes'
 5   with 0 < n_classes
 6   with n_classes << n_points;
 7 double phi(1..n_classes) as 'weights'
 8   with 1 = sum(I := 1..n_classes, phi(I));
 9 double mu(1..n_classes);
10 double sigma(1..n_classes);
11 int c(1..n_points) as 'class labels';
12 c ~ disc(vec(I := 1..n_classes, phi(I)));
13 data double x(1..n_points) as 'data';
14 x(I) ~ gauss(mu(c(I)), sigma(c(I)));
15 max pr(x | {phi, mu, sigma}) wrt {phi, mu, sigma};

Note the constraint that the sum of the class probabilities must equal one (line 8) along with others (lines 3 and 5) that make optimization of the model well-defined. Also note the ability to specify assumptions of the kind in line 6, which may be used by some algorithms. The last line specifies the goal inference task: maximize the conditional probability pr(x | {phi, mu, sigma}) with respect to the parameters phi, mu, and sigma. Note that moving the parameters across to the left of the conditioning bar converts this from a maximum likelihood to a maximum a posteriori problem. Computational logic and theorem proving. Internally, AUTOBAYES uses a class of techniques known as computational logic which has its roots in automated theorem proving.
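For a specification like the one above, the derived algorithm is essentially EM for a mixture of Gaussians; a hand-written sketch of that target algorithm on synthetic 1-D data (our own illustration, not AUTOBAYES output):

```python
import math
import random

# Sketch: the EM algorithm that a mixture-of-Gaussians specification compiles
# to, run on synthetic 1-D data from two well-separated components.
random.seed(1)
x = ([random.gauss(0.0, 1.0) for _ in range(200)]
     + [random.gauss(8.0, 1.0) for _ in range(200)])

K = 2
phi = [0.5, 0.5]                  # class weights, constrained to sum to 1
mu = [min(x), max(x)]             # crude initialization
sigma = [1.0, 1.0]

def gauss_pdf(v, m, s):
    return math.exp(-0.5 * ((v - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

for _ in range(30):
    # E-step: posterior responsibility of each class for each point
    r = []
    for v in x:
        w = [phi[k] * gauss_pdf(v, mu[k], sigma[k]) for k in range(K)]
        z = sum(w)
        r.append([wk / z for wk in w])
    # M-step: re-estimate weights, means, and standard deviations
    for k in range(K):
        nk = sum(ri[k] for ri in r)
        phi[k] = nk / len(x)
        mu[k] = sum(ri[k] * v for ri, v in zip(r, x)) / nk
        sigma[k] = math.sqrt(sum(ri[k] * (v - mu[k]) ** 2
                                 for ri, v in zip(r, x)) / nk)

print(sorted(mu))  # means recovered near 0.0 and 8.0
```

AUTOBAYES's point is that this loop, including the E- and M-step formulas, is derived symbolically from the specification rather than written by hand.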
AUTOBAYES begins with an initial goal and a set of initial assertions, or axioms, and adds new assertions, or theorems, by repeated application of the axioms, until the goal is proven. In our context, the goal is given by the input model; the derived algorithms are side effects of constructive theorems proving the existence of algorithms for the goal. Footnote 1: Schema guards vary widely; for example, compare Nelder-Mead simplex or simulated annealing (which require only function evaluation), conjugate gradient (which requires both Jacobian and Hessian), and EM and its variational extension [6] (which require a latent-variable structure model). Footnote 2: Here, keywords have been underlined and line numbers have been added for reference in the text. The as-keyword allows annotations to variables which end up in the generated code's comments. Also, n_classes has been set to three (line 4), while n_points is left unspecified. The class variable and single data variable are vectors, which defines them as i.i.d. Computer algebra. The first core element which makes automatic algorithm derivation feasible is the fact that we can mechanize the required symbol manipulation, using computer algebra methods. General symbolic differentiation and expression simplification are capabilities fundamental to our approach. AUTOBAYES contains a computer algebra engine using term rewrite rules which are an efficient mechanism for substitution of equal quantities or expressions and thus well-suited for this task (see footnote 3). Schema-based synthesis. The computational cost of full-blown theorem proving grinds simple tasks to a halt while elementary and intermediate facts are reinvented from scratch. To achieve the scale of deduction required by algorithm derivation, we thus follow a schema-based synthesis technique which breaks away from strict theorem proving. Instead, we formalize high-level domain knowledge, such as the general EM strategy, as schemas.
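A term-rewriting simplifier of this kind can be sketched in a few lines (the term representation and rules below are illustrative, not AUTOBAYES's actual engine):

```python
# Sketch: expression simplification by exhaustively applying rewrite rules.
# Terms are nested tuples ('+', a, b) / ('*', a, b), numbers, or symbol names.
RULES = [
    (lambda t: t[0] == '*' and t[1] == 1, lambda t: t[2]),       # 1*x -> x
    (lambda t: t[0] == '*' and t[2] == 1, lambda t: t[1]),       # x*1 -> x
    (lambda t: t[0] == '*' and 0 in (t[1], t[2]), lambda t: 0),  # 0*x -> 0
    (lambda t: t[0] == '+' and t[1] == 0, lambda t: t[2]),       # 0+x -> x
    (lambda t: t[0] == '+' and t[2] == 0, lambda t: t[1]),       # x+0 -> x
]

def rewrite(term):
    """Normalize a term: rewrite subterms first, then apply rules to fixpoint."""
    if not isinstance(term, tuple):
        return term
    term = (term[0],) + tuple(rewrite(arg) for arg in term[1:])
    for guard, action in RULES:
        if guard(term):
            return rewrite(action(term))
    return term

expr = ('+', ('*', 1, 'x'), ('*', 'y', 0))  # 1*x + y*0
print(rewrite(expr))  # 'x'
```

Guarded rules of exactly this shape (applicability test plus replacement) are also how the higher-level schemas, like EM itself, are organized.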
A schema combines a generic code fragment with explicitly specified preconditions which describe the applicability of the code fragment. The second core element which makes automatic algorithm derivation feasible is the fact that we can use Bayesian networks to efficiently encode the preconditions of complex algorithms such as EM.

First-order logic representation of Bayesian networks. A first-order logic representation of Bayesian networks was developed by Haddawy [7]. In this framework, random variables are represented by functor symbols and indexes (i.e., specific instances of i.i.d. vectors) are represented as functor arguments. Since unknown index values can be represented by implicitly universally quantified Prolog variables, this approach allows a compact encoding of networks involving i.i.d. variables or plates [3]; the figure shows the initial network for our running example.

[Figure: Bayesian network for the running example: phi -> c (discrete, plate of size N_points) -> x (gauss), with mu and sigma (plates of size N_classes) also parents of x.]

Moreover, such networks correspond to backtrack-free datalog programs, allowing the dependencies to be efficiently computed. We have extended the framework to work with non-ground probability queries since we seek to determine probabilities over entire i.i.d. vectors and matrices. Tests for independence on these indexed Bayesian networks are easily developed in Lauritzen's framework which uses ancestral sets and set separation [9] and is more amenable to a theorem prover than the double negatives of the more widely known d-separation criteria. Given a Bayesian network, some probabilities can easily be extracted by enumerating the component probabilities at each node:

Lemma 1. Let U, V be disjoint sets of variables over a Bayesian network. Then V ∩ descendents(U) = ∅ and parents(U) ⊆ U ∪ V hold4 in the corresponding dependency graph iff the following probability statement holds:

    Pr(U | V) = ∏_{u ∈ U} Pr(u | parents(u))

Symbolic probabilistic inference. How can probabilities not satisfying these conditions be converted to symbolic expressions?
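The graph-side conditions of a lemma like this are cheap to check mechanically. The sketch below is our own illustration of such a test on a DAG given as child lists (helper names are ours; the set names U, V follow the lemma), run on the running example's network:

```python
def descendents(graph, nodes):
    """All strict descendents of `nodes` in a DAG given as child lists."""
    seen, stack = set(), list(nodes)
    while stack:
        for child in graph.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

def parents(graph, nodes):
    """All parents of `nodes` that lie outside `nodes` itself."""
    return {p for p, cs in graph.items() for c in cs if c in nodes} - set(nodes)

def factors_locally(graph, U, V):
    """Graph-side test: does Pr(U|V) factor into local conditionals?"""
    U, V = set(U), set(V)
    return (not U & V
            and not V & descendents(graph, U)
            and parents(graph, U) <= U | V)

# Mixture-of-Gaussians network: phi -> c, and (c, mu, sigma) -> x
net = {'phi': ['c'], 'c': ['x'], 'mu': ['x'], 'sigma': ['x']}
ok = factors_locally(net, U={'x'}, V={'c', 'mu', 'sigma'})  # True
```

Conditioning on a descendent (e.g., U = {c}, V = {x}) fails the test, matching the intuition that evidence below a node breaks the local factorization.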
While many general schemes for inference on networks exist, our principal hurdle is the need to perform this over symbolic expressions incorporating real and integer variables from disparate real or infinite-discrete distributions. For instance, we might wish to compute the full maximum a posteriori probability for the mean and variance vectors of a Gaussian mixture model under a Bayesian framework. While the sum-product framework of [8] is perhaps closer to our formulation, we have out of necessity developed another scheme that lets us extract probabilities on a large class of mixed discrete and real, potentially indexed variables, where no integrals are needed and all marginalization is done by summing out discrete variables. We give the non-indexed case below; this is readily extended to indexed variables (i.e., vectors).

3Popular symbolic packages such as Mathematica contain known errors allowing unsound derivations; they also lack the support for reasoning with vector and matrix quantities.

4Note that u ∉ descendents(u) and u ∉ parents(u).

Lemma 2. V ∩ descendents(U) = ∅ holds and ancestors(U ∪ V) is independent of U given V iff there exists a set of variables W such that Lemma 1 holds if we replace U by U ∪ W. Moreover, the unique minimal set W satisfying these conditions is given by

    W = (ancestors(U) ∪ ancestors(V)) \ (U ∪ V)

Lemma 3. Let V′ be a subset of V ∩ descendents(U) such that ancestors(V′) is independent of ancestors(V \ V′) given U. Then Lemma 2 holds if we replace U by U ∪ V′ and V by V \ V′. Moreover, there is a unique maximal set V′ satisfying these conditions.

Lemma 2 lets us evaluate a probability by a summation:

    Pr(U | V) = ∑_{w ∈ Dom(W)} ∏_{v ∈ U ∪ W} Pr(v | parents(v))

while Lemma 3 lets us evaluate a probability by a summation and a ratio:

    Pr(U | V) = Pr(U ∪ V′ | V \ V′) / Pr(V′ | V \ V′)

Since the lemmas also show minimality of the sets W and V′, they also give the minimal conditions under which a probability can be evaluated by discrete summation without integration. These inference lemmas are operationalized as network decomposition schemas.
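In code, the summation these lemmas license is just an enumeration over the hidden variables' finite domains. A small sketch (ours; plain callables stand in for the local conditionals Pr(v | parents(v))):

```python
from itertools import product

def marginal(factors, hidden_domains, assignment):
    """Sum prod_k factor_k(assignment + w) over all joint values w
    of the hidden discrete set W, as in the summation lemma."""
    names = list(hidden_domains)
    total = 0.0
    for values in product(*(hidden_domains[n] for n in names)):
        full = dict(assignment, **dict(zip(names, values)))
        p = 1.0
        for f in factors:
            p *= f(full)
        total += p
    return total

# Two-class toy model: factors Pr(c) and Pr(x=1 | c); marginalize out c.
phi = [0.4, 0.6]
lik = {0: 0.9, 1: 0.2}
factors = [lambda a: phi[a['c']], lambda a: lik[a['c']]]
px = marginal(factors, {'c': [0, 1]}, {})   # 0.4*0.9 + 0.6*0.2 = 0.48
```

No integration appears anywhere: because the only hidden quantities are discrete, the evaluation is a finite sum of products.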
However, we usually attempt to decompose a probability into independent components before applying this schema.

3 The AUTOBAYES System — Implementation Outline

Levels of representation. Internally, our system uses three conceptually different levels of representation. Probabilities (including logarithmic and conditional probabilities) are the most abstract level. They are processed via methods for Bayesian network decomposition or matched with core algorithms such as EM. Formulae are introduced when probabilities of the form Pr(u | parents(u)) are detected, either in the initial network, or after the application of network decompositions. Atomic probabilities (i.e., those where U is a single variable) are directly replaced by formulae based on the given distribution and its parameters. General probabilities are decomposed into sums and products of the respective atomic probabilities. Formulae are ready for immediate optimization using symbolic or numeric methods, but sometimes they can be decomposed further into independent subproblems. Finally, we use imperative intermediate code as the lowest level to represent both program fragments within the schemas as well as the completely constructed programs. All transformations we apply operate on or between these levels.

Transformations for optimization. A number of different kinds of transformations are available. Decomposition of a problem into independent subproblems is always done. Decomposition of probabilities is driven by the Bayesian network; we have a separate system for handling decomposition of formulae. A formula can be decomposed along a loop, e.g., the problem "optimize ∑_i f(x_i) for x_1, ..., x_n" is transformed into a for-loop over the subproblems "optimize f(x_i) for x_i." More commonly, "optimize f(x) + g(y) for x, y" is transformed into the two subprograms "optimize f(x) for x" and "optimize g(y) for y." The lemmas given earlier are applied to change the level of representation and are thus used for simplification of probabilities.
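The additive split is the workhorse decomposition. A toy numeric analogue (ours; a crude grid search stands in for the symbolically solved subprograms) shows the two generated subproblems running independently:

```python
def argmax_1d(h, grid):
    """Stand-in for a generated 1-D optimizer."""
    return max(grid, key=h)

f = lambda x: -(x - 2.0) ** 2          # touches only x
g = lambda y: -(y + 1.0) ** 2          # touches only y

grid = [i / 10.0 for i in range(-50, 51)]
x_best = argmax_1d(f, grid)            # subprogram "optimize f(x) for x"
y_best = argmax_1d(g, grid)            # subprogram "optimize g(y) for y"
# (x_best, y_best) == (2.0, -1.0), the joint optimum of f(x) + g(y)
```

Because neither term mentions the other's variable, solving the two subproblems separately is exact, not an approximation.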
Examples of general expression simplification include simplifying the log of a formula, moving a summation inwards, and so on. When necessary, symbolic differentiation is performed. In the initial specification or in intermediate representations, likelihoods (i.e., subexpressions of the form ∏_I Pr(x(I) | ...)) are identified and simplified into linear expressions with terms such as mean(x) and mean(x²). The statistical algorithm schemas currently implemented include EM, k-means, and discrete model selection. Adding a Gibbs sampling schema would yield functionality comparable to that of BUGS [14]. Usually, the schemas require a particular form of the probabilities involved; they are thus tightly coupled to the decomposition and simplification transformations. For example, EM is a way of dealing with situations where Lemma 2 applies but where the hidden set W is indexed identically to the data.

Code and test generation. From the intermediate code, code in a particular target language may be generated. Currently, AUTOBAYES can generate C++ and C which can be used in a stand-alone fashion or linked into Octave or Matlab (as a mex file). During this code-generation phase, most of the vector and matrix expressions are converted into for-loops, and various code optimizations are performed which are impossible for a standard compiler. Our tool not only generates efficient code but also highly readable, documented programs: model- and algorithm-specific comments are generated automatically during the synthesis phase. For most examples, roughly 30% of the produced lines are comments. These comments provide an explanation of the algorithm's derivation. A generated HTML software design document with navigation capabilities facilitates code understanding and reading. AUTOBAYES also automatically generates a program for sampling from the specified model, so that closed-loop testing with synthetic data of the assumed distributions can be done, using simple forward sampling.
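The reduction to sufficient statistics mentioned above is what makes closed-form solutions possible: a Gaussian fit, for instance, depends on the data only through mean(x) and mean(x²). A sketch (ours):

```python
import math

def gauss_mle(xs):
    """Closed-form Gaussian MLE from the two sufficient statistics."""
    n = len(xs)
    m1 = sum(xs) / n                    # mean(x)
    m2 = sum(v * v for v in xs) / n     # mean(x^2)
    mu = m1
    sigma = math.sqrt(m2 - m1 * m1)     # sqrt of mean(x^2) - mean(x)^2
    return mu, sigma

mu, sigma = gauss_mle([1.0, 2.0, 3.0, 4.0])   # mu = 2.5, sigma^2 = 1.25
```

Once the log-likelihood has been simplified to a linear expression in m1 and m2, the symbolic solver can emit exactly this kind of two-pass code instead of an iterative optimizer.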
4 Example: Deriving the EM Algorithm for Gaussian Mixtures

1. User specifies model. First, the user specifies the model as shown in Section 2.

2. System parses model to obtain underlying Bayes net. From the model, the underlying Bayesian network is derived and represented internally as a directed graph. For visualization, AUTOBAYES can also produce a graph drawing as shown in Section 2.

3. System observes hidden-variable structure in Bayesian network. The system attempts to decompose the optimization goal into independent parts, but finds that it cannot. However, it then finds that the probability in the initial optimization statement matches the conditions of Lemma 2 and that the network describes a latent variable model.

4. System invokes abstract EM family schema. This triggers the EM schema, whose overall structure is:

    schema em(max Pr(x | theta) wrt theta) ->
      C = "[initialize q];
           while converging(theta) {
             /* M-step */ max Pr(x, c | theta) wrt theta;
             /* E-step */ calculate Pr(c | x, theta);
           }"

The syntactic structure of the current subproblem must match the first argument of the schema; if additional applicability constraints (not shown here) hold, this schema is executed. It constructs a piece of code which is returned in the variable C. This code fragment can contain recursive calls to other schemas (denoted by the embedded max ... wrt ... goals) which return code for subproblems that is then inserted into the schema; one example is converging, a generic convergence criterion, here imposed over the variables phi, mu, sigma. Note that the schema actually implements an ME-algorithm (i.e., starts the loop with the M-step) because the initialization already serves as an E-step. The system identifies the discrete variable c as the single hidden variable. For representation of the distribution of the hidden variable a matrix q is generated, where q(I, J) is the probability that the I-th point falls into the J-th class.
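The schema's control structure translates directly into a higher-order function. The sketch below is our Python rendering (not generated code): the E- and M-steps are parameters, and, as in the schema, the loop enters with the M-step because initialization doubles as the first E-step. It is exercised on a toy one-parameter mixture of two fixed Bernoulli components.

```python
def em_schema(theta, e_step, m_step, tol=1e-10, max_iter=500):
    """Generic EM loop; theta is a tuple of parameters."""
    q = e_step(theta)                   # initialization serves as an E-step,
    for _ in range(max_iter):           # so the loop starts with the M-step
        new_theta = m_step(q)
        q = e_step(new_theta)
        if max(abs(a - b) for a, b in zip(theta, new_theta)) < tol:
            return new_theta, q
        theta = new_theta
    return theta, q

# Toy model: x ~ w * Bern(0.8) + (1 - w) * Bern(0.2); estimate w.
data = [1, 1, 1, 0, 1, 0, 1, 1]
p = [0.2, 0.8]

def e_step(theta):
    (w,) = theta
    q = []
    for x in data:
        l0 = (1 - w) * (p[0] if x else 1 - p[0])
        l1 = w * (p[1] if x else 1 - p[1])
        q.append(l1 / (l0 + l1))        # responsibility of component 1
    return q

def m_step(q):
    return (sum(q) / len(q),)           # closed-form weight update

theta, _ = em_schema((0.5,), e_step, m_step)
```

For this data the maximum-likelihood weight is 11/12, and the loop converges to it; only the two plugged-in callables change from model to model.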
AUTOBAYES then constructs the new distribution

    c(I) ~ disc(vec(J := 1..n_classes, q(I, J)))

which replaces the original distribution in the following recursive calls of AUTOBAYES.

5. E-step: System performs marginalization. The freshly introduced distribution for c(I) implies that c(I) can be eliminated from the objective function by summing over q(I, J). This gives us the partial program shown in the internal pseudocode:

    while converging(phi, mu, sigma) {
      for I := 1..n_points
        for J := 1..n_classes
          q(I, J) := Pr(c(I) = J | x(I), phi, mu, sigma);
      max sum(I := 1..n_points, sum(J := 1..n_classes,
          q(I, J) * log Pr(x(I), c(I) = J | phi, mu, sigma))) wrt {phi, mu, sigma};
    }

6. M-step: System recursively decomposes optimization problem. AUTOBAYES is recursively called with the new goal max Pr(x, c | phi, mu, sigma) wrt {phi, mu, sigma}. Now, the Bayesian network decomposition schema applies, revealing that {mu, sigma} is independent of phi given c; thus the optimization problem can be decomposed into two optimization subproblems: max Pr(x | c, mu, sigma) wrt {mu, sigma} and max Pr(c | phi) wrt phi.

7. System unrolls i.i.d. vectors. The first subgoal from the decomposition schema, max Pr(x | c, mu, sigma) wrt {mu, sigma}, can be unrolled over the independent and identically distributed vector x using an index decomposition schema which moves expressions out of loops (sums or products) when they are not dependent on the loop index. Since c and x are co-indexed, unrolling proceeds over both (also independent and identically distributed) vectors in parallel: max ∏_{I=1..n_points} Pr(x(I) | c(I), mu, sigma) wrt {mu, sigma}.

8. System identifies and solves Gaussian elimination problem. The probability Pr(x(I) | c(I), mu, sigma) is atomic because parents(x(I)) = {c(I), mu, sigma}. It can thus be replaced by the appropriately instantiated Gaussian density function.
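Concretely, the generated E-step fills q row by row from Bayes' rule. A sketch (ours) for the 1-D mixture of the running example:

```python
import math

def gauss_pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def e_step(xs, phi, mu, sigma):
    """q[i][j] = Pr(c(i) = j | x(i), phi, mu, sigma)."""
    q = []
    for x in xs:
        w = [phi[j] * gauss_pdf(x, mu[j], sigma[j]) for j in range(len(phi))]
        z = sum(w)                      # normalizer: Pr(x(i) | phi, mu, sigma)
        q.append([wj / z for wj in w])
    return q

q = e_step([-5.0, 0.1, 4.9], phi=[1/3, 1/3, 1/3],
           mu=[-5.0, 0.0, 5.0], sigma=[1.0, 1.0, 1.0])
```

Each row of q sums to one, and a point near a component mean is assigned to that component with near certainty.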
Because the strictly monotone log function can first be applied to the objective function of the maximization, it becomes

    max ∑_{I=1..n_points} ∑_{J=1..n_classes} q(I, J) · ( −(x(I) − mu(J))² / (2 sigma(J)²) − log √(2π) − log sigma(J) )  wrt {mu, sigma}.

Another application of index decomposition allows solution for the two scalars mu(J) and sigma(J). Gaussian elimination is then used to solve this subproblem analytically, yielding the sequence of expressions

    mu(J) = ∑_I q(I, J) x(I) / ∑_I q(I, J)
    sigma(J)² = ∑_I q(I, J) x(I)² / ∑_I q(I, J) − mu(J)².

9. System identifies and solves Lagrange multiplier problem. The second subgoal, max Pr(c | phi) wrt phi, can be unrolled over the i.i.d. vector c as before. The specification condition 1 = ∑_J phi(J) creates a constrained maximization problem in the vector phi which is solved by an application of the Lagrange multiplier schema. This in turn results in two subproblems for a single instance phi(J) and for the multiplier, which are both solved symbolically. Thus, the usual EM algorithm for Gaussian mixtures is derived.

10. System checks and optimizes pseudocode. During the synthesis process, AUTOBAYES accumulates a number of constraints which have to hold to ensure proper operation of the code (e.g., absence of divide-by-zero errors). Unless these constraints can be resolved against the model (e.g., 0 < n_points), AUTOBAYES automatically inserts run-time checks into the code. Before finally generating C/C++ code, the pseudocode is optimized using information from the specification (e.g., 1 = ∑_J phi(J)) and the domain. Thus, optimizations well beyond the capability of a regular compiler can be done.

11. System translates pseudocode to real code in desired language. Finally, AUTOBAYES converts the intermediate code into code of the desired target system. The source code contains thorough comments detailing the mathematics implemented. A regular compiler containing generic performance optimizations not repeated by AUTOBAYES turns the code into an executable program.
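The closed forms of steps 8 and 9 are the familiar weighted-statistics updates. A sketch (ours), using the (x − mu)² form of the variance, which is algebraically equivalent to the expression above:

```python
import math

def m_step(xs, q):
    """Closed-form M-step: weighted means/variances for mu and sigma,
    and the Lagrange-multiplier solution phi(J) = (sum_I q(I,J)) / n."""
    n, k = len(xs), len(q[0])
    mu, sigma, phi = [], [], []
    for j in range(k):
        nj = sum(q[i][j] for i in range(n))
        mj = sum(q[i][j] * xs[i] for i in range(n)) / nj
        vj = sum(q[i][j] * (xs[i] - mj) ** 2 for i in range(n)) / nj
        mu.append(mj)
        sigma.append(math.sqrt(vj))
        phi.append(nj / n)              # enforces sum_J phi(J) = 1
    return phi, mu, sigma

# hard assignments reduce to per-class sample statistics
xs = [0.0, 2.0, 10.0, 12.0]
q = [[1, 0], [1, 0], [0, 1], [0, 1]]
phi, mu, sigma = m_step(xs, q)          # [0.5, 0.5], [1.0, 11.0], [1.0, 1.0]
```

With the E-step of step 5 and this M-step alternating inside the convergence loop, the derivation has produced the standard EM algorithm for Gaussian mixtures.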
A program for sampling from a mixture of Gaussians is also produced for testing purposes.

5 Range of Capabilities

Here, we discuss 18 examples which have been successfully handled by AUTOBAYES, ranging from simple textbook examples to sophisticated EM models and recent multinomial versions of PCA. For each entry, the table below gives a brief description, the number of lines of the specification and synthesized C++ code (loc), and the runtime to generate the code (in secs., measured on a 2.2GHz Linux system). Correctness was checked for these examples using automatically-generated test data and hand-written implementations.

Bayesian textbook examples. Simple textbook examples, like a Gaussian with a simple prior, a Gaussian with an inverse gamma prior, or a Gaussian with a conjugate prior, have closed-form solutions. The symbolic system of AUTOBAYES can actually find these solutions and thus generate short and efficient code. However, a slight relaxation of the prior (Gaussian with semi-conjugate prior) requires an iterative numerical solver.

Gaussians in action. One example is a Gaussian change-detection model. A slight extension of our running example, integrating several features, yields a Gaussian Bayes classifier model, which has been successfully tested on various standard benchmarks [1], e.g., the Abalone dataset. Currently, the number of expected classes has to be given in advance.

Mixture models and EM. A wide range of k-Gaussian mixture models can be handled by AUTOBAYES, ranging from the simple 1D and the 2D with diagonal covariance to 1D models for multi-dimensional classes and models with (conjugate) priors on the mean or variance. Using only a slight variation in the specification, the Gaussian distribution can be replaced by other distributions (e.g., exponentials, for failure analysis) or combinations (e.g., Gaussian and Beta, or k-Cauchy and Poisson).
In the algorithm generated for the Gaussian/Beta mixture, the analytic subsolution for the Gaussian case is combined with the numerical solver. Finally, there is a k1-Gaussians and k2-Gaussians two-level hierarchical mixture model which is solved by a nested instantiation of EM [15]: i.e., the M-step of the outer EM algorithm is a second EM algorithm nested inside.

Mixtures for Regression. We represented regression with Gaussian error and Legendre polynomials with full conjugate priors allowing smoothing [10]. Two versions of this were then done: robust linear regression replaces the Gaussian error with a mixture of two Gaussians (one broad, one peaked), both centered at zero. Trajectory clustering replaces the single regression curve by a mixture of several curves [5]. In both cases an EM algorithm is correctly integrated with the exact regression solutions.

Principal Component Analysis. We also tested a multinomial version of PCA called latent Dirichlet allocation [2]. AUTOBAYES currently lacks variational support, yet it manages to combine a k-means style outer loop on the component proportions with an EM-style inner loop on the hidden counts, producing the original algorithm of Hofmann, Lee and Seung, and others [4].

    Description                          loc       secs
    Gauss, simple prior                  12/137    0.2
    Gauss, inverse gamma prior           13/148    0.2
    Gauss, conjugate prior               16/188    0.4
    Gauss, semi-conjugate prior          17/233    0.4
    Gauss step-detect                    19/662    2.0
    Gauss Bayes Classify                 58/1598   4.7
    k-Gauss mix 1D                       17/418    0.7
    k-Gauss mix 2D, diag                 22/599    1.2
    k-Gauss mix, multi-dim               24/900    1.1
    k-Gauss mix 1D, prior on mean        25/456    1.0
    k-Gauss mix 1D, prior on variance    21/442    0.9
    k-Exp mix                            15/347    0.5
    Gauss/Beta mix                       22/834    1.7
    k-Cauchy/Poisson mix                 21/747    1.0
    k1,k2-Gauss hierarchical mix         29/1053   2.3
    robust linear regression             54/1877   14.5
    mixture regression                   53/1282   9.8
    PCA mult. w/ k-means                 26/390    1.2

6 Conclusion

Beyond existing systems.
Code libraries are common in statistics and learning, but they lack the high level of automation achievable only by deep symbolic reasoning. The Bayes Net Toolbox [12] is a Matlab library which allows users to program in models but does not derive algorithms or generate code. The BUGS system [14] also allows users to program in models but is specialized for Gibbs sampling. Stochastic parametrized grammars [11] allow a concise model specification similar to AUTOBAYES's specification language, but are currently only a notational device similar to XML.

Benefits of automated algorithm and code generation. Industrial-strength code: code generated by AUTOBAYES is efficient, validated, and commented. Extreme applications: extremely complex or critical applications such as spacecraft challenge the reliability limits of human-developed software; automatically generated software allows for pervasive condition checking and correctness-by-construction. Fast prototyping and experimentation: for both the data analyst and the machine learning researcher, AUTOBAYES can function as a powerful experimental workbench. New complex algorithms: even with only the few elements implemented so far, we showed that algorithms approaching research-level results [4, 5, 10, 15] can be automatically derived. As more distributions, optimization methods and generalized learning algorithms are added to the system, an exponentially growing number of complex new algorithms becomes possible, including non-trivial variants which may challenge any single researcher's particular algorithm design expertise.

Future agenda. The ultimate goal is to give researchers the ability to experiment with the entire space of complex models and state-of-the-art statistical algorithms, and to allow new algorithmic ideas, as they appear, to be implicitly generalized to every model and special case known to be applicable.
We have already begun work on generalizing the EM schema to continuous hidden variables, as well as adding schemas for variational methods, fast kd-tree and N-body algorithms, MCMC, and temporal models.

Availability. A web interface for AUTOBAYES is currently under development. More information is available at http://ase.arc.nasa.gov/autobayes.

References
[1] C.L. Blake and C.J. Merz. UCI repository of machine learning databases, 1998.
[2] D. Blei, A.Y. Ng, and M. Jordan. Latent Dirichlet allocation. In NIPS*14, 2002.
[3] W.L. Buntine. Operations for learning with graphical models. JAIR, 2:159–225, 1994.
[4] W.L. Buntine. Variational extensions to EM and multinomial PCA. In ECML 2002, pp. 23–34, 2002.
[5] G.S. Gaffney and P. Smyth. Trajectory clustering using mixtures of regression models. In 5th KDD, pp. 63–72, 1999.
[6] Z. Ghahramani and M.J. Beal. Propagation algorithms for variational Bayesian learning. In NIPS*12, pp. 507–513, 2000.
[7] P. Haddawy. Generating Bayesian networks from probability logic knowledge bases. In UAI 10, pp. 262–269, 1994.
[8] F.R. Kschischang, B. Frey, and H.-A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Trans. Inform. Theory, 47(2):498–519, 2001.
[9] S.L. Lauritzen, A.P. Dawid, B.N. Larsen, and H.-G. Leimer. Independence properties of directed Markov fields. Networks, 20:491–505, 1990.
[10] D.J.C. Mackay. Bayesian interpolation. Neural Computation, 4(3):415–447, 1991.
[11] E. Mjolsness and M. Turmon. Stochastic parameterized grammars for Bayesian model composition. In NIPS*2000 Workshop on Software Support for Bayesian Analysis Systems, Breckenridge, December 2000.
[12] K. Murphy. The Bayes Net Toolbox for Matlab. In Interface of Computing Science and Statistics 33, 2001.
[13] P. Smyth, D. Heckerman, and M. Jordan. Probabilistic independence networks for hidden Markov models. Neural Computation, 9(2):227–269, 1997.
[14] A. Thomas, D.J. Spiegelhalter, and W.R. Gilks. BUGS: A program to perform Bayesian inference using Gibbs sampling. In Bayesian Statistics 4, pp. 837–842, 1992.
[15] D.A. van Dyk. The nested EM algorithm. Statistica Sinica, 10:203–225, 2000.
Shape Recipes: Scene Representations that Refer to the Image

William T. Freeman and Antonio Torralba
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
{wtf, torralba}@ai.mit.edu

Abstract

The goal of low-level vision is to estimate an underlying scene, given an observed image. Real-world scenes (e.g., albedos or shapes) can be very complex, conventionally requiring high dimensional representations which are hard to estimate and store. We propose a low-dimensional representation, called a scene recipe, that relies on the image itself to describe the complex scene configurations. Shape recipes are an example: these are the regression coefficients that predict the bandpassed shape from image data. We describe the benefits of this representation, and show two uses illustrating their properties: (1) we improve stereo shape estimates by learning shape recipes at low resolution and applying them at full resolution; (2) shape recipes implicitly contain information about lighting and materials and we use them for material segmentation.

1 Introduction

From images, we want to estimate various low-level scene properties such as shape, material, albedo or motion. For such an estimation task, the representation of the quantities to be estimated can be critical. Typically, these scene properties might be represented as a bitmap (e.g., [14]) or as a series expansion in a basis set of surface deformations (e.g., [10]). To represent accurately the details of real-world shapes and textures requires either full-resolution images or very high order series expansions. Estimating such high dimensional quantities is intrinsically difficult [2]. Strong priors [14] are often needed, which can give unrealistic shape reconstructions. Here we propose a new scene representation with appealing qualities for estimation. The approach we propose is to let the image itself bear as much of the representational burden as possible.
We assume that the image is always available, and we describe the underlying scene in reference to the image. The scene representation is a set of rules for transforming from the local image information to the desired scene quantities. We call this representation a scene recipe: a simple function for transforming local image data to local scene data. The computer doesn't have to represent every curve of an intricate shape; the image does that for us, and the computer just stores the rules for transforming from image to scene. In this paper, we focus on reconstructing the shapes that created the observed image, deriving shape recipes. The particular recipes we study here are regression coefficients for transforming bandpassed image data into bandpassed shape data.

Figure 1: 1-d example. The image (a) is rendered from the shape (b). The shape depends on the image in a non-local way. Bandpass filtering both signals allows for a local shape recipe. The dotted line (which agrees closely with the true solid line) in (d) shows shape reconstruction from 9-parameter linear regression (9-tap convolution) from the bandpassed image, (c).

2 Shape Recipes

The shape representation consists of describing, for a particular image, the functional relationship between image and shape. This relationship is not general for all images, but specific to the particular lighting and material conditions at hand. We call this functional relationship the shape recipe. To simplify the computation to obtain shape from image data, we require that the scene recipes be local: the scene structure in a region should only depend on a local neighborhood of the image. It is easy to show that, without taking special care, the shape-image relationship is not local. Fig. 1 (a) shows the intensity profile of a 1-d image arising from the shape profile shown in Fig. 1 (b) under particular rendering conditions (a Phong model with 10% specularity).
Note that the function to recover the shape from the image cannot be local because the identical local images on the left and right sides of the surface edge correspond to different shape heights. In order to obtain locality in the shape-image relationship, we need to preprocess the shape and image signals. When shape and image are represented in a bandpass pyramid, within a subband, under generic rendering conditions [4], local shape changes lead to local image changes. (Representing the image in a Gaussian pyramid also gives a local relationship between image and bandpassed shape, effectively subsuming the image bandpass operation into the shape recipe. That formulation, explored in [16], can give slightly better performance and allows for simple non-linear extensions.) Figures 1 (c) and (d) are bandpass filtered versions of (a) and (b), using a second-derivative-of-Gaussian filter. In this example, (d) relates to (c) by a simple shape recipe: convolution with a 9-tap filter, learned by linear regression from rendered random shape data. The solid line shows the true bandpassed shape, while the dotted line is the linear regression estimate from Fig. 1 (c). For 2-d images, we break the image and shape into subbands using a steerable pyramid [13], an oriented multi-scale decomposition with non-aliased subbands (Fig. 3 (a) and (b)). A shape subband can be related to an image intensity subband by a function

    Zk = fk(Ik)    (1)

where fk is a local function and Zk and Ik are the kth subbands of the steerable pyramid representation of the shape and image, respectively. The simplest functional relationship between shape and image intensity is via a linear filter with a finite size impulse response: Zk ≈ rk ⋆ Ik, where ⋆ is convolution. The convolution kernel rk (specific to each scale and orientation) transforms the image subband Ik into the shape subband Zk.
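A recipe of this linear form can be fit by ordinary least squares. The 1-d sketch below (ours; the paper fits per-subband kernels and adds regularization where needed) builds the convolution design matrix, solves the normal equations with plain Gaussian elimination, and recovers a known kernel from its own output:

```python
import random

def conv_rows(I, taps):
    """Design matrix: row x holds the (reversed) image window dotted with r."""
    pad = taps // 2
    padded = [0.0] * pad + list(I) + [0.0] * pad
    return [padded[x:x + taps][::-1] for x in range(len(I))]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting on an n x n system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def learn_recipe(I, Z, taps=9):
    """Least-squares kernel: minimize sum_x |Z[x] - (I * r)[x]|^2."""
    X = conv_rows(I, taps)
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(taps)] for i in range(taps)]
    Xtz = [sum(r[i] * z for r, z in zip(X, Z)) for i in range(taps)]
    return solve(XtX, Xtz)

# sanity check: recover a known 3-tap kernel from its own output
rng = random.Random(0)
I = [rng.gauss(0, 1) for _ in range(200)]
true_r = [0.25, 0.5, 0.25]
Z = [sum(a * b for a, b in zip(row, true_r)) for row in conv_rows(I, 3)]
r = learn_recipe(I, Z, taps=3)
```

On noiseless data generated by a known kernel, the recovered taps match the true ones to numerical precision; with real shape data, a ridge penalty on the normal equations guards against overfitting.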
The recipe rk at each subband is learned by minimizing ∑_x |Zk − Ik ⋆ rk|², regularizing rk as needed to avoid overfitting. rk contains information about the particular lighting conditions and the surface material. More general functions can be built by using non-linear filters and combining image information from different orientations and scales [16].

Figure 2: Shape estimate from stereo. (a) is one image of the stereo pair; the stereo reconstruction is depicted as (b) a range map, (c) a surface plot, and (d) a re-rendering of the stereo shape. The stereo shape is noisy and misses fine details.

We conjecture that multiscale shape recipes have various desirable properties for estimation. First, they allow for a compact encoding of shape information, as much of the complexity of the shape is encoded in the image itself. The recipes need only specify how to translate image into shape. Secondly, regularities in how the shape recipes fk vary across scale and space provide a powerful mechanism for regularizing shape estimates. Instead of regularizing shape estimates by assuming a prior of smoothness of the surface, we can assume a slow spatial variation of the functional relationship between image and shape, which should make estimating shape recipes easier. Third, shape recipes implicitly encode lighting and material information, which can be used for material-based segmentation. In the next two sections we discuss the properties of smoothness across scale and space and we show potential applications in improving shape estimates from stereo and in image segmentation based on material properties.

3 Scaling regularities of shape recipes

Fig. 2 shows one image of a stereo pair and the associated shape estimated from a stereo algorithm.1
The shape estimate is noisy in the high frequencies (see surface plot and rerendered shape), but we assume it is accurate in the low spatial frequencies. Fig. 3 shows the steerable pyramid representations of the image (a) and shape (b) and the learned shape recipes (c) for each subband (linear convolution kernels that give the shape subband from the image subband). We exploit the slow variation of shape recipes over scale and assume that the shape recipes are constant over the top four octaves of the pyramid2 Thus, from the shape recipes learned at low-resolution we can reconstruct a higher resolution shape estimate than the stereo output, by learning the rendering conditions then taking advantage of shape details visible in the image but not exploited by the stereo algorithm. Fig. 4 (a) and (b) show the image and the implicit shape representation: the pyramid’s lowresolution shape and the shape recipes used over the top four scales. Fig. 4 (c) and (d) show explicitly the reconstructed shape implied by (a) and (b): note the high resolution details, including the fine structure visible in the bottom left corner of (d). Compare with the stereo 1We took our stereo photographs using a 3.3 Megapixel Olympus Camedia C-3040 camera, with a Pentax stereo adapter. We calibrated the stereo images using the point matching algorithm of Zhang [18], and rectified the stereo pair (so that epipoles are along scan lines) using the algorithm of [8], estimating disparity with the Zitnick–Kanade stereo algorithm [19]. 2Except for a scale factor. We scale the amplitude of the fixed recipe convolution kernels by 2 for each octave, to account for the differentiation operation in the linear shading approximation to Lambertian rendering [7]. (a) Image pyramid (b) Shape pyramid (c) Shape recipes for each subband Figure 3: Learning shape recipes at each subband. (a) and (b) are the steerable pyramid representations [13] of image and stereo shape. 
(c) shows the convolution kernels that best predict (b) from (a). The steerable pyramid isolates information according to scale (the smaller subband images represent larger spatial scales) and orientation (clockwise among subbands of one size: vertical, diagonal, horizontal, other diagonal). (a) image (b) low-res shape (center, top row) and recipes (for each subband orientation) (c) recipes shape (surface plot) (d) re-rendered recipes shape Figure 4: Reconstruction from shape recipes. The shape is represented by the information contained in the image (a), the low-res shape pyramid residual and the shape recipes (b) estimated at the lowest resolution. The shape can be regenerated by applying the shape recipes (b) at the 4 highest resolution scales, then reconstructing from the shape pyramid. (d) shows the image re-rendered under different lighting conditions than (a). The reconstruction is not noisy and shows more detail than the stereo shape, Fig. 2, including the fine textures visible at the bottom left of the image (a) but not detected by the stereo algorithm. output in Fig. 2. 4 Segmenting shape recipes Segmenting an image into regions of uniform color or texture is often an approximation to an underlying goal of segmenting the image into regions of uniform material. Shape recipes, by describing how to transform from image to shape, implicitly encode both lighting and material properties. Across unchanging lighting conditions, segmenting by shape recipes allows us to segment according to a material’s rendering properties, even overcoming changes of intensities or texture of the rendered image. (See [6] for a non-parametric approach to material segmentation.) We expect shape recipes to vary smoothly over space except for abrupt boundaries at changes in material or illumination. Within each subband, we can write the shape Zk (a) Shape (b) Image (c) Image-based segmentation (d) Recipe-based segmentation Figure 5: Segmentation example. 
Shape (a), with a horizontal orientation discontinuity, is rendered with two different shading models split vertically, (b). Based on image information alone, it is difficult to find a good segmentation into 2 groups, (c). A segmentation into 2 different shape recipes naturally falls along the vertical material boundary, (d).

as a mixture of recipes:

p(Z_k \mid I_k) = \sum_{n=1}^{N} p(Z_k - f_{k,n}(I_k)) \, p_n    (2)

where N specifies the number of recipes needed to explain the underlying shape Z_k. The weights p_n, which will be a function of location, specify which recipe has to be used within each region and, therefore, provide a segmentation of the image. To estimate the parameters of the mixture (shape recipes and weights), given known shape and the associated image, we use the EM algorithm [17]. We encourage spatial continuity for the weights p_n, as neighboring pixels are likely to belong to the same material. We use the mean field approximation to implement the spatial smoothness prior in the E step, as suggested in [17].

Figure 5 shows a segmentation example. (a) is a fractal shape, with diagonal-left structure across the top half, and diagonal-right structure across the bottom half. Onto that shape, we "painted" two different Phong shading renderings in the two vertical halves, shown in (b) (the right half is shinier than the left). Thus, texture changes in each of the four quadrants, but the only material transition is across the vertical centerline. An image-based segmentation, which makes use of texture and intensity cues, among others, finds the four quadrants when looking for 4 groups, but cannot segment well when forced to find 2 groups, (c). (We used the normalized cuts segmentation software, available on-line [11].) The shape recipes encode the relationship between image and shape; when segmenting into 2 groups, they find the vertical material boundary, (d).

5 Occlusion boundaries

Not all image variations have a direct translation into shape.
This is true for paint boundaries and for most occlusion boundaries. These cases need to be treated specially with shape recipes. To illustrate, in Fig. 6 (c) the occluding boundary in the shape only produces a smooth change in the image, Fig. 6 (a). In that region a shape recipe will produce an incorrect shape estimate; however, the stereo algorithm will often succeed at finding those occlusion edges. On the other hand, stereo often fails to provide the shape of image regions with complex shape details, where the shape recipes succeed. For the special case of revising the stereo algorithm's output using shape recipes, we propose a statistical framework to combine both sources of information.

We want to estimate the shape Z that maximizes the likelihood given the shape from stereo S and the shape obtained from the image intensity I via shape recipes:

p(Z \mid S, I) = p(S, I \mid Z) \, p(Z) / p(S, I)    (3)

(For notational simplicity, we omit the spatial dependency from I, S and Z.)

(a) image (b) image (subband) (c) stereo depth (d) stereo depth (subband) (e) shape recipe (subband) (f) recipe&stereo (subband) (g) recipe&stereo (surface plot) (h) laser range (subband) (i) laser range (surface plot)

Figure 6: One way to handle occlusions with shape recipes. Image in full-res (a) and one steerable pyramid subband (b); stereo depth, full-res (c) and subband (d). (e) shows a subband of the shape reconstruction using the learned shape recipe. Direct application of the shape recipe across the occlusion boundary misses the shape discontinuity. The stereo algorithm catches that discontinuity, but misses other shape details. Probabilistic combination of the two shape estimates (f, subband; g, surface), assuming Laplacian shape statistics, captures the desirable details of both, comparing favorably with laser scanner ground truth (h, subband; i, surface, at slight misalignment from the photos).
As both stereo S and image intensity I provide strong constraints for the possible underlying shape Z, the factor p(Z) can be considered constant in the region of support of p(S, I | Z). p(S, I) is a normalization factor. Eq. (3) can be simplified by assuming that the shapes from stereo and from shape recipes are independent. Furthermore, we also assume independence between the pixels in the image and across subbands:

p(S, I \mid Z) = \prod_k \prod_{x,y} p(S_k \mid Z_k) \, p(I_k \mid Z_k)    (4)

Here S_k, Z_k and I_k refer to the outputs of subband k. Although this is an oversimplification, it simplifies the analysis and provides good results. The terms p(S_k | Z_k) and p(I_k | Z_k) will depend on the noise models for the depth from stereo and for the shape recipes.

For the shape estimate from stereo we assume a Gaussian distribution for the noise. At each subband and spatial location we have:

p(S_k \mid Z_k) = p_s(Z_k - S_k) = \frac{e^{-|Z_k - S_k|^2 / \sigma_s^2}}{(2\pi)^{1/2} \sigma_s}    (5)

In the case of the shape recipes, a Gaussian noise model is not adequate. The distribution of the error Z_k − f_k(I_k) will depend on image noise but, more importantly, on all shape and image variations that are not functionally related to each other through the recipes. Fig. 6 illustrates this point: the image data, Fig. 6 (b), does not describe the discontinuity that exists in the shape, Fig. 6 (h). When trying to estimate shape using the shape recipe f_k(I_k), we fail to capture the discontinuity, although we correctly capture other texture variations, Fig. 6 (e). Therefore, Z_k − f_k(I_k) will describe the distribution of occluding edges that do not produce image variations and paint edges that do not translate into shape variations.
Due to the sparse distribution of edges in images (and range data), we expect Z_k − f_k(I_k) to have a Laplacian distribution, typical of the statistics of wavelet outputs of natural images [12]:

p(I_k \mid Z_k) = p(Z_k - f_k(I_k)) = \frac{e^{-|Z_k - f_k(I_k)|^p / \sigma_i^p}}{(2\sigma_i / p) \, \Gamma(1/p)}    (6)

In order to verify this, we use the stereo information at the low spatial resolutions, which we expect to be correct, so that p(Z_k − f_k(I_k)) ≃ p(S_k − f_k(I_k)). We obtain values of p in the range (0.6, 1.2). We set p = 1 for the results shown here. Note that p = 2 gives a Gaussian distribution.

The least-squares estimate for the shape subband Z_k, given both stereo and image data, is:

\hat{Z}_k = \int Z_k \, p(Z_k \mid S_k, I_k) \, dZ_k = \frac{\int Z_k \, p(S_k \mid Z_k) \, p(I_k \mid Z_k) \, dZ_k}{\int p(S_k \mid Z_k) \, p(I_k \mid Z_k) \, dZ_k}    (7)

This integral can be evaluated numerically and independently at each pixel. When p = 2, the LSE estimate is a weighted linear combination of the shape from stereo and the shape recipes. However, with p ≃ 1 this problem is similar to that of image denoising from wavelet decompositions [12], providing a non-linear combination of stereo and shape recipes. The basic behavior of Eq. (7) is to take from the stereo everything that cannot be explained by the recipes, and to take the rest from the recipes. Whenever both stereo and shape recipes give similar estimates, we prefer the recipes, because they are more accurate than the stereo information. Where stereo and shape recipes differ greatly, such as at occlusions, the shape estimate follows the stereo shape.

6 Discussion and Summary

Unlike shape-from-shading algorithms [5], shape recipes are fast, local procedures for computing shape from the image. The approximation of linear shading [7] also assumes a local linear relationship between image and shape subbands. However, learning the regression coefficients allows a linearized fit to more general rendering conditions than the special case of Lambertian shading for which linear shading was derived.
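As the text notes, the integral in Eq. (7) can be evaluated numerically at each pixel. The sketch below does this on a discrete grid of candidate Z_k values, using the Gaussian stereo model of Eq. (5) and a Laplacian recipe model in the spirit of Eq. (6) with p = 1; the noise scales sigma_s and sigma_i are illustrative values of our own choosing, not from the paper.

```python
import numpy as np

def combine_estimates(S, F, sigma_s=0.1, sigma_i=0.05, p=1.0):
    """Per-pixel least-squares shape estimate of Eq. (7).
    S: shape-from-stereo subband values; F: recipe predictions f_k(I_k).
    Gaussian stereo likelihood, Laplacian (p=1) recipe likelihood."""
    z = np.linspace(-3.0, 3.0, 601)[:, None]          # candidate Z_k values
    log_w = (-np.abs(z - S[None, :]) ** 2 / sigma_s ** 2
             - np.abs(z - F[None, :]) ** p / sigma_i ** p)
    w = np.exp(log_w - log_w.max(axis=0, keepdims=True))  # stable weights
    return (z * w).sum(axis=0) / w.sum(axis=0)        # posterior mean

# Two pixels: stereo says 0 at both; the recipe agrees at the first
# (F = 0.05) and disagrees strongly at the second (F = 2, e.g. a paint edge).
S = np.array([0.0, 0.0])
F = np.array([0.05, 2.0])
Zhat = combine_estimates(S, F)
# Where the two agree, the estimate follows the (more accurate) recipe;
# where they differ greatly, it stays close to the stereo value.
```

This reproduces the qualitative behavior described in the text: the quadratic stereo penalty dominates for large disagreements, while the linear recipe penalty wins for small ones.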
We have proposed shape recipes as a representation that leaves the burden of describing shape details to the image. Unlike many other shape representations, these are low-dimensional, and should change slowly over time, distance, and spatial scale. We expect that these properties will prove useful for estimation algorithms using these representations, including non-linear extensions [16]. We showed that some of these properties are indeed useful in practice. We developed a shape estimate improver that relies on an initial estimate being accurate at low resolutions. Assuming that shape recipes change slowly over 4 octaves of spatial scale, we learned the shape recipes at low resolution and applied them at high resolution to find shape from image details not exploited by the stereo algorithm. Comparisons with ground truth shapes show good results.

Shape recipes fold in information about both lighting and material properties, and can also be used to estimate material boundaries over regions where the lighting is assumed to be constant. Gilchrist and Adelson describe "atmospheres", which are local formulas for converting image intensities to perceived lightness values [3, 1]. In this framework, atmospheres are "lightness recipes". A full description of an image in terms of a scene recipe would require both shape recipes and reflectance recipes (for computing reflectance values from image data), which also requires labelling parts of the image as being caused by shading or reflectance changes, as in [15]. At a conceptual level, this representation is consistent with a theme in human vision research: that our visual systems use the world as a framebuffer or visual memory, not storing in the brain what can be obtained by looking [9]. Using shape recipes, we find simple transformation rules that let us convert from image to shape whenever we need to, by examining the image.
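The recipe-learning step itself (Fig. 3) is a small least-squares regression: each shape-subband pixel is predicted from the surrounding image-subband patch, and the fitted patch weights form the recipe kernel. A minimal sketch, with the kernel size, variable names, and the synthetic check all our own assumptions:

```python
import numpy as np

def learn_recipe_kernel(image_sb, shape_sb, ksize=3):
    """Fit a ksize x ksize linear filter k so that, in the least-squares
    sense, shape_sb[y, x] ~= sum_ij k[i, j] * image_sb[y-r+i, x-r+j]."""
    r = ksize // 2
    H, W = image_sb.shape
    patches, targets = [], []
    for y in range(r, H - r):
        for x in range(r, W - r):
            patches.append(image_sb[y - r:y + r + 1, x - r:x + r + 1].ravel())
            targets.append(shape_sb[y, x])
    k, *_ = np.linalg.lstsq(np.array(patches), np.array(targets), rcond=None)
    return k.reshape(ksize, ksize)

# Sanity check: recover a known filter from synthetic data.
rng = np.random.default_rng(0)
img = rng.standard_normal((24, 24))
true_k = np.array([[0.0, 1.0, 0.0], [1.0, -4.0, 1.0], [0.0, 1.0, 0.0]])
shape = np.zeros_like(img)
for y in range(1, 23):
    for x in range(1, 23):
        shape[y, x] = (true_k * img[y - 1:y + 2, x - 1:x + 2]).sum()
est_k = learn_recipe_kernel(img, shape)
```

With noiseless synthetic data the regression recovers the generating filter exactly; on real subbands it returns the best linear approximation to the rendering conditions.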
We thank Ray Jones and Leonard McMillan for providing Cyberware scans, and Hao Zhang for code for rectification of stereo images. This work was funded by the Nippon Telegraph and Telephone Corporation as part of the NTT/MIT Collaboration Agreement. References [1] E. H. Adelson. Lightness perception and lightness illusions. In M. Gazzaniga, editor, The New Cognitive Neurosciences, pages 339–351. MIT Press, 2000. [2] C. M. Bishop. Neural networks for pattern recognition. Oxford, 1995. [3] A. Gilchrist et al. An anchoring theory of lightness. Psychological Review, 106(4):795–834, 1999. [4] W. T. Freeman. The generic viewpoint assumption in a framework for visual perception. Nature, 368(6471):542–545, April 7 1994. [5] B. K. P. Horn and M. J. Brooks, editors. Shape from shading. The MIT Press, Cambridge, MA, 1989. [6] T. Leung and J. Malik. Representing and recognizing the visual appearance of materials using three-dimensional textons. Intl. J. Comp. Vis., 43(1):29–44, 2001. [7] A. P. Pentland. Linear shape from shading. Intl. J. Comp. Vis., 1(4):153–162, 1990. [8] M. Pollefeys, R. Koch, and L. V. Gool. A simple and efficient rectification method for general motion. In Intl. Conf. on Computer Vision (ICCV), pages 496–501, 1999. [9] R. A. Rensink. The dynamic representation of scenes. Vis. Cognition, 7:17–42, 2000. [10] S. Sclaroff and A. Pentland. Generalized implicit functions for computer graphics. In Proc. SIGGRAPH 91, volume 25, pages 247–250, 1991. In Computer Graphics, Annual Conference Series. [11] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000. [12] E. P. Simoncelli. Statistical models for images: Compression, restoration and synthesis. In 31st Asilomar Conf. on Sig., Sys. and Computers, Pacific Grove, CA, 1997. [13] E. P. Simoncelli and W. T. Freeman. The steerable pyramid: a flexible architecture for multi-scale derivative computation. In 2nd Annual Intl. Conf. 
on Image Processing, Washington, DC, 1995. IEEE. [14] R. Szeliski. Bayesian modeling of uncertainty in low-level vision. Intl. J. Comp. Vis., 5(3):271–301, 1990. [15] M. F. Tappen, W. T. Freeman, and E. H. Adelson. Recovering intrinsic images from a single image. In Adv. in Neural Info. Proc. Systems, volume 15. MIT Press, 2003. [16] A. Torralba and W. T. Freeman. Properties and applications of shape recipes. Technical Report AIM-2002-019, MIT AI lab, 2002. [17] Y. Weiss. Bayesian motion estimation and segmentation. PhD thesis, M.I.T., 1998. [18] Z. Zhang. Determining the epipolar geometry and its uncertainty: A review. Technical Report 2927, Sophia-Antipolis Cedex, France, 1996. see http://wwwsop.inria.fr/robotvis/demo/f-http/html/. [19] C. L. Zitnick and T. Kanade. A cooperative algorithm for stereo matching and occlusion detection. IEEE Pattern Analysis and Machine Intelligence, 22(7), July 2000.
Real-time Particle Filters

Cody Kwok, Dieter Fox (Dept. of Computer Science & Engineering) and Marina Meilă (Dept. of Statistics)
University of Washington, Seattle, WA 98195
{ctkwok,fox}@cs.washington.edu, mmp@stat.washington.edu

Abstract

Particle filters estimate the state of dynamical systems from sensor information. In many real-time applications of particle filters, however, sensor information arrives at a significantly higher rate than the update rate of the filter. The prevalent approach to dealing with such situations is to update the particle filter as often as possible and to discard sensor information that cannot be processed in time. In this paper we present real-time particle filters, which make use of all sensor information even when the filter update rate is below the update rate of the sensors. This is achieved by representing posteriors as mixtures of sample sets, where each mixture component integrates one observation arriving during a filter update. The weights of the mixture components are set so as to minimize the approximation error introduced by the mixture representation. Thereby, our approach focuses computational resources (samples) on valuable sensor information. Experiments using data collected with a mobile robot show that our approach yields strong improvements over other approaches.

1 Introduction

Due to their sample-based representation, particle filters are well suited to estimate the state of non-linear dynamic systems. Over the last years, particle filters have been applied with great success to a variety of state estimation problems including visual tracking, speech recognition, and mobile robotics [1]. The increased representational power of particle filters, however, comes at the cost of higher computational complexity. The application of particle filters to online, real-time estimation raises new research questions.
The key question in this context is: how can we deal with situations in which the rate of incoming sensor data is higher than the update rate of the particle filter? To the best of our knowledge, this problem has not been addressed in the literature so far. The prevalent approach in real-time applications is to update the filter as often as possible and to discard sensor information that arrives during the update process. Obviously, this approach is prone to losing valuable sensor information. At first sight, the sample-based representation of particle filters suggests an alternative approach similar to an any-time implementation: whenever a new observation arrives, sampling is interrupted and the next observation is processed. Unfortunately, such an approach can result in sample sets that are too small, causing the filter to diverge [1, 2].

In this paper we introduce real-time particle filters (RTPF) to deal with constraints imposed by limited computational resources. Instead of discarding sensor readings, we distribute the samples among the different observations arriving during a filter update. Hence RTPF represents densities over the state space by mixtures of sample sets, thereby avoiding the problem of filter divergence due to an insufficient number of independent samples.

Figure 1: Different strategies for dealing with limited computational power. All approaches process the same number of samples per estimation interval (window size three). (a) Skip observations, i.e. integrate only every third observation. (b) Aggregate observations within a window and integrate them in one step. (c) Reduce the sample set size so that each observation can be considered.
The weights of the mixture components are computed so as to minimize the approximation error introduced by the mixture representation. The resulting approach naturally focuses computational resources (samples) on valuable sensor information. The remainder of this paper is organized as follows: in the next section we outline the basics of particle filters in the context of real-time constraints. Then, in Section 3, we introduce our novel technique of real-time particle filters. Finally, we present experimental results, followed by a discussion of the properties of RTPF.

2 Particle filters

Particle filters are a sample-based variant of Bayes filters, which recursively estimate posterior densities, or beliefs Bel(x_t), over the state x_t of a dynamical system (see [1, 3] for details):

Bel(x_t) \propto p(z_t \mid x_t) \int p(x_t \mid x_{t-1}, u_{t-1}) \, Bel(x_{t-1}) \, dx_{t-1}    (1)

Here z_t is a sensor measurement and u_{t-1} is control information measuring the dynamics of the system. Particle filters represent beliefs by sets S_t of weighted samples \langle x_t^{(i)}, w_t^{(i)} \rangle. Each x_t^{(i)} is a state, and the w_t^{(i)} are non-negative numerical factors called importance weights, which sum up to one. The basic form of the particle filter realizes the recursive Bayes filter according to a sampling procedure, often referred to as sequential importance sampling with resampling (SISR):

1. Resampling: Draw with replacement a random state x_{t-1} from the set S_{t-1}, according to the (discrete) distribution defined through the importance weights w_{t-1}^{(i)}.
2. Sampling: Use x_{t-1} and the control information u_{t-1} to sample x_t according to the distribution p(x_t \mid x_{t-1}, u_{t-1}), which describes the dynamics of the system.
3. Importance sampling: Weight the sample x_t by the observation likelihood p(z_t \mid x_t).

Each iteration of these three steps generates a sample \langle x_t, w_t \rangle representing the posterior. After N iterations, the importance weights of the samples are normalized so that they sum up to one.
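The three SISR steps translate directly into code. Below is a minimal sketch for a hypothetical 1-D tracking problem (random-walk dynamics, Gaussian observation likelihood); the dynamics, noise levels, and names are our own illustration, not from the paper.

```python
import math
import random

def sisr_step(samples, weights, u, z, dyn_sigma=0.1, obs_sigma=0.5):
    """One particle filter update: resample, sample dynamics, reweight."""
    n = len(samples)
    # 1. Resampling: draw states according to the importance weights
    states = random.choices(samples, weights=weights, k=n)
    # 2. Sampling: propagate through p(x_t | x_{t-1}, u_{t-1})
    new_samples = [x + u + random.gauss(0.0, dyn_sigma) for x in states]
    # 3. Importance sampling: weight by the likelihood p(z_t | x_t)
    new_weights = [math.exp(-(z - x) ** 2 / (2 * obs_sigma ** 2))
                   for x in new_samples]
    total = sum(new_weights)
    return new_samples, [w / total for w in new_weights]

# Track a target moving right at 0.1 per step from noisy position readings.
random.seed(2)
xs = [random.gauss(0.0, 1.0) for _ in range(1000)]   # samples from the prior
ws = [1.0 / 1000] * 1000
true_x = 0.0
for _ in range(30):
    true_x += 0.1
    z = true_x + random.gauss(0.0, 0.5)              # noisy observation
    xs, ws = sisr_step(xs, ws, 0.1, z)
estimate = sum(w * x for w, x in zip(ws, xs))        # posterior mean
```

After a few dozen updates the weighted sample mean tracks the true position to well within the observation noise.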
Particle filters can be shown to converge to the true posterior even in non-Gaussian, non-linear dynamic systems [4]. A typical assumption underlying particle filters is that all samples can be updated whenever new sensor information arrives. Under real-time conditions, however, it is possible that the update cannot be completed before the next sensor measurement arrives. This can be the case for computationally complex sensor models or whenever the underlying posterior requires large sample sets [2]. The majority of filtering approaches deals with this problem by skipping sensor information that arrives during the update of the filter. While this approach works reasonably well in many situations, it is prone to miss valuable sensor information.

Figure 2: Real-time particle filters. The samples are distributed among the observations within one estimation interval (window size three in this example). The belief is a mixture of the individual sample sets. Each arrow additionally represents the system dynamics.

Before we discuss ways of dealing with such situations, let us introduce some notation. We assume that observations arrive at time intervals \Delta t, which we will call observation intervals. Let N be the number of samples required by the particle filter. Assume that the resulting update cycle of the particle filter takes k \Delta t; this is called the estimation interval or estimation window. Accordingly, k observations arrive during one estimation interval. We call this number the window size of the filter, i.e. the number of observations obtained during a filter update. The i-th observation and state within window t are denoted z_t^i and x_t^i, respectively.

Fig. 1 illustrates different approaches to dealing with window sizes larger than one. The simplest and most common approach is shown in Fig. 1(a).
Here, observations arriving during the update of the sample set are discarded, which has the obvious disadvantage that valuable sensor information might get lost. The approach in Fig. 1(b) overcomes this problem by aggregating multiple observations into one. While this technique avoids the loss of information, it is not applicable to arbitrary dynamical systems. For example, it assumes that observations can be aggregated optimally, and that the integration of an aggregated observation can be performed as efficiently as the integration of individual observations, which is often not the case. The third approach, shown in Fig. 1(c), simply stops generating new samples whenever an observation is made (hence each sample set contains only N/k samples). While this approach takes advantage of the any-time capabilities of particle filters, it is susceptible to filter divergence due to an insufficient number of samples [2, 1].

3 Real-time particle filters

In this paper we propose real-time particle filters (RTPFs), a novel approach to dealing with limited computational resources. The key idea of RTPFs is to consider all sensor measurements by distributing the samples among the observations within an update window. Additionally, by weighting the different sample sets within a window, our approach focuses the computational resources (samples) on the most valuable observations. Fig. 2 illustrates the approach. As can be seen, instead of one sample set at time t, we maintain k smaller sample sets, one at each observation time within the window. We treat such a "virtual sample set", or belief, as a mixture of the distributions represented in it. The mixture components represent the state of the system at different points in time. If needed, however, the complete belief can be generated by considering the dynamics between the individual mixture components. Compared to the first approach discussed in the previous section, this method has the advantage of not skipping any observations.
In contrast to the approach shown in Fig. 1(b), RTPFs do not make any assumptions about the nature of the sensor data, i.e. whether it can be aggregated or not. The difference to the third approach (Fig. 1(c)) is more subtle. In both approaches, each of the k sample sets can only contain N/k samples. The belief state that is propagated by RTPF to the next estimation interval is a mixture distribution in which each mixture component is represented by one of the k sample sets, all generated independently from the previous window. Thus, the belief state propagation is simulated by k \cdot N/k = N sample trajectories, which for computational convenience are represented at the points in time where the observations are integrated. In approach (c), however, the belief propagation is simulated with only N/k independent samples.

We will now show how RTPF determines the weights of the mixture belief. The key idea is to choose the weights that minimize the KL divergence between the mixture belief and the optimal belief. The optimal belief is the belief we would get if there was enough time to compute the full posterior within the update window.

3.1 Mixture representation

Let us restrict our attention to one estimation interval consisting of k observations; to simplify notation, we drop the window index and write z_i, x_i and u_i for the i-th observation, state and control within the window. The optimal belief Bel_{opt}(x_k) at the end of an estimation window results from iterative application of the Bayes filter update on each observation [3]:

Bel_{opt}(x_k) \propto \int \cdots \int \prod_{i=1}^{k} p(z_i \mid x_i) \, p(x_i \mid x_{i-1}, u_{i-1}) \, Bel(x_0) \, dx_0 \cdots dx_{k-1}    (2)

Here Bel(x_0) denotes the belief generated in the previous estimation window. In essence, (2) computes the belief by integrating over all trajectories through the estimation interval, where the start position of the trajectories is drawn from the previous belief Bel(x_0). The probability of each trajectory is determined using the control information u_0, \ldots, u_{k-1} and the likelihoods of the observations z_1, \ldots, z_k along the trajectory. Now let Bel_i(x_k) denote the belief resulting from integrating only the i-th observation within the estimation window.
RTPF computes a mixture of k such beliefs, one for each observation. The mixture, denoted Bel_{mix}(x_k \mid \alpha), is the weighted sum of the mixture components Bel_i(x_k), where \alpha = (\alpha_1, \ldots, \alpha_k) denotes the mixture weights:

Bel_{mix}(x_k \mid \alpha) = \sum_{i=1}^{k} \alpha_i \, Bel_i(x_k),
\quad Bel_i(x_k) \propto \int \cdots \int p(z_i \mid x_i) \prod_{j=1}^{k} p(x_j \mid x_{j-1}, u_{j-1}) \, Bel(x_0) \, dx_0 \cdots dx_{k-1}    (3)

where \alpha_i \geq 0 and \sum_i \alpha_i = 1. Here, too, we integrate over all trajectories. In contrast to (2), however, each trajectory selectively integrates only one of the k observations within the estimation interval.¹

¹Note that typically the individual predictions p(x_j \mid x_{j-1}, u_{j-1}) can be "concatenated", so that only two predictions for each trajectory have to be performed, one before and one after the corresponding observation.

3.2 Optimizing the mixture weights

We will now turn to the problem of finding the weights of the mixture. These weights reflect the "importance" of the respective observations for describing the optimal belief. The idea is to set them so as to minimize the approximation error introduced by the mixture distribution. More formally, we determine the mixture weights \hat{\alpha} by minimizing the KL divergence [5] between Bel_{mix}(\cdot \mid \alpha) and Bel_{opt}:

\hat{\alpha} = \arg\min_{\alpha \in A} KL\bigl( Bel_{mix}(\cdot \mid \alpha) \,\|\, Bel_{opt} \bigr)    (4)
= \arg\min_{\alpha \in A} \int Bel_{mix}(x_k \mid \alpha) \log \frac{Bel_{mix}(x_k \mid \alpha)}{Bel_{opt}(x_k)} \, dx_k    (5)

In the above, A = \{ \alpha : \alpha_i \geq 0, \sum_{i=1}^{k} \alpha_i = 1 \}. Optimizing the weights of mixture approximations can be done using EM [6] or (constrained) gradient descent [7]. Here, we perform a small number of gradient descent steps to find the mixture weights. Denote by J(\alpha) the criterion to be minimized in (5). The gradient of J is given by

\frac{\partial J}{\partial \alpha_i} = \int Bel_i(x_k) \log \frac{Bel_{mix}(x_k \mid \alpha)}{Bel_{opt}(x_k)} \, dx_k + 1, \qquad i = 1, \ldots, k    (6)

where the additive constant is the same for all components and can be ignored under the normalization constraint. The start point \alpha^{(0)} for the gradient descent is chosen to be the center of the weight domain A, that is \alpha^{(0)} = (1/k, \ldots, 1/k).

3.3 Monte Carlo gradient estimation

The exact computation of the gradients in (6) requires the computation of the different beliefs, each in turn requiring several particle filter updates (see (2), (3)), and integration over all states x_k.
This is clearly not feasible in our case. We solve this problem by Monte Carlo approximation. The approach is based on the observation that the beliefs in (6) share the same trajectories through the state space and differ only in the observations they integrate. Therefore, we first generate sample trajectories through the estimation window without considering the observations, and then use importance sampling to generate the beliefs needed for the gradient estimation.

Trajectory generation is done as follows: we draw a sample x_0 from a sample set of the previous mixture belief, where the probability of choosing a set S_i is given by the mixture weights \alpha_i. This sample is then moved forward in time by consecutively drawing samples x_j from the distributions p(x_j \mid x_{j-1}, u_{j-1}) at each time step j = 1, \ldots, k. The resulting trajectories are drawn from the following proposal distribution q:

q(x_0, \ldots, x_k) = Bel_{mix}(x_0 \mid \alpha) \prod_{j=1}^{k} p(x_j \mid x_{j-1}, u_{j-1})    (7)

Using importance sampling, we obtain sample-based estimates of Bel_i and Bel_{opt} by simply weighting each trajectory with p(z_i \mid x_i) or \prod_{j=1}^{k} p(z_j \mid x_j), respectively (compare (2) and (3)). Bel_{mix} is generated with minimal computational overhead by averaging the weights computed for the individual Bel_i distributions. The use of the same trajectories for all distributions has the advantage that it is highly efficient and that it reduces the variance of the gradient estimate. This variance reduction is due to using the same random bits in evaluating the diverse scenarios of incorporating one or another of the observations [8].

Further variance reduction is achieved by using stratified sampling on trajectories. The trajectories are grouped by determining connected regions in a grid over the state space (at the end of the estimation window). Neighboring cells are considered connected if both contain samples. To compute the gradients by formula (6), we then perform summation and normalization over the grouped trajectories.
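Putting Sections 3.2 and 3.3 together, the sketch below optimizes the mixture weights for a toy 1-D window with k = 2 observations: shared trajectories are drawn from the dynamics-only proposal, histogram bins stand in for the trajectory grouping, and projected gradient descent follows a Monte Carlo estimate of Eq. (6). The scenario (an uninformative first observation and a sharp second one) and all parameters are our own illustration, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
k, M, nbins = 2, 5000, 40

# Shared trajectories from the proposal: prior, then dynamics only (Eq. 7)
x0 = rng.normal(0.0, 1.0, M)
traj = [x0]
for _ in range(k):
    traj.append(traj[-1] + rng.normal(0.0, 0.05, M))  # random-walk dynamics
xk = traj[-1]

# Observation likelihoods evaluated along each trajectory
lik = [np.ones(M),                                      # z_1: uninformative
       np.exp(-(traj[2] - 1.0) ** 2 / (2 * 0.2 ** 2))]  # z_2: sharp, near 1

edges = np.linspace(-4, 4, nbins + 1)
def binned_belief(w):
    """Histogram (grouped-trajectory) estimate of a weighted belief at x_k."""
    h, _ = np.histogram(xk, bins=edges, weights=w)
    h = h + 1e-9                       # avoid log(0) in empty bins
    return h / h.sum()

bel = [binned_belief(lik[0]), binned_belief(lik[1])]    # Bel_1, Bel_2
bel_opt = binned_belief(lik[0] * lik[1])                # integrates both z's

# Projected gradient descent on the KL divergence (Eqs. 4-6)
alpha = np.full(k, 1.0 / k)
for _ in range(100):
    mix = sum(a * b for a, b in zip(alpha, bel))
    grad = np.array([(b * np.log(mix / bel_opt)).sum() for b in bel])
    grad -= grad.mean()                # constants drop; stay on the simplex
    alpha = np.clip(alpha - 0.2 * grad, 1e-4, None)
    alpha /= alpha.sum()
```

As expected, nearly all of the weight ends up on the component that integrates the informative observation, since that component is closest to the optimal belief.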
Empirical evaluations showed that this grouping greatly reduces the number of trajectories needed to get smooth gradient estimates. An additional, very important benefit of grouping is the reduction of the bias due to the different dynamics applied to the different sample sets in the estimation window. In our experiments the number of trajectories is only a small fraction of the total number of samples, resulting in a computational overhead of about 1% of the total estimation time.

To summarize, the RTPF algorithm works as follows. The number N of independent samples needed to represent the belief, the update rate of incoming sensor data, and the available processing power determine the size k of the estimation window and hence the number of mixture components. RTPF computes the optimal weights of the mixture distribution at the end of each estimation window. This is done by gradient descent using the Monte Carlo estimates of the gradients. The resulting weights are used to generate samples for the individual sample sets of the next estimation window. To do so, we keep track of the control information (dynamics) between the different sample sets of two consecutive windows.

Fig. 3: Map of the environment (54m × 18m) used for the experiment. The robot was moved around the symmetric loop on the left. The task of the robot was to determine its position using data collected by two distance measuring devices, one pointing to its left, the other pointing to its right.

4 Experiments

In this section we evaluate the effectiveness of RTPF against the alternatives, using data collected from a mobile robot in a real-world environment. Figure 3 shows the setup of the experiment: the robot was placed on the office floor and moved around the loop on the left. The task of the robot was to determine its position within the map, using data collected by two laser beams, one pointing to its left, the other pointing to its right.
The two laser beams were extracted from a planar laser range-finder, allowing the robot to determine only the distance to the walls on its left and right. Between each observation the robot moved approximately 50cm (see [3] for details on robot localization and sensor models). Note that the loop in the environment is symmetric except for a few "landmarks" along the walls of the corridor. Localization performance was measured by the average distance between the samples and the reference robot positions, which were computed offline.

In the experiments, our real-time algorithm, RTPF, is compared to particle filters that skip observations, called "Skip data" (Figure 1a), and particle filters with insufficient samples, called "Naive" (Figure 1c). Furthermore, to gauge the efficiency of our mixture weighting, we also obtained results for our real-time algorithm without weighting, i.e. we used mixture distributions and fixed the weights to 1/k. We denote this variant "Uniform". Finally, we also include as reference the "Baseline" approach, which is allowed to generate N samples for each observation, thereby not considering real-time constraints.

The experiment is set up as follows. First, we fix the sample set size N which is sufficient for the robot to localize itself. In our experiment N is set empirically to 20,000 (the particle filters may fail at lower N, see also [2]). We then vary the computational resources, resulting in different window sizes k. A larger window size means lower computational power, and the number of samples that can be generated for each observation decreases to N/k. Figure 4 shows the evolution of average localization errors over time, using different window sizes. Each graph is obtained by averaging over 30 runs with different random seeds and start positions. The error bars indicate 95% confidence intervals.
As the figures show, "Naive" gives the worst results, which is due to insufficient numbers of samples, resulting in divergence of the filter. While "Uniform" performs slightly better than "Skip data", RTPF is the most effective of all algorithms, localizing the robot in the least amount of time. Furthermore, RTPF shows the least degradation with limited computational power (larger window sizes). The key advantage of RTPF over "Uniform" lies in the mixture weighting, which allows our approach to focus computational resources on valuable sensor information, for example when the robot passes an informative feature in one of the hallways. For short window sizes (Fig. 4(a)), this advantage is not very strong, since in this environment most features can be detected in several consecutive sensor measurements. Note that because the "Baseline" approach was allowed to integrate all observations with all of the 20,000 samples, it converges to a lower error level than all the other approaches.

Fig. 4(a)-(c): Performance of the different algorithms for window sizes of 4, 8, and 12, respectively. The x-axis represents time elapsed since the beginning of the localization experiment. The y-axis plots the localization error, measured as average distance from the reference position. Each figure includes the performance achieved with unlimited computational power as the "Baseline" graph.
Each point is averaged over 30 runs, and error bars indicate 95% confidence intervals. Fig. 4(d) represents the localization speedup of RTPF over “Skip data” for various window sizes. The advantage of RTPF increases with the difficulty of the task, i.e. with increasing window size. Between window sizes 6 and 12, RTPF localizes at least twice as fast as “Skip data”. Without the mixture weighting of RTPF, we did not expect “Uniform” to outperform “Skip data” significantly. To see this, consider one estimation window of length k. Suppose only one of the k observations detects a landmark, or very informative feature in the hallway. In such a situation, “Uniform” considers this landmark every time the robot passes it. However, it only assigns N/k samples to this landmark detection. “Skip data”, on the other hand, detects the landmark only every k-th time, but assigns all N samples to it. Therefore, averaged over many different runs, the mean performance of “Uniform” and “Skip data” is very similar. However, the variance of the error is significantly lower for “Uniform” since it considers the detection in every run. In contrast to both approaches, RTPF detects all landmarks and generates more samples for the landmark detections, thereby gaining the best of both worlds, and Figures 4(a)–(c) show this is indeed the case. In Figure 4(d) we summarize the performance gain of RTPF over “Skip data” for different window sizes in terms of localization time. We considered the robot to be localized if the average localization error remains below 200 cm over a period of 10 seconds. If a run never reaches this level, the localization time is set to the length of the entire run, which is 574 seconds. The x-axis represents the window size and the y-axis the localization speedup. For each window size, speedups were determined using t-tests on the localization times for the 30 pairs of data runs. All results are significant at the 95% level.
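The localization-time criterion just described (error below 200 cm sustained for 10 seconds, capped at the 574-second run length when a run never localizes) can be sketched as follows; the trace arrays and the function name are illustrative, not the paper's data or code:

```python
import numpy as np

def localization_time(times, errors, threshold=200.0, hold=10.0, run_length=574.0):
    """Return the first time at which the average localization error stays
    below `threshold` cm for `hold` seconds; return `run_length` if that
    level is never reached, as in the paper's evaluation."""
    times = np.asarray(times, dtype=float)
    errors = np.asarray(errors, dtype=float)
    for t in times:
        window = (times >= t) & (times <= t + hold)
        # skip windows not fully covered by the recorded trace
        if times[window][-1] < t + hold - 1e-9:
            continue
        if np.all(errors[window] < threshold):
            return float(t)
    return run_length
```

The per-window speedup numbers in Fig. 4(d) would then be ratios of such localization times between matched runs.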
The graph shows that with increasing window size (i.e. decreasing processing power), the localization speedup increases. At small window sizes the speedup is 20-50%, but it goes up to 2.7 times for larger windows, demonstrating the benefits of the RTPF approach over traditional particle filters. Ultimately, for very large window sizes, the speedup decreases again, which is due to the fact that none of the approaches is able to reduce the error below 200cm within the run time of an experiment. 5 Conclusions In this paper we tackled the problem of particle filtering under the constraint of limited computing resources. Our approach makes near-optimal use of sensor information by dividing sample sets between all available observations and then representing the state as a mixture of sample sets. Next we optimize the mixing weights in order to be as close to the true posterior distribution as possible. Optimization is performed efficiently by gradient descent using a Monte Carlo approximation of the gradients. We showed that RTPF produces significant performance improvements in a robot localization task. The results indicate that our approach outperforms all alternative methods for dealing with limited computation. Furthermore, RTPF localized the robot more than 2.7 times faster than the original particle filter approach, which skips sensor data. Based on these results, we expect our method to be highly valuable in a wide range of real-time applications of particle filters. RTPF yields maximal performance gain for data streams containing highly valuable sensor data occurring at unpredictable time points. The idea of approximating belief states by mixtures has also been used in the context of dynamic Bayesian networks [9]. However, Boyen and Koller use mixtures to represent belief states at a specific point in time, not over multiple time steps. Our work is motivated by real-time constraints that are not present in [9].
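The weight optimization summarized above can be illustrated with a small sketch. Assuming we can evaluate each per-observation sample set's density at samples drawn from an (approximate) optimal belief, minimizing the KL divergence from the optimal belief to the mixture over the weights amounts to maximizing the average log mixture density at those samples. Everything here (the function name, the softmax parametrization, the fixed step size) is an illustrative assumption; the paper's Monte Carlo gradient estimator differs in its details:

```python
import numpy as np

def optimize_mixture_weights(part_dens, iters=200, lr=0.5):
    """Gradient ascent on the weights a_i of a belief mixture sum_i a_i Bel_i.
    part_dens[i, s] holds the density of the i-th per-observation sample set
    evaluated at the s-th sample from the (approximate) optimal belief.
    Maximizing (1/S) sum_s log sum_i a_i part_dens[i, s] is equivalent to
    minimizing a Monte Carlo estimate of KL(optimal || mixture) over a."""
    k, S = part_dens.shape
    theta = np.zeros(k)
    for _ in range(iters):
        a = np.exp(theta - theta.max()); a /= a.sum()   # softmax keeps a on the simplex
        p_mix = a @ part_dens                           # mixture density at each sample
        g = part_dens @ (1.0 / p_mix) / S               # d(objective)/da_i
        theta += lr * a * (g - a @ g)                   # chain rule through the softmax
    a = np.exp(theta - theta.max())
    return a / a.sum()
```

With this objective, a component whose sample set explains the informative observations well receives more weight, which is the effect the experiments attribute to the mixture weighting.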
So far RTPF uses fixed sample sizes and fixed window sizes. The next natural step is to adapt these two “structural parameters” to further speed up the computation. For example, by the method of [2] we can change the sample size on-the-fly, which in turn allows us to change the window size. Ongoing experiments suggest that this combination yields further performance improvements: When the state uncertainty is high, many samples are used and these samples are spread out over multiple observations. On the other hand, when the uncertainty is low, the number of samples is very small and RTPF becomes identical to the vanilla particle filter with one update (sample set) per observation. 6 Acknowledgements This research is sponsored in part by the National Science Foundation (CAREER grant number 0093406) and by DARPA (MICA program). References [1] A. Doucet, N. de Freitas, and N. Gordon, editors. Sequential Monte Carlo Methods in Practice. Springer-Verlag, New York, 2001. [2] D. Fox. KLD-sampling: Adaptive particle filters and mobile robot localization. In Advances in Neural Information Processing Systems (NIPS), 2001. [3] D. Fox, S. Thrun, F. Dellaert, and W. Burgard. Particle filters for mobile robot localization. In Doucet et al. [1]. [4] P. Del Moral and L. Miclo. Branching and interacting particle systems approximations of Feynman-Kac formulae with applications to nonlinear filtering. In Séminaire de Probabilités XXXIV, number 1729 in Lecture Notes in Mathematics. Springer-Verlag, 2000. [5] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley Series in Telecommunications. Wiley, New York, 1991. [6] W. Poland and R. Shachter. Mixtures of Gaussians and minimum relative entropy techniques for modeling continuous uncertainties. In Proc. of the Conference on Uncertainty in Artificial Intelligence (UAI), 1993. [7] T. Jaakkola and M. Jordan. Improving the mean field approximation via the use of mixture distributions. In Learning in Graphical Models. Kluwer, 1997.
[8] P. R. Cohen. Empirical methods for artificial intelligence. MIT Press, 1995. [9] X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In Proc. of the Conference on Uncertainty in Artificial Intelligence (UAI), 1998.
2002
Critical Lines in Symmetry of Mixture Models and its Application to Component Splitting Kenji Fukumizu Institute of Statistical Mathematics Tokyo 106-8569 Japan fukumizu@ism.ac.jp Shotaro Akaho AIST Tsukuba 305-8568 Japan s.akaho@aist.go.jp Shun-ichi Amari RIKEN Wako 351-0198 Japan amari@brain.riken.go.jp Abstract We show the existence of critical points as lines for the likelihood function of mixture-type models. They are given by embedding of a critical point for models with fewer components. A sufficient condition that the critical line gives local maxima or saddle points is also derived. Based on this fact, a component-split method is proposed for a mixture of Gaussian components, and its effectiveness is verified through experiments. 1 Introduction The likelihood function of a mixture model often has a complex shape, so that calculation of an estimator can be difficult, whether the maximum likelihood or Bayesian approach is used. In maximum likelihood estimation, while the EM algorithm is the standard method, its convergence to the global maximum is not guaranteed. Investigation of the likelihood function for mixture models is important to develop effective methods for learning. This paper discusses the critical points of the likelihood function for mixture-type models by analyzing their hierarchical symmetric structure. As a generalization of [1], we show that, given a critical point of the likelihood for the model with (H − 1) components, duplication of any of the components gives critical points as lines for the model with H components. We call them critical lines of mixture models. We also derive a sufficient condition that the critical lines give maxima or saddle points of the larger model, and show that given a maximum of the likelihood for a mixture of Gaussian components, an appropriate split of any component always gives an ascending direction of the likelihood.
Based on this theory, we propose a stable method of splitting a component, which works effectively with the EM optimization for avoiding the dependency on the initial condition and improving the optimization. The usefulness of the algorithm is verified through experiments. 2 Hierarchical Symmetry and Critical Lines of Mixture Models 2.1 Symmetry of Mixture Models Suppose f_H(x | θ^{(H)}) is a mixture model with H components, defined by

f_H(x | θ^{(H)}) = Σ_{j=1}^{H} c_j p(x | β_j), c_j = α_j / (α_1 + · · · + α_H), (1)

where p(x | β) is a probability density function with a parameter β. We write, for simplicity, α^{(H)} = (α_1, . . . , α_H), β^{(H)} = (β_1, . . . , β_H), and θ^{(H)} = (α^{(H)}; β^{(H)}). The key of our discussion is the following two symmetry properties, which are satisfied by mixture models:

(S-1) f_H(x | α^{(H)}; β^{(H−2)}, β_{H−1}, β_{H−1}) = f_{H−1}(x | α^{(H−2)}, α_{H−1} + α_H; β^{(H−1)}).

(S-2) There exists a function A(α) such that for j = H−1 and H,
∂f_H/∂β_j (x | α^{(H)}; β^{(H−2)}, β_{H−1}, β_{H−1}) = (α_j / A(α)) ∂f_{H−1}/∂β_{H−1} (x | α^{(H−2)}, α_{H−1} + α_H; β^{(H−1)}).

In mixture models, the function A(α) is simply given by A(α) = α_1 + · · · + α_H. Hereafter, we discuss in general a model with the assumptions (S-1) and (S-2). The results in Sections 2.1 and 2.2 depend only on these assumptions¹. While in mixture models similar conditions are satisfied with any choice of two components, we describe only the case of H−1 and H just for simplicity. We write Θ_H for the space of the parameter θ^{(H)}. Another example which satisfies (S-1) and (S-2) is Latent Dirichlet Allocation (LDA, [2]), which models data with a group structure (e.g. a document as a set of words). For x = (x_1, . . . , x_M), LDA with H components is defined by

f_H(x | θ^{(H)}) = ∫_{Δ_{H−1}} D_H(u^{(H)} | α^{(H)}) Π_{ν=1}^{M} ( Σ_{j=1}^{H} u_j p(x_ν | β_j) ) du^{(H)}, (2)

where D_H(u^{(H)} | α^{(H)}) = (Γ(Σ_j α_j) / Π_j Γ(α_j)) Π_{j=1}^{H} u_j^{α_j − 1} is the Dirichlet distribution over the (H−1)-dimensional simplex Δ_{H−1}. It is easy to see that (S-1) and (S-2) hold for LDA by using Lemma 6 in the Appendix.
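Property (S-1) — duplicating a component while splitting its mixing parameter leaves the density unchanged — is easy to verify numerically for a 1-D Gaussian mixture; the particular parameter values below are arbitrary illustrations:

```python
import numpy as np

def gaussian(x, mu, s2):
    """1-D Gaussian density with mean mu and variance s2."""
    return np.exp(-(x - mu)**2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)

def f_mix(x, alphas, mus, s2s):
    """Mixture density with c_j = alpha_j / sum_k alpha_k, as in eq. (1)."""
    c = np.asarray(alphas, dtype=float)
    c = c / c.sum()
    return sum(cj * gaussian(x, m, s) for cj, m, s in zip(c, mus, s2s))

x, lam = 0.7, 0.3
# an (H-1) = 2 component model
f2 = f_mix(x, [1.0, 2.0], [0.0, 1.0], [1.0, 0.5])
# duplicate the second component, splitting its alpha in the ratio lam : (1 - lam)
f3 = f_mix(x, [1.0, lam * 2.0, (1 - lam) * 2.0], [0.0, 1.0, 1.0], [1.0, 0.5, 0.5])
assert abs(f2 - f3) < 1e-12
```

The same identity holds for every λ, which is exactly why the embedded parameters form a line rather than a point.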
LDA includes mixture models, eq.(1), as the special case of M = 1. It is straightforward from (S-1) that, given a parameter θ^{(H−1)} = (γ^{(H−1)}; η^{(H−1)}) of the model with (H−1) components and a scalar λ, the parameter θ_λ ∈ Θ_H defined by

α_j = γ_j, β_j = η_j (1 ≤ j ≤ H−2), α_{H−1} = λγ_{H−1}, α_H = (1−λ)γ_{H−1}, β_{H−1} = β_H = η_{H−1} (3)

gives the same function as f_{H−1}(x | θ^{(H−1)}). In mixture models/LDA, this corresponds to duplication of the (H−1)-th component with partitioning of the mixing/Dirichlet parameter in the ratio λ : (1−λ). Since λ is arbitrary, a point in the smaller model is embedded into the larger model as a line in the parameter space Θ_H. This implies that the parameter realizing f_{H−1}(x | θ^{(H−1)}) lacks identifiability in Θ_H. Such singular structure of a model causes various interesting phenomena in estimation, learning, and generalization ([3]). 2.2 Critical Lines – Embedding of a Critical Point Given a sample {X^{(1)}, . . . , X^{(N)}}, we define an objective function for learning by

L_H(θ^{(H)}) = Σ_{n=1}^{N} Ω_n(f_H(X^{(n)} | θ^{(H)})), (4)

where the Ω_n(f) are differentiable functions, which may depend on n. The objective of learning is to maximize L_H. If Ω_n(f) = log f for all n, maximization of L_H(θ^{(H)}) is equal to maximum likelihood estimation. Suppose θ^{(H−1)}_* = (γ*_1, . . . , γ*_{H−1}; η*_1, . . . , η*_{H−1}) is a critical point of L_{H−1}(θ^{(H−1)}), that is, ∂L_{H−1}/∂θ^{(H−1)} (θ^{(H−1)}_*) = 0. Embedding of this point into Θ_H gives a critical line.

¹The results do not require that p(x | β) is a density function. Thus, they can easily be extended to function fitting in regression, which gives the results on multilayer neural networks in [1].

Theorem 1 (Critical Line). Suppose that a model satisfies (S-1) and (S-2). Let θ^{(H−1)}_* be a critical point of L_{H−1} with γ*_{H−1} ≠ 0, and θ_λ be the parameter given by eq.(3) for θ^{(H−1)}_*. Then, θ_λ is a critical point of L_H(θ^{(H)}) for all λ. Proof. Although this is essentially the same as Theorem 1 in [1], the following proof gives better intuition.
Let (s, t; ζ, ξ) be a reparametrization of (α_{H−1}, α_H; β_{H−1}, β_H), defined by

s = α_{H−1} + α_H, t = α_{H−1} − α_H, β_{H−1} = ζ + α_H ξ, β_H = ζ − α_{H−1} ξ. (5)

This is a one-to-one correspondence if α_{H−1} + α_H ≠ 0. Note that ξ = 0 is equivalent to the condition β_{H−1} = β_H. Let ω = (α^{(H−2)}, s, t; β^{(H−2)}, ζ, ξ) be the new coordinate, ℓ_H(ω) be the objective function eq.(4) under this parametrization, and ω_λ be the parameter corresponding to θ_λ. Since we have, by definition, ℓ_H(ω) = L_H(α^{(H−2)}, (s+t)/2, (s−t)/2; β^{(H−2)}, ζ + ((s−t)/2)ξ, ζ − ((s+t)/2)ξ), the condition (S-1) means

ℓ_H(α^{(H−2)}, s, t; β^{(H−2)}, ζ, 0) = L_{H−1}(α^{(H−2)}, s; β^{(H−2)}, ζ). (6)

Then, it is clear that the first derivatives of ℓ_H at ω_λ with respect to α^{(H−2)}, s, β^{(H−2)}, and ζ are equal to those of L_{H−1}(θ^{(H−1)}) at θ^{(H−1)}_*, and they are zero. The derivative ∂ℓ_H(ω_λ)/∂t vanishes from eq.(6), and ∂ℓ_H(ω_λ)/∂ξ = 0 from the following Lemma 2.

Lemma 2. Let H be the hyperplane given by {ω | ξ = 0}. Then, for all ω_o ∈ H, we have

∂f_H/∂ξ (x | ω_o) = 0. (7)

Proof. Straightforward from the assumption (S-2) and ∂/∂ξ = α_H ∂/∂β_{H−1} − α_{H−1} ∂/∂β_H.

Given that a maximum of L_H is larger than that of L_{H−1}, Theorem 1 implies that the function L_H always has critical points which are not the global maximum. Those points lie on lines in the parameter space. Further embedding of the critical lines into larger models gives high-dimensional critical planes in the parameter space. This property is very general, and in LDA and mixture models we do not need any assumptions on p(x | β). In these models, by the permutation symmetry of components, there are many choices for embedding, which induces many critical lines and planes for L_H. 2.3 Embedding of a Maximum Point in LDA and Mixture Models The next question is whether or not the critical lines from a maximum of L_{H−1} give maxima of L_H. The answer requires information on the second derivatives, and depends on the model. We show a general result on LDA, and that on mixture models as its corollary. Theorem 3.
Suppose that the model is LDA, defined by eq.(2). Let θ^{(H−1)}_* be an isolated maximum point of L_{H−1}, and θ_λ be its embedding given by eq.(3). Define a symmetric matrix R of size dim β by

R = Σ_{n=1}^{N} Ω′_n(f_{H−1}(X^{(n)} | θ^{(H−1)}_*)) { Σ_{μ=1}^{M} I^{(n)}_μ ∂²p(X^{(n)}_μ | η*_{H−1})/∂β∂β + (1 / (Σ_{j=1}^{H−1} γ*_j + 1)) Σ_{μ=1}^{M} Σ_{τ≠μ} J^{(n)}_{μ,τ} (∂p(X^{(n)}_μ | η*_{H−1})/∂β)(∂p(X^{(n)}_τ | η*_{H−1})/∂β) },

where Ω′(f) denotes the derivative of Ω(f) w.r.t. f, and

I^{(n)}_μ = ∫_{Δ_{H−2}} D_{H−1}(u | γ*_1, . . . , γ*_{H−2}, γ*_{H−1} + 1) Π_{ν≠μ} ( Σ_{j=1}^{H−1} u_j p(X^{(n)}_ν | β_j) ) du^{(H−1)},

J^{(n)}_{μ,τ} = ∫_{Δ_{H−2}} D_{H−1}(u | γ*_1, . . . , γ*_{H−2}, γ*_{H−1} + 2) Π_{ν≠μ,τ} ( Σ_{j=1}^{H−1} u_j p(X^{(n)}_ν | β_j) ) du^{(H−1)}.

Then, we have (i) if R is negative definite, the parameter θ_λ is a maximum of L_H for all λ ∈ (0, 1); (ii) if R has a positive eigenvalue, the parameter θ_λ is a saddle point for all λ ∈ (0, 1). Remark: The conditions on R depend only on the parameter θ^{(H−1)}_*.

Proof. We use the parametrization ω defined by eq.(5). For each t, let H_t be the hyperplane with t fixed, and L̃_{H,t} be the function L_H restricted to H_t. The hyperplane H_t is a slice transversal to the critical line, along which L_H has the same value. Therefore, if the Hessian matrix of L̃_{H,t} on H_t is negative definite at the intersection ω_λ (λ = (t + 1)/2), the point is a maximum of L_H, and if the Hessian has a positive eigenvalue, ω_λ is a saddle point. Since in the ω coordinate we have L̃_{H,t}(α^{(H−1)}, s; β^{(H−1)}, ζ, 0) = L_{H−1}(α^{(H−1)}, s; β^{(H−1)}, ζ), the Hessian of L̃_{H,t} at ω_λ is given by

Hess L̃_{H,t}(ω_λ) = [ Hess L_{H−1}(θ^{(H−1)}_*) , O ; O , ∂²L̃_{H,t}(ω_λ)/∂ξ∂ξ ]. (8)

The off-diagonal blocks are zero, because we have ∂²L̃_{H,t}(ω_λ)/∂ξ∂ω_a = 0 for ω_a ≠ ξ from Lemma 2. By assumption, Hess L_{H−1}(θ^{(H−1)}_*) is negative definite. Noting that the terms including ∂f_H(X^{(n)}; θ_λ)/∂ξ vanish from Lemma 2, it is easy to obtain ∂²L̃_{H,t}(ω_λ)/∂ξ∂ξ = λ(1 − λ)(γ*_{H−1})³/(Σ_{j=1}^{H−1} γ*_j) × R by using Lemma 6 and the definition of ξ. By setting M = 1 in the LDA model, we obtain the sufficient conditions for mixture models. Corollary 4.
For a mixture model, the same assertions as in Theorem 3 hold for

R̃ = Σ_{n=1}^{N} Ω′_n(f_{H−1}(X^{(n)} | θ^{(H−1)}_*)) ∂²p(X^{(n)} | η*_{H−1})/∂β∂β. (9)

Proof. For M = 1, J^{(n)}_{μ,τ} = 0 and I^{(n)} = γ*_{H−1}/Σ_{j=1}^{H−1} γ*_j. The assertion is obvious.

2.4 Critical Lines in Various Models We further investigate the critical lines for specific models. Hereafter, we consider maximum likelihood estimation, setting Ω_n(f) = log f for all n. Gaussian Mixture, Mixture of Factor Analyzers, and Mixture of PCA. Assume that each component is the D-dimensional Gaussian density with mean μ and variance-covariance matrix V as parameters, denoted by φ(x; μ, V). The matrix R̃ in eq.(9) has the form R̃ = [ S_2 , S_3 ; S_3^T , S_4 ], where S_2, S_3, and S_4 correspond to the second derivatives with respect to (μ, μ), (μ, V), and (V, V), respectively. It is well known that the second derivative ∂²φ/∂μ∂μ of a Gaussian density is equal to the first derivative ∂φ/∂V. Then, S_2 is equal to zero by the condition of a critical point. If the data is randomly generated, S_3 and S_4 are of full rank almost surely. This type of matrix necessarily has a positive eigenvalue. It is not difficult to extend this discussion to models with scalar or diagonal variance-covariance matrices as variable parameters. Similar arguments hold for the mixture of factor analyzers (MFA, [4]) and the mixture of probabilistic PCA (MPCA, [5]). In factor analyzers or probabilistic PCA, the variance-covariance matrix is restricted to the form V = FF^T + S, where F is a factor loading of rank k and S is a diagonal or scalar matrix. Because the first derivative of φ(x; μ, FF^T + S) with respect to F is (∂φ(x; μ, FF^T + S)/∂V) F, the block in R̃ corresponding to the second derivatives in μ is not of full rank. In a similar manner to Gaussian mixtures, R̃ has a positive eigenvalue. In summary, we have the following Theorem 5. Suppose that the model is a Gaussian mixture, MFA, or MPCA. If R̃ is of full rank, every point θ_λ on the critical line is a saddle point of L_H.
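The critical-line statement of Theorem 1 can be checked numerically in the simplest setting: embed the one-component Gaussian MLE into a two-component mixture via eq. (3) and confirm by finite differences that all partial derivatives of the log-likelihood vanish there. This is only a sketch; the synthetic data and the log-variance parametrization are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=50)

def loglik(params):
    """2-component Gaussian mixture log-likelihood on X.
    params = [alpha1, alpha2, mu1, mu2, logv1, logv2]; c_j = alpha_j / sum(alpha)."""
    a = np.array(params[:2]); c = a / a.sum()
    mu = np.array(params[2:4]); v = np.exp(np.array(params[4:6]))
    dens = sum(c[j] * np.exp(-(X - mu[j])**2 / (2 * v[j])) / np.sqrt(2 * np.pi * v[j])
               for j in range(2))
    return np.log(dens).sum()

# MLE of the 1-component model (a maximum, hence a critical point, of L_{H-1})
m, v = X.mean(), X.var()
lam = 0.3
theta = [lam, 1 - lam, m, m, np.log(v), np.log(v)]   # embedding, eq. (3)

# central finite differences: every partial derivative should vanish
eps = 1e-5
grad = []
for i in range(6):
    tp, tm = list(theta), list(theta)
    tp[i] += eps; tm[i] -= eps
    grad.append((loglik(tp) - loglik(tm)) / (2 * eps))
assert max(abs(g) for g in grad) < 1e-4
```

Theorem 5 further predicts that, for Gaussian mixtures, such embedded points are saddles (when R̃ is of full rank), which is what makes the component split of Section 3 an ascending direction.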
This theorem means that if we have the maximum likelihood estimator for H − 1 components, we can find an ascending direction of the likelihood by splitting a component and modifying its mean and variance-covariance matrix in the direction of the positive eigenvector. This leads to a component-splitting method, which will be shown in Section 3.1. Latent Dirichlet Allocation. We consider LDA with multinomial components. Using the D-dimensional random vector x = (x_a) ∈ {(1, 0, . . . , 0)^T, . . . , (0, . . . , 0, 1)^T}, which indicates a chosen element, the multinomial distribution over D elements is expressed as an exponential family by

p(x | β) = Π_{a=1}^{D} (p_a)^{x_a} = exp{ Σ_{a=1}^{D−1} β_a x_a − log(1 + Σ_{a=1}^{D−1} e^{β_a}) },

where p_a is the expectation of x_a, and β ∈ R^{D−1} is the natural parameter given by β_a = log(p_a/p_D). It is easy to obtain

R = Σ_{n=1}^{N} Ω′(f_{H−1}(X^{(n)} | θ^{(H−1)}_*)) Σ_{μ=1}^{M} Σ_{τ≠μ} J^{(n)}_{μ,τ} p(X^{(n)}_μ | η*_{H−1}) p(X^{(n)}_τ | η*_{H−1}) (X̃^{(n)}_μ − p*_{(H−1)})(X̃^{(n)}_τ − p*_{(H−1)})^T, (10)

where X̃^{(n)}_ν is the truncated (D−1)-dimensional vector, and p*_{(H−1)} ∈ (0, 1)^{D−1} is the expectation parameter for the (H−1)-th component of θ^{(H−1)}_*. In general, the J^{(n)}_{μ,τ} are intractable in large problems. We explain a simple case of H = 2 and M = D. Let p̂ be the frequency vector of the D elements, which is the maximum likelihood estimator for the single multinomial model. In this case, we have J^{(n)}_{μ,τ} = 1 and

R = Σ_{n=1}^{N} [ Σ_{μ,τ=1}^{M} (X̃^{(n)}_μ − p̂)(X̃^{(n)}_τ − p̂)^T − Σ_{μ=1}^{M} (X̃^{(n)}_μ − p̂)(X̃^{(n)}_μ − p̂)^T ].

First, suppose we have a data set with X^{(n)}_ν = e_ν for all n and 1 ≤ ν ≤ D = M, where e_j is the D-dimensional vector with the j-th component 1 and the others zero. Then, we have p̂ = (1/D, . . . , 1/D) and Σ_{μ=1}^{D} (X̃^{(n)}_μ − p̂) = 0, which means R < 0. The critical line gives maxima for LDA with H = 2. Next, suppose the data consists of D groups, and every data point in the j-th group is given by X^{(n)}_ν = e_j. While we again have p̂ = (1/D, . . . , 1/D), the matrix R is Σ_{j=1}^{D} (N/D) × D(D−1)(e_j − p̂)(e_j − p̂)^T > 0.
Thus, all the points on the critical lines are saddle points. These examples explain two extreme cases; in the former we have no advantage in using two components because all the data X^{(n)} are the same, while in the latter the multiple components fit better to the variety of the X^{(n)}.

3 Component Splitting Method in Mixture of Gaussian Components 3.1 EM with Component Splitting It is well known that the EM algorithm suffers from strong dependency on initialization. In addition, because the likelihood of a mixture of Gaussian components is not upper bounded for small variances, we should use an optimization technique to give an appropriate maximum. Sequential splitting of components can give a solution to these problems. From Theorem 5, a stable and effective way of splitting a Gaussian component is derived to increase the likelihood.

Algorithm 1: EM with component splitting for Gaussian mixture
1. Initialization: calculate the sample mean μ_1 and variance-covariance matrix V_1.
2. H := 1.
3. For all 1 ≤ h ≤ H, diagonalize V_h* as V_h* = U_h Λ_h U_h^T, and calculate R̃_h according to eq.(12) in the Appendix.
4. For 1 ≤ h ≤ H, calculate the eigenvector (r_h, W_h) of R̃_h corresponding to the largest eigenvalue.
5. For 1 ≤ h ≤ H, optimize β by line search to maximize the likelihood for
   c_h = (1/2)c_h*, μ_h = μ_h* − βr_h, V_h = U_h e^{−βW_h} Λ_h e^{−βW_h} U_h^T,
   c_{H+1} = (1/2)c_h*, μ_{H+1} = μ_h* + βr_h, V_{H+1} = U_h e^{βW_h} Λ_h e^{βW_h} U_h^T. (11)
   Let β^o_h be the optimizer and L_h the resulting likelihood.
6. For h† := arg max_h L_h, split the h†-th component according to eq.(11) with β^o_{h†}.
7. Optimize the parameter θ^{(H+1)} using the EM algorithm. Let θ^{(H+1)}_* be the result.
8. If H + 1 = MAX_H, then END. Otherwise, H := H + 1 and go to 3.

[Figure 1: Spiral data. (a) Data, (b) Success, (c) Failure. In (b) and (c), the lines represent the factor loading vectors F_h and −F_h at the mean values, and the radius of a sphere is the scalar part of the variance.]
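Step 5 of Algorithm 1, the split of eq. (11), can be sketched as follows. The direction (r_h, W_h) is taken as a given input here (in the algorithm it is the top eigenvector of R̃_h), and the matrix exponential guarantees that the two new covariance matrices stay positive definite:

```python
import numpy as np

def split_component(c, mu, V, r, W, beta):
    """Split one Gaussian component (weight c, mean mu, covariance V) along
    the direction (r, W) with step size beta, following eq. (11):
        c_h = c/2,  mu +/- beta*r,  V -> U e^{+/-beta W} Lambda e^{+/-beta W} U^T.
    Returns the two resulting (weight, mean, covariance) triples."""
    lam, U = np.linalg.eigh(V)                 # diagonalization V = U Lambda U^T
    Lam = np.diag(lam)

    def expm_sym(S):
        # matrix exponential of a symmetric matrix via its eigendecomposition
        w, Q = np.linalg.eigh(S)
        return (Q * np.exp(w)) @ Q.T

    Em, Ep = expm_sym(-beta * W), expm_sym(beta * W)
    V1 = U @ Em @ Lam @ Em @ U.T               # stays symmetric positive definite
    V2 = U @ Ep @ Lam @ Ep @ U.T
    return [(c / 2.0, mu - beta * r, V1), (c / 2.0, mu + beta * r, V2)]
```

In the full algorithm, beta would then be chosen by line search to maximize the likelihood, and the best-scoring component would actually be split.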
We propose EM with component splitting, which adds components one by one after maximizing the likelihood at each size. Ueda et al. ([6]) propose Split and Merge EM, in which components are repeatedly split and merged in triplets, keeping the total number fixed. While their method works well, it requires a large number of EM trials for candidate triplets, and the splitting method is heuristic. Our splitting method is well grounded in theory, and EM with splitting gives a series of estimators for all model sizes in a single run. Algorithm 1 is the procedure of learning. We show only the case of a mixture of Gaussians. The exact algorithm for the mixture of PCA/FA will be shown in a forthcoming paper. It is noteworthy that in splitting a component, not only the means but also the variance-covariance matrices must be modified. The simple additive rule V_new = V_old + ΔV tends to fail, because it may make the matrix non-positive definite. To solve this problem, we use a Lie algebra expression to add a vector in an ascending direction. Let V = UΛU^T be the diagonalization of V, and consider V(W) = U e^W Λ e^W U^T for a symmetric matrix W. This gives a local coordinate of the positive definite matrices around V = V(0). Modifying V through W gives a stable way of updating variance-covariance matrices. 3.2 Experimental results We show through experiments how the proposed EM with component splitting effectively maximizes the likelihood. In the first experiment, the mixture of PCA with 8 components of rank 1 is employed to fit 150 synthesized data points generated along a piecewise linear spiral (Fig. 1). Table 1-(a) shows the results over 30 trials with different random seeds. We use the on-line EM algorithm ([7]), presenting data one by one in a random order. EM with random initialization reaches the best state (Fig. 1-(b)) only 6 times, while EM with component splitting achieves it 26 times. Fig. 1-(c) shows an example of failure.
[Figure 2: the 160×160 image "Lenna".] The next experiment is an image compression problem, in which the 160×160-pixel image "Lenna" (Fig. 2) is used. The image is partitioned into 20×20 blocks of 8×8 pixels, which are regarded as 400 data points in R^64. We use the mixture of PCA with 10 components of rank 4, and obtain a compressed image by X̂ = F_h (F_h^T F_h)^{−1} F_h^T X, where X is a 64-dimensional block and h indicates the component with the shortest Euclidean distance ‖X − μ_h‖. Table 1-(b) shows the residual square error (RSE), Σ_{j=1}^{400} ‖X_j − X̂_j‖², which measures the quality of the compression. In both experiments, we can see the better optimization performance of the proposed algorithm.

Table 1: Experimental results. EM is the conventional EM with random initialization, and EMCS is the proposed EM with component splitting.

(a) Likelihood for spiral data (30 runs)
         EM                 EMCS
  Best   -534.9 (6 times)   -534.9 (26 times)
  Worst  -648.1             -587.9
  Av.    -583.9             -541.3

(b) RSE for "Lenna" (10 runs), in units of 10^4
         EM      EMCS
  Best   5.94    5.38
  Worst  6.40    6.12
  Av.    6.15    5.78

4 Discussions In EM with component splitting, we obtain the estimators up to the specified number of components. We need a model selection technique to choose the best one, which is another important problem. We do not discuss it in this paper, because our method can be combined with many techniques that select a model after obtaining the estimators. However, we should note that some famous methods such as AIC and MDL, which are based on statistical asymptotic theory, cannot be applied to mixture models because of the unidentifiability of the parameter. Further studies are necessary on model selection for mixture models. Although the computation to calculate the matrix R is not cheap in a mixture of Gaussian components, the full variance-covariance matrices are not always necessary in practical problems. This can save the computation drastically.
Also, methods to reduce the computational cost deserve further investigation. In selecting a component to split, we try line search for all components and choose the one giving the largest likelihood. While this works well in our experiments, the proposed method of component splitting can be combined with other criteria for selecting a component. One of them is to select the component giving the largest eigenvalue of R̃_h. In Gaussian mixture models, this is very natural; the block of the second derivatives w.r.t. V in R̃ is equal to the weighted fourth cumulant, and a component with a large cumulant should be split. However, in mixtures of FA and PCA, this does not necessarily work well, because the decomposition V = FF^T + S does not give a natural parametrization. Although we have discussed only local properties, a method incorporating global information might be preferable. These are left as future work.

Appendix

Lemma 6. Suppose φ_H(u^{(H)}; β^{(H)}) satisfies the assumption (S-1). Define I_H(α^{(H)}; β^{(H)}) = ∫_{Δ_{H−1}} φ(u^{(H)}; β^{(H)}) D_H(u^{(H)} | α^{(H)}) du^{(H)}. Then I_H also satisfies (S-1): I_H(α^{(H)}; β^{(H−2)}, β_{H−1}, β_{H−1}) = I_{H−1}(α^{(H−2)}, α_{H−1} + α_H; β^{(H−1)}). Proof. Direct calculation.

Matrix R̃_h for Gaussian mixture. We omit the index h for simplicity, and use Einstein's convention. Let U = (u_1, . . . , u_D) and Λ = diag(λ_1, . . . , λ_D). For V(W) = U e^W Λ e^W U^T, we have ∂V(O)/∂W_{ab} = (λ_a + (1 − δ_{ab})λ_b)(u_a u_b^T + u_b u_a^T), where δ_{ab} is Kronecker's delta. Let T_{(3)} and T_{(4)} be the weighted third and fourth sample moments, respectively, with weights φ(x^{(n)}; μ*, V*)/f^{(H−1)}(x^{(n)}; θ^{(H−1)}_*). T̃_{(3)} and T̃_{(4)} are defined by T̃^{abc}_{(3)} = V^{ap} V^{bq} V^{cr} T^{(3)}_{pqr} and T̃^{abcd}_{(4)} = V^{ap} V^{bq} V^{cr} V^{ds} T^{(4)}_{pqrs}, respectively, where V^{ap} is the (a, p)-component of V^{−1}.
Direct calculation shows that the matrix R̃ = [ O , B ; B^T , C ], where the decomposition corresponds to β = (μ, W), is given by

B_{μ_a, W_{bc}} = (λ_b + (1 − δ_{bc})λ_c) u_b^T T̃^{··a}_{(3)} u_c,
C_{W_{ab}, W_{cd}} = (λ_a u_b u_a^T + (1 − δ_{ab}) λ_b u_a u_b^T)_{pq} (λ_c u_d u_c^T + (1 − δ_{cd}) λ_d u_c u_d^T)_{rs} × ( T̃^{pqrs}_{(4)} − (V^{pq}V^{rs} + V^{pr}V^{qs} + V^{ps}V^{qr}) ). (12)

In the above equation, T̃^{··a}_{(3)} is the D × D matrix obtained from T̃^{bca}_{(3)} with a fixed. References [1] K. Fukumizu and S. Amari. Local minima and plateaus in hierarchical structures of multilayer perceptrons. Neural Networks, 13(3):317–327, 2000. [2] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Advances in Neural Information Processing Systems, 14, 2002. MIT Press. [3] S. Amari, H. Park, and T. Ozeki. Geometrical singularities in the neuromanifold of multilayer perceptrons. Advances in Neural Information Processing Systems, 14, 2002. MIT Press. [4] Z. Ghahramani and G. Hinton. The EM algorithm for mixtures of factor analyzers. Technical Report CRG-TR-96-1, University of Toronto, Department of Computer Science, 1997. [5] M. Tipping and C. Bishop. Mixtures of probabilistic principal component analysers. Neural Computation, 11:443–482, 1999. [6] N. Ueda, R. Nakano, Z. Ghahramani, and G. Hinton. SMEM algorithm for mixture models. Neural Computation, 12(9):2109–2128, 2000. [7] M. Sato and S. Ishii. On-line EM algorithm for the normalized Gaussian network. Neural Computation, 12(2):2209–2225, 2000.
2002
Manifold Parzen Windows Pascal Vincent and Yoshua Bengio Dept. IRO, Université de Montréal C.P. 6128, Montreal, Qc, H3C 3J7, Canada {vincentp,bengioy}@iro.umontreal.ca http://www.iro.umontreal.ca/ vincentp Abstract The similarity between objects is a fundamental element of many learning algorithms. Most non-parametric methods take this similarity to be fixed, but much recent work has shown the advantages of learning it, in particular to exploit the local invariances in the data or to capture the possibly non-linear manifold on which most of the data lies. We propose a new non-parametric kernel density estimation method which captures the local structure of an underlying manifold through the leading eigenvectors of regularized local covariance matrices. Experiments in density estimation show significant improvements with respect to Parzen density estimators. The density estimators can also be used within Bayes classifiers, yielding classification rates similar to SVMs and much superior to the Parzen classifier. 1 Introduction In [1], while attempting to better understand and bridge the gap between the good performance of the popular Support Vector Machines and the more traditional K-NN (K Nearest Neighbors) for classification problems, we had suggested a modified Nearest-Neighbor algorithm. This algorithm, which was able to slightly outperform SVMs on several real-world problems, was based on the geometric intuition that the classes actually lived “close to” a lower dimensional non-linear manifold in the high dimensional input space. When this was not properly taken into account, as with traditional K-NN, the sparsity of the data points due to having a finite number of training samples would cause “holes” or “zig-zag” artifacts in the resulting decision surface, as illustrated in Figure 1. Figure 1: A local view of the decision surface, with “holes”, produced by the Nearest Neighbor when the data have a local structure (horizontal direction).
The present work is based on the same underlying geometric intuition, but applied to the well known Parzen windows [2] non-parametric method for density estimation, using Gaussian kernels. Most of the time, Parzen Windows estimates are built using a “spherical Gaussian” with a single scalar variance (or width) parameter σ². It is also possible to use a “diagonal Gaussian”, i.e. with a diagonal covariance matrix, or even a “full Gaussian” with a full covariance matrix, usually set to be proportional to the global empirical covariance of the training data. However these are equivalent to using a spherical Gaussian on preprocessed, normalized data (i.e. normalized by subtracting the empirical sample mean, and multiplying by the inverse sample covariance). Whatever the shape of the kernel, if, as is customary, a fixed shape is used, merely centered on every training point, the shape can only compensate for the global structure (such as global covariance) of the data. Now if the true density that we want to model is indeed “close to” a non-linear lower dimensional manifold embedded in the higher dimensional input space, in the sense that most of the probability density is concentrated around such a manifold (with a small noise component away from it), then using Parzen Windows with a spherical or fixed-shape Gaussian is probably not the most appropriate method, for the following reason. While the true density mass, in the vicinity of a particular training point x_i, will be mostly concentrated in a few local directions along the manifold, a spherical Gaussian centered on that point will spread its density mass equally along all input space directions, thus giving too much probability to irrelevant regions of space and too little along the manifold. This is likely to result in an excessive “bumpiness” of the thus modeled density, much like the “holes” and “zig-zag” artifacts observed in K-NN (see Fig. 1 and Fig. 2).
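For reference, the standard spherical-Gaussian Parzen windows estimate discussed above can be written in a few lines; σ² is the single width hyperparameter and its value would be chosen by the user:

```python
import numpy as np

def parzen_spherical(X, sigma2):
    """Classic Parzen windows density estimate: one spherical Gaussian of
    variance sigma2 centered on each row of the (n, d) training matrix X."""
    n, d = X.shape

    def p_hat(x):
        sq = np.sum((X - x)**2, axis=1)          # squared distances to all centers
        return np.mean(np.exp(-sq / (2.0 * sigma2))) / (2.0 * np.pi * sigma2)**(d / 2.0)

    return p_hat
```

Every kernel here has the same fixed shape, which is exactly the limitation the manifold variant below is designed to remove.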
If the true density in the vicinity of x_i is concentrated along a lower-dimensional manifold, then it should be possible to infer the local direction of that manifold from the neighborhood of x_i, and then anchor on x_i a Gaussian "pancake" parameterized in such a way that it spreads mostly along the directions of the manifold, and is almost flat along the other directions. The resulting model is a mixture of Gaussian "pancakes", similar to [3], mixtures of probabilistic PCAs [4] or mixtures of factor analyzers [5, 6], in the same way that the most traditional Parzen windows is a mixture of spherical Gaussians. But it remains a memory-based method, with a Gaussian kernel centered on each training point, yet with a differently shaped kernel for each point.

2 The Manifold Parzen Windows algorithm

In the following we formally define and justify in detail the proposed algorithm. Let X be an n-dimensional random variable with values in R^n, with an unknown probability density function p(·). Our training set contains l samples of that random variable, collected in an l × n matrix X whose row i is the i-th sample x_i. Our goal is to estimate the density p. Our estimator p̂(x) has the form of a mixture of Gaussians, but unlike the Parzen density estimator, its covariances C_i are not necessarily spherical and not necessarily identical everywhere:

  p̂(x) = (1/l) Σ_{i=1}^{l} N_{x_i, C_i}(x)    (1)

where N_{μ,C} is the multivariate Gaussian density with mean vector μ and covariance matrix C:

  N_{μ,C}(x) = (1 / ((2π)^{n/2} √|C|)) e^{−(1/2)(x−μ)^T C^{−1} (x−μ)}    (2)

where |C| is the determinant of C. How should we select the individual covariances C_i? From the above discussion, we expect that if there is an underlying "non-linear principal manifold", those Gaussians would be "pancakes" aligned with the plane locally tangent to this underlying manifold. The only available information (in the absence of further prior knowledge) about this tangent plane can be gathered from the training samples in the neighborhood of x_i.
In other words, we are interested in computing the principal directions of the samples in the neighborhood of x_i. For generality, we can define a soft neighborhood of x_i with a neighborhood kernel K(·; x_i) that will associate an influence weight to any point in the neighborhood of x_i. We can then compute the weighted covariance matrix

  C_{K,i} = Σ_{j=1}^{l} K(x_j; x_i)(x_j − x_i)(x_j − x_i)^T / Σ_{j=1}^{l} K(x_j; x_i)    (3)

where (x_j − x_i)(x_j − x_i)^T denotes the outer product. K(·; x_i) could be a spherical Gaussian centered on x_i for instance, or any other positive definite kernel, possibly incorporating prior knowledge as to what constitutes a reasonable neighborhood for point x_i. Notice that if K(·; x_i) is a constant (uniform kernel), C_{K,i} is the global training sample covariance. As an important special case, we can define a hard k-neighborhood for training sample x_i by assigning a weight of 1 to any point no further than the k-th nearest neighbor of x_i among the training set, according to some metric such as the Euclidean distance in input space, and assigning a weight of 0 to points further than the k-th neighbor. In that case, C_{K,i} is the unweighted covariance of the k nearest neighbors of x_i. Notice what is happening here: we start with a possibly rough prior notion of neighborhood, such as one based on the ordinary Euclidean distance in input space, and use this to compute a local covariance matrix, which implicitly defines a refined local notion of neighborhood, taking into account the local directions observed in the training samples. Now that we have a way of computing a local covariance matrix for each training point, we might be tempted to use this directly in equations 2 and 1. But a number of problems must first be addressed: Equation 2 requires the inverse covariance matrix, whereas C_{K,i} is likely to be ill-conditioned. This situation will definitely arise if we use a hard k-neighborhood with k < n.
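The hard k-neighborhood special case of this local covariance (uniform weights on the k nearest neighbors, centered on the training point itself) is straightforward to compute. A brief sketch under those assumptions, with names of our choosing:

```python
import numpy as np

def knn_local_covariance(X, i, k):
    """Covariance of the k nearest neighbors of X[i], centered on X[i] itself
    (hard k-neighborhood, uniform kernel weights)."""
    d2 = np.sum((X - X[i]) ** 2, axis=1)   # squared Euclidean distances
    d2[i] = np.inf                         # exclude the point itself
    nbr = np.argsort(d2)[:k]               # indices of the k nearest neighbors
    diff = X[nbr] - X[i]                   # (k, n), differences to x_i
    return diff.T @ diff / k               # outer-product average, rank <= k
```

As the text notes, the returned matrix has rank at most k, so with k < n it is singular and cannot be inverted without the regularization discussed next.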
In this case we get a Gaussian that is totally flat outside of the affine subspace spanned by x_i and its k neighbors, and it does not constitute a proper density in R^n. A common way to deal with this problem is to add a small isotropic (spherical) Gaussian noise of variance σ² in all directions, which is done by simply adding σ² to the diagonal of the covariance matrix: C_i = C_{K,i} + σ²I. Even if we regularize C_{K,i} in this way, when we deal with high-dimensional spaces it would be prohibitive in computation time and storage to keep and use the full inverse covariance matrix as expressed in 2. This would in effect multiply both the time and storage requirement of the already expensive ordinary Parzen windows by the input dimension n. So instead, we use a different, more compact representation of the inverse Gaussian, by storing only the eigenvectors associated with the first few largest eigenvalues of C_i, as described below. The eigen-decomposition of a covariance matrix C can be expressed as C = VΛV^T, where the columns of V are the orthonormal eigenvectors and Λ is a diagonal matrix with the eigenvalues λ_1 ≥ … ≥ λ_n, which we will suppose sorted in decreasing order, without loss of generality. The first d eigenvectors with largest eigenvalues correspond to the principal directions of the local neighborhood, i.e. the high-variance local directions of the supposed underlying d-dimensional manifold (but the true underlying dimension is unknown and may actually vary across space). The last few eigenvalues and eigenvectors are but noise directions with a small variance. So we may, without too much risk, force those last few components to the same low noise level σ². We have done this by zeroing the last n − d eigenvalues (by considering only the d leading eigenvalues) and then adding σ² to all eigenvalues. This allows us to store only the first d eigenvectors, and to later compute N_{μ,C}(x) in time O(d·n) instead of O(n²).
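The eigenvalue trick described above (zero the trailing eigenvalues, add σ² everywhere) is what makes the inverse cheap: C⁻¹ becomes (1/σ²)I plus a rank-d correction along the d leading eigenvectors, so the quadratic form needs only d projections. A small numerical check of that identity (illustrative only; not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s2 = 5, 2, 0.1                     # dimension, kept directions, noise variance

A = rng.standard_normal((n, n))
C = A @ A.T                              # some positive definite "local covariance"
lam, V = np.linalg.eigh(C)               # eigh returns eigenvalues in ascending order
lam, V = lam[::-1], V[:, ::-1]           # re-sort in decreasing order

# regularize: zero the last n-d eigenvalues, then add sigma^2 to all of them
lam_reg = np.concatenate([lam[:d], np.zeros(n - d)]) + s2
C_reg = V @ np.diag(lam_reg) @ V.T

x = rng.standard_normal(n)
q_full = x @ np.linalg.inv(C_reg) @ x    # O(n^2) quadratic form with the full inverse
proj = V[:, :d].T @ x                    # only d projections needed
q_compact = x @ x / s2 + np.sum((1.0 / lam_reg[:d] - 1.0 / s2) * proj ** 2)
assert np.isclose(q_full, q_compact)     # the two quadratic forms agree
```

The compact form is what the LocalGaussian algorithm below evaluates, at O(d·n) cost per kernel.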
Thus both the storage requirement and the computational cost when estimating the density at a test point are only about d times that of ordinary Parzen. It can easily be shown that such an approximation of the covariance matrix yields the following computation of N_{μ,C}:

Algorithm LocalGaussian(x, x_i, λ_i, V_i, d, σ²)
Input: test vector x ∈ R^n, training vector x_i ∈ R^n, d eigenvalues λ_{i,j}, d eigenvectors in the columns of the n × d matrix V_i, intrinsic dimension d, and the regularization hyper-parameter σ².
(1) r ← n log(2π) + (n − d) log σ² + Σ_{j=1}^{d} log λ_{i,j}
(2) q ← (1/σ²) ||x − x_i||² + Σ_{j=1}^{d} (1/λ_{i,j} − 1/σ²) (V_{i,j} · (x − x_i))²
Output: Gaussian density e^{−(1/2)(r+q)}

In the case of the hard k-neighborhood, the training algorithm pre-computes the local principal directions of the k nearest neighbors of each training point x_i (in practice we compute them with an SVD rather than an eigen-decomposition of the covariance matrix, see below). Note that with d = 0, we trivially obtain the traditional Parzen windows estimator.

Algorithm MParzen::Train(X, d, k, σ²)
Input: training set matrix X with l rows x_i, chosen number of principal directions d, chosen number of neighbors k ≥ d, and regularization hyper-parameter σ².
(1) For i ∈ {1, 2, …, l}
(2)   Collect the k nearest neighbors of x_i, and put the differences x_j − x_i in the rows of matrix M.
(3)   Perform a partial singular value decomposition of M, to obtain the leading d singular values s_j (j ∈ {1, …, d}) and singular column vectors V_{i,j} of M.
(4)   For j ∈ {1, …, d}, let λ_{i,j} ← σ² + s_j²/k
Output: the model M = (X, λ, V, d, σ²), where V is an l × n × d tensor that collects all the eigenvectors and λ is an l × d matrix with all the eigenvalues.

Algorithm MParzen::Test(x, M)
Input: test point x and model M = (X, λ, V, d, σ²).
(1) s ← 0
(2) For i ∈ {1, 2, …, l}
(3)   s ← s + LocalGaussian(x, x_i, λ_i, V_i, d, σ²)
Output: manifold Parzen estimator p̂(x) = s/l.
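The three algorithms above translate almost line for line into NumPy. The following is an unoptimized sketch under the hard-k-neighborhood setting (function and variable names are ours, not the paper's):

```python
import numpy as np

def mparzen_train(X, d, k, s2):
    """MParzen::Train sketch: for each point, the d leading singular directions
    of its k-nearest-neighbor differences, with eigenvalues s^2/k + sigma^2."""
    l, n = X.shape
    V = np.zeros((l, n, d))
    lam = np.zeros((l, d))
    for i in range(l):
        dist = np.sum((X - X[i]) ** 2, axis=1)
        dist[i] = np.inf
        nbr = np.argsort(dist)[:k]
        M = X[nbr] - X[i]                      # rows are x_j - x_i
        _, s, Vt = np.linalg.svd(M, full_matrices=False)
        V[i] = Vt[:d].T                        # d leading right singular vectors
        lam[i] = s[:d] ** 2 / k + s2           # regularized local eigenvalues
    return X, lam, V, d, s2

def local_gaussian(x, xi, lam_i, V_i, d, s2):
    """Algorithm LocalGaussian: low-rank + isotropic Gaussian density at x."""
    n = x.shape[0]
    diff = x - xi
    r = n * np.log(2 * np.pi) + (n - d) * np.log(s2) + np.sum(np.log(lam_i))
    proj = V_i.T @ diff                        # only d projections needed
    q = diff @ diff / s2 + np.sum((1.0 / lam_i - 1.0 / s2) * proj ** 2)
    return float(np.exp(-0.5 * (r + q)))

def mparzen_test(x, model):
    """MParzen::Test: average the per-point local Gaussians."""
    X, lam, V, d, s2 = model
    return float(np.mean([local_gaussian(x, X[i], lam[i], V[i], d, s2)
                          for i in range(len(X))]))
```

A production version would use a spatial index for the neighbor search and a truncated (partial) SVD, but the structure is the same.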
3 Related work

As we have already pointed out, Manifold Parzen Windows, like traditional Parzen Windows and so many other density estimation algorithms, results in defining the density as a mixture of Gaussians. What differs is mostly how those Gaussians and their parameters are chosen. The idea of having a parameterization of each Gaussian that orients it along the local principal directions also underlies the already mentioned work on mixtures of Gaussian pancakes [3], mixtures of probabilistic PCAs [4], and mixtures of factor analysers [5, 6]. All these algorithms typically model the density using a relatively small number of Gaussians, whose centers and parameters must be learnt with some iterative optimisation algorithm such as EM (procedures which are known to be sensitive to local minima traps). By contrast our approach is, like the original Parzen windows, heavily memory-based. It avoids the problem of optimizing the centers by assigning a Gaussian to every training point, and uses a simple analytic SVD to compute the local principal directions for each. Another successful memory-based approach that uses local directions and inspired our work is the tangent distance algorithm [7]. While this approach was initially aimed at solving classification tasks with a nearest neighbor paradigm, some work has already been done in developing it into a probabilistic interpretation for mixtures with a few Gaussians, as well as for full-fledged kernel density estimation [8, 9]. The main difference between our approach and the above is that the Manifold Parzen estimator does not require prior knowledge, as it infers the local directions directly from the data, although it should be easy to also incorporate prior knowledge if available. We should also mention similarities between our approach and Locally Linear Embedding and recent related dimensionality reduction methods [10, 11, 12, 13].
There are also links with previous work on locally-defined metrics for nearest neighbors [14, 15, 16, 17]. Lastly, it can also be seen as an extension along the line of traditional variable and adaptive kernel estimators that adapt the kernel width locally (see [18] for a survey).

4 Experimental results

Throughout this whole section, when we mention Parzen windows (sometimes abbreviated Parzen), we mean ordinary Parzen windows using a spherical Gaussian kernel with a single hyper-parameter σ, the width of the Gaussian. When we mention Manifold Parzen windows (sometimes abbreviated MParzen), we used a hard k-neighborhood, so that the hyper-parameters are: the number of neighbors k, the number of retained principal components d, and the additional isotropic Gaussian noise parameter σ². When measuring the quality of a density estimator p̂, we used the average negative log-likelihood

  ANLL = −(1/m) Σ_{t=1}^{m} log p̂(x_t),

with the m examples x_t taken from a test set.

4.1 Experiment on 2D artificial data

A training set of 300 points, a validation set of 300 points and a test set of 10000 points were generated from the following distribution of two-dimensional (x, y) points:

  x = 0.04 t sin(t) + ε_x,   y = 0.04 t cos(t) + ε_y,

where t is uniform in the interval (3, 15) and ε_x, ε_y are drawn from a normal density N(0, 0.01). We trained an ordinary Parzen, as well as MParzen with d = 1 and d = 2, on the training set, tuning the hyper-parameters to achieve best performance on the validation set. Figure 2 shows the training set and gives a good idea of the densities produced by both kinds of algorithms (as the visual representation for MParzen with d = 1 and d = 2 did not appear very different, we show only the case d = 1). The graphic reveals the anticipated "bumpiness" artifacts of ordinary Parzen, and shows that MParzen is indeed able to better concentrate the probability density along the manifold, even when the training data is scarce.
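The ANLL score defined above is simply the mean negative log of the estimated density over a held-out set; lower is better. A one-line helper makes the evaluation explicit (our naming, not the paper's):

```python
import numpy as np

def anll(density_fn, test_points):
    """Average negative log-likelihood of a density estimator on a test set.

    density_fn: callable returning the estimated density p_hat(x) > 0;
    test_points: iterable of test examples.
    """
    return float(-np.mean([np.log(density_fn(x)) for x in test_points]))
```

Any of the estimators in this paper (Parzen or MParzen) can be plugged in as `density_fn` to reproduce the kind of comparison reported in Table 1.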
Quantitative comparative results of the two models are reported in Table 1.

Table 1: Comparative results on the artificial data (standard errors are in parentheses).

Algorithm | Parameters used | ANLL on test set
Parzen | σ tuned on validation | −1.183 (0.016)
MParzen | d = 1; k, σ² tuned on validation | −1.466 (0.009)
MParzen | d = 2; k, σ² tuned on validation | −1.419 (0.009)

Several points are worth noticing:
- Both MParzen models achieve a lower ANLL than ordinary Parzen (even though the underlying manifold really has dimension d = 1), and with more consistency over the test sets (lower standard error).
- The optimal width σ for ordinary Parzen is much larger than the noise parameter of the true generating model (0.01), probably because of the finite sample size.
- The optimal regularization parameter σ² for MParzen with d = 1 (i.e. supposing a one-dimensional underlying manifold) is very close to the actual noise parameter of the true generating model. This suggests that it was able to capture the underlying structure quite well. Also it is the best of the three models, which is not surprising, since the true model is indeed a one-dimensional manifold with an added isotropic Gaussian noise.
- The optimal additional noise parameter σ² for MParzen with d = 2 (i.e. supposing a two-dimensional underlying manifold) is close to 0, which suggests that the model was able to capture all the noise in the second "principal direction".

Figure 2: Illustration of the density estimated by ordinary Parzen windows (left) and Manifold Parzen windows (right). The two images on the bottom are a zoomed area of the corresponding image at the top. The 300 training points are represented as black dots and the area where the estimated density is above 1.0 is painted in gray.
The excessive "bumpiness" and holes produced by the ordinary Parzen windows model can clearly be seen, whereas the Manifold Parzen density is better aligned with the underlying manifold, allowing it to even successfully "extrapolate" in regions with few data points but high true density.

4.2 Density estimation on OCR data

In order to compare the performance of both algorithms for density estimation on a real-world problem, we estimated the density of one class of the MNIST OCR data set, namely the "2" digit. The available data for this class was divided into 5400 training points, 558 validation points and 1032 test points. Hyper-parameters were tuned on the validation set. The results are summarized in Table 2, using the performance measure introduced above (average negative log-likelihood). Note that the improvement with respect to Parzen windows is extremely large and of course statistically significant.

Table 2: Density estimation of class "2" in the MNIST data set (standard errors in parentheses).

Algorithm | Parameters used | Validation ANLL | Test ANLL
Parzen | σ tuned on validation | −197.27 (4.18) | −197.19 (3.55)
MParzen | d, k, σ² tuned on validation | −696.42 (5.94) | −695.15 (5.21)

4.3 Classification performance

To obtain a probabilistic classifier with a density estimator, we train an estimator p̂_c(x) for each class c, and apply Bayes' rule to obtain

  P(c|x) = p̂_c(x) P(c) / Σ_{c'} p̂_{c'}(x) P(c').

When measuring the quality of a probabilistic classifier P(c|x), we used the average negative conditional log-likelihood

  ANCLL = −(1/m) Σ_{t=1}^{m} log P(c_t|x_t),

with the examples (c_t, x_t) (correct class, input) taken from a test set. This method was applied to both the Parzen and the Manifold Parzen density estimators, which were compared with state-of-the-art Gaussian SVMs on the full USPS data set. The original training set (7291 points) was split into a training set (the first 6291) and a validation set (the last 1000), used to tune hyper-parameters.
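The Bayes-rule construction of a classifier from per-class density estimators is generic: it works with any density estimator, Parzen or MParzen. A minimal sketch, assuming one fitted density callable per class (names are ours):

```python
import numpy as np

def bayes_predict(x, class_densities, priors):
    """Classify x with Bayes' rule: P(c|x) proportional to p_hat_c(x) * P(c).

    class_densities: list of callables, one fitted density estimator per class;
    priors: list of class prior probabilities P(c).
    Returns (predicted class index, posterior vector).
    """
    scores = np.array([p(x) for p in class_densities]) * np.array(priors)
    post = scores / scores.sum()          # normalize to get P(c|x)
    return int(np.argmax(post)), post
```

The ANCLL criterion above is then just the average of −log of the posterior entry for the correct class over the test set.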
The classification errors for all three methods are compared in Table 3, where the hyper-parameters are chosen based on validation classification error. The log-likelihoods are compared in Table 4, where the hyper-parameters are chosen based on validation ANCLL. Hyper-parameters for SVMs are the box constraint C and the Gaussian width σ. MParzen has the lowest classification error and ANCLL of the three algorithms.

Table 3: Classification error obtained on USPS with SVM, Parzen windows and Manifold Parzen windows classifiers (hyper-parameters tuned on the validation set).

Algorithm | Validation error | Test error
SVM | 1.2% | 4.68%
Parzen | 1.8% | 5.08%
MParzen | 0.9% | 4.08%

Table 4: Comparative negative conditional log-likelihood obtained on USPS (hyper-parameters tuned on the validation set).

Algorithm | Validation ANCLL | Test ANCLL
Parzen | 0.1022 | 0.3478
MParzen | 0.0658 | 0.3384

5 Conclusion

The rapid increase in computational power now allows experimenting with sophisticated non-parametric models such as those presented here. They have allowed us to show the usefulness of learning the local structure of the data through a regularized covariance matrix estimated for each data point. By taking advantage of local structure, the new kernel density estimation method outperforms the Parzen windows estimator. Classifiers built from this density estimator yield state-of-the-art knowledge-free performance, which is remarkable for a classifier that is not discriminatively trained. Besides, in some applications, the accurate estimation of probabilities can be crucial, e.g. when the classes are highly imbalanced. Future work should consider alternative methods of estimating the local covariance matrix, for example using a weighted estimator as suggested here, or taking advantage of prior knowledge (e.g. the tangent distance directions). References [1] P. Vincent and Y. Bengio. K-local hyperplane and convex distance nearest neighbor algorithms. In T.G. Dietterich, S. Becker, and Z.
Ghahramani, editors, Advances in Neural Information Processing Systems, volume 14. The MIT Press, 2002. [2] E. Parzen. On the estimation of a probability density function and mode. Annals of Mathematical Statistics, 33:1064–1076, 1962. [3] G.E. Hinton, M. Revow, and P. Dayan. Recognizing handwritten digits using mixtures of linear models. In G. Tesauro, D.S. Touretzky, and T.K. Leen, editors, Advances in Neural Information Processing Systems 7, pages 1015–1022. MIT Press, Cambridge, MA, 1995. [4] M.E. Tipping and C.M. Bishop. Mixtures of probabilistic principal component analysers. Neural Computation, 11(2):443–482, 1999. [5] Z. Ghahramani and G.E. Hinton. The EM algorithm for mixtures of factor analyzers. Technical Report CRG-TR-96-1, Dpt. of Comp. Sci., Univ. of Toronto, 21 1996. [6] Z. Ghahramani and M. J. Beal. Variational inference for Bayesian mixtures of factor analysers. In Advances in Neural Information Processing Systems 12, Cambridge, MA, 2000. MIT Press. [7] P. Y. Simard, Y. A. LeCun, J. S. Denker, and B. Victorri. Transformation invariance in pattern recognition — tangent distance and tangent propagation. Lecture Notes in Computer Science, 1524, 1998. [8] D. Keysers, J. Dahmen, and H. Ney. A probabilistic view on tangent distance. In 22nd Symposium of the German Association for Pattern Recognition, Kiel, Germany, 2000. [9] J. Dahmen, D. Keysers, M. Pitz, and H. Ney. Structured covariance matrices for statistical image object recognition. In 22nd Symposium of the German Association for Pattern Recognition, Kiel, Germany, 2000. [10] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, Dec. 2000. [11] Y. Whye Teh and S. Roweis. Automatic alignment of local representations. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems, volume 15. The MIT Press, 2003. [12] V. de Silva and J.B. Tenenbaum. 
Global versus local approaches to nonlinear dimensionality reduction. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems, volume 15. The MIT Press, 2003. [13] M. Brand. Charting a manifold. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems, volume 15. The MIT Press, 2003. [14] R. D. Short and K. Fukunaga. The optimal distance measure for nearest neighbor classification. IEEE Transactions on Information Theory, 27:622–627, 1981. [15] J. Myles and D. Hand. The multi-class measure problem in nearest neighbour discrimination rules. Pattern Recognition, 23:1291–1297, 1990. [16] J. Friedman. Flexible metric nearest neighbor classification. Technical Report 113, Stanford University Statistics Department, 1994. [17] T. Hastie and R. Tibshirani. Discriminant adaptive nearest neighbor classification and regression. In David S. Touretzky, Michael C. Mozer, and Michael E. Hasselmo, editors, Advances in Neural Information Processing Systems, volume 8, pages 409–415. The MIT Press, 1996. [18] A.J. Izenman. Recent developments in nonparametric density estimation. Journal of the American Statistical Association, 86(413):205–224, 1991.
Parametric Mixture Models for Multi-Labeled Text Naonori Ueda Kazumi Saito NTT Communication Science Laboratories 2-4 Hikaridai, Seikacho, Kyoto 619-0237 Japan {ueda,saito}@cslab.kecl.ntt.co.jp Abstract We propose probabilistic generative models, called parametric mixture models (PMMs), for the multi-class, multi-labeled text categorization problem. Conventionally, the binary classification approach has been employed, in which whether or not text belongs to a category is judged by the binary classifier for every category. In contrast, our approach can simultaneously detect multiple categories of text using PMMs. We derive efficient learning and prediction algorithms for PMMs. We also empirically show that our method could significantly outperform the conventional binary methods when applied to multi-labeled text categorization using real World Wide Web pages. 1 Introduction Recently, as the number of online documents has been rapidly increasing, automatic text categorization is becoming a more important and fundamental task in information retrieval and text mining. Since a document often belongs to multiple categories, the task of text categorization is generally defined as assigning one or more category labels to new text. This problem is more difficult than the traditional pattern classification problems, in the sense that each sample is not assumed to be classified into one of a number of predefined exclusive categories. When there are L categories, the number of possible multi-labeled classes becomes 2^L. Hence, this type of categorization problem has become a challenging research theme in the field of machine learning. Conventionally, a binary classification approach has been used, in which the multi-category detection problem is decomposed into independent binary classification problems. This approach usually employs state-of-the-art methods such as support vector machines (SVMs) [9][4] and naive Bayes (NB) classifiers [5][7].
However, since the binary approach does not consider a generative model of multi-labeled text, we think that it has an important limitation when applied to multi-labeled text categorization. In this paper, using the independent word-based representation known as the Bag-of-Words (BOW) representation [3], we present two types of probabilistic generative models for multi-labeled text called parametric mixture models (PMM1, PMM2), where PMM2 is a more flexible version of PMM1. The basic assumption under PMMs is that multi-labeled text has a mixture of characteristic words appearing in single-labeled text that belongs to each category of the multi-categories. This assumption leads us to construct quite simple generative models with a good feature: the objective function of PMM1 is convex (i.e., the global optimum solution can be easily found). We present efficient learning and prediction algorithms for PMMs. We also show the actual benefits of PMMs through an application of WWW page categorization, focusing on pages from the "yahoo.com" domain.

2 Parametric Mixture Models

2.1 Multi-labeled Text

According to the BOW representation, which ignores the order of word occurrence in a document, the n-th document, d_n, can be represented by a word-frequency vector, x^n = (x^n_1, …, x^n_V), where x^n_i denotes the frequency of occurrence of word w_i in d_n among the vocabulary V = <w_1, …, w_V>. Here, V is the total number of words in the vocabulary. Next, let y^n = (y^n_1, …, y^n_L) be a category vector for d_n, where y^n_l takes the value 1 (0) when d_n belongs (does not belong) to the l-th category. L is the total number of categories. Note that the L categories are pre-defined and that a document always belongs to at least one category (i.e., Σ_l y_l > 0). In the case of multi-class and single-labeled text, it is natural that x in the l-th category should be generated from a multinomial distribution:

  P(x|l) ∝ Π_{i=1}^{V} (θ_{l,i})^{x_i},  where θ_{l,i} ≥ 0 and Σ_{i=1}^{V} θ_{l,i} = 1.
θ_{l,i} is the probability that the i-th word w_i appears in a document belonging to the l-th class. We generalize this to multi-class and multi-labeled text as:

  P(x|y) ∝ Π_{i=1}^{V} (φ_i(y))^{x_i}, where φ_i(y) ≥ 0 and Σ_{i=1}^{V} φ_i(y) = 1.    (1)

Here, φ_i(y) is a class-dependent probability that the i-th word appears in a document belonging to class y. Clearly, it is impractical to independently set a multinomial parameter vector for each distinct y, since there are 2^L − 1 possible classes. Thus, we try to parameterize them efficiently.

2.2 PMM1

In general, words in a document belonging to a multi-category class can be regarded as a mixture of characteristic words related to each of the categories. For example, a document that belongs to both "sports" and "music" would consist of a mixture of characteristic words mainly related to both categories. Let θ_l = (θ_{l,1}, …, θ_{l,V}). The above assumption indicates that φ(y) (= (φ_1(y), …, φ_V(y))) can be represented by the following parametric mixture:

  φ(y) = Σ_{l=1}^{L} h_l(y) θ_l, where h_l(y) = 0 for l such that y_l = 0.    (2)

Here, h_l(y) (> 0) is a mixing proportion (Σ_{l=1}^{L} h_l(y) = 1). Intuitively, h_l(y) can also be interpreted as the degree to which x has the l-th category. Actually, by experimental verification using about 3,000 real Web pages, we confirmed that the above assumption was reasonable. Based on the parametric mixture assumption, we can construct a simple parametric mixture model, PMM1, in which the degree is uniform: h_l(y) = y_l / Σ_{l'=1}^{L} y_{l'}. For example, in the case of L = 3, φ((1, 1, 0)) = (θ_1 + θ_2)/2 and φ((1, 1, 1)) = (θ_1 + θ_2 + θ_3)/3. Substituting Eq. (2) into Eq. (1), PMM1 can be defined by

  P(x|y, Θ) ∝ Π_{i=1}^{V} ( Σ_{l=1}^{L} y_l θ_{l,i} / Σ_{l'=1}^{L} y_{l'} )^{x_i}.    (3)

The set of unknown model parameters in PMM1 is Θ = {θ_l}_{l=1}^{L}. Of course, multi-category text may sometimes be weighted more toward one category than toward the rest of the categories among multiple categories.
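The PMM1 class-conditional probability is easy to compute directly from Eq. (3): the word distribution of a multi-label class is just the average of the single-label parameter vectors of its active labels. A short illustrative sketch (function name and shapes are our choices):

```python
import numpy as np

def pmm1_log_prob(x, y, theta):
    """Unnormalized log P(x|y) under PMM1 (Eq. 3).

    x: (V,) word-count vector; y: (L,) 0/1 label vector with at least one 1;
    theta: (L, V) matrix whose rows are the per-category word distributions.
    """
    phi = y @ theta / y.sum()          # phi(y) = sum_l h_l(y) theta_l, uniform h
    return float(np.sum(x * np.log(phi)))
```

Note that only the L rows of `theta` are ever stored, while the model still assigns a distinct word distribution to each of the 2^L − 1 possible label combinations.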
However, being averaged over all biases, they could cancel out, and therefore PMM1 would be reasonable. This motivates us to construct PMM1. PMMs are different from the usual distributional mixture models in the sense that the mixing is performed in the parameter space, while in the latter several distributional components are mixed. Since the latter models assume that a sample is generated from one component, they cannot represent "multiplicity." On the other hand, PMM1 can represent 2^L − 1 multi-category classes with only L parameter vectors.

2.3 PMM2

In PMM1, shown in Eq. (2), φ(y) is approximated by {θ_l}, which can be regarded as a "first-order" approximation. We consider a second-order model, PMM2, as a more flexible model, in which parameter vectors of duplicated categories, θ_{l,m}, are also used to approximate φ(y):

  φ(y) = Σ_{l=1}^{L} Σ_{m=1}^{L} h_l(y) h_m(y) θ_{l,m}, where θ_{l,m} = α_{l,m} θ_l + α_{m,l} θ_m.    (4)

Here, α_{l,m} is a non-negative bias parameter satisfying α_{l,m} + α_{m,l} = 1, ∀l, m. Clearly, α_{l,l} = 0.5. For example, in the case of L = 3, φ((1, 1, 0)) = {(1 + 2α_{1,2})θ_1 + (1 + 2α_{2,1})θ_2}/4 and φ((1, 1, 1)) = {(1 + 2(α_{1,2} + α_{1,3}))θ_1 + (1 + 2(α_{2,1} + α_{2,3}))θ_2 + (1 + 2(α_{3,1} + α_{3,2}))θ_3}/9. In PMM2, unlike in PMM1, the category biases themselves can be estimated from the given training data. Based on Eq. (4), PMM2 can be defined by

  P(x|y; Θ) ∝ Π_{i=1}^{V} ( Σ_{l=1}^{L} Σ_{m=1}^{L} y_l y_m θ_{l,m,i} / (Σ_{l=1}^{L} y_l Σ_{m=1}^{L} y_m) )^{x_i}.    (5)

The set of unknown parameters in PMM2 becomes Θ = {θ_l, α_{l,m}}_{l=1,m=1}^{L,L}.

2.4 Related Model

Very recently, a more general probabilistic model for multi-latent-topic text, called Latent Dirichlet Allocation (LDA), has been proposed [1]. However, LDA is formulated in an "unsupervised" manner. Blei et al. also perform single-labeled text categorization using LDA in which an individual LDA is fitted to each class. Namely, they do not explain how to model the observed class labels y in LDA. In contrast, our PMMs can efficiently model class y, depending on other classes through the common basis vectors.
Moreover, based on the PMM assumption, models much simpler than LDA can be constructed as mentioned above. Moreover, unlike in LDA, it is feasible to compute the objective functions for PMMs exactly, as shown below.

3 Learning & Prediction Algorithms

3.1 Objective functions

Let D = {(x^n, y^n)}_{n=1}^{N} denote the given training data (N labeled documents). The unknown parameter Θ is estimated by maximizing the posterior p(Θ|D). Assuming that P(y) is independent of Θ,

  Θ̂_map = arg max_Θ { Σ_{n=1}^{N} log P(x^n|y^n, Θ) + log p(Θ) }.

Here, p(Θ) is a prior over the parameters. We used the following conjugate priors (Dirichlet distributions) over θ_l and α_{l,m}: p(Θ) ∝ Π_{l=1}^{L} Π_{i=1}^{V} θ_{l,i}^{ξ−1} for PMM1, and p(Θ) ∝ (Π_{l=1}^{L} Π_{i=1}^{V} θ_{l,i}^{ξ−1})(Π_{l=1}^{L} Π_{m=1}^{L} α_{l,m}^{ζ−1}) for PMM2. Here, ξ and ζ are hyperparameters, and in this paper we set ξ = 2 and ζ = 2, each of which is equivalent to Laplace smoothing for θ_{l,i} and α_{l,m}, respectively. Consequently, the objective function to find Θ̂_map is given by

  J(Θ; D) = L(Θ; D) + (ξ − 1) Σ_{l=1}^{L} Σ_{i=1}^{V} log θ_{l,i} + (ζ − 1) Σ_{l=1}^{L} Σ_{m=1}^{L} log α_{l,m}.    (6)

Of course, the third term on the RHS of Eq. (6) is simply ignored for PMM1. The likelihood term, L, is given by

  PMM1: L(Θ; D) = Σ_{n=1}^{N} Σ_{i=1}^{V} x_{n,i} log Σ_{l=1}^{L} h^n_l θ_{l,i},    (7)

  PMM2: L(Θ; D) = Σ_{n=1}^{N} Σ_{i=1}^{V} x_{n,i} log Σ_{l=1}^{L} Σ_{m=1}^{L} h^n_l h^n_m θ_{l,m,i}.    (8)

Note that θ_{l,m,i} = α_{l,m} θ_{l,i} + α_{m,l} θ_{m,i}.

3.2 Update formulae

The optimization problem given by Eq. (6) cannot be solved analytically; therefore some iterative method needs to be applied. Although steepest-ascent algorithms involving Newton's method are available, here we derive an efficient algorithm in a manner similar to the EM algorithm [2]. First, we derive parameter update formulae for PMM2 because they are more general than those for PMM1. We then explain those for PMM1 as a special case. Suppose that Θ^(t) has been obtained at step t. We then attempt to derive Θ^(t+1) by using Θ^(t). For convenience, we define g^n_{l,m,i} and λ_{l,m,i} as follows.
  g^n_{l,m,i}(Θ) = h^n_l h^n_m θ_{l,m,i} / Σ_{l=1}^{L} Σ_{m=1}^{L} h^n_l h^n_m θ_{l,m,i},    (9)

  λ_{l,m,i}(θ_{l,m}) = α_{l,m} θ_{l,i} / θ_{l,m,i},   λ_{m,l,i}(θ_{l,m}) = α_{m,l} θ_{m,i} / θ_{l,m,i}.    (10)

Noting that Σ_{l=1}^{L} Σ_{m=1}^{L} g^n_{l,m,i}(Θ) = 1, L for PMM2 can be rewritten as

  L(Θ; D) = Σ_{n,i} x_{n,i} Σ_{l,m} g^n_{l,m,i}(Θ^(t)) log h^n_l h^n_m θ_{l,m,i} − Σ_{n,i} x_{n,i} Σ_{l,m} g^n_{l,m,i}(Θ^(t)) log g^n_{l,m,i}(Θ).    (11)

Moreover, noting that λ_{l,m,i}(θ_{l,m}) + λ_{m,l,i}(θ_{l,m}) = 1, we rewrite the first term on the RHS of Eq. (11) as

  Σ_{n,i} x_{n,i} Σ_{l,m} g^n_{l,m,i}(Θ^(t)) { λ_{l,m,i}(θ^(t)_{l,m}) log [ h^n_l h^n_m α_{l,m} θ_{l,i} / λ_{l,m,i}(θ_{l,m}) ] + λ_{m,l,i}(θ^(t)_{l,m}) log [ h^n_l h^n_m α_{m,l} θ_{m,i} / λ_{m,l,i}(θ_{l,m}) ] }.    (12)

From Eqs. (11) and (12), we obtain the following important equation:

  L(Θ; D) = U(Θ|Θ^(t)) − T(Θ|Θ^(t)).    (13)

Here, U and T are defined by

  U(Θ|Θ^(t)) = Σ_{n,i,l,m} x_{n,i} g^n_{l,m,i}(Θ^(t)) { λ_{l,m,i}(θ^(t)_{l,m}) log h^n_l h^n_m α_{l,m} θ_{l,i} + λ_{m,l,i}(θ^(t)_{l,m}) log h^n_l h^n_m α_{m,l} θ_{m,i} },    (14)

  T(Θ|Θ^(t)) = Σ_{n,i,l,m} x_{n,i} g^n_{l,m,i}(Θ^(t)) { log g^n_{l,m,i}(Θ) + λ_{l,m,i}(θ^(t)_{l,m}) log λ_{l,m,i}(θ_{l,m}) + λ_{m,l,i}(θ^(t)_{l,m}) log λ_{m,l,i}(θ_{l,m}) }.    (15)

From Jensen's inequality, T(Θ|Θ^(t)) ≤ T(Θ^(t)|Θ^(t)) holds. Thus we just maximize U(Θ|Θ^(t)) + log p(Θ) w.r.t. Θ to derive the parameter update formulae. Noting that θ_{l,m,i} ≡ θ_{m,l,i} and g^n_{l,m,i} ≡ g^n_{m,l,i}, we can derive the following formulae:

  θ^(t+1)_{l,i} = [ 2 Σ_{n=1}^{N} x^n_i Σ_{m=1}^{L} g^n_{l,m,i}(Θ^(t)) λ_{l,m,i}(θ^(t)_{l,m}) + ξ − 1 ] / [ 2 Σ_{i=1}^{V} Σ_{n=1}^{N} x^n_i Σ_{m=1}^{L} g^n_{l,m,i}(Θ^(t)) λ_{l,m,i}(θ^(t)_{l,m}) + V(ξ − 1) ],  ∀l, i,    (16)

  α^(t+1)_{l,m} = [ Σ_{n=1}^{N} Σ_{i=1}^{V} x^n_i g^n_{l,m,i}(Θ^(t)) λ_{l,m,i}(θ^(t)_{l,m}) + (ζ − 1)/2 ] / [ Σ_{i=1}^{V} Σ_{n=1}^{N} x^n_i g^n_{l,m,i}(Θ^(t)) + ζ − 1 ],  ∀l, m ≠ l.    (17)

These parameter updates always converge to a local optimum of J given by Eq. (6). In PMM1, since the only unknown parameter is {θ_l}, by modifying Eq. (9) as

  g^n_{l,i}(Θ) = h^n_l θ_{l,i} / Σ_{l=1}^{L} h^n_l θ_{l,i},    (18)

and rewriting Eq. (7) in a similar manner, we obtain

  L(Θ; D) = Σ_{n,i} x_{n,i} Σ_l g^n_{l,i}(Θ^(t)) log h^n_l θ_{l,i} − Σ_{n,i} x_{n,i} Σ_l g^n_{l,i}(Θ^(t)) log g^n_{l,i}(Θ).
(19)

In this case, U takes the simpler form

  U(Θ|Θ^(t)) = Σ_{n=1}^{N} Σ_{i=1}^{V} x_{n,i} Σ_{l=1}^{L} g^n_{l,i}(Θ^(t)) log h^n_l θ_{l,i}.    (20)

Therefore, maximizing U(Θ|Θ^(t)) + (ξ − 1) Σ_{l=1}^{L} Σ_{i=1}^{V} log θ_{l,i} w.r.t. Θ under the constraints Σ_i θ_{l,i} = 1, ∀l, we can obtain the following update formula for PMM1:

  θ^(t+1)_{l,i} = [ Σ_{n=1}^{N} x_{n,i} g^n_{l,i}(Θ^(t)) + ξ − 1 ] / [ Σ_{i=1}^{V} Σ_{n=1}^{N} x_{n,i} g^n_{l,i}(Θ^(t)) + V(ξ − 1) ],  ∀l, i.    (21)

Remark: The parameter update given by Eq. (21) for PMM1 always converges to the global optimum solution.

Proof: The Hessian matrix, H, of the objective function, J, of PMM1, projected along an arbitrary direction Φ in the Θ space, is

  H = Φ^T (∂²J(Θ; D)/∂Θ∂Θ^T) Φ = d²J(Θ + κΦ; D)/dκ² |_{κ=0} = − Σ_{n,i} x^n_i ( Σ_l h^n_l φ_{l,i} / Σ_l h^n_l θ_{l,i} )² − (ξ − 1) Σ_{l,i} (φ_{l,i}/θ_{l,i})².    (22)

Noting that x^n_i ≥ 0, ξ > 1 and Φ ≠ 0, H is negative definite; therefore J is a strictly concave function of Θ. Moreover, since the feasible region defined by the constraints Σ_i θ_{l,i} = 1, ∀l, is a convex set, the maximization problem here becomes a convex programming problem (the maximization of a strictly concave function over a convex set) and has a unique global solution. Since Eq. (21) always increases J at each iteration, the learning algorithm given above always converges to the global optimum solution, irrespective of the initial parameter value.

3.3 Prediction

Let Θ̂ denote the estimated parameter. Then, applying Bayes' rule, the optimum category vector y* for the input x* of a new document is defined as y* = arg max_y P(y|x*; Θ̂) under a uniform class prior assumption. Since this maximization problem is a zero-one integer programming problem (i.e., NP-hard), an exhaustive search is prohibitive for a large L. Therefore, we solve this problem approximately with the help of the following greedy-search algorithm. First, only one element y_{l1} is set to 1 so that P(y|x*; Θ̂) is maximized. Then, for the remaining elements, the single y_{l2} that most increases P(y|x*; Θ̂) is set to 1, with y_{l1} kept fixed. This procedure is repeated until P(y|x*; Θ̂) cannot increase any further.
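The greedy label search just described is model-agnostic: it only needs a score function over label vectors. A minimal sketch under that assumption (the score callable stands in for P(y|x*; Θ̂); names are ours):

```python
import numpy as np

def greedy_multilabel_predict(score_fn, L):
    """Greedy search for the 0/1 label vector maximizing score_fn(y).

    Repeatedly switches on the single label that most improves the score,
    stopping when no switch improves it; this evaluates score_fn at most
    L(L+1)/2 times instead of the 2^L - 1 needed by exhaustive search.
    """
    y = np.zeros(L, dtype=int)
    best = -np.inf
    while True:
        cand, cand_score = None, best
        for l in range(L):
            if y[l]:
                continue
            y[l] = 1                      # tentatively switch label l on
            s = score_fn(y)
            if s > cand_score:
                cand, cand_score = l, s
            y[l] = 0                      # undo the tentative switch
        if cand is None:                  # no single switch improves the score
            return y
        y[cand] = 1
        best = cand_score
```

Because the first pass always improves on −∞, the returned vector has at least one label set, matching the assumption that every document belongs to at least one category.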
This algorithm successively switches on one element of y at a time so as to increase the posterior probability, stopping when its value no longer improves. It is very efficient because it requires at most L(L + 1)/2 evaluations of the posterior probability, while an exhaustive search needs 2^L − 1 evaluations.

4 Experiments

4.1 Automatic Web Page Categorization

We tried to categorize real Web pages linked from the "yahoo.com" domain¹. More specifically, Yahoo consists of 14 top-level categories (i.e., "Arts & Humanities," "Business & Economy," "Computers & Internet," and so on), and each category is divided into a number of second-level subcategories. By focusing on the second-level categories, we can make 14 independent text categorization problems. We used 11 of these 14 problems². In those 11 problems, the minimum (maximum) values of L and V were 21 (40) and 21924 (52350), respectively. About 30–45% of the pages are multi-labeled over the 11 problems. To collect a set of related Web pages for each problem, we used a software robot called "GNU Wget" (version 1.5.3). The multi-label of each page was obtained by following its hyperlinks in reverse toward the pages of origin. We compared our PMMs with the conventional methods: naive Bayes (NB), SVM, k-nearest neighbor (kNN), and three-layer neural networks (NN). We used linear SVMlight (version 4.0), tuning the C (penalty cost) and J (cost factor for negative and positive samples) parameters for each binary classification to improve the SVM results [6]³. In addition, it is worth mentioning that when running the SVM, each xn was normalized so that the sum of x_{n,i} over i = 1, ..., V equals 1, because discrimination is much easier in the (V − 1)-dimensional simplex than in the original V-dimensional space. In other words, classification is generally not determined by the number of words on a page; indeed, normalization also significantly improved the performance.
¹This domain is a famous portal site, and most related pages linked from the domain are registered by site recommendation; the category labels should therefore be reliable.
²We could not collect enough pages for three categories due to our communication network's security restrictions. However, we believe that 11 independent problems are sufficient for evaluating our method.
³Since the ratio of the number of positive samples to negative samples per category was quite small in our Web pages, SVM without the J option provided poor results.

Table 1: Performance for 3000 test data using 2000 training data.

No.  NB           SVM          kNN          NN           PMM1         PMM2
1    41.6 (1.9)   47.1 (0.3)   40.0 (1.1)   43.3 (0.2)   50.6 (1.0)   48.6 (1.0)
2    75.0 (0.6)   74.5 (0.8)   78.4 (0.4)   77.4 (0.5)   75.5 (0.9)   72.1 (1.2)
3    56.5 (1.3)   56.2 (1.1)   51.1 (0.8)   53.8 (1.3)   61.0 (0.4)   59.9 (0.6)
4    39.3 (1.0)   47.8 (0.8)   42.9 (0.9)   44.1 (1.0)   51.3 (2.8)   48.3 (0.5)
5    54.5 (0.8)   56.9 (0.5)   47.6 (1.0)   54.9 (0.5)   59.7 (0.4)   58.4 (0.6)
6    66.4 (0.8)   67.1 (0.3)   60.4 (0.5)   66.0 (0.4)   66.2 (0.5)   65.1 (0.3)
7    51.8 (0.8)   52.1 (0.8)   44.4 (1.1)   49.6 (1.3)   55.2 (0.5)   52.4 (0.6)
8    52.6 (1.1)   55.4 (0.6)   53.3 (0.5)   55.0 (1.1)   61.1 (1.4)   60.1 (1.2)
9    42.4 (0.9)   49.2 (0.7)   43.9 (0.6)   45.8 (1.3)   51.4 (0.7)   49.9 (0.8)
10   41.7 (10.7)  65.0 (1.1)   59.5 (0.9)   62.2 (2.3)   62.0 (5.1)   56.4 (6.3)
11   47.2 (0.9)   51.4 (0.6)   46.4 (1.2)   50.5 (0.4)   54.2 (0.2)   52.5 (0.7)

We employed the cosine similarity for the kNN method (see [8] for more details). As for the NNs, each NN consists of V input units and L output units for estimating a category vector from a frequency vector; we used 50 hidden units. Each NN was trained to maximize the sum of cross-entropy functions for the target and estimated category vectors of the training samples, together with a regularization term consisting of the sum of squared NN weights. Note that we did not perform any feature transformation such as TFIDF (see, e.g., [8]) because we wanted to evaluate the basic performance of each method purely.
We used the F-measure as the performance measure, defined as the harmonic mean of two well-known statistics: precision, $P$, and recall, $R$. Let $y^n = (y^n_1, \ldots, y^n_L)$ and $\hat y^n = (\hat y^n_1, \ldots, \hat y^n_L)$ be the actual and predicted category vectors for $x^n$, respectively. Then $F_n = 2P_nR_n/(P_n + R_n)$, where $P_n = \sum_{l=1}^L y^n_l \hat y^n_l / \sum_{l=1}^L \hat y^n_l$ and $R_n = \sum_{l=1}^L y^n_l \hat y^n_l / \sum_{l=1}^L y^n_l$. We evaluated the performance by $\bar F = \frac{1}{3000}\sum_{n=1}^{3000} F_n$ using 3000 test data independent of the training data. Although micro- and macro-averages could also be used, we think that the sample-based F-measure is the most suitable for evaluating generalization performance, since it is natural to make the i.i.d. assumption for documents.

4.2 Results

For each of the 11 problems, we used five pairs of training and test data sets. In Table 1 (Table 2) we compare the means of the $\bar F$ values over five trials using 2000 (500) training documents. Each number in parentheses denotes the standard deviation over the five trials. PMMs took about five minutes for training (2000 data) and only about one minute for testing (3000 data) on a 2.0-GHz Pentium PC, averaged over the 11 problems. The PMMs were much faster than kNN and NN. In the binary approach, SVMs with optimally tuned parameters produced rather better results than the NB method. The performance of SVMs, however, was inferior to that of PMMs in almost all problems. These experimental results support the importance of considering generative models of multi-category text. When the training sample size was 2000, kNN provided results comparable to the NB method. On the other hand, when the training sample size was 500, the kNN method obtained results similar to or slightly better than those of SVM. However, in both cases, PMMs significantly outperformed kNN. We think that the memory-based approach is limited in its generalization ability for multi-labeled text categorization.
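The sample-based F-measure used above can be sketched directly from its definition; a minimal illustration (function names are ours, label vectors are binary lists):

```python
def sample_f1(y_true, y_pred):
    """F_n = 2 P_n R_n / (P_n + R_n) for one document, with
    P_n = |true & predicted| / |predicted| and R_n = |true & predicted| / |true|."""
    tp = sum(t * p for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / sum(y_pred)
    recall = tp / sum(y_true)
    return 2 * precision * recall / (precision + recall)

def mean_f1(Y_true, Y_pred):
    """F-bar: the per-document F-measure averaged over the test set."""
    return sum(sample_f1(t, p) for t, p in zip(Y_true, Y_pred)) / len(Y_true)
```

Unlike the micro-average (which pools counts over all documents) or the macro-average (which averages per category), this averages one F value per test document, matching the evaluation in the text.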
The results of the well-regularized NN were fair, although it took an intolerable amount of training time, indicating that flexible discrimination is not necessary for discriminating high-dimensional, sparse text data.

Table 2: Performance for 3000 test data using 500 training data.

No.  NB           SVM          kNN          NN           PMM1         PMM2
1    21.2 (1.0)   32.5 (0.5)   34.7 (0.4)   33.8 (0.4)   43.9 (1.0)   43.2 (0.8)
2    73.9 (0.7)   73.8 (1.2)   75.6 (0.6)   74.8 (0.9)   75.2 (0.4)   69.7 (8.9)
3    46.1 (2.9)   44.9 (1.9)   44.1 (1.2)   45.1 (1.0)   56.4 (0.3)   55.4 (0.5)
4    15.2 (0.9)   33.6 (0.5)   37.1 (1.0)   33.8 (1.1)   41.8 (1.2)   41.9 (0.7)
5    34.1 (1.6)   42.7 (1.3)   43.9 (1.0)   45.3 (0.9)   53.0 (0.3)   53.1 (0.6)
6    50.2 (0.3)   56.0 (1.0)   54.4 (0.9)   57.2 (0.7)   58.9 (0.9)   59.4 (1.0)
7    22.1 (0.8)   32.1 (0.5)   37.4 (1.1)   33.9 (0.8)   46.5 (1.3)   45.5 (0.9)
8    32.7 (4.4)   38.8 (0.6)   48.1 (1.3)   43.1 (1.0)   54.1 (1.5)   53.5 (1.5)
9    17.6 (1.6)   32.5 (1.0)   35.3 (0.4)   31.6 (1.7)   40.3 (0.7)   41.0 (0.5)
10   40.6 (12.3)  55.0 (1.1)   53.7 (0.6)   55.8 (4.0)   57.8 (6.5)   57.9 (5.9)
11   34.2 (2.2)   38.3 (4.7)   40.2 (0.7)   40.9 (1.2)   49.7 (0.9)   49.0 (0.5)

The results obtained by PMM1 were better than those by PMM2, which indicates that a model with a fixed α_{l,m} = 0.5 seems sufficient, at least for the WWW pages used in the experiments.

5 Concluding Remarks

We have proposed new types of mixture models (PMMs) for multi-labeled text categorization, together with efficient algorithms for both learning and prediction. We have taken some important steps along this path, and we are encouraged by our current results on real World Wide Web pages. Moreover, we have confirmed that studying the generative model of multi-labeled text is beneficial in improving performance.

References

[1] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. To appear in Advances in Neural Information Processing Systems 14. MIT Press.
[2] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm.
Journal of the Royal Statistical Society B, 39:1–38, 1977.
[3] S. T. Dumais, J. Platt, D. Heckerman, and M. Sahami. Inductive learning algorithms and representations for text categorization. In Proc. of ACM-CIKM'98, 1998.
[4] T. Joachims. Text categorization with support vector machines: Learning with many relevant features. In Proc. of the European Conference on Machine Learning, 137–142, Berlin, 1998.
[5] D. Lewis and M. Ringuette. A comparison of two learning algorithms for text categorization. In Third Annual Symposium on Document Analysis and Information Retrieval, 81–93, 1994.
[6] K. Morik, P. Brockhausen, and T. Joachims. Combining statistical learning with a knowledge-based approach. A case study in intensive care monitoring. In Proc. of the International Conference on Machine Learning (ICML'99), 1999.
[7] K. Nigam, A. K. McCallum, S. Thrun, and T. Mitchell. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39:103–134, 2000.
[8] Y. Yang and J. Pedersen. A comparative study on feature selection in text categorization. In Proc. of the International Conference on Machine Learning, 412–420, 1997.
[9] V. N. Vapnik. Statistical learning theory. John Wiley & Sons, Inc., New York, 1998.
Transductive and Inductive Methods for Approximate Gaussian Process Regression

Anton Schwaighofer¹,²
¹ TU Graz, Institute for Theoretical Computer Science
Inffeldgasse 16b, 8010 Graz, Austria
http://www.igi.tugraz.at/aschwaig

Volker Tresp²
² Siemens Corporate Technology CT IC4
Otto-Hahn-Ring 6, 81739 Munich, Germany
http://www.tresp.org

Abstract

Gaussian process regression allows a simple analytical treatment of exact Bayesian inference and has been found to provide good performance, yet scales badly with the number of training data. In this paper we compare several approaches towards scaling Gaussian process regression to large data sets: the subset of representers method, the reduced rank approximation, online Gaussian processes, and the Bayesian committee machine. Furthermore we provide theoretical insight into some of our experimental results. We found that subset of representers methods can give good and particularly fast predictions for data sets with high and medium noise levels. On complex low noise data sets, the Bayesian committee machine achieves significantly better accuracy, yet at a higher computational cost.

1 Introduction

Gaussian process regression (GPR) has demonstrated excellent performance in a number of applications. One unpleasant aspect of GPR is its scaling behavior with the size of the training data set N. In direct implementations, training time increases as O(N³), with a memory footprint of O(N²). The subset of representers method (SRM), the reduced rank approximation (RRA), online Gaussian processes (OGP) and the Bayesian committee machine (BCM) are approaches to solving the scaling problems, based on a finite dimensional approximation to the typically infinite dimensional Gaussian process. The focus of this paper is on providing a unifying view of these methods and analyzing their differences, both from an experimental and a theoretical point of view.
For all of the discussed methods, we also examine asymptotic and actual runtime and investigate the accuracy versus speed trade-off. A major difference among the methods discussed here is that the BCM performs transductive learning, whereas the RRA, SRM and OGP methods perform induction-style learning. By transduction¹ we mean that a particular method computes a test set dependent model, i.e. it exploits knowledge about the location of the test data in its approximation. As a consequence, the BCM approximation is calculated only once the inputs of the test data are known. In contrast, inductive methods (RRA, OGP, SRM) build a model solely on the basis of information from the training data. In Sec. 1.1 we briefly introduce Gaussian process regression (GPR). Sec. 2 presents the various inductive approaches to scaling GPR to large data, and Sec. 3 follows with transductive approaches. In Sec. 4 we give an experimental comparison of all methods and an analysis of the results. Conclusions are given in Sec. 5.

1.1 Gaussian Process Regression

We consider Gaussian process regression (GPR) on a set of training data $D = \{(x_i, y_i)\}_{i=1}^N$, where targets are generated from an unknown function $f$ via $y_i = f(x_i) + e_i$, with independent Gaussian noise $e_i$ of variance $\sigma^2$. We assume a Gaussian process prior on $f$, meaning that the functional values $f(x_i)$ on the points $\{x_i\}_{i=1}^N$ are jointly Gaussian distributed, with zero mean and covariance matrix (or Gram matrix) $K^N$. $K^N$ itself is given by the kernel (or covariance) function $k(\cdot,\cdot)$, with $K^N_{ij} = k(x_i, x_j)$. The Bayes optimal estimator $\hat f(x) = E[f(x) \mid D]$ takes the form of a weighted combination of kernel functions [4] on the training points $x_i$:

$$\hat f(x) = \sum_{i=1}^N w_i\, k(x, x_i). \qquad (1)$$

The weight vector $w = (w_1, \ldots, w_N)^\top$ is the solution of the system of linear equations

$$(K^N + \sigma^2 \mathbf{1})\, w = y, \qquad (2)$$

where $\mathbf{1}$ denotes the unit matrix and $y = (y_1, \ldots, y_N)^\top$.
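Eqs. (1) and (2) translate directly into a few lines of linear algebra; a minimal NumPy sketch (our own illustration, with function names chosen for clarity):

```python
import numpy as np

def gpr_fit(K_N, y, noise_var):
    """Eq. (2): solve (K^N + sigma^2 * 1) w = y for the weight vector w."""
    N = K_N.shape[0]
    return np.linalg.solve(K_N + noise_var * np.eye(N), y)

def gpr_predict(k_x, w):
    """Eq. (1): f^(x) = sum_i w_i k(x, x_i); each row of k_x holds the
    kernel evaluations k(x, x_i) against the training points."""
    return k_x @ w
```

The O(N^3) solve and the O(N^2) storage of K^N in `gpr_fit` are exactly the scaling bottlenecks the approximation methods below are designed to avoid.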
Mean and covariance of the GP prediction $f^*$ on a set of test points $x^*_1, \ldots, x^*_T$ can be written conveniently as

$$E[f^* \mid D] = K^{*N} w \quad \text{and} \quad \mathrm{cov}(f^* \mid D) = K^{**} - K^{*N}(K^N + \sigma^2 \mathbf{1})^{-1}(K^{*N})^\top, \qquad (3)$$

with $K^{*N}_{ij} = k(x^*_i, x_j)$. Eq. (2) shows clearly what problem we may expect with large training data sets: the solution of a system of $N$ linear equations requires $O(N^3)$ operations, and the size of the Gram matrix $K^N$ may easily exceed the memory capacity of an average workstation.

2 Inductive Methods for Approximate GPR

2.1 Reduced Rank Approximation (RRA)

Reduced rank approximations focus on ways of efficiently solving the system of linear equations Eq. (2) by replacing the kernel matrix $K^N$ with some approximation $\tilde K^N$. Williams and Seeger [12] use the Nyström method to calculate an approximation to the first $B$ eigenvalues and eigenvectors of $K^N$. Essentially, the Nyström method performs an eigendecomposition of the $B \times B$ covariance matrix $K^B$, obtained from a set of $B$ basis points selected at random out of the training data. Based on the eigendecomposition of $K^B$,

¹Originally, the difference between transductive and inductive learning was pointed out in statistical learning theory [10]. Inductive methods minimize the expected loss over all possible test sets, whereas transductive methods minimize the expected loss for one particular test set.

one can compute approximate eigenvalues and eigenvectors of $K^N$. In a special case, this reduces to

$$K^N \approx \tilde K^N = K^{NB} (K^B)^{-1} (K^{NB})^\top, \qquad (4)$$

where $K^B$ is the kernel matrix for the set of basis points, and $K^{NB}$ is the matrix of kernel evaluations between training and basis points. Subsequently, this can be used to obtain an approximate solution $\tilde w$ of Eq. (2) via the matrix inversion lemma in $O(NB^2)$ instead of $O(N^3)$ operations.

2.2 Subset of Representers Method (SRM)

Subset of representers methods replace Eq.
(1) by a linear combination of kernel functions on a set of $B$ basis points, leading to the approximate predictor

$$\tilde f(x) = \sum_{i=1}^B \beta_i\, k(x, x_i), \qquad (5)$$

with the optimal weight vector

$$\beta = \big(\sigma^2 K^B + (K^{NB})^\top K^{NB}\big)^{-1} (K^{NB})^\top y. \qquad (6)$$

Note that Eq. (5) becomes exact if the kernel function allows a decomposition of the form $k(x_i, x_j) = K_i^B (K^B)^{-1} (K_j^B)^\top$. In practical implementations, one may expect different performance depending on the choice of the $B$ basis points $x_1, \ldots, x_B$. Different approaches for basis selection have been used in the literature; we discuss them in turn. Obviously, one may select the basis points at random (SRM Random) out of the training set. While this produces no computational overhead, the prediction outcome may be suboptimal. In the sparse greedy matrix approximation (SRM SGMA, [6]), a subset of $B$ basis kernel functions is selected such that all kernel functions on the training data can be well approximated by linear combinations of the selected basis kernels². If proximity in the associated reproducing kernel Hilbert space (RKHS) is chosen as the approximation criterion, the optimal linear combination (for a given basis set) can be computed analytically. Smola and Schölkopf [6] introduce a greedy algorithm that finds a near-optimal set of basis functions, where the algorithm has the same asymptotic complexity $O(NB^2)$ as the SRM Random method. Whereas the SGMA basis selection focuses only on the representational power of the kernel functions, one can also design a basis selection scheme that takes into account the full likelihood model of the Gaussian process. The underlying idea of the greedy posterior approximation algorithm (SRM PostApp, [7]) is to compare the log posterior of the subset of representers method with the full Gaussian process log posterior. One thus selects basis functions in such a fashion that the SRM log posterior best approximates³ the full GP log posterior, while keeping the total number of basis functions $B$ minimal.
As in the case of SGMA, this algorithm can be formulated such that its asymptotic computational complexity is $O(NB^2)$, where $B$ is the total number of basis functions selected.

2.3 Online Gaussian Processes

Csató and Opper [2] present an online learning scheme that focuses on a sparse model of the posterior process that arises from combining a Gaussian process prior with a general

²This method was not developed specifically for GPR, yet we expect this basis selection scheme to be superior to a purely random choice.
³However, Rasmussen [5] noted that Smola and Bartlett [7] falsely assume that the additive constant terms in the log likelihood remain constant during basis selection.

likelihood model of the data. The posterior process is assumed to be Gaussian and is modeled by a set of basis vectors. Upon arrival of a new data point, the updated (possibly non-Gaussian) posterior process is projected to the closest (in the KL-divergence sense) Gaussian posterior. If this projection induces an error above a certain threshold, the newly arrived data point is included in the set of basis vectors. Similarly, basis vectors with minimal contribution to the posterior process may be removed from the basis set.

3 Transductive Methods for Approximate GPR

In order to derive a transductive kernel classifier, we rewrite the Bayes optimal prediction Eq. (3) as follows:

$$E[f^* \mid D] = K^{**}\big(K^{**} + K^{*N}\,\mathrm{cov}(y \mid f^*)^{-1}(K^{*N})^\top\big)^{-1} K^{*N}\,\mathrm{cov}(y \mid f^*)^{-1} y. \qquad (7)$$

Here, $\mathrm{cov}(y \mid f^*)$ is the covariance obtained when predicting the training observations $y$ given the functional values $f^*$ at the test points:

$$\mathrm{cov}(y \mid f^*) = K^N + \sigma^2\mathbf{1} - (K^{*N})^\top (K^{**})^{-1} K^{*N}. \qquad (8)$$

Mind that this matrix can be written down without actual knowledge of $f^*$. Examining Eq. (7) reveals that the Bayes optimal prediction of Eq. (3) can be expressed as a weighted sum of kernel functions on the test points. In Eq.
(7), the term $\mathrm{cov}(y \mid f^*)^{-1} y$ gives a weighting of the training observations $y$: training points which cannot be predicted well from the functional values at the test points are given a lower weight. Data points which are "closer" to the test points (in the sense that they can be predicted better) obtain a higher weight than data which are remote from the test points. Eq. (7) still involves the inversion of the $N \times N$ matrix $\mathrm{cov}(y \mid f^*)$ and thus does not yet make a practical method. By using different approximations for $\mathrm{cov}(y \mid f^*)^{-1}$, we obtain different transductive methods, which we discuss in the next sections. Note that in a Bayesian framework, transductive and inductive methods are equivalent if we consider matching models (i.e., the true model for the data is in the family of models we consider for learning). Large data sets reveal more of the structure of the true model, but for computational reasons we may have to limit ourselves to models of lower complexity. In this case, transductive methods allow us to focus on the actual region of interest, i.e. we can build models that are particularly accurate in the region where the test data lie.

3.1 Transductive SRM

For large sets of test data, we may assume $\mathrm{cov}(y \mid f^*)$ to be a diagonal matrix, $\mathrm{cov}(y \mid f^*) = \sigma^2\mathbf{1}$, meaning that the test values $f^*$ allow a perfect prediction of the training observations (up to noise). With this approximation, Eq. (7) reduces to the prediction of a subset of representers method (see Sec. 2.2) in which the test points are used as the set of basis points (SRM Trans).

3.2 Bayesian Committee Machine (BCM)

For a smaller number of test data, assuming a diagonal matrix for $\mathrm{cov}(y \mid f^*)$ (as in the transductive SRM method) seems unreasonable. Instead, we can use the less stringent assumption of $\mathrm{cov}(y \mid f^*)$ being block diagonal. After some matrix manipulations, we obtain the following approximation of Eq.
(7) with block diagonal $\mathrm{cov}(y \mid f^*)$:

$$\hat E[f^* \mid D] = C^{-1} \sum_{i=1}^M \mathrm{cov}(f^* \mid D_i)^{-1}\, E[f^* \mid D_i], \qquad (9)$$

$$C = \mathrm{cov}(f^* \mid D)^{-1} = -(M-1)(K^{**})^{-1} + \sum_{i=1}^M \mathrm{cov}(f^* \mid D_i)^{-1}. \qquad (10)$$

This is equivalent to the Bayesian committee machine (BCM) approach [8]. In the BCM, the training data $D$ are partitioned into $M$ disjoint sets $D_1, \ldots, D_M$ of approximately the same size ("modules"), and $M$ GPR predictors are trained on these subsets. In the prediction stage, the BCM calculates the unknown responses $f^*$ at a set of test points $x^*_1, \ldots, x^*_T$ at once. The prediction $E[f^* \mid D_i]$ of GPR module $i$ is weighted by the inverse covariance of its prediction. An intuitively appealing effect of this weighting scheme is that modules which are uncertain about their predictions are automatically weighted less than modules that are certain about their predictions. Very good results were obtained with the BCM with random partitioning [8] into subsets $D_i$. The block diagonal approximation of $\mathrm{cov}(y \mid f^*)$ becomes particularly accurate if each $D_i$ contains data that are spatially separated from the other training data. This can be achieved by pre-processing the training data with a simple k-means clustering algorithm, resulting in an often drastic reduction of the BCM's error rates. In this article, we always use the BCM with clustered data.

4 Experimental Comparison

In this section we present an evaluation of the different approximation methods discussed in Secs. 2 and 3 on four data sets. In the ABALONE data set [1] with 4177 examples, the goal is to predict the age of abalones based on 8 inputs. The KIN8NM data set⁴ represents the forward dynamics of an 8-link all-revolute robot arm, based on 8192 examples. The goal is to predict the distance of the end-effector from a target, given the twist angles of the 8 links as features. KIN40K represents the same task, yet has a lower noise level than KIN8NM and contains 40 000 examples.
Data set ART with 50000 examples was used extensively in [8] and describes a nonlinear map with 5 inputs with a small amount of additive Gaussian noise. For all data sets, we used a squared exponential kernel of the form $k(x_i, x_j) = \exp\big(-\frac{1}{2d^2}\|x_i - x_j\|^2\big)$, where the kernel parameter $d$ was optimized individually for each method. To allow a fair comparison, the subset selection methods SRM SGMA and SRM PostApp were forced to select a given number $B$ of basis functions (instead of using the stopping criteria proposed by the authors of the respective methods). Thus, all methods form their predictions as a linear combination of exactly $B$ basis functions. Table 1 shows the average remaining variance⁵ in a 10-fold cross validation procedure on all data sets. For each of the methods, we ran experiments with different kernel widths $d$; in Table 1 we list only the results obtained with the optimal $d$ for each method. On the ABALONE data set (very high level of noise), all of the tested methods achieved almost identical performance, both with $B = 200$ and $B = 1000$ basis functions. For all other data sets, significant performance differences were observed. Out of the inductive

⁴From the DELVE archive http://www.cs.toronto.edu/˜delve/
⁵Remaining variance = 100 · MSE_model / MSE_mean, where MSE_mean is the MSE obtained from using the mean of the training targets as the prediction for all test data. This gives a measure of performance that is independent of data scaling.
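The BCM combination rule of Eqs. (9) and (10) can be sketched compactly. This is our own simplified illustration (names are ours), assuming each module supplies its predictive mean and covariance at the joint set of test points:

```python
import numpy as np

def bcm_combine(means, covs, K_star_star):
    """Combine M module predictions via Eqs. (9)-(10):
    C = -(M-1) inv(K**) + sum_i inv(cov_i),
    E^ = inv(C) sum_i inv(cov_i) mean_i.
    Modules with large predictive covariance get small weight."""
    M = len(means)
    C = -(M - 1) * np.linalg.inv(K_star_star)
    weighted = np.zeros_like(means[0])
    for mu, S in zip(means, covs):
        S_inv = np.linalg.inv(S)
        C += S_inv
        weighted += S_inv @ mu
    return np.linalg.solve(C, weighted)
```

For M = 1 the prior correction term vanishes and the rule returns the single module's prediction unchanged, which is a useful consistency check.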
Table 1: Remaining variance (in per cent), obtained with the different GPR approximation methods on four data sets, with different numbers of basis functions selected (200 or 1000), averaged over 10-fold cross validation. Results that are significantly better (at a significance level of 99% or above in a paired t-test) than any of the other methods were marked in bold in the original.

                 Abalone         KIN8NM         KIN40K         ART
Method           200     1000    200     1000   200     1000   200     1000
SRM PostApp      42.81   42.81   13.79   7.84   9.49    2.36   3.91    1.12
SRM SGMA         42.83   42.81   21.84   8.70   18.32   4.25   5.62    1.79
SRM Random       42.86   42.82   22.34   9.01   18.77   4.39   5.87    1.79
RRA Nystrom      42.98   41.10   N/A     N/A    N/A     N/A    N/A     N/A
Online GP        42.87   N/A     16.49   N/A    10.36   N/A    5.37    N/A
BCM              42.86   42.81   10.32   8.31   2.81    0.83   0.27    0.20
SRM Trans        42.93   42.79   21.95   9.79   16.47   4.25   5.15    1.64

methods (SRM SGMA, SRM Random, SRM PostApp, RRA Nyström), the best performance was always achieved with SRM PostApp. A paired t-test on the results showed that this is significant at a level of 99% or above. Online Gaussian processes⁶ typically performed slightly worse than SRM PostApp. Furthermore, we observed certain problems with the RRA Nyström method: on all but the ABALONE data set, the weights $\tilde w$ took on values in the range of $10^3$ or above, leading to poor performance. For this reason, the results for RRA Nyström were omitted from Table 1. Further comments on these problems are given in Sec. 4.2. Comparing induction and transduction methods, we see that the BCM performs significantly better than any inductive method in most cases. Here, the average MSE obtained with the BCM was only a fraction (25-30%) of the average MSE of the best inductive method. By a paired t-test we confirmed that the BCM is significantly better than all other methods on the KIN40K and ART data sets, with a significance level of 99% or above. On the KIN8NM data set (medium noise level) we observed one case where SRM PostApp performed best.
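The paired t-tests behind the significance claims above can be sketched in a few lines; a minimal illustration of the test statistic on per-fold errors of two methods (function name is ours):

```python
import math

def paired_t(a, b):
    """Paired t statistic for matched per-fold errors a and b:
    t = mean(d) / sqrt(var(d)/n) with d_i = a_i - b_i and the
    unbiased (n-1) variance estimate."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / math.sqrt(var / n)
```

For a 10-fold cross validation, the statistic is compared against the t distribution with 9 degrees of freedom; a paired test is appropriate here because both methods are evaluated on identical folds.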
We attribute this to the fact that k-means clustering was not able to find well-separated clusters, which reduces the performance of the BCM, since the block diagonal approximation of Eq. (8) becomes less accurate (see Sec. 3.2). Mind that all transductive methods necessarily lose their advantage over inductive methods when the allowed model complexity (that is, the number of basis functions) is increased. We further noticed that, on the KIN40K and ART data sets, SRM Trans consistently outperformed SRM Random, despite SRM Trans being the most simplistic transductive method. The difference in performance was only small, yet significant at a level of 99%. As mentioned above, we did not make use of the stopping criterion proposed for the SRM PostApp method, namely the relative gap between the SRM log posterior and the log posterior of the full Gaussian process model. In [7], the authors suggest that the gap is indicative of the generalization performance of the SRM model and use a gap of 2.5% in their experiments. In contrast, we did not observe any correlation between the gap and the generalization performance in our experiments. For example, selecting 200 basis points out of the KIN40K data set gave a gap of 1%, indicating a good fit. As shown in Table 1, a significantly better error was achieved with 1000 basis functions (giving a gap of $3.5 \times 10^{-4}$). Thus, it remains open how one can automatically choose an appropriate basis set size $B$.

⁶Due to the numerically demanding approximations, the runtime of the OGP method for $B = 1000$ is rather long. We thus only list results for $B = 200$ basis functions.

Table 2: Memory consumption, asymptotic computational cost and actual runtime for the different GP approximation methods, with $N$ training data points and $B$ basis points, $B \ll N$. For the BCM, we assume here that training and test data are partitioned into modules of size $B$. The asymptotic costs for prediction are per test point. The actual runtime is given for the KIN40K data set, with 36000 training examples, 4000 test patterns and $B = 1000$ basis functions for each method.

                 Memory consumption           Computational cost           Runtime
Method           Initialization  Prediction   Initialization  Prediction   KIN40K
Exact GPR        O(N^2)          O(N)         O(N^3)          O(N)         N/A
RRA Nystrom      O(NB)           O(N)         O(NB^2)         O(N)         4 min
SRM Random       O(NB)           O(B)         O(NB^2)         O(B)         3 min
SRM Trans        (as SRM Random)                                           3 min
SRM SGMA         (as SRM Random)                                           7 h
SRM PostApp      (as SRM Random)                                           11 h
Online GP        O(B^2)          O(B)         O(NB^2)         O(B)         est. 150 h
BCM              —               O(N + B^2)   —               O(NB)        30 min

4.1 Computational Cost

Table 2 shows the asymptotic computational cost of all approximation methods described in Secs. 2 and 3. The subset of representers (SRM) methods show the most favorable cost in the prediction stage, since the resulting model consists of only $B$ basis functions and their associated weight vector. Table 2 also lists the actual runtime⁷ for one (out of 10) cross validation runs on the KIN40K data set. Here, methods with the same asymptotic complexity exhibit runtimes ranging from 3 minutes to 150 hours. For the SRM methods, most of this time is spent on basis selection (SRM PostApp and SRM SGMA). We thus consider the slow basis selection as the bottleneck for SRM methods when working with larger numbers of basis functions or larger data sets.

4.2 Problems with RRA Nyström

As mentioned in Sec. 4, we observed that the weights $\tilde w$ in RRA Nyström take on values in the range of $10^3$ or above on the data sets KIN8NM, KIN40K and ART. This can be explained by considering the perturbation of linear systems. RRA Nyström solves Eq. (2) with an approximate $\tilde K^N$ instead of $K^N$, thus calculating an approximate $\tilde w$ instead of the true $w$. Using matrix perturbation theory, one can show that the relative error of the approximate $\tilde w$ is bounded by

$$\frac{\|\tilde w - w\|}{\|w\|} \le \max_i \frac{|\lambda_i - \tilde\lambda_i|}{\tilde\lambda_i + \sigma^2}, \qquad (11)$$

where $\lambda_i$ and $\tilde\lambda_i$ denote the eigenvalues of $K^N$ and $\tilde K^N$, respectively.
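The quantities entering this bound are easy to inspect numerically. The following is our own toy illustration of the Nyström approximation of Eq. (4) (not the experimental setup of the paper): since $K^N - \tilde K^N$ is positive semidefinite, the Nyström spectrum can only underestimate the true eigenvalues, which drives the bound of Eq. (11) up when a small noise variance $\sigma^2$ sits in the denominator.

```python
import numpy as np

def sq_exp(X1, X2, d=1.0):
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * d ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
basis = X[rng.choice(200, size=20, replace=False)]

K_N = sq_exp(X, X)
K_B = sq_exp(basis, basis)
K_NB = sq_exp(X, basis)
# Eq. (4): K~ = K_NB K_B^{-1} K_NB^T, a rank-B approximation of K_N
K_tilde = K_NB @ np.linalg.solve(K_B, K_NB.T)

# sorted spectra, largest first
lam = np.linalg.eigvalsh(K_N)[::-1]
lam_tilde = np.linalg.eigvalsh(K_tilde)[::-1]
```

Plugging `lam`, `lam_tilde` and a small `sigma2` into the right-hand side of Eq. (11) makes it easy to see how the bound explodes for low-noise settings.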
A closer look at the Nyström approximation [11] revealed that already for moderately complex data sets, such as KIN8NM, it tends to underestimate the eigenvalues of the Gram matrix unless a very large number of basis points is used. If, in addition, a rather low noise variance is assumed, we obtain a very high value for the error bound in Eq. (11), confirming our observations in the experiments. Methods to overcome the problems associated with the Nyström approximation are currently being investigated [11].

⁷Runtime was logged on Linux PCs with AMD Athlon 1 GHz CPUs, with all methods implemented in Matlab and optimized with the Matlab profiler.

5 Conclusions

Our results indicate that, depending on the computational resources and the desired accuracy, one may select methods as follows. If the major concern is speed of prediction, one is well advised to use the subset of representers method with basis selection by greedy posterior approximation. This method may be expected to give results that are significantly better than those of the other (inductive) methods. While painfully slow during basis selection, the resulting models are compact, easy to use and accurate. Online Gaussian processes achieve slightly worse accuracy, yet they are the only (inductive) method that can easily be adapted to general likelihood models, such as classification and regression with non-Gaussian noise. A generalization of the BCM to non-Gaussian likelihood models has been presented in [9]. On the other hand, if accurate predictions are the major concern, one may expect the best results from the Bayesian committee machine. On complex low noise data sets (such as KIN40K and ART) we observed significant advantages in terms of prediction accuracy, giving an average mean squared error that was only a fraction (25-30%) of the error achieved by the best inductive method.
For the BCM, one must take into account that it is a transduction scheme; thus prediction time and memory consumption are larger than those of SRM methods. Although all discussed approaches scale linearly in the number of training data, they exhibit significantly different runtimes in practice. For the experiments conducted in this paper (running 10-fold cross validation on the given data), the Bayesian committee machine is about one order of magnitude slower than an SRM method with a randomly chosen basis; SRM with greedy posterior approximation is again an order of magnitude slower than the BCM.

Acknowledgements

Anton Schwaighofer gratefully acknowledges support through an Ernst-von-Siemens scholarship.

References

[1] Blake, C. and Merz, C. UCI repository of machine learning databases. 1998.
[2] Csató, L. and Opper, M. Sparse online Gaussian processes. Neural Computation, 14(3):641–668, 2002.
[3] Leen, T. K., Dietterich, T. G., and Tresp, V., eds. Advances in Neural Information Processing Systems 13. MIT Press, 2001.
[4] MacKay, D. J. Introduction to Gaussian processes. In C. M. Bishop, ed., Neural Networks and Machine Learning, vol. 168 of NATO ASI Series F, Computer and Systems Sciences. Springer Verlag, 1998.
[5] Rasmussen, C. E. Reduced rank Gaussian process learning, 2002. Unpublished manuscript.
[6] Smola, A. and Schölkopf, B. Sparse greedy matrix approximation for machine learning. In P. Langley, ed., Proceedings of ICML'00. Morgan Kaufmann, 2000.
[7] Smola, A. J. and Bartlett, P. Sparse greedy Gaussian process regression. In [3], pp. 619–625.
[8] Tresp, V. A Bayesian committee machine. Neural Computation, 12(11):2719–2741, 2000.
[9] Tresp, V. The generalized Bayesian committee machine. In Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 130–139. Boston, MA, USA, 2000.
[10] Vapnik, V. N. The nature of statistical learning theory. Springer Verlag, 1995.
[11] Williams, C. K., Rasmussen, C.
E., Schwaighofer, A., and Tresp, V. Observations on the Nyström method for Gaussian process prediction. Tech. rep., available from the authors' web pages, 2002. [12] Williams, C. K. I. and Seeger, M. Using the Nyström method to speed up kernel machines. In [3], pp. 682–688.
2002
Adapting Codes and Embeddings for Polychotomies Gunnar Rätsch, Alexander J. Smola RSISE, CSL, Machine Learning Group The Australian National University Canberra, 0200 ACT, Australia {Gunnar.Raetsch, Alex.Smola}@anu.edu.au Sebastian Mika Fraunhofer FIRST Kekulestr. 7 12489 Berlin, Germany mika@first.fhg.de Abstract In this paper we consider formulations of multi-class problems based on a generalized notion of a margin and using output coding. This includes, but is not restricted to, standard multi-class SVM formulations. Unlike many previous approaches, we learn the code as well as the embedding function. We illustrate how this can lead to a formulation that allows for solving a wider range of problems with, for instance, many classes or even "missing classes". To keep our optimization problems tractable we propose an algorithm capable of solving them using two-class classifiers, similar in spirit to Boosting. 1 Introduction The theory of pattern recognition is primarily concerned with the case of binary classification, i.e. of assigning examples to one of two categories, such that the expected number of misassignments is minimal. Whilst this scenario is rather well understood, theoretically as well as empirically, it is not directly applicable to many practically relevant scenarios, the most prominent being the case of more than two possible outcomes. Several learning techniques naturally generalize to an arbitrary number of classes, such as density estimation, or logistic regression. However, when comparing the reported performance of these systems with the de-facto standard of using two-class techniques in combination with simple, fixed output codes to solve multi-class problems, they often lack in terms of performance, ease of optimization, and/or run-time behavior.
On the other hand, many methods have been proposed to apply binary classifiers to multi-class problems, such as Error Correcting Output Codes (ECOC) [6, 1], Pairwise Coupling [9], or simply reducing the problem of discriminating $M$ classes to $M$ "one vs. the rest" dichotomies. Unfortunately, the optimality of such methods is not always clear (e.g., how to choose the code, how to combine predictions, scalability to many classes). Finally, there are other problems similar to multi-class classification which cannot be solved satisfactorily by just combining simpler variants of other algorithms: multi-label problems, where each instance should be assigned to a subset of possible categories, and ranking problems, where each instance should be assigned a rank for all or a subset of possible outcomes. These problems can, in reverse order of their appearance, be understood as more and more refined variants of multi-variate regression, i.e. two-class → multi-class → multi-label → ranking → multi-variate regression. Whichever framework and algorithm one chooses, a single scheme is common to all of them: there is an encoding step in which the input data are embedded into some "code space", and in this space there is a code book which allows one to assign one or several labels or ranks, respectively, by measuring the similarity between mapped samples and the code book entries. However, most previous work either focuses on finding a good embedding given a fixed code, or on optimizing the code given a fixed embedding (cf. Section 2.3). The aim of this work is (i) to propose a multi-class formulation which optimizes both the code and the embedding of the training sample into the code space, and (ii) to develop a general ranking technique which specializes to specific multi-class, multi-label, and ranking problems as well as allowing more general problems to be solved.
As an example of the latter consider the following model problem: In chemistry people are interested in mapping sequences to structures. It is not yet known whether there is a one-to-one correspondence, and hence the problem is to find for each sequence the best matching structures. However, there are only, say, a thousand sequences the chemists have good knowledge about. These are assigned, with a certain rank, to a subset of, say, a thousand different structures. One could try to cast this as a standard multi-class problem by assigning each training sequence to the structure ranked highest. But then there will be classes to which only very few or no sequences are assigned, and one can hardly learn using traditional techniques. The machine we propose is (at least in principle) able to solve problems like this by reflecting relations between classes in the way the code book is constructed, and at the same time trying to find an embedding of the data space into the code space that allows for a good discrimination. The remainder of this paper is organized as follows: In Section 2 we introduce some basic notions of large margins, output coding, and multi-class classification. Then we discuss the approaches of [4] and [21] and propose to learn the code book. In Section 3 we propose a rather general idea for solving the resulting multi-class problems using two-class classifiers. Section 4 presents some preliminary experiments before we conclude. 2 Large Margin Multi-Class Classification Denote by $X$ the sample space (not necessarily a metric space), by $Y$ the space of possible labels or ranks (e.g. $Y = \{1, \dots, M\}$ for multi-class problems, where $M$ denotes the number of classes, or $Y \subseteq \mathbb{R}$ for a ranking problem), and let $S = \{(x_1, y_1), \dots, (x_n, y_n)\}$ be a training sample of size $n$, with $x_i \in X$ and $y_i \in Y$. Output Coding It is well known (see [6, 1] and references therein) that multi-class problems can be solved by decomposing a polychotomy into $L$ dichotomies and solving these separately using a two-class technique.
This can be understood as assigning to each class $y$ a binary string $c(y) \in \{-1, +1\}^L$ of length $L$, which is called a code word. This results in an $M \times L$ binary code matrix. Now each of the $L$ columns of this matrix defines a partitioning of the $M$ classes into two subsets, forming binary problems for which a classifier is trained. Evaluation is done by computing the output of all $L$ learned functions, forming a new bit-string, and then choosing the class $y$ such that some distance measure between this string and the corresponding row of the code matrix is minimal, usually the Hamming distance. Ties can be broken by uniformly selecting a winning class, using prior information or, where possible, using confidence outputs from the basic classifiers.1 Since the codes for each class must be unique, there are $\prod_{m=0}^{M-1} (2^L - m)$ possible code matrices to choose from (for $2^L \ge M$). One possibility is to choose the codes to be error-correcting (ECOC) [6]. Here one uses a code book with, e.g., large Hamming distance between the code words, such that one still gets the correct decision even if a few of the classifiers err. However, finding the code that minimizes the training error is NP-complete, even for fixed binary classifiers [4]. Furthermore, errors committed by the binary classifiers are not necessarily independent, significantly reducing the effective number of wrong bits that one can handle [18, 19]. Nonetheless ECOC has proven useful, and algorithms for finding a good code (and partly also the corresponding classifiers) have been proposed in e.g. [15, 7, 1, 19, 4]. 1We could also use ternary codes, i.e. $c(y) \in \{-1, 0, +1\}^L$, allowing for "don't care" classes. Noticeably, most practical approaches suggest dropping the requirement of binary codes and instead propose continuous ones. We now show how predictions with small (e.g. Hamming) distance to their appropriate code words can be related to a large margin classifier, beginning with binary classification.
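The encode/decode step described above can be sketched in a few lines. The 4-class, 6-bit code matrix below is an arbitrary illustrative choice (not a code from the paper); its rows have pairwise Hamming distance 4, so decoding tolerates one erring classifier.

```python
import numpy as np

# Code matrix: one row per class, entries in {-1, +1}.
C = np.array([
    [+1, +1, +1, -1, -1, -1],
    [+1, -1, -1, +1, +1, -1],
    [-1, +1, -1, +1, -1, +1],
    [-1, -1, +1, -1, +1, +1],
])

def decode(bits, code_matrix):
    """Return the class whose code word has minimal Hamming distance."""
    hamming = (code_matrix != bits).sum(axis=1)
    return int(np.argmin(hamming))

# A clean prediction decodes to its own class...
assert decode(C[2], C) == 2
# ...and stays correct when one of the 6 binary classifiers errs,
# since every pair of rows differs in 4 positions.
noisy = C[2].copy()
noisy[0] *= -1
assert decode(noisy, C) == 2
```

The "one vs. the rest" scheme is the special case where the code matrix is (a signed version of) the identity, which illustrates why ECOC with well-separated rows can be more robust.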
2.1 Large Margins Dichotomies Here a large margin classifier is defined as a mapping $f: X \to \mathbb{R}$ with the property that $y f(x) > 0$, or more specifically $y_i f(x_i) \ge \rho$, where $\rho$ is some positive constant [20]. Since such a positive margin may not always be achievable, one typically maximizes a penalized version of the maximum margin, such as: minimize over $f \in \mathcal{F}$ the objective $\Omega[f] + C \sum_{i=1}^{n} \xi_i$ subject to $y_i f(x_i) \ge \rho - \xi_i$ and $\xi_i \ge 0$ (1), where $\Omega[f]$ is a regularization term, $C$ is a regularization constant, and $\mathcal{F}$ denotes the class of functions under consideration. Note that for $y \in \{\pm 1\}$ we could rewrite the condition $y_i f(x_i) \ge \rho$ as $(f(x_i) - (-y_i))^2 - (f(x_i) - y_i)^2 \ge 4\rho$. In other words, we can express the margin as the difference between the distance of $f(x)$ from the target $-y$ and from the target $y$. Polychotomies While this insight by itself is not particularly useful, it paves the way for an extension of the notion of the margin to multi-class problems: denote by $d$ a distance measure and by $c(y) \in \mathbb{R}^L$, $y = 1, \dots, M$ ($L$ is the length of the code), target vectors corresponding to class $y$. Then we can define the margin $\rho(f, x, y)$ of an observation $x$ and class $y$ with respect to $f: X \to \mathbb{R}^L$ as $\rho(f, x, y) := \min_{\tilde y \ne y} \left[ d(c(\tilde y), f(x)) - d(c(y), f(x)) \right]$ (2). This means that we measure the minimal relative difference in distance between $f(x)$, the correct target $c(y)$, and any other target $c(\tilde y)$ (cf. [4]). We obtain accordingly the following optimization problem: minimize $\Omega[f] + C \sum_{i=1}^{n} \xi_i$ subject to $d(c(\tilde y), f(x_i)) - d(c(y_i), f(x_i)) \ge \rho - \xi_i$ (3) for all $i = 1, \dots, n$ and $\tilde y \ne y_i$. For the time being we chose a fixed value of $\rho$ as a reference margin; an adaptive means of choosing the reference margin can be implemented using the $\nu$-trick, which leads to an easier to control regularization parameter [16]. 2.2 Distance Measures Several choices of $d$ are possible. However, one can show that only $d(t, f) = \|t - f\|_2^2$ and related functions will lead to a convex constraint on $f$: Lemma 1 (Difference of Distance Measures) Denote by $d: \mathbb{R}^L \times \mathbb{R}^L \to \mathbb{R}$ a symmetric distance measure.
Then the only case where $d(t, f) - d(t', f)$ is convex in $f$ for all $t, t'$ occurs if $d(t, f) = \phi(t) + \phi(f) + t^\top H f$, where $H$ is symmetric. Proof Convexity in $f$ implies that $\nabla_f^2 \left[ d(t, f) - d(t', f) \right]$ is positive semidefinite. This is only possible if $\nabla_f^2 d(t, f)$ is a function of $f$ only. The latter, however, implies that the only joint terms in $t$ and $f$ must be linear in $f$. Symmetry, on the other hand, implies that the term must be linear in $t$, too, which proves the claim. Lemma 1 implies that any distance function other than the ones described above will lead to optimization problems with potentially many local minima, which is not desirable. However, for quadratic $d$ we get a convex optimization problem (assuming a suitable $\Omega[f]$), and then there are ways to efficiently solve (3). Finally, re-defining $c(y) \leftarrow H c(y)$ means that it is sufficient to consider only $d(t, f) = \|t - f\|^2$. We obtain $d(c(\tilde y), f(x)) - d(c(y), f(x)) = \|c(\tilde y)\|^2 - \|c(y)\|^2 - 2\langle c(\tilde y), f(x)\rangle + 2\langle c(y), f(x)\rangle$ (4). Note that if the code words have the same length, the difference of the projections of $f(x)$ onto different code words determines the margin. We will indeed later consider a more convenient case, $d(c(y), f(x)) = -\langle c(y), f(x)\rangle$, which leads to linear constraints only and allows us to use standard optimization packages. However, there is no principal limitation in using the Euclidean distance. If we choose the $c(y)$ to be an error-correcting code, such as those in [6, 1], one will often have $L < M$. Hence we use fewer dimensions than we have classes. This means that during optimization we are trying to find $M$ functions $d(c(y), f(\cdot))$, $y = 1, \dots, M$, from an $L$-dimensional subspace. In other words, we choose the subspace and perform regularization by allowing only a smaller class of functions. By appropriately choosing the subspace one may encode prior knowledge about the problem.
2.3 Discussion and Relation to Previous Approaches Note that for $c(y) = e_y$, the $y$-th unit vector (so $L = M$), (4) equals $2\langle c(y), f(x)\rangle - 2\langle c(\tilde y), f(x)\rangle = 2(f_y(x) - f_{\tilde y}(x))$, and hence the problem of multi-class classification reverts to the problem of solving $M$ binary classification problems of one vs. the remaining classes. Then our approach turns out to be very similar to the idea presented in [21] (except for some additional slack variables). A different approach was taken in [4]. Here, the function $f$ is held fixed and the code $c$ is optimized. In their approach, the code is described as a vector in a kernel feature space and one obtains in fact an optimization problem very similar to the one in [21] and (3) (again, the slack variables are defined slightly differently). Another idea which is quite similar to ours was also presented at the conference [5]. The resulting optimization problem turns out to be convex, but with the drawback that one can either not fully optimize the code vectors or not guarantee that they are well separated. Since these approaches were motivated by different ideas (one optimizing the code, the other optimizing the embedding), this shows that the roles of the code $c(y)$ and the embedding function $f$ are interchangeable if the function or the code, respectively, is fixed. Our approach allows arbitrary codes for which a function $f$ is learned. This is illustrated in Figure 1. The positions of the code words (= "class centers") determine the function $f$. The positions of the centers relative to each other may reflect relationships between the classes (e.g. classes "black" & "white" and "white" & "grey" are close). Figure 1: Illustration of the embedding idea: The samples are mapped from the input space $X$ into the code space via the embedding function $f$, such that samples from the same class are close to their respective code book vector (crosses on the right). The spatial organization of the code book vectors reflects the organization of classes in the space.
2.4 Learning Code & Embedding This leaves us with the question of how to determine a "good" code and a suitable embedding $f$. As we can see from (4), for fixed $f$ the constraints are linear in $c$ and vice versa, yet we have non-convex constraints if both $f$ and $c$ are variable. Finding the global optimum is therefore computationally infeasible when optimizing $f$ and $c$ simultaneously (furthermore, note that any rotation applied to both $c$ and $f$ leaves the margin invariant, which shows the presence of local minima due to equivalent codes). Instead, we propose the following method: for fixed code $c$ optimize over $f$, and subsequently, for fixed $f$, optimize over $c$, possibly repeating the process. The first step follows [4], i.e. learning the code for a fixed function. Each step separately can be performed fairly efficiently (since the respective optimization problems are convex; cf. Lemma 1). This procedure is guaranteed to decrease the overall objective function at every step and converges to a local minimum. We now show how a code maximizing the margin can be found. To avoid a trivial solution (one could virtually increase the margin by rescaling all $c(y)$ by some constant), we add $\sum_{y=1}^{M} \|c(y)\|_2^2$ to the objective function. It can be shown that one does not need an additional regularization constant in front of this term if the distance is linear in both arguments. If one prefers sparse codes, one may use the $\ell_1$-norm instead. In summary, we obtain the following convex quadratic program for finding the codes, which can be solved using standard optimization techniques: minimize over $c$ the objective $C \sum_{i=1}^{n} \xi_i + \sum_{y=1}^{M} \|c(y)\|_2^2$ subject to $\langle c(y_i) - c(\tilde y), f(x_i) \rangle \ge \rho - \xi_i$ for all $i = 1, \dots, n$ and $\tilde y \ne y_i$ (5). The technique for finding the embedding will be discussed in more detail in Section 3. Initialization To obtain a good initial code, we may either take recourse to readily available code tables [17] or use a random code, e.g. generated by drawing vectors uniformly distributed on the $L$-dimensional sphere.
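The alternation described above can be illustrated with a deliberately simplified sketch. Everything here is our own construction: three Gaussian blobs as data, a linear embedding $f(x) = Wx$, and plain subgradient steps on a hinge-style surrogate of the margin constraints, whereas the paper solves each convex subproblem exactly (e.g. as a QP). The point is only that alternating "code fixed, update embedding" with "embedding fixed, update code" drives the joint objective down.

```python
import numpy as np

rng = np.random.default_rng(1)
M, L, dim, n = 3, 2, 2, 60
centers = np.array([[2.0, 0.0], [-1.0, 2.0], [-1.0, -2.0]])
X = np.vstack([rng.normal(c, 0.4, size=(n // M, dim)) for c in centers])
y = np.repeat(np.arange(M), n // M)

W = rng.normal(scale=0.1, size=(L, dim))     # linear embedding f(x) = W x
C_code = rng.normal(scale=0.1, size=(M, L))  # code book, one row per class
rho, lam, lr = 1.0, 1e-3, 0.05

def obj_grads(W, C):
    F = X @ W.T                              # embedded samples, shape (n, L)
    S = F @ C.T                              # scores <c(k), f(x_i)>, shape (n, M)
    gap = rho - (S[np.arange(n), y][:, None] - S)
    V = ((gap > 0) & (np.arange(M) != y[:, None])).astype(float)
    dS = V.copy()
    dS[np.arange(n), y] = -V.sum(axis=1)     # hinge subgradient w.r.t. scores
    dS /= n
    obj = (V * gap).sum() / n + lam * ((W**2).sum() + (C**2).sum())
    return obj, (dS @ C).T @ X + 2*lam*W, dS.T @ F + 2*lam*C

history = []
for _ in range(20):                          # alternate the two convex subproblems
    for _ in range(10):                      # code fixed: gradient steps on W
        _, dW, _ = obj_grads(W, C_code); W -= lr * dW
    for _ in range(10):                      # embedding fixed: steps on the code
        obj, _, dC = obj_grads(W, C_code); C_code -= lr * dC
    history.append(obj)
assert history[-1] < history[0]              # the alternation lowers the objective
```

As in the text, each half-step solves a convex problem in one block of variables, so the combined objective decreases monotonically even though the joint problem is non-convex.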
One can show that the probability that two of the $M$ random code vectors lie closer than a given distance to each other decays rapidly with the code length $L$ (proof given in the full paper). Hence, with high probability the random code vectors are well separated.2 3 Column Generation for Finding the Embedding There are several ways to set up and optimize the resulting optimization problem (3). For instance, in [21, 4] the class of functions is the set of $M$ hyperplanes in some kernel feature space and the regularizer $\Omega[f]$ is the sum of the $\ell_2$-norms of the hyperplane normal vectors. In this section we consider a different approach. Denote by $\mathcal{H}$ a class of basis functions $h: X \to \mathbb{R}^L$ and let $f = \sum_j \alpha_j h_j$ with $\alpha_j \in \mathbb{R}$, $h_j \in \mathcal{H}$. We choose the regularizer $\Omega[f]$ to be the $\ell_1$-norm of the expansion coefficients. We are interested in solving: minimize $\sum_j |\alpha_j| + C \sum_{i=1}^{n} \xi_i$ subject to $\langle c(y_i) - c(\tilde y), f(x_i) \rangle \ge \rho - \xi_i$ and $\xi_i \ge 0$ for all $i = 1, \dots, n$ and $\tilde y \ne y_i$ (6). To derive a column generation method [12, 2] we need the dual optimization problem, or more specifically its constraints: with dual variables $\beta_{i,\tilde y} \ge 0$ for $i = 1, \dots, n$ and $\tilde y \ne y_i$, $\sum_{i=1}^{n} \sum_{\tilde y \ne y_i} \beta_{i,\tilde y} \langle c(y_i) - c(\tilde y), h(x_i) \rangle \le 1$ for all $h \in \mathcal{H}$ (7), and $\sum_{\tilde y \ne y_i} \beta_{i,\tilde y} \le C$ for all $i$. The idea of column generation is to start with a restricted master problem, namely one without the variables $\alpha$ (i.e. $f \equiv 0$). Then one solves the corresponding dual problem (7) and finds the hypothesis that corresponds to a violated constraint (and also one primal variable). This hypothesis is included in the optimization problem, one re-solves, and finds the next violated constraint. If all constraints of the full problem are satisfied, one has reached optimality. 2However, also note that this is quite a bit worse than the best packing. This is due to the union-bound argument in the proof, which requires summing the failure probability over all $M(M-1)/2$ pairs.
We now construct a hypothesis set $\mathcal{H}$ from a scalar-valued base class $\mathcal{G} := \{g: X \to \mathbb{R}\}$, which has particularly nice properties for our purposes. The idea is to extend $\mathcal{G}$ by multiplication with vectors $v \in \mathbb{R}^L$: $\mathcal{H} := \{h(x) = v\, g(x) \mid g \in \mathcal{G},\ \|v\| = 1\}$. Since there are infinitely many functions in this set $\mathcal{H}$, we have an infinite number of constraints in the dual optimization problem. By using the described column generation technique one can, however, find the solution of this semi-infinite programming problem [13]. We have to identify the constraint in (7) which is maximally violated, i.e. one has to find a "partitioning" $v$ and a hypothesis $g$ with maximal $\sum_{i=1}^{n} \sum_{\tilde y \ne y_i} \beta_{i,\tilde y}\, g(x_i)\, \langle c(y_i) - c(\tilde y), v \rangle$ (8) for appropriate $g$. Maximizing (8) with respect to $v$ is easy for a given $g$: writing (8) as $\langle u, v \rangle$ with $u := \sum_{i, \tilde y \ne y_i} \beta_{i,\tilde y}\, g(x_i) (c(y_i) - c(\tilde y))$, for $\|v\|_1 = 1$ one puts all weight on the largest component of $u$; for $\|v\|_2 = 1$ one chooses $v = u / \|u\|_2$; and for $\|v\|_\infty = 1$ one chooses the sign vector of $u$. However, finding $v$ and $g$ simultaneously is a difficult problem if not all $g \in \mathcal{G}$ are known in advance (see also [15]). We propose to test all previously used hypotheses to find the best $v$. As a second step one finds the hypothesis $g$ that maximizes (8) for this $v$. Only if one cannot find a hypothesis that violates a constraint does one employ the more sophisticated techniques suggested in [15]. If there is no hypothesis left that corresponds to a violated constraint, the dual optimization problem is optimal. In this work we are mainly interested in the case where $v$ is a signed unit vector, since then the problem of finding $g$ simplifies greatly: we can use another learning algorithm that minimizes, or approximately minimizes, the training error of a weighted training set (rewrite (8)). This approach has indeed many similarities to Boosting. Following the ideas in [14] one can show that there is a close relationship between our technique with the trivial code and the multi-class boosting algorithms as e.g. proposed in [15].
4 Extensions and Illustration 4.1 A First Experiment In a preliminary set of experiments we use two benchmark data sets from the UCI benchmark repository: glass and iris. We used our column generation strategy as described in Section 3 in conjunction with the code optimization problem (5), solving the combined problem of finding the code and the embedding. The algorithm has only one model parameter. We selected it by cross-validation on the training data. The test error is determined by averaging over five splits of training and test data. As base learning algorithm we chose decision trees (C4.5), which we use only as a two-class classifier in our column generation algorithm. On the glass data set we obtained an error rate of  . In [1] an error of  was reported for SVMs using a polynomial kernel. We also computed the test error of multi-class decision trees and obtained  error. Hence, our hybrid algorithm could relatively improve existing results by  . On the iris data we could achieve an error rate of  and could slightly improve the result of decision trees ( ). However, SVMs beat our result with  error [1]. We conjecture that this is due to the properties of decision trees, which have problems generating smooth boundaries not aligned with the coordinate axes. So far, we have only shown a proof of concept, and more experimental work is necessary. It is particularly interesting to find practical examples where a non-trivial choice of the code (via optimization) helps to simplify the embedding and leads to additional improvements. Such problems often appear in computer vision, where there are strong relationships between classes. Preliminary results indicate that one can achieve considerable improvements when adapting codes and embeddings [3]. Figure 2: Toy example for learning missing classes.
Shown is the decision boundary and the confidence for assigning a sample to the upper left class. The training set, however, did not contain samples from this class. Instead, we used (9) with the information that each example, besides belonging to its own class with confidence two, also belongs to the other classes with confidence one iff its distance to the respective center is less than one. 4.2 Beyond Multi-Class So far we have only considered the case where there is only one class to which an example belongs. In a more general setting, as for example the problem mentioned in the introduction, there can be several classes, which possibly have a ranking. For each example $i$ we have a set $P_i$ of pairs $(y_1, y_2)$ of "relations" between the positive classes ($y_1$ is ranked above $y_2$), and a set $\bar P_i$ containing all pairs of positive and negative classes of the example. We solve: minimize $\sum_j |\alpha_j| + C \sum_{i, (y_1, y_2) \in P_i} \xi_{i, y_1, y_2} + \bar C \sum_{i, (y_1, y_2) \in \bar P_i} \bar\xi_{i, y_1, y_2} + \sum_{y} \|c(y)\|_2^2$ subject to $\langle c(y_1) - c(y_2), f(x_i) \rangle \ge \rho - \xi_{i, y_1, y_2}$ for all $i$ and $(y_1, y_2) \in P_i$, and $\langle c(y_1) - c(y_2), f(x_i) \rangle \ge \bar\rho - \bar\xi_{i, y_1, y_2}$ for all $i$ and $(y_1, y_2) \in \bar P_i$ (9), where $f(x) = \sum_j \alpha_j h_j(x)$ with $h_j \in \mathcal{H}$. In this formulation one tries to find a code $c$ and an embedding $f$ such that, for each example, the outputs with respect to the classes the example has a relation with reflect the order of these relations (i.e. the examples get ranked appropriately). Furthermore, the program tries to achieve a "large margin" between relevant and irrelevant classes for each sample. Similar formulations can be found in [8] (see also [11]). Optimization of (9) is analogous to the column generation approach discussed in Section 3; we omit details due to constraints on space. A small toy example, again as a limited proof of concept, is given in Figure 2.
Connection to Ranking Techniques Ordinal regression through large margins [10] can be seen as an extreme case of (9), where we have as many classes as observations, and each pair of observations has to satisfy a ranking relation $f(x_i) > f(x_j)$ if $x_i$ is to be preferred to $x_j$. This formulation can of course also be understood as a special case of multi-dimensional regression. 5 Conclusion We proposed an algorithm to simultaneously optimize output codes and the embedding of the sample into the code book space, building upon the notion of large margins. Furthermore, we have shown that only quadratic and related distance measures in the code book space lead to convex constraints, and hence to convex optimization problems, whenever either the code or the embedding is held fixed. This is desirable since at least for these sub-problems there exist fairly efficient solution techniques (of course the combined optimization problem of finding the code and the embedding is not convex and has local minima). We proposed a column generation technique for solving the embedding optimization problems. It allows the use of a two-class algorithm, of which many efficient ones exist, and has connections to boosting. Finally, we proposed a technique along the same lines that should be favorable when dealing with many classes or even empty classes. Future work will concentrate on finding more efficient algorithms to solve the optimization problem and on more carefully evaluating their performance. Acknowledgements We thank B. Williamson and A. Torda for interesting discussions. References [1] E.L. Allwein, R.E. Schapire, and Y. Singer. Reducing multiclass to binary: A unifying approach for margin classifiers. Journal of Machine Learning Research, 1:113–141, 2000. [2] K.P. Bennett, A. Demiriz, and J. Shawe-Taylor. A column generation algorithm for boosting. In P. Langley, editor, Proc. 17th ICML, pages 65–72, San Francisco, 2000. Morgan Kaufmann. [3] B. Caputo and G.
Rätsch. Adaptive codes for visual categories. November 2002. Unpublished manuscript. Partial results presented at NIPS'02. [4] K. Crammer and Y. Singer. On the learnability and design of output codes for multiclass problems. In N. Cesa-Bianchi and S. Goldberg, editors, Proc. COLT, pages 35–46, San Francisco, 2000. Morgan Kaufmann. [5] O. Dekel and Y. Singer. Multiclass learning by probabilistic embeddings. In NIPS, vol. 15. MIT Press, 2003. [6] T.G. Dietterich and G. Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2:263–286, 1995. [7] V. Guruswami and A. Sahai. Multiclass learning, boosting, and error-correcting codes. In Proc. of the twelfth annual conference on Computational learning theory, pages 145–155, New York, USA, 1999. ACM Press. [8] S. Har-Peled, D. Roth, and D. Zimak. Constraint classification: A new approach to multiclass classification and ranking. In NIPS, vol. 15. MIT Press, 2003. [9] T.J. Hastie and R.J. Tibshirani. Classification by pairwise coupling. In M.I. Jordan, M.J. Kearns, and S.A. Solla, editors, Advances in Neural Information Processing Systems, vol. 10. MIT Press, 1998. [10] R. Herbrich, T. Graepel, and K. Obermayer. Large margin rank boundaries for ordinal regression. In A. J. Smola, P. L. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 115–132, Cambridge, MA, 2000. MIT Press. [11] R. Jin and Z. Ghahramani. Learning with multiple labels. In NIPS, vol. 15. MIT Press, 2003. [12] S. Nash and A. Sofer. Linear and Nonlinear Programming. McGraw-Hill, New York, NY, 1996. [13] G. Rätsch, A. Demiriz, and K. Bennett. Sparse regression ensembles in infinite and finite hypothesis spaces. Machine Learning, 48(1-3):193–221, 2002. Special Issue on New Methods for Model Selection and Model Combination. [14] G. Rätsch, M. Warmuth, S. Mika, T. Onoda, S. Lemm, and K.-R. Müller. Barrier boosting. In Proc.
COLT, pages 170–179, San Francisco, 2000. Morgan Kaufmann. [15] R.E. Schapire. Using output codes to boost multiclass learning problems. In Machine Learning: Proceedings of the 14th International Conference, pages 313–321, 1997. [16] B. Schölkopf, A. Smola, R.C. Williamson, and P.L. Bartlett. New support vector algorithms. Neural Computation, 12:1207–1245, 2000. [17] N. Sloane. Personal homepage. http://www.research.att.com/˜njas/. [18] W. Utschick. Error-Correcting Classification Based on Neural Networks. Shaker, 1998. [19] W. Utschick and W. Weichselberger. Stochastic organization of output codes in multiclass learning problems. Neural Computation, 13(5):1065–1102, 2001. [20] V.N. Vapnik and A.Y. Chervonenkis. A note on one class of perceptrons. Automation and Remote Control, 25, 1964. [21] J. Weston and C. Watkins. Multi-class support vector machines. Technical Report CSD-TR-98-04, Royal Holloway, University of London, Egham, 1998.
2002
Topographic Map Formation by Silicon Growth Cones Brian Taba and Kwabena Boahen Department of Bioengineering University of Pennsylvania Philadelphia, PA 19104 {btaba, kwabena}@neuroengineering.upenn.edu Abstract We describe a self-configuring neuromorphic chip that uses a model of activity-dependent axon remodeling to automatically wire topographic maps based solely on input correlations. Axons are guided by growth cones, which are modeled in analog VLSI for the first time. Growth cones migrate up neurotropin gradients, which are represented by charge diffusing in transistor channels. Virtual axons move by rerouting address-events. We refined an initially gross topographic projection by simulating retinal wave input. 1 Neuromorphic Systems Neuromorphic engineers are attempting to match the computational efficiency of biological systems by morphing neurocircuitry into silicon circuits [1]. One of the most detailed implementations to date is the silicon retina described in [2]. This chip comprises thirteen different cell types, each of which must be individually and painstakingly wired. While this circuit-level approach has been very successful in sensory systems, it is less helpful when modeling the largely unelucidated and exceedingly plastic higher processing centers in cortex. Instead of an explicit blueprint for every cortical area, what is needed is a developmental rule that can wire complex circuits from minimal specifications. One candidate is the famous "cells that fire together wire together" rule, which strengthens excitatory connections between coactive presynaptic and postsynaptic cells. We implemented a self-rewiring scheme of this type in silicon, taking our cue from axon remodeling during development. 2 Growth Cones During development, the brain wires axons into a myriad of topographic projections between regions.
Axonal projections initially organize independent of neural activity, establishing a coarse spatial order based on gradients of substrate-bound molecules laid down by local gene expression. These gross topographic projections are refined and maintained by subsequent neuronal spike activity, and can reroute themselves if their signal source changes. In such cases, axons abandon obsolete territory and invade more promising targets [3].

Figure 1: A. Postsynaptic activity is transmitted to the next layer (up arrows) and releases neurotropin into the extracellular medium (down arrows). B. Presynaptic activity excites postsynaptic dendrites (up arrows) and triggers neurotropin uptake by active growth cones (down arrows). Each growth cone samples the neurotropin concentration at several spatial locations, measuring the gradient across the axon terminal. Growth cones move toward higher neurotropin concentrations. C. Axons that fire at the same time migrate to the same place.

An axon grows by adding membrane and microtubule segments to its distal tip, an amoeboid body called a growth cone. Growth cones extend and retract fingers of cytoplasm called filopodia, which are sensitive to local levels of guidance chemicals in the surrounding medium. Candidate guidance chemicals include BDNF and NO, whose release can be triggered by action potentials in the target neuron [4]. Our learning rule is based on an activity-derived diffusive chemical that guides growth cone migration. In our model, this neurotropin is released by spiking neurons and diffuses in the extracellular medium until scavenged by glia or bound by growth cones (Figure 1A). An active growth cone compares amounts of neurotropin bound to each of its filopodia in order to measure the local gradient (Figure 1B). The growth cone then moves up the gradient, dragging the axon behind it.
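The gradient-climbing rule just described can be sketched in software. This minimal simulation is our own construction (grid size, diffusion constants, and positions are arbitrary, and diffusion is a crude 4-neighbor mixing step, not the chip's charge-diffusing lattice): neurotropin released at a postsynaptic hotspot spreads over a grid, and a growth cone repeatedly samples its four neighboring nodes and steps toward the highest concentration.

```python
import numpy as np

N = 21
tropin = np.zeros((N, N))
source = (15, 15)                  # site of correlated postsynaptic spiking
cone = (3, 3)                      # initial growth cone position

for _ in range(400):
    tropin[source] += 1.0          # release triggered by postsynaptic activity
    p = np.pad(tropin, 1, mode="edge")
    # crude diffusion step: mix each node with its 4-neighborhood
    tropin = 0.2 * (p[:-2, 1:-1] + p[2:, 1:-1] +
                    p[1:-1, :-2] + p[1:-1, 2:]) + 0.2 * tropin
    r, c = cone
    nbrs = [(i, j) for i, j in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
            if 0 <= i < N and 0 <= j < N]
    best = max(nbrs, key=lambda ij: tropin[ij])
    if tropin[best] > tropin[cone]:
        cone = best                # climb the local neurotropin gradient

# The cone ends up at (or right next to) the release site.
assert abs(cone[0] - source[0]) + abs(cone[1] - source[1]) <= 1
```

Because concentration decreases with distance from a diffusing source, local gradient following suffices to pull the cone all the way to the active postsynaptic neighborhood, which is the essence of the chip's rewiring rule.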
Since neurotropin is released by postsynaptic activity and axon migration is driven by presynaptic activity, this rule translates temporal coincidence into spatial coincidence (Figure 1C). For topographic map formation, this migration rule requires temporal correlations in the presynaptic plane to reflect neighborhood relations. We supply such correlations by simulating retinal waves, spontaneous bursts of action potentials that sweep across the ganglion cell layer in the developing mammalian retina. Retinal waves start at random locations and spread over a limited domain before fading away, eventually tiling the entire retinal plane [5].

Figure 2: A. Chip block diagram. Axon terminal (AT) and neuron (N) circuits are arrayed hexagonally, surrounded by a continuous charge-diffusing lattice. An active axon terminal (ATx,y) excites the three adjacent neurons and its growth cone samples neurotropin from four adjacent lattice nodes. The growth cone sends the measured gradient direction off-chip (VGCx,y). An active postsynaptic neuron (Nx,y) releases neurotropin into the six surrounding lattice nodes and sends its spike off-chip. B. System block diagram. Presynaptic neurons send spikes to the lookup table (LUT), which routes them to axon terminal coordinates (AT) on-chip. Chip output filters through a microcontroller (µC) that translates gradient measurements (VGC) into LUT updates (ΔAT). Postsynaptic activity (N) may be returned to the LUT as recurrent excitation and also passed on to the next stage of the system.

Axons participating in the same retinal wave migrate to the same postsynaptic neighborhood, since neurotropin concentration is maximized when every cell that fires at the same time releases neurotropin at the same place. To prevent all of the axons from collapsing onto a single postsynaptic target, we enforce a strictly constant synaptic density.
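The chemotaxis rule described above — neurotropin diffusing away from active targets, a growth cone sampling the local gradient and stepping uphill — can be illustrated with a toy one-dimensional simulation. This is only a behavioral sketch under simplified assumptions: the chip uses charge diffusing in a hexagonal pFET lattice, and all function names, parameters, and the lattice size here are illustrative.

```python
import numpy as np

def diffuse(field, rate=0.2, steps=10):
    """Spread released 'charge' along a 1-D lattice, mimicking extracellular diffusion."""
    f = field.copy()
    for _ in range(steps):
        f = f + rate * (np.roll(f, 1) + np.roll(f, -1) - 2.0 * f)
    return f

def gradient_step(pos, field):
    """A growth cone samples neurotropin at its own site and both neighbors,
    then moves to whichever sample is largest (staying put if already maximal)."""
    candidates = [max(pos - 1, 0), pos, min(pos + 1, len(field) - 1)]
    return max(candidates, key=lambda i: field[i])

field = np.zeros(21)
field[15] = 1.0                 # a postsynaptic burst releases neurotropin here
field = diffuse(field)

pos = 5
for _ in range(30):             # repeated measurements drag the axon uphill
    pos = gradient_step(pos, field)
print(pos)                      # the growth cone settles at the release site
```

Because release and uptake are both gated by spikes, axons whose inputs fire together end up climbing the same neurotropin hill, which is the temporal-to-spatial translation the text describes.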
We have a fixed number of synaptic sites, each of which can be occupied by one and only one presynaptic afferent. An axon terminal moves from one synaptic site to another by swapping places with the axon already occupying the desired location. Learning occurs only in the point-to-point wiring diagram; synaptic weights are identical and unchanging.

3 System Architecture

We have fabricated and tested a first-generation neurotropin chip, Neurotrope1, that implements retrograde transmission of a diffusive factor from postsynaptic neurons to presynaptic afferents (Figure 2A). The 11.5 mm² chip was fabricated through MOSIS using the TSMC 0.35 µm process, and includes a 40 x 20 array of growth cones interleaved with a 20 x 20 array of neurons. The chip receives and transmits spike coordinates encoded as address-events, permitting ready interface with other spike-based chips that obey this standard [6]. Virtual wiring [7] is realized with a look-up table (LUT) stored in a separate content-addressable memory (CAM) that is controlled by an Ubicom SX52 microcontroller (Figure 2B).

Figure 3: Neurotropin circuit diagram. Postsynaptic activity gates neurotropin release (left box) and presynaptic activity gates neurotropin uptake (right box).

The core of the chip consists of an array of axon terminals that target a second array of neurons, all surrounded by a monolithic pFET channel laid out as a hexagonal lattice, representing a two-dimensional extracellular medium. An activated axon terminal generates postsynaptic potentials in all the fixed-radius dendritic arbors that span its location, as modeled by a diffusor network [8]. Once the membrane potential crosses a threshold, the neuron fires, transmitting its coordinates off-chip and simultaneously releasing neurotropin, represented as charge spreading within the lattice. Neurotropin diffuses spatially until removed by either an activity-independent leak current or an active axon terminal. An axon terminal senses the local extracellular neurotropin gradient by draining charge from its own node on the hexagonal lattice and from the three immediately adjacent nodes. Charge from the four locations is integrated on independent capacitors, which race to cross threshold first. The winner of this latency competition transmits a set of coordinates that uniquely identify the location and direction of the measured gradient. We use the neuron circuit described in [9] to integrate neurotropin as well as dendritic potentials. Coordinates transmitted off-chip thus fall into two categories: neuron spikes that are routed through the LUT, and gradient directions that are used to update entries in the LUT. An axon migrates simply by looking up the entry in the table corresponding to the site it wants to occupy and swapping that address with that of its current location. Subsequent spikes are routed to the new coordinates. Thus, although the physical axon terminal circuits are immobilized in silicon, the virtual axons are free to move within the postsynaptic plane.

3.1 Neurotropin circuit

Neurotropin in the extracellular medium is represented by charge in the hexagonal charge-diffusing lattice M1 (Figure 3). VCDL sets the maximum amount of charge M1 can hold. The total charge in M1 is determined by circuits that implement activity-dependent neurotropin release and uptake. In addition, M11 and M12 provide a path for activity-independent release and uptake.

Figure 4: Latency competition circuit diagram. A growth cone integrates neurotropin samples from its own location (right box) and the three neighboring locations (left three boxes). The first location to accumulate a threshold of charge resets its three competitors and signals its identity off-chip.
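The table-swap migration described above can be expressed in a few lines of software. This is a minimal sketch: the dictionary stands in for the CAM-stored LUT, and the key/value format and function name are illustrative, not the chip's actual data layout.

```python
def migrate(lut, axon, target_site):
    """Swap the requesting axon's address with that of the axon currently
    routed to `target_site`, keeping exactly one afferent per synaptic site."""
    occupant = next(cell for cell, site in lut.items() if site == target_site)
    lut[axon], lut[occupant] = lut[occupant], lut[axon]

# lut: presynaptic cell -> on-chip axon-terminal coordinate (illustrative format)
lut = {0: (0, 0), 1: (0, 1), 2: (1, 0)}
migrate(lut, axon=0, target_site=(1, 0))   # axon 0 trades places with axon 2
print(lut[0], lut[2])
```

Because the move is a swap rather than a copy, synaptic density stays strictly constant, which is what prevents every axon from collapsing onto a single target.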
Postsynaptic activity triggers neurotropin release, as implemented by the circuit in the left box of Figure 3. Spikes from any of the three neighboring postsynaptic neurons pull Cspost to ground, opening M7 and discharging Cfpost through M4 and M5. As Cfpost falls, M6 opens, establishing a transient path from Vdd to M1 that injects charge into the hexagonal lattice. Upon termination of the postsynaptic spike, Cspost and Cfpost are recharged by decay currents through M2 and M3. Vppost and Vfpostout are chosen such that Cspost relaxes faster than Cfpost, permitting Cfpost to integrate several postsynaptic spikes and facilitate charge injection if spikes arrive in a burst rather than singly. Vfpostin determines the contribution of an individual spike to the facilitation capacitor Cfpost. Presynaptic activity triggers neurotropin uptake, as implemented by the circuit in the right box of Figure 3. Charge is removed from the hexagonal lattice by a facilitation circuit similar to that used for postsynaptic release. A presynaptic spike targeted to the axon terminal pulls Cspre to ground through M24. Cspre, in turn, drains charge from Cfpre through M21 and M22. Cfpre removes charge from the hexagonal lattice through M14, up to a limit set by M13, which prevents the hexagonal lattice from being completely drained in order to avoid charge trapping. Current from M14 is divided between five possible sinks. Depending on presynaptic activation, up to four axon terminals may sample a fraction of this current through M15-M18; the remainder is shunted to ground through M19 in order to prevent a single presynaptic event from exerting undue influence on gradient measurements. The current sampled by the axon terminal at its own site is gated by ~sample0, which is pulled low by a presynaptic spike through M26 and subsequently recovers through M25. Identical circuits in the other axon terminals generate signals ~sample1, ~sample2, and ~sample3.
Sample currents I0, I1, I2, and I3 are routed to latency competition circuits in the four adjacent axon terminals.

Figure 5: Retinal stimulus and cortical attractor. A. Randomly centered patches of active retinal cells (left) excite cortical targets (right). B. Density plot of a single mobile growth cone initialized in a static topographic projection. Histograms bin column (σ=3.27) and row (σ=3.79) coordinates observed (n=800).

3.2 Latency competition circuit

Each axon terminal measures the local neurotropin gradient by sampling a fraction of the neurotropin present at its own site, location 0, and the three immediately adjacent nodes on the hexagonal lattice, locations 1-3. Charge drained from the hexagonal lattice at these four sites is integrated on a separate capacitor for each location. The first capacitor to reach the threshold voltage wins the race, resetting itself and all of its competitors and signaling its victory off-chip. In the circuit that samples neurotropin from location 1 (left box of Figure 4), charge pulses I1 arrive through diode M1 and accumulate on capacitor C1 in an integrate-and-fire circuit described in [9]. Upon crossing threshold, this circuit transmits a swap request ~so1, resets its three competitors by using M6 to pull the shared reset line GRST high, and disables M4 to prevent GRST from using M3 to reset C1. The swap request ~so1 remains low until acknowledged by si1, which discharges C1 through M2. During the time that ~so1 is low, the other three capacitors are shunted to ground by GRST, preventing late arrivals from corrupting the declared gradient measurement before it has been transmitted off-chip. C1 being reset releases GRST to relax to ground through M24 with a decay time determined by Vgrst. C1 is also reset if the neighboring axon terminal initiates a swap. GRSTi1 is pulled low if either the axon terminal at location 1 decides to move to location 0 or the axon terminal at location 0 decides to move to location 1.
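The race among the four sampling capacitors can be modeled behaviorally: integrate each sampled current on its own accumulator and report the first to cross threshold. This is only a software sketch with made-up currents, units, and names, not a simulation of the actual circuit.

```python
def latency_competition(currents, threshold=1.0, dt=0.01, t_max=10.0):
    """Integrate each sampled current on its own accumulator; the first to
    cross threshold wins, resets all competitors, and reports its index."""
    v = [0.0] * len(currents)
    t = 0.0
    while t < t_max:
        for i, current in enumerate(currents):
            v[i] += current * dt
            if v[i] >= threshold:
                return i            # winner's index encodes the gradient direction
        t += dt
    return None                     # no sample reached threshold in time

# location 2 receives the largest neurotropin sample, so it fires first
winner = latency_competition([0.3, 0.5, 0.9, 0.4])
print(winner)
```

Encoding the gradient as a latency rather than an analog value is what lets the winner be reported as a single address-event.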
The accumulated neurotropin samples at both locations become obsolete after the exchange, and are therefore discarded when GRST is pulled high through M5. Identical circuits sample neurotropin from locations 2 and 3 (center two boxes of Figure 4). If C0 (right box of Figure 4) wins the latency competition, the axon terminal decides that its current location is optimal and therefore no action is required. In this case, no off-chip communication occurs and C0 immediately resets itself and its three rivals. Thus, the location 0 circuit is identical to those of locations 1-3 except that the inverted spike is fed directly back to the reset transistor M20 instead of to a communication circuit. Also, there is no GRSTi0 transistor since there is no swap partner.

4 Results

We drove the chip with a sequence of randomly centered patches of presynaptic activity meant to simulate retinal waves. Each patch consisted of 19 adjacent presynaptic cells: a randomly selected presynaptic cell and its nearest, next-nearest, and third-nearest presynaptic neighbors on a hexagonal grid (Figure 5A).

Figure 6: Topographic map evolution. A. Initial maps. Axon terminals in the postsynaptic plane (right) are dyed according to the presynaptic coordinates of their cell body (left). Top row: Coarse initial map. Bottom row: Perfect initial map. B. Postsynaptic plane after 12000 patch presentations. C. Map error in units of average postsynaptic distance between axon terminals of presynaptic neighbors. Top line: refinement of coarse initial map; bottom line: relaxation of perfect initial map.
Every patch participant generated a burst of 8192 spikes, which were routed to the appropriate axon terminal circuit according to the connectivity map stored in the CAM. About 100 patches were presented per minute. To establish an upper performance bound, we initialized the system with a perfectly topographic projection and generated bursts from the same retinal patch, holding all growth cones static except for the one projected from the center of the patch, which was free to move over the entire cortical plane. Over 800 min, the single mobile growth cone wandered within the cortical area of the patch (Figure 5B), suggesting that the patch radius limits maximum sustainable topography even in the ideal case. To test this limit empirically, we generated an initial connectivity map by starting with a perfectly topographic projection and executing a sequence of (N/2)² swaps between a randomly chosen axon terminal and one of its randomly chosen postsynaptic neighbors, where N is the number of axon terminals used. We opted for a fanout of 1 and full synaptic site occupancy, so 480 presynaptic cells projected axons to 480 synaptic sites. (One side of the neuron array exhibited enhanced excitability, apparently due to noise on the power rails, so the 320 synaptic sites on that side were abandoned.) The perturbed connectivity map preserved a loose global bias, representing the formation of a coarse topographic projection from activity-independent cues. This new initial map was then allowed to evolve according to the swap requests generated by the chip. After approximately 12000 patches, a refined topographic projection reemerged (Figure 6A,B). To investigate the dynamics of topographic refinement, we defined the error for a single presynaptic cell to be the average of the postsynaptic distances between the axon terminals projected by the cell body and its three immediate presynaptic neighbors. A cell in a perfectly topographic projection would therefore have unit error.
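The map-error metric just defined can be stated compactly in code. The data structures and names below are illustrative (a 1-D chain with one neighbor per cell rather than the chip's hexagonal grid with three), but the computation is the same: average, over cells, the mean postsynaptic distance to the terminals of each cell's presynaptic neighbors.

```python
import math

def map_error(terminal, neighbors):
    """Average over presynaptic cells of the mean postsynaptic distance between
    a cell's axon terminal and the terminals of its presynaptic neighbors.
    A perfectly topographic projection with unit spacing scores 1.0."""
    per_cell = []
    for cell, nbrs in neighbors.items():
        dists = [math.dist(terminal[cell], terminal[n]) for n in nbrs]
        per_cell.append(sum(dists) / len(dists))
    return sum(per_cell) / len(per_cell)

# toy 1-D chain: each cell neighbors the next; the perfect map keeps unit spacing
terminal = {i: (float(i), 0.0) for i in range(5)}
neighbors = {i: [i + 1] for i in range(4)}
err = map_error(terminal, neighbors)
print(err)  # 1.0 for the perfect projection
```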
The error drops quickly at the beginning of the evolution as local clumps of correlated axon terminals crystallize. Further refinement requires the disassembly of locally topographic crystals that happened to nucleate in a globally inconvenient location. During this later phase, the error decreases slowly toward an asymptote. To evaluate this limit we seeded the system with a perfect projection and let it relax to a sustainable degree of topography, which we found to have an error of about 10 units (Figure 6C).

5 Discussion

Our results demonstrate the feasibility of a spike-based neuromorphic learning system based on principles of developmental plasticity. This neurotropin chip lends itself readily to more ambitious multichip systems incorporating silicon retinae that could be used to automatically wire ocular dominance columns and orientation-selectivity maps when driven by spatiotemporal correlations among neurons of different origin (e.g. left eye/right eye) or type (ON/OFF). A related model of chemical-driven developmental plasticity posits an activity-dependent competition for a local sustenance factor, or neurotrophin. Axon weights saturate at neurotrophin-rich locations and vanish at neurotrophin-starved locations, pruning a dense initial arbor until only the final circuit remains [10]. By contrast, in our chemotaxis model, a handful of growth cone-guided wires rearrange themselves by moving through locations at which they had no initial presence. These two mechanisms could plausibly complement each other: noisy gradient measurements establish an initial axonal arbor that can then be pruned to eliminate outliers and refine local topography. We can use a similar approach to improve our silicon maps.

Acknowledgments

We would like to thank K. Hynna and K. Zaghloul for assistance with fabrication and testing. This project was funded in part by the David and Lucile Packard Foundation and the NSF/BITS program (EIA-0130822). B.T.
received support from the Dolores Zohrab Liebmann Foundation.

References
[1] C. Mead (1990) Neuromorphic electronic systems. Proceedings of the IEEE, 78(10):1629-1636.
[2] K.A. Zaghloul (2002) A silicon implementation of a novel model for retinal processing. PhD thesis, University of Pennsylvania.
[3] M. Sur and C.A. Leamey (2001) Development and plasticity of cortical areas and networks. Nat Rev Neurosci, 2:251-262.
[4] E.J. Huang and L.F. Reichardt (2001) Neurotrophins: roles in neuronal development and function. Annu Rev Neurosci, 24:677-736.
[5] M.B. Feller, D.A. Butts, H.L. Aaron, D.S. Rokhsar, and C.J. Shatz (1997) Dynamic processes shape spatiotemporal properties of retinal waves. Neuron, 19:293-306.
[6] K.A. Boahen (2000) Point-to-point connectivity between neuromorphic chips using address-events. IEEE Transactions on Circuits and Systems II, 47:416-434.
[7] J.G. Elias (1993) Artificial dendritic trees. Neural Computation, 5:648-663.
[8] K.A. Boahen and A.G. Andreou (1991) A contrast-sensitive silicon retina with reciprocal synapses. Advances in Neural Information Processing Systems 4, J.E. Moody and R.P. Lippmann, eds., pp 764-772, Morgan Kaufmann, San Mateo, CA.
[9] E. Culurciello, R. Etienne-Cummings, and K. Boahen (2001) Arbitrated address event representation digital image sensor. IEEE International Solid State Circuits Conference, pp 92-93.
[10] T. Elliott and N.R. Shadbolt (1999) A neurotrophic model of the development of the retinogeniculocortical pathway induced by spontaneous retinal waves. J Neurosci, 19:7951-7970.
2002
Analysis of Information in Speech Based on MANOVA Sachin S. Kajarekar and Hynek Hermansky Department of Electrical and Computer Engineering, OGI School of Science and Engineering at OHSU, Beaverton, OR; International Computer Science Institute, Berkeley, CA {sachin,hynek}@asp.ogi.edu

Abstract

We propose analysis of information in speech using three sources - language (phone), speaker, and channel. Information in speech is measured as mutual information between the source and the set of features extracted from the speech signal. We assume that the distribution of features can be modeled using a Gaussian distribution. The mutual information is computed using the results of the analysis of variability in speech. We observe similarity between the results for phone variability and phone information, and show that the results of the proposed analysis have more meaningful interpretations than the analysis of variability.

1 Introduction

The speech signal carries information about the linguistic message, the speaker, and the communication channel. In previous work [1, 2], we proposed analysis of information in speech as analysis of variability in a set of features extracted from the speech signal. The variability was measured as the covariance of the features, and the analysis was performed using multivariate analysis of variance (MANOVA). Total variability was divided into three types, namely intra-phone (or phone) variability, speaker variability, and channel variability. The effect of each type was measured as its contribution to the total variability. In this paper, we extend our previous work by proposing an information-theoretic analysis of information in speech. Similar to MANOVA, we assume that speech carries information from three main sources - language, speaker, and channel. We measure information from a source as mutual information (MI) [3] between the corresponding class labels and features.
For example, linguistic information is measured as MI between phone labels and features. The effect of sources is measured in nats (or bits). In this work, we show that it is easier to interpret the results of this analysis than the analysis of variability. In general, MI between two random variables X and Y can be measured using three different methods [4]. First, by assuming that X and Y have a joint Gaussian distribution; however, we cannot use this method because one of the variables - a set of class labels - is discrete. Second, by modeling the distribution of X or Y using a parametric form, for example, a mixture of Gaussians [4]. Third, by using non-parametric techniques to estimate the distributions of X and Y [5]. The proposed analysis is based on the second method, where the distribution of features is modeled as a Gaussian distribution. Although this is a strong assumption, we show that results of this analysis are similar to the results obtained using the third method [5]. The paper is organized as follows. Section 2 describes the experimental setup. Section 3 describes MANOVA and presents its results. Section 4 proposes an information-theoretic approach for the analysis of information in speech and presents the results. Section 5 compares these results with results from the previous study. Section 6 describes the summary and conclusions from this work.

2 Experimental Setup

In previous work [1, 2], we analyzed variability in the features using three databases - HTIMIT, OGI Stories, and TIMIT. In this work, we present results of MANOVA using the OGI Stories database, mainly for comparison with Yang's results [5, 6]. The English part of the OGI Stories database consists of 207 speakers, speaking for approximately 1 minute each. Each utterance is transcribed at the phone level. Therefore, phone is considered as a source of variability or a source of information. The utterances are not labeled separately by speakers and channels, so we cannot measure speaker and channel as separate sources.
Instead, we assume that different speakers have used different channels and consider speaker+channel as a single source of variability or a single source of information.

Figure 1 shows a commonly used time-frequency representation of energy in a speech signal. The y-axis represents frequency, the x-axis represents time, and the darkness of each element shows the energy at a given frequency and time. A spectral vector is defined by the number of points on the y-axis, S(w, tm). In this work, this vector contains 15 points on the Bark spectrum. The vector is estimated every 10 ms using a 25 ms speech segment. It is labeled by the phone and the speaker and channel label of the corresponding speech segment. A temporal vector is defined by a sequence of points along time at a given frequency, S(wn, t). In this work, it consists of 50 points each in the past and the future with respect to the current observation, plus the observation itself. As the spectral vectors are computed every 10 ms, the temporal vector represents 1 sec of temporal information. The temporal vectors are labeled by the phone and the speaker and channel label of the current speech segment. In this work, the analysis is performed independently using spectral and temporal vectors.

Figure 1: Time-frequency representation of logarithmic energies from a speech signal, showing a spectral vector (spectral domain) and a temporal vector (temporal domain).

3 MANOVA

Multivariate analysis of variance (MANOVA) [7] is used to measure the variation in the data, {X ∈ R^n}, with respect to two or more factors. In this work, we use two factors - phone and speaker+channel. The underlying model of MANOVA is

X_ijk = X̄ + (X̄_i − X̄) + (X̄_ij − X̄_i) + E_ijk   (1)

where i = 1, ..., p indexes phones and j = 1, ..., sc indexes speakers and channels. This equation shows that any feature vector X_ijk can be approximated using a sum of X̄, the mean of the data; X̄_i, the mean of phone i; X̄_ij, the mean of speaker and channel j within phone i; and E_ijk, an error in this approximation. Using this model, the total covariance can be decomposed as follows:

Σ_total = Σ_p + Σ_sc + Σ_residual   (2)

where

Σ_p = (1/N) sum_i N_i (X̄_i − X̄)^t (X̄_i − X̄),
Σ_sc = (1/N) sum_{i,j} N_ij (X̄_ij − X̄_i)^t (X̄_ij − X̄_i),
Σ_residual = (1/N) sum_{i,j,k} (X_ijk − X̄_ij)^t (X_ijk − X̄_ij),

and N is the data size and N_i, N_ij refer to the number of samples associated with the particular combination of factors (indicated by the subscript). The covariance terms are computed as follows. First, all the feature vectors (X) belonging to each phone i are collected and their mean (X̄_i) is computed. The covariance of these phone means, Σ_p, is the estimate of phone variability. Next, the data for each speaker and channel j within each phone i is collected and the mean of the data (X̄_ij) is computed. The covariance of the means of different speakers averaged over all phones, Σ_sc, is the estimate of speaker variability. Not all the variability in the data is explained by these sources. The unaccounted sources, such as context and coarticulation, cause variability in the data collected from one speaker speaking one phone through one channel. The covariance within each phone, speaker, and channel is averaged over all phones, speakers, and channels, and the resulting covariance, Σ_residual, is the estimate of residual variability.

3.1 Results

Results of MANOVA are interpreted at two levels - feature element and feature vector. Results for each feature element are shown in Figure 2. Table 1 shows the results using the complete feature vector. The contribution of different sources is calculated as trace(Σ_source)/trace(Σ_total).
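For a balanced design, the decomposition in Equation 2 holds exactly, and the trace-ratio contributions follow directly. The check below uses synthetic data with illustrative shapes and names; it is a sketch of the computation, not the paper's actual feature pipeline.

```python
import numpy as np

# balanced toy design: 3 phones x 4 speaker/channel cells x 20 samples of 2-D features
rng = np.random.default_rng(0)
p, s, k, d = 3, 4, 20, 2
X = rng.normal(size=(p, s, k, d)) + 2.0 * rng.normal(size=(p, 1, 1, d))

N = p * s * k
grand = X.mean(axis=(0, 1, 2))          # overall mean
phone_means = X.mean(axis=(1, 2))       # per-phone means, shape (p, d)
ps_means = X.mean(axis=2)               # per-phone-and-speaker means, shape (p, s, d)

def scatter(diffs, weights, n):
    """(1/n) * sum of w_i * diff_i^t diff_i over the rows of `diffs`."""
    return sum(w * np.outer(v, v) for v, w in zip(diffs, weights)) / n

sigma_p = scatter(phone_means - grand, [s * k] * p, N)
sigma_sc = scatter((ps_means - phone_means[:, None, :]).reshape(-1, d),
                   [k] * (p * s), N)
resid = (X - ps_means[:, :, None, :]).reshape(-1, d)
sigma_res = resid.T @ resid / N
centered = X.reshape(-1, d) - grand
sigma_total = centered.T @ centered / N

# Equation 2: the three source covariances add up to the total covariance
print(np.allclose(sigma_p + sigma_sc + sigma_res, sigma_total))

# per-source contribution, as reported in Table 1
contrib_phone = np.trace(sigma_p) / np.trace(sigma_total)
```

The identity holds because, in the balanced case, the cross terms between levels of the nested means vanish when summed over the data.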
Note that this measure cannot be used to compare variabilities across feature-sets with different numbers of features. Therefore, we cannot directly compare contributions of variabilities in the time and frequency domains. For comparison, the contribution of sources in the temporal domain is calculated as trace(E^t Σ_source E)/trace(E^t Σ_total E), where E (101 × 15) is a matrix of the 15 leading eigenvectors of Σ_total.

Table 1: Contribution of sources in spectral and temporal domains (% contribution)

source            Spectral Domain   Temporal Domain
phone                  35.3               4.0
speaker+channel        41.1              30.3

In the spectral domain, the highest phone variability is between 4-6 Barks. The highest speaker and channel variability is between 1-2 Barks, where phone variability is the lowest. In the temporal domain, phone variability spreads for approximately 250 ms around the current phone. Speaker and channel variability is almost constant except around the current frame. This deviation is explained by the difference in phonetic context among the phone instances across different speakers. Thus, features for speakers within a phone differ not only because of different speaker characteristics but also because of different phonetic contexts. This deviation is also seen in the speaker and channel information in the proposed analysis. In the overall results for each domain, the spectral domain has higher variability due to different phones than the temporal domain. It also has higher speaker and channel variability than the temporal domain. The disadvantage of this analysis is that it is difficult to interpret the results. For example, how much phone variability is needed for perfect phone recognition? And is 4% of phone variability in the temporal domain significant? In order to answer these questions, we propose an information-theoretic analysis.

Figure 2: Results of analysis of variability.

4 Information-theoretic Analysis

Results of MANOVA cannot be directly converted to MI because the determinants of the source and residual covariances do not add to the determinant of the total covariance. Therefore, we propose a different formulation for the information-theoretic analysis as follows. Let {X ∈ R^n} be a set of feature vectors with probability distribution p(X). Let h(X) be the entropy of X. Let Y = {Y1, ..., Ym} be a set of different factors, and each Yi be a set of classes within each factor. For example, we can assume that Y1 = {y1_i} represents the phone factor and each y1_i represents a phone class. Let us assume that X has two parts: one completely characterized by Y, and another part, Z, characterized by N(X) ~ N(0, I_{n×n}), where I is the identity matrix. Let J(X; Y) be the MI between X and Y. Assuming that we consider all the possible factors for our analysis,

J(X; Y) = J(X; Y1, ..., Ym) = h(X) − h(X|Y1, ..., Ym) = h(X) − h(Z) = D(P||N),

where D() is the Kullback-Leibler distance [3] between distributions P and N. Using the chain rule, the left-hand side can be expanded as follows:

J(X; Y1, ..., Ym) = J(X; Y1) + J(X; Y2|Y1) + sum_{i=3..m} J(X; Yi|Y_{i−1}, ..., Y2, Y1).   (3)

If we assume that there are only two factors, Y1 and Y2, used for the analysis, then this equation is similar to the decomposition performed using MANOVA (Equation 2). The term on the left-hand side is the total information in X that can be explained using Y, similar to the left-hand-side term in MANOVA that describes the total variability. On the right-hand side, the first term is similar to the phone variability, the second term is similar to the speaker variability, and the last term, which captures the effect of unaccounted factors (Y3, ..., Ym), is similar to the residual variability. The first and second terms on the right-hand side of Equation 3 are computed as follows.
J(X; Y1) = h(X) − h(X|Y1)   (4)
J(X; Y2|Y1) = h(X|Y1) − h(X|Y1, Y2).   (5)

The h() terms are estimated using a parametric approximation to the total and conditional distributions. It is assumed that the total distribution of features is a Gaussian distribution with covariance Σ. Therefore, h(X) = (1/2) log((2πe)^n |Σ|). Similarly, we assume that the distribution of features of each phone i is a Gaussian distribution with covariance Σ_i. Therefore,

h(X|Y1) = (1/2) sum_{y1_i ∈ Y1} p(y1_i) log((2πe)^n |Σ_i|)   (6)

Finally, we assume that the distribution of features of different phones spoken by different speakers is also a Gaussian distribution with covariances Σ_ij. Therefore,

h(X|Y1, Y2) = (1/2) sum_{y1_i ∈ Y1, y2_j ∈ Y2} p(y1_i, y2_j) log((2πe)^n |Σ_ij|)   (7)

Substituting Equations 6 and 7 into Equations 4 and 5, we get

J(X; Y1) = (1/2) log( |Σ| / prod_{y1_i ∈ Y1} |Σ_i|^{p(y1_i)} )   (8)

J(X; Y2|Y1) = (1/2) log( prod_{y1_i ∈ Y1} |Σ_i|^{p(y1_i)} / prod_{y1_i ∈ Y1, y2_j ∈ Y2} |Σ_ij|^{p(y1_i, y2_j)} )   (9)

Figure 3: Results of information-theoretic analysis.

Table 2: Mutual information between features and phone, and speaker and channel, labels in spectral and temporal domains (in nats; values from Section 4.1)

source            Spectral Domain   Temporal Domain
phone                   1.6               1.2
speaker+channel         0.5               5.9

4.1 Results

Figure 3 shows the results of the information-theoretic analysis in the spectral and temporal domains. These results are computed independently for each feature element. In the spectral domain, phone information is highest between 3-6 Barks. Speaker and channel information is lowest in that range and highest between 1-2 Barks. Since the OGI Stories database was collected over different telephones, speaker+channel information below 2 Barks (≈ 200 Hz) is due to different telephone channels. In the temporal domain, the highest phone information is at the center (0 ms). It spreads for approximately 200 ms around the center.
Speaker and channel information is almost constant across time except near the center. Note that the nature of speaker and channel variability also deviates from constant around the current frame. But, at the current frame, phone variability is higher than speaker and channel variability, whereas the results of the analysis of information show that, at the current frame, phone information is lower than speaker and channel information. This difference is explained by comparing our MI results with results from Yang et al. [6] in the next section. Table 2 shows the results for the complete feature vector. Note that there are some practical issues in computing the determinants in Equations 4 and 5. They are related to data insufficiency, specifically in the temporal domain, where the feature vector is 101 points and there are approximately 60 vectors per speaker per phone. We observe that without proper conditioning of the covariances, the analysis overestimates MI (J(X; Y1, Y2) > H(Y1, Y2)). This is addressed by using the condition number to limit the number of eigenvalues used in the calculation of determinants. Our hypothesis is that in the presence of insufficient data, only a few leading eigenvectors are properly estimated. We have used a condition number of 1000 to estimate the determinants of Σ and Σ_i, and a condition number of 100 to estimate the determinant of Σ_ij. The results show that phone information in the spectral domain is 1.6 nats. Speaker and channel information is 0.5 nats. In the temporal domain, phone information is about 1.2 nats. Speaker and channel information is 5.9 nats. Comparison of results from the spectral and temporal domains shows that the spectral domain has higher phone information than the temporal domain. The temporal domain has higher speaker and channel information than the spectral domain. Using these results, we can answer the questions raised in Section 3. The first question was how much phone variability is needed for perfect phone recognition?
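Equation 8, together with the condition-number trick just described, reduces to a few lines of linear algebra. The sketch below uses synthetic data; the function names, eigenvalue-cutoff convention, and data are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def cond_logdet(cov, cond):
    """Log-determinant keeping only eigenvalues within `cond` of the largest,
    discarding directions too poorly estimated from scarce data."""
    w = np.linalg.eigvalsh(np.atleast_2d(cov))
    keep = w >= w.max() / cond
    return float(np.sum(np.log(w[keep])))

def gaussian_mi(features, labels, cond=1000.0):
    """J(X;Y) = 0.5 * log(|Sigma| / prod_i |Sigma_i|^p(i)) in nats (Eq. 8),
    with condition-number-limited determinants."""
    mi = 0.5 * cond_logdet(np.cov(features, rowvar=False), cond)
    for c in np.unique(labels):
        Xc = features[labels == c]
        p = len(Xc) / len(features)
        mi -= 0.5 * p * cond_logdet(np.cov(Xc, rowvar=False), cond)
    return mi

rng = np.random.default_rng(1)
labels = rng.integers(0, 3, size=3000)
# class-dependent means make the features informative about the label
features = rng.normal(size=(3000, 2)) + labels[:, None] * 3.0
print(gaussian_mi(features, labels))
```

Note that, as the text warns, the unimodal Gaussian assumption can make this estimate exceed the true MI, which is why the eigenvalue conditioning matters when data are scarce.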
The answer is $H(Y_1)$, because the maximum value of $I(X; Y_1)$ is $H(Y_1)$. We compute $H(Y_1)$ using the phone priors. For this database we get $H(Y_1) = 3.42$ nats, which means we need 3.42 nats of information for perfect phone recognition. The question about the significance of the phone information in the temporal domain is addressed by comparing it with an information-less MI level. The information-less MI is computed as the MI between the current phone label and the features 500 ms in the past or in the future. From our results, the information-less MI is 0.0013 nats using the features 500 ms in the past and 0.0010 nats using the features 500 ms in the future.¹ The phone information in the temporal domain is 1.2 nats, which is greater than both levels; therefore it is significant.
5 Results in Perspective
In the proposed analysis we estimated MI assuming a Gaussian distribution for the features. This assumption is validated by comparing our results with the results of a study by Yang et al. [6], in which MI was computed without assuming any parametric model for the distribution of the features. Note that only entropies can be directly compared across estimation techniques [3]; MI under the Gaussian assumption can be equal to, less than, or greater than the actual MI. In comparing our results with Yang's results, we therefore consider only the nature of the information observed in both studies; the difference in absolute MI levels across the two studies is due to the difference in estimation techniques. In the spectral domain, Yang's study showed higher phone information between 3-8 Barks, with the highest phone information at 4 Barks. Higher speaker and channel information was observed around 1-2 Barks. In the temporal domain, their study showed that phone information spreads for approximately 200 ms around the current time frame. Comparison of their analysis and ours shows that the nature of the phone information is similar in both studies.
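The $H(Y_1)$ ceiling on $I(X; Y_1)$ is simply the entropy of the phone priors. A minimal sketch follows; the 31 uniform phone classes are purely illustrative, chosen only because $\log 31 \approx 3.43$ nats is near the reported 3.42 nats (the paper's actual priors are not uniform).

```python
import numpy as np

def label_entropy(priors):
    """H(Y) = -sum_i p_i log p_i in nats; the ceiling on I(X;Y)."""
    p = np.asarray(priors, dtype=float)
    p = p / p.sum()          # normalize in case raw counts are passed
    p = p[p > 0]             # 0 * log 0 = 0 by convention
    return float(-np.sum(p * np.log(p)))

# Uniform priors over 31 hypothetical phones give log(31) nats.
print(round(label_entropy(np.ones(31)), 2))  # 3.43
```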
The nature of the speaker and channel information in the spectral domain is also similar. We could not compare the speaker and channel information in the temporal domain because Yang's study did not present these results. In Section 4.1 we observed a difference between the nature of the speaker and channel variability and the speaker and channel information at 5 Barks. Comparing MI levels from our study with those from Yang's study, we observe that Yang's results show the speaker and channel information at 5 Barks to be less than the corresponding phone information. This is consistent with the results of the analysis of variability, but not with the proposed analysis of information. (¹The information-less MI calculated following Yang et al. is 0.019 bits.) As mentioned before, this difference is due to the difference in the density estimation techniques used for computing MI. In future work, we plan to model the densities using more sophisticated techniques and improve the estimation of the speaker and channel information.
6 Conclusions
We proposed an analysis of information in speech using three sources of information: language (phone), speaker, and channel. Information in speech was measured as the MI between the class labels and the set of features extracted from the speech signal; for example, linguistic information was measured using the phone labels and the features. We modeled the distribution of the features as Gaussian, thus relating the analysis to the previously proposed analysis of variability in speech. We observed similar results for phone variability and phone information. The speaker and channel variability and the speaker and channel information around the current frame were different; this was shown to be related to the over-estimation of the speaker and channel information by the unimodal Gaussian model. Note that the analysis of information was proposed because its results have more meaningful interpretations than the results of the analysis of variability.
To address the over-estimation, we plan in future work to use more complex models, such as mixtures of Gaussians, for computing MI.
Acknowledgments
The authors thank Prof. Andrew Fraser from Portland State University for numerous discussions and helpful insights on this topic.
References
[1] S. S. Kajarekar, N. Malayath and H. Hermansky, "Analysis of sources of variability in speech," in Proc. of EUROSPEECH, Budapest, Hungary, 1999.
[2] S. S. Kajarekar, N. Malayath and H. Hermansky, "Analysis of speaker and channel variability in speech," in Proc. of ASRU, Colorado, 1999.
[3] T. M. Cover and J. A. Thomas, Elements of Information Theory, John Wiley & Sons, Inc., 1991.
[4] J. A. Bilmes, "Maximum mutual information based reduction strategies for cross-correlation based joint distribution modelling," in Proc. of ICASSP, Seattle, USA, 1998.
[5] H. Yang, S. van Vuuren and H. Hermansky, "Relevancy of time-frequency features for phonetic classification measured by mutual information," in Proc. of ICASSP, Phoenix, Arizona, USA, 1999.
[6] H. H. Yang, S. Sharma, S. van Vuuren and H. Hermansky, "Relevance of time-frequency features for phonetic and speaker-channel classification," Speech Communication, Aug. 2000.
[7] R. V. Hogg and E. A. Tanis, Statistical Analysis and Inference, Prentice Hall, fifth edition, 1997.
2002
58
2,263
Discriminative Binaural Sound Localization
Ehud Ben-Reuven and Yoram Singer
School of Computer Science & Engineering, The Hebrew University, Jerusalem 91904, Israel
udi@benreuven.com, singer@cs.huji.ac.il
Abstract
Time difference of arrival (TDOA) is commonly used to estimate the azimuth of a source in a microphone array. The most common methods to estimate TDOA are based on finding extrema in generalized cross-correlation waveforms. In this paper we apply microphone array techniques to a manikin head. By considering the entire cross-correlation waveform we achieve azimuth prediction accuracy that exceeds extrema-locating methods. We do so by quantizing the azimuthal angle and treating the prediction problem as a multiclass categorization task. We demonstrate the merits of our approach by evaluating the various approaches on Sony's AIBO robot.
1 Introduction
In this paper we describe and evaluate several algorithms for sound localization in a commercial entertainment robot. The physical system being investigated is composed of a manikin head, equipped with two microphones, placed on a manikin body. This type of system is commonly used to model sound localization in biological systems, and the algorithms used to analyze the signal are usually inspired by neurology. In the case of an entertainment robot there is no need to be limited to a neurologically inspired model, and we use a combination of techniques that are commonly used in microphone arrays and statistical learning. The focus of this work is the task of localizing an unknown stationary source (compact in location and broad in spectrum). The goal is to find the azimuth angle of the source relative to the head. A common paradigm for approximately finding the location of a sound source employs a microphone array and estimates time differences of arrival (TDOA) between microphones in the array (see for instance [1]).
In a dual-microphone array it is usually assumed that the difference between the two channels is limited to a small time delay (or a linear phase in the frequency domain), and therefore the cross-correlation is peaked at the time corresponding to the delay. Thus, methods that search for extrema in cross-correlation waveforms are commonly used [2]. The time-delay approach is based on the assumption that the sound waves propagate along a single path from the source to the microphone and that the microphone responses of the two channels for the given source location are approximately the same. For this to hold, the microphones should be identical, co-aligned, and near each other relative to the source; in addition, there should not be any obstructions between or near the microphones. The time-delay assumption fails in the case of a manikin head: the microphones are antipodal, and the manikin head and body affect the response in a complex way. In our system the distance to the supporting floor was also significant. Our approach to overcoming these difficulties is composed of two stages. First, we perform signal processing based on a generalized cross-correlation transform called the Phase Transform (PHAT), also called the Cross Power Spectrum Phase (CPSP). This signal processing removes, to a large extent, variations due to the sound source. Then, rather than proceeding with peak-finding, we employ discriminative learning methods by casting the azimuth estimation as a multiclass prediction problem. Combining the two stages gave improved results in our experimental setup. This paper is organized as follows. In Sec. 2 we describe how the signal received at the two microphones was processed to generate accurate features. In Sec. 3 we outline the supervised learning algorithm we used. We then discuss in Sec. 4 approaches to combining predictions from multiple segments. We describe experimental results in Sec.
5 and conclude with a brief discussion in Sec. 6.
2 Signal Processing
Throughout the paper we denote signals in the time domain by lower-case letters and in the frequency domain by upper-case letters. We denote the convolution operator between two signals by $*$ and the correlation operator by $\star$. The unknown source signal is denoted by $s(t)$, and thus its spectrum is $S(\omega)$. The source signal passes through different physical setups and is received at the right and left microphones; we denote the received signals by $x_r(t)$ and $x_l(t)$. We model the different physical media the signal passes through as two linear systems whose frequency responses are denoted by $H_l(\omega)$ and $H_r(\omega)$. In addition, the signals are contaminated with noise that may account for non-linear effects such as room reverberations (see for instance [3] for more detailed noise models). Thus, the received signals can be written in the time and frequency domains as
$x_l(t) = h_l(t) * s(t) + n_l(t)$, i.e. $X_l(\omega) = H_l(\omega)S(\omega) + N_l(\omega)$ (1)
$x_r(t) = h_r(t) * s(t) + n_r(t)$, i.e. $X_r(\omega) = H_r(\omega)S(\omega) + N_r(\omega)$ (2)
Since the source signal is typically non-stationary, we break each training and test signal into segments and perform the processing described in the sequel based on the short-time Fourier transform. Let $K$ be the number of segments a signal is divided into and $N$ the number of samples in a single segment. Each segment is multiplied by a Hanning window and padded with zeros to smooth the end-of-segment effects and increase the resolution of the short-time Fourier transform (see for instance [8]). Denote by $x_l^k(t)$ and $x_r^k(t)$ the left and right signal segments after the above processing. Based on the properties of the Fourier transform, the local cross-correlation between the two signals can be computed efficiently by the inverse Fourier transform, denoted $F^{-1}$, of the product of the spectrum of $x_l^k$ and the complex conjugate of the spectrum of $x_r^k$:
$r^k(\tau) = (x_l^k \star x_r^k)(\tau) = F^{-1}\{X_l^k(\omega)\,[X_r^k(\omega)]^*\}$ (3)
Had the difference between the two signals been a mere time delay due to the different locations of the microphones, the cross-correlation would have attained its maximal value at the point corresponding to the time lag between the received signals. However, since the source signal passes through different physical media, the short-time cross-correlation does not necessarily attain a large value at the time-lag index. It is therefore common (see for instance [1]) to multiply the spectrum of the cross-correlation by a weighting function in order to compensate for the differences between the frequency responses at the two microphones. Denoting the spectral shaping function for the $k$th segment by $W^k(\omega)$, the generalized cross-correlation from Eq. (3) is $r^k(\tau) = F^{-1}\{W^k(\omega)\,X_l^k(\omega)\,[X_r^k(\omega)]^*\}$. For "plain" cross-correlation, $W^k(\omega)$ is equal to 1 at each (discrete) frequency $\omega$. In our tests we found that a globally-equalized cross-correlation gives better results. This transform is obtained by setting $W^k(\omega) = 1/\bar{A}(\omega)$, where $\bar{A}(\omega)$ is the average, over all measurements and both channels, of $|X(\omega)|^2$. Finally, for PHAT the weight at frequency $\omega$ is $W^k(\omega) = 1/\left|X_l^k(\omega)\,[X_r^k(\omega)]^*\right|$. To further motivate and explain the PHAT weighting scheme, we build on the derivation in [5] and expand the PHAT assuming that the noise is zero. In PHAT, the spectral value at frequency $\omega$ (prior to the inverse Fourier transform) is
$W^k(\omega)\,X_l^k(\omega)\,[X_r^k(\omega)]^* = \frac{X_l^k(\omega)\,[X_r^k(\omega)]^*}{\left|X_l^k(\omega)\,[X_r^k(\omega)]^*\right|}$ (4)
Inserting Eq. (1) and Eq. (2) into Eq. (4) without noise, we get
$W^k(\omega)\,X_l^k(\omega)\,[X_r^k(\omega)]^* = \frac{H_l S\,[H_r S]^*}{\left|H_l S\,[H_r S]^*\right|} = \frac{H_l H_r^*}{|H_l|\,|H_r|}$ (5)
[Figure 1: Average waveform with standard deviation after performing PHAT (top) and the equalized cross-correlation (bottom).]
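The windowed short-time cross-correlation of Eq. (3) can be sketched as follows. This is a toy pure-delay example, not the robot data; the segment lengths, FFT size, and 3-sample delay are all invented.

```python
import numpy as np

def segment_cross_correlation(xl, xr, nfft):
    """Plain short-time cross-correlation of one segment pair (Eq. 3):
    Hanning-window both segments, zero-pad to nfft, and take the inverse
    FFT of Xl(w) * conj(Xr(w)).  fftshift puts lag 0 at index nfft//2.
    """
    w = np.hanning(len(xl))
    Xl = np.fft.fft(xl * w, nfft)
    Xr = np.fft.fft(xr * w, nfft)
    return np.fft.fftshift(np.fft.ifft(Xl * np.conj(Xr)).real)

# Toy check with a pure delay: the right channel is the left channel
# delayed by 3 samples, so the correlation should peak at lag -3
# under the convention r(tau) = sum_n xl(n + tau) * xr(n).
rng = np.random.default_rng(1)
s = rng.standard_normal(300)
xl, xr = s[3:253], s[0:250]
r = segment_cross_correlation(xl, xr, nfft=512)
lag = int(np.argmax(r)) - 512 // 2
print(lag)  # -3
```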
Therefore, assuming the noise is zero, PHAT eliminates the contribution of the unknown source $S(\omega)$, and the entire PHAT waveform is only a function of the physical setup. If all other physical parameters are constant, the PHAT waveform (as well as its peak location) is a function of the azimuth angle of the sound source relative to the manikin head. This is of course an approximation, and the presence of noise and changes in the environment result in a waveform that deviates from the closed form given in Eq. (5). In Fig. 1 we show the empirical average of the waveform for PHAT and for the equalized cross-correlation; the vertical bars represent an error of one standard deviation. In both cases, the location of the maximal correlation is clearly where expected. Nonetheless, the high variance, especially in the case of the equalized cross-correlation, implies that classification of individual segments may often be rather difficult. In practice, we found that it suffices to take only the energetic portion of the generalized cross-correlation waveforms by considering only time lags of $-T$ through $T$ samples; in what follows we take this part to be the waveform. Formally, the feature vector of the $k$th segment is defined as
$v^k = \left(r^k(-T), \ldots, r^k(T)\right)$, (6)
where $T$ was set to be larger than the maximal lag in samples between the two channels, which is determined by the head diameter and the speed of sound. Summarizing, the signal processing we perform is based on the short-time Fourier transform of the signals received at the two microphones. From the two spectra we compute the generalized cross-correlation using one of the three weighting schemes described above, taking only the $2T+1$ central samples of the resulting waveforms as the feature vectors. We now move our focus to the classification of a single segment.
3 Single Segment Classification
Traditional approaches to sound localization search for the position of the extreme value in the generalized cross-correlation waveforms derived in Sec. 2.
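The PHAT weighting of Eqs. (4)-(5) can be sketched like this. With an exact circular delay and no noise, discarding the cross-spectrum magnitude collapses the waveform to a spike at the true lag; the setup is a toy (invented lengths and delay), not the robot data.

```python
import numpy as np

def gcc_phat(xl, xr, nfft):
    """Generalized cross-correlation with the PHAT weighting (Eq. 4):
    r(tau) = F^{-1}{ Xl Xr* / |Xl Xr*| }.  The magnitude is discarded,
    so with no noise the waveform depends only on the channel phases.
    """
    Xl = np.fft.fft(xl, nfft)
    Xr = np.fft.fft(xr, nfft)
    cross = Xl * np.conj(Xr)
    cross /= np.abs(cross) + 1e-12   # small floor to avoid division by zero
    return np.fft.fftshift(np.fft.ifft(cross).real)

rng = np.random.default_rng(2)
xl = rng.standard_normal(256)
xr = np.roll(xl, 3)                  # exact circular delay of 3 samples
r = gcc_phat(xl, xr, nfft=256)
lag = int(np.argmax(r)) - 256 // 2
print(lag)  # -3, and r is essentially a unit spike there
```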
While intuitive, this approach is prone to noise. Peak location can be considered a reduction in dimensionality, from $2T+1$ to 1, of the feature vectors $v^k$; however, we have shown in Eq. (5) that the entire PHAT waveform can be used as a feature vector to localize the source. Indeed, in Sec. 5 we report experimental results which show that peak-finding is significantly inferior to the methods we now describe, which use the entire waveform. In all techniques, peak-location and waveform alike, we used supervised learning to build a model of the data from a training set and then used a test set to evaluate the learned model. In a supervised learning setting, we have access to labelled examples, and the goal is to find a mapping from the instance domain (the peak locations or waveforms in our setting) to a response variable (the azimuth angle). Since the angle is a continuous variable, the first approach that comes to mind is a linear or non-linear regressor. However, we found that regression algorithms such as Widrow-Hoff [10] yielded inferior results. Instead of treating the learning problem as regression, we quantized the angle and converted the sound localization problem into a multiclass decision problem. Formally, we bisected the interval of possible azimuths into $M$ non-overlapping intervals $\Theta_1, \ldots, \Theta_M$, and transformed the real-valued angle of the $k$th segment, $\theta^k$, into a discrete variable $y^k \in \{1, \ldots, M\}$, where $y^k = j$ iff $\theta^k \in \Theta_j$. After this quantization, the training set is composed of instance-label pairs $\{(v^k, y^k)\}$, and the first task is to find a classification rule from the peak-location or waveform space into $\{1, \ldots, M\}$. We first describe the method used for peak location and then describe two discriminative methods for classifying the waveform. The first is based on a multiclass version of the Fisher linear discriminant [7] and is very simple to implement.
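The angle quantization step can be sketched as follows; the interval bounds and the number of bins here are illustrative only, not the paper's values.

```python
import numpy as np

def quantize_angle(theta, low=-90.0, high=90.0, n_classes=19):
    """Map a continuous azimuth to one of n_classes equal-width bins,
    turning localization into a multiclass classification problem.
    """
    edges = np.linspace(low, high, n_classes + 1)
    return int(np.clip(np.digitize(theta, edges) - 1, 0, n_classes - 1))

print(quantize_angle(-90.0), quantize_angle(0.0), quantize_angle(89.9))  # 0 9 18
```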
The second employs recent advances in statistical learning and can be used in an online fashion; it can thus cope, to some extent, with changes in the environment, such as moving elements that alter the reverberation properties of the physical media.
Peak location classification: Due to the relatively low sampling frequency, spline interpolation was used to improve the peak location. In microphone arrays it is common to translate the peak location into an estimate of the source azimuth using a geometric formula. However, this was found to be inappropriate here due to the internal reverberations generated by the manikin head. We therefore used the classification method described in [4]. The peak-location data was modelled using a separate histogram for each direction: for a given direction $\Theta_j$, all the training measurements whose label is $j$ are used to build a single histogram of peak locations over equal-width bins, and an estimate of the class-conditional probability density $\hat{p}(\cdot\,|\,j)$ is taken to be the normalized histogram step function. To classify new test data, we simply compute the likelihood of the observed measurement under each class-conditional distribution and choose the class attaining the maximal likelihood (ML) score with respect to the histogram estimates,
$\hat{y} = \arg\max_j\; \hat{p}(v\,|\,j)$. (7)
Multiclass Fisher discriminant: Generalizing the Fisher discriminant for binary classification problems to multiclass settings, each class is modelled as a multivariate normal distribution. To do so, we divide the training set into subsets, where the $j$th subset corresponds to measurements from azimuths in $\Theta_j$.
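The histogram-based ML rule of Eq. (7) can be sketched like this. The toy 1-D peak data, bin edges, and class means are all invented; a tiny floor on the counts (our own addition) avoids zero likelihoods.

```python
import numpy as np

class HistogramML:
    """Per-class histogram of the 1-D peak location; a new peak is
    assigned to the class whose normalized histogram gives it the
    highest likelihood, as in Eq. (7).
    """
    def __init__(self, bins):
        self.bins = bins          # shared bin edges for all classes
        self.hists = {}

    def fit(self, peaks, labels):
        for j in np.unique(labels):
            counts, _ = np.histogram(peaks[labels == j], bins=self.bins)
            self.hists[j] = (counts + 1e-9) / counts.sum()
        return self

    def predict(self, peak):
        idx = int(np.clip(np.digitize(peak, self.bins) - 1, 0, len(self.bins) - 2))
        return max(self.hists, key=lambda j: self.hists[j][idx])

# Toy data: class 0 peaks near lag -2, class 1 near lag +2.
rng = np.random.default_rng(3)
peaks = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(2, 0.5, 200)])
labels = np.concatenate([np.zeros(200, int), np.ones(200, int)])
clf = HistogramML(bins=np.linspace(-5, 5, 21)).fit(peaks, labels)
print(clf.predict(-2.0), clf.predict(2.0))  # 0 1
```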
The density function of the $j$th class is
$p(v\,|\,j) = \frac{1}{(2\pi)^{n/2}\,|\Sigma_j|^{1/2}}\,\exp\!\left(-\tfrac{1}{2}\,(v-\mu_j)^{\top}\Sigma_j^{-1}(v-\mu_j)\right)$,
where $\top$ denotes the transpose, $n = 2T+1$ is the dimensionality, $\mu_j$ denotes the mean of the normal distribution, and $\Sigma_j$ its covariance matrix. Each mean and covariance matrix is set to its maximum likelihood estimate,
$\hat{\mu}_j = \frac{1}{m_j}\sum_{k:\,y^k = j} v^k$, $\qquad \hat{\Sigma}_j = \frac{1}{m_j}\sum_{k:\,y^k = j} (v^k - \hat{\mu}_j)(v^k - \hat{\mu}_j)^{\top}$,
where $m_j$ is the number of training measurements with label $j$. New test waveforms are then classified using the ML formula, Eq. (7). The advantage of the Fisher linear discriminant is that it is simple and easy to implement. However, it degenerates if the training data is non-stationary, as is often the case in sound localization problems due to effects such as moving objects. We therefore also designed, implemented, and tested a second discriminative method based on the Perceptron.
Online learning using a multiclass Perceptron with kernels: Despite, or because of, its age, the Perceptron algorithm [9] is a simple and effective algorithm for classification. We chose the Perceptron algorithm for its simplicity, adaptability, and the ease of incorporating the Mercer kernels described below. The Perceptron is a conservative online algorithm: it receives an instance, outputs a prediction for the instance, and only in case of a prediction mistake does it update its classification rule, which is a hyperplane. Since our setting requires building a multiclass rule, we use the version described in [6], which generalizes the Perceptron to multiclass settings. We first describe the general form of the algorithm and then discuss the modifications we made to adapt it to the sound localization problem. To extend the Perceptron algorithm to multiclass problems, we maintain $M$ hyperplanes (one per class), denoted $w_1, \ldots, w_M$. The algorithm works in an online fashion, processing one example at a time.
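The per-class Gaussian ML classifier can be sketched as follows. The 2-D toy data stands in for the waveform vectors, and the small ridge term added to the covariance is our own regularization, not something the paper specifies.

```python
import numpy as np

class GaussianML:
    """Model each class as a multivariate normal with ML-estimated mean
    and covariance; classify by maximum likelihood, as in Eq. (7).
    """
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.params = {}
        for j in self.classes:
            Xj = X[y == j]
            mu = Xj.mean(axis=0)
            cov = np.cov(Xj, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            self.params[j] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
        return self

    def predict(self, x):
        def loglik(j):
            mu, prec, logdet = self.params[j]
            d = x - mu
            return -0.5 * (logdet + d @ prec @ d)   # constants cancel in argmax
        return max(self.classes, key=loglik)

rng = np.random.default_rng(4)
X = np.vstack([rng.normal([0, 0], 0.3, (100, 2)), rng.normal([2, 2], 0.3, (100, 2))])
y = np.repeat([0, 1], 100)
clf = GaussianML().fit(X, y)
print(clf.predict(np.array([0.1, -0.1])), clf.predict(np.array([1.9, 2.1])))  # 0 1
```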
On the $k$th round, the algorithm receives a new instance $v^k$ and sets the predicted class to the index of the hyperplane attaining the largest inner product with the input instance,
$\hat{y}^k = \arg\max_j\; w_j \cdot v^k$.
If the algorithm made a prediction error, that is $\hat{y}^k \neq y^k$, it updates the set of hyperplanes. In [6] a family of possible update schemes is given. In this work we used the so-called uniform update, which is very simple to implement and also attained very good results. The uniform update moves the hyperplane corresponding to the correct label $y^k$ in the direction of $v^k$, and moves all the hyperplanes whose inner products were at least as large as $w_{y^k} \cdot v^k$ away from $v^k$. Formally, let $E = \{j \neq y^k : w_j \cdot v^k \geq w_{y^k} \cdot v^k\}$. We update the hyperplanes as follows:
$w_{y^k} \leftarrow w_{y^k} + v^k$, and $w_j \leftarrow w_j - \tfrac{1}{|E|}\,v^k$ for $j \in E$, (8)
and all hyperplanes with $j \notin E$ are kept intact. This update is performed only on rounds on which there was a prediction error. Furthermore, on such rounds only a subset of the vectors is updated, and the algorithm is therefore called ultraconservative. The multiclass Perceptron algorithm is guaranteed to converge to a perfect classification rule if the data can be classified perfectly by some unknown set of hyperplanes. When the data cannot be classified perfectly, an alternative competitive analysis can be applied. The problem with the above algorithm is that it allows only linear classification rules. However, linear classifiers may not suffice in many applications, including sound localization. We therefore incorporate kernels into the multiclass Perceptron. A kernel is an inner-product operator $K : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, where $\mathcal{X}$ is the instance space (for instance, PHAT waveforms). An explicit way to describe $K$ is via a mapping $\phi : \mathcal{X} \to \mathcal{H}$ from $\mathcal{X}$ to an inner-product space $\mathcal{H}$ such that $K(u, v) = \phi(u) \cdot \phi(v)$. Common kernels are RBF kernels and polynomial kernels, the latter taking the form $K(u, v) = (1 + u \cdot v)^D$.
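The uniform update of Eq. (8) can be sketched as follows on an invented, linearly separable 3-class toy problem (the data and epoch count are ours).

```python
import numpy as np

def multiclass_perceptron(X, y, n_classes, epochs=10):
    """Ultraconservative multiclass Perceptron with the uniform update:
    on a mistake, move w[y] toward x and split an equal opposite step
    over every hyperplane that scored at least as high as w[y] (Eq. 8).
    """
    W = np.zeros((n_classes, X.shape[1]))
    for _ in range(epochs):
        for x, label in zip(X, y):
            scores = W @ x
            if np.argmax(scores) != label:
                err = [j for j in range(n_classes)
                       if j != label and scores[j] >= scores[label]]
                W[label] += x
                for j in err:
                    W[j] -= x / len(err)
    return W

rng = np.random.default_rng(5)
centers = np.array([[3, 0], [0, 3], [-3, -3]])
X = np.vstack([c + 0.3 * rng.standard_normal((50, 2)) for c in centers])
y = np.repeat([0, 1, 2], 50)
W = multiclass_perceptron(X, y, n_classes=3)
acc = np.mean(np.argmax(X @ W.T, axis=1) == y)
print(acc)
```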
Any learning algorithm that is based on inner products with a weighted sum of vectors can be converted to a kernel-based version by explicitly keeping the weighted combination of vectors. In the case of the multiclass Perceptron, we replace the update from Eq. (8) with a "kernelized" version,
$w_{y^k} \leftarrow w_{y^k} + \phi(v^k)$, and $w_j \leftarrow w_j - \tfrac{1}{|E|}\,\phi(v^k)$ for $j \in E$. (9)
Since we cannot compute $\phi(v^k)$ explicitly, we instead perform bookkeeping of the weights associated with each $\phi(v^k)$ and compute inner products using the kernel function. For instance, the inner product of a vector $w = \sum_k \alpha_k \phi(v^k)$ with a new instance $v$ is $w \cdot \phi(v) = \sum_k \alpha_k K(v^k, v)$.
[Table 1: Summary of results (Err and average absolute angle error) of sound localization methods for a single segment: PHAT + polynomial kernels (D=5), PHAT + Fisher, PHAT + peak-finding, and equalized cross-correlation + peak-finding.]
In our experiments we found that a polynomial kernel of degree 5 yielded the best results. The results are summarized in Table 1; we defer their discussion to Sec. 5.
4 Multi-segment Classification
The accuracy of a single-segment classifier is too low to make our approach practical. However, if the sound source does not move for a period of time, we can accumulate evidence from multiple segments in order to increase the accuracy. Due to the lack of space, we only outline the multi-segment classification procedure for the Fisher discriminant and compare it to smoothing and averaging techniques used in the signal processing community. In multi-segment classification we are given $L$ waveforms for which we assume that the source angle did not change, i.e., $\theta^1 = \cdots = \theta^L$. Each small window is processed independently to give a feature vector $v^k$. We then convert each waveform feature vector into a probability estimate for each discrete angle direction, $\hat{p}(v^k\,|\,\Theta_j)$, using the Fisher discriminant. We next assume that the probability estimates for consecutive windows are independent.
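The kernelized variant of Eq. (9) can be sketched as follows: per-class coefficients on stored examples replace the explicit hyperplanes. The XOR-style toy problem is invented; only the polynomial degree D=5 comes from Table 1.

```python
import numpy as np

def poly_kernel(u, v, degree=5):
    """Polynomial kernel (1 + u.v)^D; D=5 mirrors the degree reported best."""
    return (1.0 + u @ v) ** degree

class KernelMulticlassPerceptron:
    """Multiclass Perceptron in a kernel space: keep per-class
    coefficients alpha on the stored examples and score with sums of
    kernel evaluations, implementing the update of Eq. (9) implicitly.
    """
    def __init__(self, n_classes, kernel=poly_kernel):
        self.n_classes, self.kernel = n_classes, kernel
        self.support, self.alpha = [], []   # stored examples, per-class weights

    def scores(self, x):
        s = np.zeros(self.n_classes)
        for xi, a in zip(self.support, self.alpha):
            s += a * self.kernel(xi, x)
        return s

    def fit_one(self, x, label):
        s = self.scores(x)
        if np.argmax(s) != label:
            err = [j for j in range(self.n_classes)
                   if j != label and s[j] >= s[label]]
            a = np.zeros(self.n_classes)
            a[label] = 1.0
            a[err] = -1.0 / len(err)
            self.support.append(x)
            self.alpha.append(a)

# An XOR-like problem that no linear rule separates.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 1, 1, 0])
clf = KernelMulticlassPerceptron(n_classes=2)
for _ in range(20):
    for x, lab in zip(X, y):
        clf.fit_one(x, lab)
pred = [int(np.argmax(clf.scores(x))) for x in X]
print(pred)  # [0, 1, 1, 0]
```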
This is of course a false assumption; however, we found that methods which compensate for the dependencies did not yield substantial improvements. The probability density function of the entire window is therefore
$\hat{p}(v^1, \ldots, v^L\,|\,\Theta_j) = \prod_{k=1}^{L} \hat{p}(v^k\,|\,\Theta_j)$,
and the ML estimate of the direction is $\hat{y} = \arg\max_j\; \hat{p}(v^1, \ldots, v^L\,|\,\Theta_j)$. We compared this maximum likelihood decision under the independence assumption with the following commonly used signal processing technique: we averaged the power spectra and cross power spectra of the different windows and only then computed the generalized cross-correlation waveform,
$r(\tau) = F^{-1}\{W(\omega)\,\overline{X_l(\omega)\,X_r^*(\omega)}\}$,
where the bar denotes the average over the measurements in the same window, $\overline{X_l X_r^*} = \frac{1}{L}\sum_{k=1}^{L} X_l^k\,[X_r^k]^*$. The averaged weight function for the PHAT waveform is then $W(\omega) = 1/\left|\overline{X_l(\omega)\,X_r^*(\omega)}\right|$. When using averaged power spectra, it is also possible to define a smoothed coherence transform (SCOT) [1]; the weight in this case is analogous to the PHAT weight in the single-segment case but is computed from the averaged auto-spectra, $W(\omega) = 1/\sqrt{\overline{X_l X_l^*}\;\overline{X_r X_r^*}}$. Finally, we applied the single-segment classification techniques to the resulting (smoothed or averaged) waveform.
5 Experimental Results
In this section we report and discuss the results of experiments performed with the various learning algorithms for single and multiple segments. Measurements were made using the Sony ERS-210 AIBO robot. The robot's uni-directional microphones were used without automatic level control, at a fixed sampling frequency. The robot was laid on a concrete floor in a regular office room with noticeable reverberation. A loudspeaker playing speech data from multiple speakers was placed in front of the robot and above its plane, with moderate background noise. A PC connected through a wireless link to the robot directed its head relative to the speaker.
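The independence-based combination can be sketched directly: the window-level class log-likelihoods simply add across segments, and the ML angle is the argmax of the sums. The per-segment probabilities below are invented.

```python
import numpy as np

def combine_segments(log_probs):
    """Under the (admittedly false) independence assumption, the product of
    per-segment likelihoods becomes a sum of log-likelihoods; return the
    ML class index.  log_probs has shape (L, M): segments x classes.
    """
    log_probs = np.asarray(log_probs)
    return int(np.argmax(log_probs.sum(axis=0)))

# Three noisy segments, each only weakly favouring class 2 of 4.
lp = np.log(np.array([[0.20, 0.25, 0.35, 0.20],
                      [0.30, 0.20, 0.30, 0.20],
                      [0.15, 0.25, 0.40, 0.20]]))
print(combine_segments(lp))  # 2
```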
The location of the sound source was limited to be in front of the head, at a fixed constant elevation and at evenly spaced azimuths; the number of classes $M$ used for training therefore equals the number of measured directions. An illustration of the system is given in Fig. 2.
[Table 2: Summary of results of sound localization methods for multiple segments: maximum likelihood PHAT + Fisher, SCOT + Fisher, smoothed PHAT + Fisher, smoothed PHAT + peak-finding, and SCOT + peak-finding.]
Further technical details can be obtained from http://udi.benreuven.com. (MATLAB is a trademark of Mathworks, Inc. and AIBO is a trademark of Sony and its affiliates.) For each head direction, several segments of data were collected with a partial overlap between consecutive segments, and the measurements for each direction were divided into equal amounts of training and test measurements. An FFT was used to generate the un-normalized cross-correlations, equalized cross-correlations, and PHAT waveforms, from which the $2T+1$ central samples were taken ($T$ as in Eq. 6). Extrema locations in the histograms were found using a fixed number of bins. We used two evaluation measures to compare the different algorithms. The first, denoted Err, is the empirical classification error, which counts the fraction of times the predicted (discretized) angle differed from the true angle: $\mathrm{Err} = \frac{1}{n}\sum_{k=1}^{n} [\![\hat{y}^k \neq y^k]\!]$. The second is the average absolute difference between the predicted angle and the true angle: $\frac{1}{n}\sum_{k=1}^{n} |\hat{\theta}^k - \theta^k|$. It should be kept in mind that the test data was obtained from the same set of directions as the training data; the average absolute angle difference is therefore an appropriate evaluation measure of the errors in our experimental setting.
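The two evaluation measures can be sketched directly; the toy predictions and function names below are ours.

```python
import numpy as np

def error_rate(pred_classes, true_classes):
    """Fraction of segments whose predicted (discretized) angle differs
    from the true one (the Err measure)."""
    return float(np.mean(np.asarray(pred_classes) != np.asarray(true_classes)))

def mean_abs_angle_error(pred_angles, true_angles):
    """Average absolute difference between predicted and true azimuth."""
    return float(np.mean(np.abs(np.asarray(pred_angles) - np.asarray(true_angles))))

pred = np.array([0., 10., 20., 30.])
true = np.array([0., 10., 30., 30.])
print(error_rate(pred, true), mean_abs_angle_error(pred, true))  # 0.25 2.5
```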
However, alternative evaluation methods should be devised for general recordings in which the test signal is not confined to a finite set of possible directions.
Figure 2: Acquisition system overview.
The accuracy results on the test data, with respect to both measures, for the various representations and algorithms are summarized in Table 1. It is clear from the results that traditional methods which search for extrema in the waveforms are inferior to the discriminative methods. As a by-product, we confirmed that equalized cross-correlation is inferior to PHAT modelling for high SNR with strong reverberations; similar results were reported in [11]. The two discriminative methods achieve about the same results. The Perceptron algorithm with the degree-5 polynomial kernel achieves the best results, but the difference between the Perceptron and the multiclass Fisher discriminant is not statistically significant. It is worth noting again that we also tested linear regression algorithms; their performance turned out to be inferior to the discriminative multiclass approaches. A possible explanation is that the multiclass methods employ multiple hyperplanes and project each class onto a different hyperplane, while linear regression methods seek a single hyperplane onto which all examples are projected. Although Fisher's discriminant and the Perceptron algorithm exhibit practically the same performance, they have different merits. While Fisher's discriminant is very simple to implement and space efficient, the Perceptron is capable of adapting quickly and achieves high accuracy even with small amounts of training data. In Fig. 3 we compare the error rates of Fisher's discriminant and the Perceptron on subsets of the training data. The Perceptron clearly outperforms Fisher's discriminant when the number of training examples is small, but once enough examples are provided, the two algorithms are indistinguishable.
This suggests that online algorithms may be more suitable when the sound source is stationary only for short periods.
[Figure 3: Error rates of Fisher's discriminant and the Perceptron for various training set sizes (1000-6000 examples).]
Last, we compared multi-segment results. Multi-segment classification was performed by taking consecutive measurements over a window during which the source location remained fixed. In Table 2 we report classification results for the various multi-segment techniques. (Since the Perceptron algorithm used a very large number of kernels, we did not implement multi-segment classification using the Perceptron; we are currently conducting research on space-efficient kernel-based methods for multi-segment classification.) Here again, the best performing method is Fisher's discriminant, combining the scores directly without averaging or smoothing. The resulting prediction accuracy of Fisher's discriminant is good enough to make the solution practical so long as the sound source is fixed and the recording conditions do not change.
6 Discussion
We have demonstrated that, using discriminative methods, highly accurate sound localization is achievable on a small commercial robot equipped with binaural hearing, i.e. two microphones placed inside a manikin head. We have confirmed that PHAT is superior to plain cross-correlation. For classification using multiple segments, classifying the entire PHAT waveform gave better results than various techniques that smooth the power spectrum over the segments. Our current research is focused on efficient discriminative methods for sound localization in changing environments. References [1] C. H. Knapp and G. C. Carter. The generalized correlation method for estimation of time delay. IEEE Transactions on ASSP, 24(4):320-327, 1976. [2] M. Omologo and P. Svaizer.
Acoustic event localization using a cross-power spectrum phase based technique. Proceedings of ICASSP 1994, Adelaide, Australia, 1994.
[3] T. Gustafsson and B.D. Rao. Source localization in reverberant environments: statistical analysis. Submitted to IEEE Trans. on Speech and Audio Processing, 2000.
[4] N. Strobel and R. Rabenstein. Classification of time delay estimates for robust speaker localization. ICASSP, Phoenix, USA, March 1999.
[5] J. Benesty. Adaptive eigenvalue decomposition algorithm for passive acoustic source localization. J. Acoust. Soc. Am., 107(1), January 2000.
[6] K. Crammer and Y. Singer. Ultraconservative online algorithms for multiclass problems. In Proc. of the 14th Annual Conf. on Computational Learning Theory, 2001.
[7] R. O. Duda and P. E. Hart. Pattern Classification. Wiley, 1973.
[8] B. Porat. A Course in Digital Signal Processing. Wiley, 1997.
[9] F. Rosenblatt. The Perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review, 65:386-407, 1958.
[10] B. Widrow and M. E. Hoff. Adaptive switching circuits. 1960 IRE WESCON Convention Record, pages 96-104, 1960.
[11] P. Aarabi and A. Mahdavi. The relation between speech segment selectivity and time-delay estimation accuracy. In Proc. of IEEE Conf. on Acoustics, Speech and Signal Processing, 2002.
Going Metric: Denoising Pairwise Data

Volker Roth, Informatik III, University of Bonn, Roemerstr. 164, 53117 Bonn, Germany, roth@cs.uni-bonn.de
Julian Laub, Fraunhofer FIRST.IDA, Kekulestr. 7, 12489 Berlin, Germany, jlaub@first.fhg.de
Joachim M. Buhmann, Informatik III, University of Bonn, Roemerstr. 164, 53117 Bonn, Germany, jb@cs.uni-bonn.de
Klaus-Robert Müller, Fraunhofer FIRST.IDA, 12489 Berlin, Germany, and University of Potsdam, 14482 Potsdam, Germany, klaus@first.fhg.de

Abstract

Pairwise data in empirical sciences typically violate metricity, either due to noise or due to fallible estimates, and therefore are hard to analyze by conventional machine learning technology. In this paper we therefore study ways to work around this problem. First, we present an alternative embedding to multi-dimensional scaling (MDS) that allows us to apply a variety of classical machine learning and signal processing algorithms. The class of pairwise grouping algorithms which share the shift-invariance property is invariant under this embedding procedure, leading to identical assignments of objects to clusters. Based on this new vectorial representation, denoising methods are applied in a second step. Both steps provide a theoretically well controlled setup to translate from pairwise data to the respective denoised metric representation. We demonstrate the practical usefulness of our theoretical reasoning by discovering structure in protein sequence data bases, visibly improving performance upon existing automatic methods.

1 Introduction

Unsupervised grouping or clustering aims at extracting hidden structure from data (see e.g. [5]). However, for several major applications, e.g. bioinformatics or imaging, the data is solely available as scores of pairwise comparisons. Pairwise data is in no natural way related to the common viewpoint of objects lying in some "well behaved" space like a vector space. In particular, pairwise data may violate the triangle inequality.
Two cases should be distinguished: (i) the triangle inequality might not be satisfied as a result of noisy measurements (for instance when using string alignment algorithms in DNA analysis); (ii) the violation might be an intrinsic feature of the data. The latter case, for instance, applies to datasets based upon some human judgment, e.g. "X likes Y, Y likes Z $\nRightarrow$ X likes Z". Such violations preclude the use of well established machine learning methods, which typically have been formulated for metric data only. This paper proposes an algorithm to metricize and subsequently denoise pairwise data. It uses the so-called constant shift embedding (cf. [14]) for metrization, then constructs a positive semidefinite matrix which can subsequently be used for denoising and clustering purposes. Regarding data-mining or clustering purposes, the most outstanding difference to classical MDS is the following: for the class of pairwise clustering cost functions sharing the shift-invariance property¹, the metrization step is loss-free in the sense that the optimal assignments of objects to clusters remain unchanged. The next section introduces techniques for metrization, denoising and clustering of pairwise data. This is followed by a section illustrating our methods on real world data such as bacterial GyrB amino acid sequences and sequences from the ProDom data base, and a brief discussion.

2 Proximity-based clustering and denoising

One of the most popular methods for grouping vectorial data is k-means clustering (see e.g. [1][5]). It derives a set of k prototype vectors which quantize the data set with minimal quantization error. Partitioning proximity data is considered a much harder problem, since the inherent structure of n samples is hidden in n² pairwise relations. The pairwise proximities can violate the requirements of a distance measure, i.e. they may be non-symmetric and negative, and the triangle inequality does not necessarily hold.
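Violations of the triangle inequality are easy to detect numerically. The following sketch (the function name and the toy matrix are our own illustration, not from the paper) counts ordered triples that break the inequality in a dissimilarity matrix:

```python
import numpy as np

def triangle_violations(D):
    """Count ordered triples (i, j, k) with D[i, j] > D[i, k] + D[k, j].

    D is assumed symmetric with zero self-dissimilarities.
    """
    n = D.shape[0]
    count = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if D[i, j] > D[i, k] + D[k, j] + 1e-12:
                    count += 1
    return count

# A tiny non-metric example: D[0, 2] exceeds the path through node 1.
D = np.array([[0.0, 1.0, 5.0],
              [1.0, 0.0, 1.0],
              [5.0, 1.0, 0.0]])
print(triangle_violations(D))
```

A positive count certifies that no loss-free embedding into a Euclidean space exists without a corrective shift.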
Thus, a loss-free embedding into a vector space is not possible, so that grouping problems of this kind cannot be directly transformed into a vectorial representation by means of classical embedding strategies such as multi-dimensional scaling (MDS [4]). Moreover, clustering the MDS-embedded data vectors in general yields partitionings different from those obtained by directly solving the pairwise problem, since embedding constraints might be in conflict with the clustering goal. Let us start from a pairwise clustering cost function (see [12]) that combines the properties of additivity, scale- and shift-invariance, and statistical robustness:

$$H^{pc} = \sum_{\nu=1}^{k} \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} M_{i\nu} M_{j\nu} D_{ij}}{2\sum_{l=1}^{n} M_{l\nu}}, \quad (1)$$

where the data are characterized by the matrix of pairwise dissimilarities $D_{ij}$. The assignments of objects to clusters are encoded in the binary stochastic matrix $M \in \{0,1\}^{n \times k}$ with $\sum_{\nu=1}^{k} M_{i\nu} = 1$. For such cost functions it can be shown [14] that there always exists a set of vectorial data representations, the constant shift embeddings, such that the grouping problem can be equivalently restated in terms of Euclidean distances between these vectors. In order to handle non-symmetric dissimilarities, it should be noticed that $H^{pc}$ is also invariant under the symmetrizing transformation $D_{ij} \leftarrow \frac{1}{2}(D_{ij} + D_{ji})$. In the following we will thus restrict ourselves to the case of symmetric dissimilarity matrices.

¹The term shift-invariance means that the optimal assignments of objects to clusters are not influenced by constant additive shifts of the pairwise dissimilarities (excluding the self-dissimilarities, which are assumed to be zero).

Theorem 2.1. [14] Given an arbitrary (possibly non-metric) $(n \times n)$ dissimilarity matrix $D$ with zero self-dissimilarities, there exists a transformed matrix $\tilde{D}$ such that (i) the matrix $\tilde{D}$ can be interpreted as a matrix of squared Euclidean distances
between a set of vectors $\{x_i\}_{i=1}^{n}$; $\tilde{D}$ is derived from $D$ by both symmetrizing and applying the constant shift embedding trick; (ii) the original pairwise clustering problem is equivalent to a k-means problem in this vector space, in the sense that the optimal assignments of objects to clusters $\{M_{i\nu}\}$ are identical in both problems.

A re-formulation of pairwise clustering as a k-means problem is clearly advantageous: (i) the availability of prototype vectors defines a generic rule for using the learned partitioning in a predictive sense; (ii) we can apply standard noise- and dimensionality-reduction methods in order to both stabilize the estimation procedure and to speed up the grouping itself.

Constant shift embedding. Let $D = (D_{ij}) \in \mathbb{R}^{n \times n}$ be the matrix of pairwise squared dissimilarities between n objects. For a generic noisy dataset, $\sqrt{D_{ij}} \nleq \sqrt{D_{ik}} + \sqrt{D_{kj}}$, so that $\sqrt{D}$ is non-metric. Since $\sqrt{\cdot}$ is monotonically increasing, there exists a constant $D_0$ such that $\sqrt{D_{ij} + D_0} \leq \sqrt{D_{ik} + D_0} + \sqrt{D_{kj} + D_0}$ for all $i, j, k = 1, 2, \dots, n$. Let

$$\tilde{D} = D + D_0 (e e^T - I_n), \quad (2)$$

where $e = (1, 1, \dots, 1)^T$ is an n-dimensional column vector and $I_n$ the identity matrix. This corresponds to a constant additive shift $\tilde{D}_{ij} = D_{ij} + D_0$ for all $i \neq j$. We look for the minimal constant shift $D_0$ such that $\tilde{D}$ satisfies the triangle inequality. In order to make the main result clear, we first need to introduce the notion of a centralized matrix. Let $P$ be an arbitrary matrix and let $Q = I - \frac{1}{n} e e^T$. $Q$ is the projection matrix onto the orthogonal complement of $e$. Define the centralized $P$ by

$$P^c = Q P Q. \quad (3)$$

Let $D$ be fixed and let us decompose $D$ as follows:

$$D_{ij} = S_{ii} + S_{jj} - 2 S_{ij}. \quad (4)$$

This decomposition is motivated by the fact that if $D$ is a squared Euclidean distance between the vectorial data $x_i$, then $D_{ij} = \|x_i - x_j\|^2 = \|x_i\|^2 + \|x_j\|^2 - 2 x_i^T x_j$. It follows from equation (4) that a constant off-diagonal shift on $D$ corresponds to a constant shift on the diagonal of $S$.
$S$ is not fixed by the choice of $D$, since we may always change its diagonal elements, yet recover the same $D$. That is, any matrix of the form $(S_{ij} + \frac{1}{2}\Delta S_i + \frac{1}{2}\Delta S_j)$ gives the same distance $D$ as $S$, for arbitrary $\Delta S_i$'s. By simple algebra it can be shown that $S^c = -\frac{1}{2} D^c$, i.e. $S^c$ is unique. Furthermore, $D$ derives from a squared Euclidean distance if and only if $S^c$ is positive semi-definite [14]. Let $\tilde{S}^c = S^c - \lambda_n(S^c) I_n$, where $\lambda_n(\cdot)$ is the minimal eigenvalue of its argument. Then $\tilde{S}^c$ is positive semi-definite [14]. These are the main ingredients for proving the following:

Theorem 2.2 (Minimal $D_0$). [14] $D_0 = -2\lambda_n(S^c)$ is the minimal constant such that $\tilde{D} = D + D_0 (e e^T - I_n)$ derives from a squared Euclidean distance.

All proofs can be found in [14]. We have thus shown that applying a large enough additive shift to the off-diagonal elements of $D$ results in a matrix $\tilde{S}^c$ that is positive semi-definite, and can thus be interpreted as a Gram matrix. This means that in some $(n-1)$-dimensional Euclidean space there exists a vector representation of the objects, summarized in the "design" matrix $X$ (the rows of $X$ are the feature vectors), such that $\tilde{S}^c = X X^T$. For the pairwise clustering cost function the optimal assignments of objects to clusters are invariant under the constant-shift embedding procedure, according to Theorem 2.1. Hence, the grouping problem can be re-formulated as optimizing the classical k-means criterion in the embedding space. In many applications, however, it is advantageous not to cluster in the full space but to insert some dimension reduction step, which serves the purpose of increasing efficiency and noise reduction. While it is unclear how to denoise the original pairwise object representations while respecting the additivity, scale- and shift-invariance, and statistical robustness properties of the clustering criterion, we can easily apply kernel PCA [16] to $\tilde{S}^c$ after the constant-shift embedding.
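Theorem 2.2 translates directly into a few lines of linear algebra. The sketch below (function name ours) symmetrizes $D$, forms the centralized matrix $S^c = -\frac{1}{2} Q D Q$, and applies the minimal shift $D_0 = -2\lambda_n(S^c)$; assuming zero self-dissimilarities, the returned matrix is positive semi-definite and reproduces the shifted distances $D_{ij} + D_0$ off the diagonal:

```python
import numpy as np

def constant_shift_embedding(D):
    """Return the PSD Gram matrix S~c and the minimal shift D0 (cf. Theorem 2.2)."""
    n = D.shape[0]
    D = 0.5 * (D + D.T)                      # symmetrize: D_ij <- (D_ij + D_ji)/2
    Q = np.eye(n) - np.ones((n, n)) / n      # projection onto the complement of e
    Sc = -0.5 * Q @ D @ Q                    # centralized score matrix S^c = -1/2 D^c
    lam_min = np.linalg.eigvalsh(Sc).min()   # minimal eigenvalue lambda_n(S^c)
    return Sc - lam_min * np.eye(n), -2.0 * lam_min
```

Recovering squared distances from the returned Gram matrix via $\tilde{D}_{ij} = \tilde{S}^c_{ii} + \tilde{S}^c_{jj} - 2\tilde{S}^c_{ij}$ gives back $D_{ij} + D_0$ for all $i \neq j$.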
Denoising of pairwise data by constant shift embedding. For denoising we construct $\tilde{D}$ which derives from "real" points in a vector space, i.e. $\tilde{S}^c$ is positive semi-definite. In a first step, we briefly describe how these real points can be recovered by loss-free kernel PCA [16]: (i) calculate the centralized kernel matrix $\tilde{S}^c = -\frac{1}{2} Q \tilde{D} Q$; (ii) decompose $\tilde{S}^c = V \Lambda V^T$, where $V = (v_1, \dots, v_n)$ contains the eigenvectors $v_i$ and $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$ the eigenvalues $\lambda_1 \geq \dots \geq \lambda_p > \lambda_{p+1} = 0 \geq \lambda_{p+2} \geq \dots \geq \lambda_n$; (iii) calculate the $n \times (n-2)$ mapping matrix $X^*_{n-2} = V^*_{n-2} (\Lambda^*_{n-2})^{1/2}$, where $V^*_{n-2} = (v_1, \dots, v_p, v_{p+2}, \dots, v_{n-1})$ and $\Lambda^*_{n-2} = \mathrm{diag}(\lambda_1 - \lambda_n, \dots, \lambda_p - \lambda_n, \lambda_{p+2} - \lambda_n, \dots, \lambda_{n-1} - \lambda_n)$ (these are the constantly shifted eigenvalues). The rows of $X^*_{n-2}$ contain the vectors $x_i^*$ $(i = 1, 2, \dots, n)$ in $(n-2)$-dimensional space, whose mutual distances are given by $\tilde{D}$. When focusing on noise reduction, however, we are rather interested in some approximative reconstruction of the "real" vectors. In the PCA framework, one usually discards the directions which correspond to small eigenvalues as noise (cf. [9]). We can thus obtain a representation in a space of reduced dimension (with the well-defined error of PCA reconstruction) by choosing $t < n-2$ in step (iii) of the above algorithm:

$$X^*_t = V^*_t (\Lambda^*_t)^{1/2},$$

where $V^*_t$ consists of the first $t$ column vectors of $V^*_{n-2}$ and $\Lambda^*_t$ is the top $t \times t$ submatrix of $\Lambda^*_{n-2}$. The vectors in $\mathbb{R}^t$ then differ the least from the vectors in $\mathbb{R}^{n-2}$ in the sense of a quadratic error. The advantages of this method in comparison to directly applying classical scaling via MDS are: (i) $t$ can be larger than the number $p$ of positive eigenvalues; (ii) the embedded vectors are the best least-squares-error approximation to the optimal vectors which preserve the grouping structure.
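Steps (i)-(iii) can be sketched as follows (names ours; for simplicity we keep the $t$ leading shifted directions rather than explicitly excluding the single zero eigenvector, which corresponds to the constant vector $e$ and does not affect pairwise distances):

```python
import numpy as np

def denoised_embedding(D, t):
    """Embed a symmetric, zero-diagonal dissimilarity matrix D into R^t."""
    n = D.shape[0]
    Q = np.eye(n) - np.ones((n, n)) / n
    Sc = -0.5 * Q @ D @ Q                          # step (i): centralized kernel matrix
    lam, V = np.linalg.eigh(Sc)                    # step (ii): eigendecomposition (ascending)
    lam, V = lam[::-1], V[:, ::-1]                 # reorder descending, as in the text
    lam_shifted = np.maximum(lam - lam[-1], 0.0)   # constantly shifted eigenvalues
    return V[:, :t] * np.sqrt(lam_shifted[:t])     # step (iii), truncated to t dimensions
```

With $t = n$ the pairwise squared distances of the embedded points reproduce $D_{ij} + D_0$ exactly; smaller $t$ discards the presumed noise directions at a well-defined quadratic reconstruction error.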
It should be noticed, however, that given the exactly reconstructed vectors in $\mathbb{R}^{n-2}$ found by loss-free kernel PCA, we could have also applied any other standard method for dimensionality reduction or visualization, such as projection pursuit [6], locally linear embedding (LLE) [15], Isomap [17] or self-organizing maps [8].

3 Application to protein sequences

3.1 Bacterial GyrB amino acid sequences

We first illustrate our denoising technique on the gyrase subunit B. The dataset consists of 84 amino acid sequences from five genera in Actinobacteria: 1: Corynebacterium, 2: Mycobacterium, 3: Gordonia, 4: Nocardia and 5: Rhodococcus. A detailed description can be found in [7]. This dataset was used in [18] for illustration of marginalized kernels. The authors hinted at the possibility of computing the distance matrix by using BLAST scores [2], noting, however, that these scores could not be converted into positive semidefinite kernels. In our experiment, the sequences have been aligned by the Smith-Waterman algorithm [11], which yields pairwise alignment scores. Using constant shift embedding, a positive semidefinite kernel is obtained, leaving the cluster assignment unchanged for shift-invariant cost functions. The important step is the denoising. Several projections to lower dimensions have been tested, and t = 5 turned out to be a good choice, eliminating the bulk of noise while retaining the essential cluster structure. Figure 1 shows the striking improvement of the distance matrix after denoising. On the left hand side the ideal distance matrix is depicted, consisting solely of 0's (black) and 1's (white), reflecting the true cluster membership. In the middle and on the right the original and the denoised distance matrix are shown, respectively. Denoising visibly accentuates the cluster structure in the pairwise data.
Figure 1: Distance matrices: on the left, the ideal distance matrix reflecting the true cluster structure; in the middle and on the right, the distance matrix before and after denoising.

Since the true labels are available, we can quantitatively assess the improvement achieved by denoising. We performed the usual k-means clustering, followed by a majority voting to match the cluster labeling. For the denoised data we obtained 3 misclassifications (3.61%), whereas we got 17 (20.48%) for the original data. This simple experiment corroborates the usefulness of our embedding and denoising strategy for pairwise data. In order to fulfill the spirit of the theory of constant-shift embedding, the cost function of the data-mining algorithm subsequent to the embedding needs to be shift-invariant. We may by the same token go a step further and apply algorithms for which this condition does not hold. In doing so, however, we give up the mathematical tractability of the error. To illustrate that denoised pairwise data can act as standalone quality data independent of the framework of algorithms based on shift-invariant cost functions (and in order to compare to the results obtained in [18]), a linear SVM is trained on 25% of the total data to mutually classify the genera pairs 3-4, 3-5 and 4-5. Genera 1 and 2 separate without error and have therefore been omitted. Model selection over the regularization parameter C has been performed by choosing the optimal value out of 10 equally spaced values from [10^-4, 10^2]. The results have been averaged over a 1000-fold sampling (cf. Table 1). The best values are printed in bold. For the classification of genera 3-5 and 4-5 we obtain a substantial improvement by denoising. Interestingly this is not the case for genera 3-4, which may be due to the elimination of discriminative features by the denoising procedure.
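The evaluation protocol (cluster labels matched to true labels by majority voting) can be sketched as follows; the function and the toy label arrays are our own illustration, not the paper's actual GyrB labels:

```python
import numpy as np

def majority_vote_accuracy(cluster_labels, true_labels):
    """Relabel each cluster by the majority true label among its members, then score."""
    correct = 0
    for c in np.unique(cluster_labels):
        members = true_labels[cluster_labels == c]
        majority = np.bincount(members).argmax()   # most frequent true label in cluster c
        correct += np.sum(members == majority)
    return correct / len(true_labels)

clusters = np.array([0, 0, 0, 1, 1, 2, 2, 2])
truth    = np.array([1, 1, 0, 2, 2, 0, 0, 1])
print(majority_vote_accuracy(clusters, truth))  # 6 of 8 points match their cluster majority
```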
The error is still significantly smaller than the error obtained by MCK2 and FK, which is in agreement with the superiority of a structure-preserving embedding of Smith-Waterman scores even when left undenoised: FK and MCK are kernels derived from a generative model, whereas the alignment scores are obtained from a matching algorithm specifically tuned for protein sequences, reflecting much better the underlying structure of protein data.

Genera        3-4    3-5    4-5
FK            10.4   10.9   23.1
MCK2          8.48   5.71   11.6
Undenoised    5.06   5.72   7.55
Denoised      5.43   3.83   3.17

Table 1: Comparison of mean test error of supervised classification by linear SVM of genera, with a training sample of 25% of the total sample. The results for MCK2 (Marginalized Count Kernel) and FK (Fisher Kernel) are obtained by kernel Fisher discriminant analysis, which compares favorably to the SVM in several benchmarks [18].

3.2 Clustering of ProDom sequences

The analysis described in this section aims at finding a partition of domain sequences from the ProDom database [3] that is meaningful w.r.t. structural similarity. In order to measure the quality of the grouping solution, we use the computed solution in a predictive way to assign group labels to SCOP sequences, which have been labeled by experts according to their structure [10]. The predicted labels are then compared with the "true" SCOP labels. For demonstration purposes, we select the following subset of sequences from prodom2001.2.srs: among all sequences we choose those which are highly similar to at least one sequence contained in the first four folds of the SCOP database.² Between these sequences, we compute pairwise (length-corrected and standardized) Smith-Waterman alignment scores, summarized in the matrix $(S_{ij})$. These similarities are transformed into dissimilarities by setting $D_{ij} := S_{ii} + S_{jj} - 2 S_{ij}$. The centralized score matrix $S^c = -\frac{1}{2} D^c$ possesses some highly negative eigenvalues, indicating that metric properties are violated.
Applying the constant-shift embedding method, a valid Mercer kernel is derived, with an eigenvalue spectrum that shows only a few dominating components over a broad "noise" spectrum (see Figure 2). Extracting the first 16 leading principal components³ leads to a vector representation of the sequences as points in $\mathbb{R}^{16}$. These points are then clustered by minimizing the k-means cost function within a deterministic annealing framework. The model order was selected by applying a re-sampling based stability analysis, which has been demonstrated to be a suitable model order selection criterion for unsupervised grouping problems in [13]. In order to measure the quality of the grouping solution, all 1158 SCOP sequences from the first four folds are embedded into the 16-dimensional space. The predicted group structure on this test set is then compared with the true SCOP fold labels. Figure 3 shows both the predicted group membership of these sequences and their true SCOP fold label in the form of a bar diagram: the sequences are ordered by increasing group label (the lower horizontal bar), and compared with the true fold classification (upper bar).

²"Highly similar" here means that the highest alignment score exceeds a predefined threshold. The result is a subset of roughly 2700 ProDom domain sequences.

³Subsampling techniques or deflation can be used to reduce the computational load for large-scale problems. We only used a subset of 800 randomly chosen proteins for estimating the 16 leading eigenvectors.

Figure 2: (Partial) eigenvalue spectrum of the shifted score matrix (16 leading eigenvectors selected). The data are projected onto the first 16 leading eigenvectors, whereas the remaining principal components are considered to be dominated by noise.

In order to quantify the results, the inferred clusters are re-labeled ("re-colored") according to the maximum number of correctly identifiable fold labels.
This procedure allows us to correctly identify the fold label of roughly 94% of the SCOP sequences.

Figure 3: Visualization of the cluster membership of the 1158 SCOP sequences contained in folds 1-4. The upper bar shows the true SCOP fold label, the lower bar the prediction re-labeled by majority voting; the remaining entries are errors.

Despite this surprisingly high percentage, it is necessary to analyze the biological relevance of the inferred grouping solution more deeply. In order to check to what extent the above overall result is influenced by artefacts due to highly related (or even almost identical) SCOP sequences, we repeated the analysis based on the subset of 128 SCOP sequences with less than 50% sequence identity (PDB50). Predicting the group membership of these 128 sequences and using the same re-labeling approach, we can correctly identify 86% of the fold labels. This result demonstrates that we have not only found trivial groups of almost identical proteins, but that we have indeed extracted relevant structural information.

4 Discussion and Conclusion

This paper provides two main contributions that are highly useful when analyzing pairwise data. First, we employ the concept of constant shift embedding to provide a metric representation of the data. For a certain class of grouping principles sharing a shift-invariance property, this embedding is distortion-less in the sense that it does not influence the optimal assignments of objects to groups. Given the metricized data we can now use common signal (pre-)processing and denoising techniques that are typically only defined for vectorial data. As we investigate the clustering of protein sequences from databases like GyrB and ProDom, we are given non-metric pairwise proximity information that is strongly deteriorated by the shortcomings of the available alignment procedures.
Thus, it is important to apply denoising techniques to the data as a second step before running the actual clustering procedure. We find that the combination of these two processing steps is successful in unraveling protein structure, greatly improving over existing methods (as exemplified for GyrB and ProDom). Future research will be dedicated to further evaluation of the proposed algorithm. We will also explore the perspectives it opens in any field handling pairwise data.

Acknowledgments

The GyrB amino acid sequences were offered by courtesy of the Identification and Classification of Bacteria (ICB) databank team [19]. The authors are partially supported by DFG grants # MU 987/1-1 and # BU 914/4-1.

References

[1] A.K. Jain, M.N. Murty, and P.J. Flynn. Data clustering: a review. ACM Computing Surveys, 31(3):264-323, 1999.
[2] S. F. Altschul, W. Gish, W. Miller, E. W. Myers, and D. J. Lipman. Basic local alignment search tool. J. Mol. Biol., 215:403-410, 1990.
[3] F. Corpet, F. Servant, J. Gouzy, and D. Kahn. ProDom and ProDom-CG: tools for protein domain analysis and whole genome comparisons. Nucleic Acids Res., 28:267-269, 2000.
[4] T. F. Cox and M. A. A. Cox. Multidimensional Scaling. Chapman & Hall, London, 2001.
[5] R.O. Duda, P.E. Hart, and D.G. Stork. Pattern Classification. John Wiley & Sons, second edition, 2001.
[6] P. J. Huber. Projection pursuit. The Annals of Statistics, pages 435-475, 1985.
[7] H. Kasai, A. Bairoch, K. Watanabe, K. Isono, and S. Harayama. Construction of the gyrB database for the identification and classification of bacteria. Genome Informatics, pages 13-21, 1998.
[8] T. Kohonen. Self-Organizing Maps. Springer-Verlag, Berlin, 1995.
[9] S. Mika, B. Schölkopf, A.J. Smola, K.-R. Müller, M. Scholz, and G. Rätsch. Kernel PCA and de-noising in feature spaces. In M.S. Kearns, S.A. Solla, and D.A. Cohn, editors, Advances in Neural Information Processing Systems, volume 11, pages 536-542. MIT Press, 1999.
[10] A.G. Murzin, S.E. Brenner, T.
Hubbard, and C. Chothia. SCOP: a structural classification of proteins database for the investigation of sequences and structures. J. Mol. Biol., 247:536-540, 1995.
[11] W. R. Pearson and D. J. Lipman. Improved tools for biological sequence analysis. Proc. Natl. Acad. Sci., 85:2444-2448, 1988.
[12] J. Puzicha, T. Hofmann, and J. Buhmann. A theory of proximity based clustering: structure detection by optimization. Pattern Recognition, 33(4):617-634, 1999.
[13] V. Roth, M. Braun, T. Lange, and J. Buhmann. A resampling approach to cluster validation. In Computational Statistics (COMPSTAT'02), 2002. To appear.
[14] V. Roth, J. Laub, M. Kawanabe, and J.M. Buhmann. Optimal cluster preserving embedding of non-metric proximity data. Technical Report IAI-TR-2002-5, University of Bonn, 2002.
[15] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323-2326, 2000.
[16] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998.
[17] J.B. Tenenbaum, V. de Silva, and J.C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319-2323, 2000.
[18] K. Tsuda, T. Kin, and K. Asai. Marginalized kernels for biological sequences. Proc. ISMB, 2002, to appear. http://www.cbrc.jp/tsuda/.
[19] K. Watanabe, J. Nelson, S. Harayama, and H. Kasai. ICB database: the gyrB database for identification and classification of bacteria. Nucleic Acids Res., 29:344-345, 2001.
Fractional Belief Propagation

Wim Wiegerinck and Tom Heskes
SNN, University of Nijmegen, Geert Grooteplein 21, 6525 EZ Nijmegen, the Netherlands
{wimw,tom}@snn.kun.nl

Abstract

We consider loopy belief propagation for approximate inference in probabilistic graphical models. A limitation of the standard algorithm is that clique marginals are computed as if there were no loops in the graph. To overcome this limitation, we introduce fractional belief propagation. Fractional belief propagation is formulated in terms of a family of approximate free energies, which includes the Bethe free energy and the naive mean-field free energy as special cases. Using the linear response correction of the clique marginals, the scale parameters can be tuned. Simulation results illustrate the potential merits of the approach.

1 Introduction

Probabilistic graphical models are powerful tools for learning and reasoning in domains with uncertainty. Unfortunately, inference in large, complex graphical models is computationally intractable. Therefore, approximate inference methods are needed. Basically, one can distinguish between two types of methods, stochastic sampling methods and deterministic methods. One of the methods in the latter class is Pearl's loopy belief propagation [1]. This method is increasingly gaining interest since its successful application to turbo-codes. Until recently, a disadvantage of the method was its heuristic character, and the absence of a convergence guarantee. Often, the algorithm gives good solutions, but sometimes the algorithm fails to converge. However, Yedidia et al. [2] showed that the fixed points of loopy belief propagation are actually stationary points of the Bethe free energy from statistical physics. This does not only give the algorithm a firm theoretical basis, but it also solves the convergence problem through the existence of an objective function which can be minimized directly [3]. Belief propagation has been generalized in several directions.
Minka's expectation propagation [4] is a generalization that makes the method applicable to Bayesian learning. Yedidia et al. [2] introduced the Kikuchi free energy into the graphical models community, which can be considered as a higher order truncation of a systematic expansion of the exact free energy using larger clusters. They also developed an associated generalized belief propagation algorithm. In this paper, we propose another direction which yields possibilities to improve upon loopy belief propagation without resorting to larger clusters. This paper is organized as follows. In section 2 we define the inference problem. In section 3 we shortly review approximate inference by loopy belief propagation and discuss an inherent limitation of this method. This motivates us to generalize upon loopy belief propagation. We do so by formulating a new class of approximate free energies in section 4. In section 5 we consider the fixed point equations and formulate the fractional belief propagation algorithm. In section 6 we use linear response estimates to tune the parameters in the method. Simulation results are presented in section 7. In section 8 we end with the conclusion.

2 Inference in graphical models

Our starting point is a probabilistic model on a set of discrete variables $x = (x_1, \dots, x_n)$ in a finite domain. The joint distribution $P(x)$ is assumed to be proportional to a product of clique potentials,

$$P(x) \propto \prod_{\alpha} \Psi_{\alpha}(x_{\alpha}), \quad (1)$$

where each $\alpha$ refers to a subset of the nodes in the model. A typical example that we will consider later in the paper is the Boltzmann machine with binary units ($x_i = \pm 1$),

$$P(x) \propto \exp\Big( \sum_{(ij)} w_{ij} x_i x_j + \sum_i \theta_i x_i \Big), \quad (2)$$

where the sum is over connected pairs $(i, j)$. The right hand side can be viewed as a product of potentials $\Psi_{ij}(x_i, x_j) = \exp\big( w_{ij} x_i x_j + \frac{1}{|N_i|}\theta_i x_i + \frac{1}{|N_j|}\theta_j x_j \big)$, where $N_i$ is the set of edges that contain node $i$. The typical task that we try to perform is to compute the marginal single-node distributions $P(x_i)$.
Basically, the computation requires the summation over all remaining variables $x_{\setminus i}$. In small networks, this summation can be performed explicitly. In large networks, the complexity of the computation depends on the underlying graphical structure of the model, and is exponential in the maximal clique size of the triangulated moralized graph [5]. This may lead to intractable models, even if the clusters $x_{\alpha}$ are small. When the model is intractable, one has to resort to approximate methods.

3 Loopy belief propagation in Boltzmann machines

A nowadays popular approximate method is loopy belief propagation. In this section, we shortly review this method. Next we discuss one of its inherent limitations, which motivates us to propose a possible way to overcome this limitation. For simplicity, we restrict this section to Boltzmann machines. The goal is to compute pair marginals $P(x_i, x_j)$ of connected nodes. Loopy belief propagation computes approximating pair marginals $Q_{ij}(x_i, x_j)$ by applying the belief propagation algorithm for trees to loopy graphs, i.e., it computes messages according to

$$\mu_{i \to j}(x_j) \propto \sum_{x_i} \exp( w_{ij} x_i x_j ) \, \nu_{i \setminus j}(x_i), \quad (3)$$

in which $\nu_{i \setminus j}$ collects the incoming messages to node $i$ except the one from node $j$,

$$\nu_{i \setminus j}(x_i) = \exp( \theta_i x_i ) \prod_{k \in N_i \setminus j} \mu_{k \to i}(x_i). \quad (4)$$

If the procedure converges (which is not guaranteed in loopy graphs), the resulting approximating pair marginals are

$$Q_{ij}(x_i, x_j) \propto \exp( w_{ij} x_i x_j ) \, \nu_{i \setminus j}(x_i) \, \nu_{j \setminus i}(x_j). \quad (5)$$

In general, the exact pair marginals will be of the form

$$P(x_i, x_j) \propto \exp( w^{\mathrm{eff}}_{ij} x_i x_j ) \, \nu_i(x_i) \, \nu_j(x_j), \quad (6)$$

which has an effective interaction $w^{\mathrm{eff}}_{ij}$. In the case of a tree, $w^{\mathrm{eff}}_{ij} = w_{ij}$. With loops in the graph, however, the loops will contribute to $w^{\mathrm{eff}}_{ij}$, and the result will in general be different from $w_{ij}$. If we compare (6) with (5), we see that loopy belief propagation assumes $w^{\mathrm{eff}}_{ij} = w_{ij}$, ignoring contributions from loops.
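Equations (3)-(5) can be transcribed almost literally; the sketch below is our own minimal implementation (synchronous updates, with message normalization added for numerical stability, neither prescribed by the text). It returns a function that evaluates the approximate pair marginal of equation (5):

```python
import numpy as np

def loopy_bp(W, theta, n_iter=100):
    """Loopy belief propagation for a Boltzmann machine with units x_i in {-1, +1}."""
    n = len(theta)
    s = np.array([-1.0, 1.0])
    mu = np.ones((n, n, 2))            # mu[i, j]: message from node i to node j

    def nu(i, j):
        # eq. (4): local evidence at i times all incoming messages except from j
        v = np.exp(theta[i] * s)
        for k in range(n):
            if k != i and k != j and W[k, i] != 0:
                v = v * mu[k, i]
        return v

    for _ in range(n_iter):
        new = np.ones_like(mu)
        for i in range(n):
            for j in range(n):
                if i != j and W[i, j] != 0:
                    v = nu(i, j)
                    for b in range(2):  # eq. (3): sum over x_i for each value of x_j
                        new[i, j, b] = np.dot(np.exp(W[i, j] * s * s[b]), v)
                    new[i, j] /= new[i, j].sum()
        mu = new

    def pair_marginal(i, j):
        # eq. (5): approximate pair marginal Q_ij(x_i, x_j)
        Q = np.exp(W[i, j] * np.outer(s, s)) * np.outer(nu(i, j), nu(j, i))
        return Q / Q.sum()

    return pair_marginal
```

On a tree (e.g. a single edge) the procedure is exact, which provides a simple sanity check.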
Now suppose we knew $w^{\mathrm{eff}}_{ij}$ in advance; then a better approximation could be expected if we could model approximate pair marginals of the form

$$Q_{ij}(x_i, x_j) \propto \exp( w_{ij} x_i x_j / c_{ij} ) \, \nu_{i \setminus j}(x_i) \, \nu_{j \setminus i}(x_j), \quad (7)$$

where $c_{ij} = w_{ij} / w^{\mathrm{eff}}_{ij}$. The $\nu_{i \setminus j}$ are to be determined by some propagation algorithm. In the next sections, we generalize upon the above idea and introduce fractional belief propagation as a family of loopy belief propagation-like algorithms parameterized by scale parameters $c = (c_{\alpha})$. The resulting approximating clique marginals will be of the form

$$Q_{\alpha}(x_{\alpha}) \propto \Psi_{\alpha}(x_{\alpha})^{1/c_{\alpha}} \prod_{i \in N_{\alpha}} \nu^{\alpha}_i(x_i), \quad (8)$$

where $N_{\alpha}$ is the set of nodes in clique $\alpha$. The issue of how to set the parameters $c_{\alpha}$ is the subject of section 6.

4 A family of approximate free energies

The new class of approximating methods will be formulated via a new class of approximating free energies. The exact free energy of a model with clique potentials $\Psi_{\alpha}(x_{\alpha})$ is

$$F(P) = -\sum_{\alpha} \sum_{x_{\alpha}} P(x_{\alpha}) \log \Psi_{\alpha}(x_{\alpha}) + \sum_{x} P(x) \log P(x). \quad (9)$$

It is well known that the joint distribution can be recovered by minimization of the free energy,

$$P = \operatorname*{argmin}_{Q} F(Q), \quad (10)$$

under the constraint $\sum_x Q(x) = 1$. The idea is now to construct an approximate free energy $F_{\mathrm{approx}}(\{Q_{\alpha}\}, \{Q_i\})$ and compute its minimum $\{Q_{\alpha}, Q_i\}$. Then $Q_{\alpha}$ is interpreted as an approximation of $P(x_{\alpha})$. A popular approximate free energy is based on the Bethe assumption, which basically states that $Q$ is approximately tree-like,

$$Q(x) \approx \prod_{\alpha} Q_{\alpha}(x_{\alpha}) \prod_i Q_i(x_i)^{1 - |N_i|}, \quad (11)$$

in which $N_i$ is the set of cliques $\alpha$ that contain node $i$. This assumption is exact if the factor graph [6] of the model is a tree. Substitution of the tree assumption into the free energy leads to the well-known Bethe free energy

$$F_{\mathrm{Bethe}}(\{Q_{\alpha}\}, \{Q_i\}) = -\sum_{\alpha} \sum_{x_{\alpha}} Q_{\alpha}(x_{\alpha}) \log \Psi_{\alpha}(x_{\alpha}) + \sum_{\alpha} \sum_{x_{\alpha}} Q_{\alpha}(x_{\alpha}) \log Q_{\alpha}(x_{\alpha}) + \sum_i (1 - |N_i|) \sum_{x_i} Q_i(x_i) \log Q_i(x_i), \quad (12)$$

which is to be minimized under the normalization constraints $\sum_{x_{\alpha}} Q_{\alpha}(x_{\alpha}) = 1$ and $\sum_{x_i} Q_i(x_i) = 1$ and the marginalization constraints $\sum_{x_{\alpha \setminus i}} Q_{\alpha}(x_{\alpha}) = Q_i(x_i)$ for $i \in N_{\alpha}$. It can be shown that minima of the Bethe free energy are fixed points of the loopy belief propagation algorithm [2].
In our proposal, we generalize upon the Bethe assumption, and make the parameterized assumption

P(x) \approx \prod_\alpha P_\alpha(x_\alpha)^{c_\alpha} \prod_i P_i(x_i)^{1 - \sum_{\alpha \ni i} c_\alpha},   (13)

in which the single-node exponents $1 - \sum_{\alpha \ni i} c_\alpha$ compensate for overcounting. The intuition behind this assumption is that we model each $\Psi_\alpha(x_\alpha)$ by a factor $P_\alpha(x_\alpha)^{c_\alpha}$. The term with the single-node marginals is constructed to deal with the overcounted terms. Substitution of (13) into the free energy leads to the approximate free energy

F_c(\{Q_\alpha\}, \{Q_i\}) = \sum_\alpha c_\alpha \sum_{x_\alpha} Q_\alpha(x_\alpha) \log \frac{Q_\alpha(x_\alpha)}{\Psi_\alpha(x_\alpha)^{1/c_\alpha}} - \sum_i \Big( \sum_{\alpha \ni i} c_\alpha - 1 \Big) \sum_{x_i} Q_i(x_i) \log Q_i(x_i),   (14)

which is parameterized by the scale parameters $c_\alpha$. This class of free energies trivially contains the Bethe free energy (all $c_\alpha = 1$). In addition, it includes the variational mean field free energy, conventionally written as

F_{\mathrm{MF}}(\{Q_i\}) = -\sum_\alpha \sum_{x_\alpha} \prod_{i \in \alpha} Q_i(x_i) \log \Psi_\alpha(x_\alpha) + \sum_i \sum_{x_i} Q_i(x_i) \log Q_i(x_i),

as a limiting case for $c_\alpha \to \infty$ (implying an effective interaction of strength zero). If this limit is taken in (14), the terms linear in $c_\alpha$ dominate and act as a penalty term for non-factorial entropies. Consequently, the distributions will be constrained to be completely factorized, $Q_\alpha(x_\alpha) = \prod_{i \in \alpha} Q_i(x_i)$. Under these constraints, the remaining terms reduce to the conventional representation of $F_{\mathrm{MF}}$. Thirdly, it contains the recently derived free energy that upper bounds the log partition function [7]. This one is recovered if, for pairwise cliques, the $1/c_{ij}$'s are set to the edge appearance probabilities in the so-called spanning tree polytope of the graph. These requirements imply that $c_{ij} \geq 1$.

5 Fractional belief propagation

In this section we will use the fixed point equations to generalize Pearl's algorithm to fractional belief propagation as a heuristic to minimize $F_c$. Here, we do not worry too much about guaranteed convergence. If convergence is a problem, one can always resort to direct minimization of $F_c$ using, e.g., Yuille's CCCP algorithm [3]. If standard belief propagation converges, its solution is guaranteed to be a local minimum of $F_{\mathrm{Bethe}}$ [8]. We expect a similar situation for $F_c$.
Fixed point equations for $F_c$ are derived in the same way as in [2]. We obtain

Q_\alpha(x_\alpha) \propto \Psi_\alpha(x_\alpha)^{1/c_\alpha} \prod_{i \in \alpha} \Big[ \prod_{\beta \ni i,\, \beta \neq \alpha} \mu_{\beta \to i}(x_i) \Big] \mu_{\alpha \to i}(x_i)^{1 - c_\alpha},   (15)

Q_i(x_i) \propto \prod_{\alpha \ni i} \mu_{\alpha \to i}(x_i),   (16)

\mu_{\alpha \to i}(x_i) \propto \mu_{\alpha \to i}(x_i)\, \frac{\sum_{x_{\alpha \setminus i}} Q_\alpha(x_\alpha)}{Q_i(x_i)},   (17)

and we notice that $Q_\alpha(x_\alpha)$ indeed has the functional dependency on $\Psi_\alpha^{1/c_\alpha}$ desired in (8). Inspired by Pearl's loopy belief propagation algorithm, we use the above equations to formulate fractional belief propagation $\mathrm{FBP}(c)$ (see Algorithm 1). $\mathrm{FBP}(1)$, i.e., with all $c_\alpha = 1$, is equivalent to standard loopy belief propagation.

Algorithm 1 Fractional Belief Propagation FBP(c)
1: initialize($\mu_{\alpha \to i}$, $Q_\alpha$, $Q_i$)
2: repeat
3: for all $\alpha$ do
4: update $Q_\alpha$ according to (15).
5: update $\mu_{\alpha \to i}$, $i \in \alpha$, according to (17) using the new $Q_\alpha$ and the old $Q_i$.
6: update $Q_i$, $i \in \alpha$, by marginalization of $Q_\alpha$.
7: end for
8: until convergence criterion is met (or maximum number of iterations is exceeded)
9: return $Q_\alpha$, $Q_i$ (or failure)

As a theoretical footnote we mention a different (generally more greedy) algorithm, which has the same fixed points as $\mathrm{FBP}(c)$. This algorithm is similar to Algorithm 1, except that (1) the update of $Q_\alpha$ (in line 4) is to be taken with $c_\alpha = 1$, as in standard belief propagation, and (2) the update of the marginals $Q_i$ (in line 6) is to be performed by minimizing the divergence $D_{c_\alpha}(Q_\alpha \,\|\, \prod_{i \in \alpha} Q_i)$, where

D_c(P \,\|\, Q) = \frac{1}{c(1-c)} \Big( 1 - \sum_x P(x)^c Q(x)^{1-c} \Big),   (18)

with the limiting cases

D_1(P \,\|\, Q) = \sum_x P(x) \log \frac{P(x)}{Q(x)}  and  D_0(P \,\|\, Q) = \sum_x Q(x) \log \frac{Q(x)}{P(x)},   (19)

rather than by marginalization (which corresponds to minimizing $D_1$, which is equal to the usual KL divergence). The $D_c$'s are, up to reparameterization, the $\alpha$-divergences [9]. In the mean field limit $c_\alpha \to \infty$, the minimization of the $Q_i$'s leads to the well known mean field equations.

6 Tuning c using linear response theory

Now the question is, how do we set the parameters $c_\alpha$? The idea is as follows: if we had access to the true marginals $P_{ij}(x_i, x_j)$, we could optimize $c$ by minimizing, for example,

E(c) = \sum_{ij} \mathrm{KL}\big( P_{ij} \,\|\, Q^c_{ij} \big) = \sum_{ij} \sum_{x_i, x_j} P_{ij}(x_i, x_j) \log \frac{P_{ij}(x_i, x_j)}{Q^c_{ij}(x_i, x_j)},   (20)

in which we labeled $Q$ by $c$ to emphasize its dependency on the scale parameters.
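The divergence family in (18) is easy to check numerically. The following sketch (the distributions are chosen arbitrarily for illustration) verifies that $D_c$ interpolates between the two KL divergences of (19), which are also the building blocks of the objective (20).

```python
import numpy as np

def D(P, Q, c):
    """D_c(P||Q) = (1 - sum_x P(x)^c Q(x)^(1-c)) / (c(1-c)), c not in {0, 1}."""
    return (1.0 - np.sum(P ** c * Q ** (1.0 - c))) / (c * (1.0 - c))

def KL(P, Q):
    """Standard Kullback-Leibler divergence, the c -> 1 limit of D_c."""
    return float(np.sum(P * np.log(P / Q)))

# two arbitrary distributions on three states
P = np.array([0.2, 0.5, 0.3])
Q = np.array([0.4, 0.4, 0.2])
```

Near $c = 1$ the family approaches $\mathrm{KL}(P\|Q)$ (which marginalization minimizes), near $c = 0$ it approaches $\mathrm{KL}(Q\|P)$, and at $c = 1/2$ it is symmetric in its two arguments.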
Unfortunately, we do not have access to the true pair marginals, but if we had estimates $R_{ij}$ that improve upon $Q^c_{ij}$, we could compute new parameters $c$ such that $Q^c_{ij}$ is closer to $R_{ij}$. However, with the new parameters the estimates $R_{ij}$ will change as well, and this procedure should be iterated. In this paper, we use linear response theory [10] to improve upon $Q^c_{ij}$. For simplicity, we restrict ourselves to Boltzmann machines with binary units. Applying linear response theory to $\mathrm{FBP}(c)$ in Boltzmann machines yields the following linear response estimates for the pair marginals,

Q^{\mathrm{LR}}_{ij}(x_i, x_j) = Q^c_i(x_i)\, Q^c_j(x_j) + \frac{x_i x_j}{4} \frac{\partial \langle x_i \rangle_c}{\partial \theta_j}.   (21)

Algorithm 2 Tuning c by linear response
1: initialize($c_\alpha = 1$, $t = 0$)
2: repeat
3: set step-size $\eta_t$
4: compute the linear response estimates $Q^{\mathrm{LR}}_{ij}(x_i, x_j)$ as in (21)
5: compute the new $c_{ij}$'s as in (22)
6: set $t \leftarrow t + 1$
7: until convergence criterion is met
8: return $Q^c_{ij}(x_i, x_j)$, $Q^c_i(x_i)$

In [10], it is argued that if $Q^c$ is correct up to $O(\epsilon)$, the error in the linear response estimate is $O(\epsilon^2)$. Linear response theory has been applied previously to improve upon pair marginals (or correlations) in the naive mean field approximation [11] and in loopy belief propagation [12]. To iteratively compute new scale parameters from the linear response corrections, we use a gradient descent like algorithm,

c_{ij}^{-1} \leftarrow c_{ij}^{-1} - \eta_t \frac{\partial}{\partial c_{ij}^{-1}} \sum_{kl} \mathrm{KL}\big( Q^{\mathrm{LR}}_{kl} \,\|\, Q^c_{kl} \big),   (22)

with a time dependent step-size parameter $\eta_t$. By iteratively computing the linear response marginals, and adapting the scale parameters in the gradient descent direction, we can optimize $c$, see Algorithm 2. Each linear response estimate can be computed numerically by applying $\mathrm{FBP}(c)$ to Boltzmann machines with thresholds $\theta$ and with perturbed thresholds, and taking finite differences. The partial derivatives with respect to $c_{ij}^{-1}$, required for the gradient in (22), can likewise be computed numerically by rerunning fractional belief propagation with perturbed parameters $c_{ij}^{-1} + \delta$.
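Algorithm 2 thus relies on purely numerical derivatives: every partial derivative is obtained by rerunning inference with a slightly perturbed parameter. The skeleton of that pattern (forward differences plus a decaying step size $\eta_t$) is sketched below on a stand-in objective; the real objective, the summed KL of eq. (22), would require the full FBP machinery, so it is replaced here by a simple quadratic, and all names and constants are my own illustrative choices.

```python
import numpy as np

def finite_diff_grad(f, x, eps=1e-6):
    """Forward-difference gradient: rerun f once per perturbed coordinate."""
    g = np.empty_like(x)
    fx = f(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        g[i] = (f(xp) - fx) / eps
    return g

# stand-in for the summed KL between linear-response and FBP pair marginals,
# viewed as a function of the inverse scale parameters
target = np.array([0.7, 1.3, 0.9])
objective = lambda c_inv: float(np.sum((c_inv - target) ** 2))

c_inv = np.ones(3)                    # start at standard BP: all scales equal to 1
for t in range(200):
    eta = 0.5 / (1.0 + 0.05 * t)      # decaying step size eta_t
    c_inv = c_inv - eta * finite_diff_grad(objective, c_inv)
```

The same loop structure applies when `objective` internally reruns the propagation algorithm, which is what makes one update of $c$ cost several full inference runs.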
In this procedure, the computational cost of one update of $c$ is about $n + E$ times the cost of a single $\mathrm{FBP}(c)$ run, where $n$ is the number of nodes and $E$ is the number of edges.

7 Numerical results

We applied the method to Boltzmann machines in which the nodes are connected according to a square grid with periodic boundary conditions. The weights in the model were drawn from a binary distribution, both signs with equal probability. The thresholds $\theta_i$ were drawn from a Gaussian distribution. We generated several networks, and compared the results of standard loopy belief propagation to the results obtained by fractional belief propagation with the scale parameters obtained by Algorithm 2. In the experiment, a decreasing step size $\eta_t$ was used. The iterations were stopped when the maximal change in $c_{ij}^{-1}$ fell below a small threshold, or when the maximal number of iterations was exceeded. Throughout the procedure, the fractional belief propagations were run with a convergence criterion on the maximal difference between messages in successive iterations (one iteration is one cycle over all weights). In our experiment, all (fractional) belief propagation runs converged. The number of updates of $c$ ranged between 20 and 80. After optimization we found inverse scale parameters $c_{ij}^{-1}$ spread over a considerable range around the standard value of 1. Results are plotted in figure 1. In the left panel, it can be seen that the procedure can lead to significant improvements. In these experiments, the solutions obtained by the optimized $\mathrm{FBP}(c)$ are consistently 10 to 100 times better in averaged KL divergence than the ones obtained by

Figure 1: Left: Scatter plots of the averaged KL divergence between exact and approximated pair marginals obtained by the optimized fractional belief propagation ($\mathrm{FBP}(c)$) versus the ones obtained by standard belief propagation ($\mathrm{BP}(1)$). Each point in the plot is the result of one instantiation of the network.
Right: approximated single-node means for $\mathrm{BP}(1)$ and optimized $\mathrm{FBP}(c)$ against the exact single-node means. This plot is for the network on which $\mathrm{BP}(1)$ had the worst performance (i.e., corresponding to the point in the left panel with the highest $\langle \mathrm{KL}(P_{ij} \,\|\, Q^1_{ij}) \rangle$).

standard $\mathrm{BP}(1)$. The averaged KL divergence is defined as

\langle \mathrm{KL}(P_{ij} \,\|\, Q_{ij}) \rangle = \frac{1}{E} \sum_{(ij)} \mathrm{KL}\big( P_{ij} \,\|\, Q_{ij} \big).   (23)

In the right panel, approximations of the single-node means are plotted for the case where $\mathrm{BP}(1)$ had the worst performance. Here we see that the procedure can lead to quite precise estimates of the means, even if the quality of the solutions obtained by $\mathrm{BP}(1)$ is very poor. It should be noticed that the linear response correction does not alter the estimated means [12]. In other words, the improvement in the quality of the means is a result of the optimized $c$, and not of the linear response correction.

8 Conclusions

In this paper, we introduced fractional belief propagation as a family of approximate inference methods that generalize upon loopy belief propagation without resorting to larger clusters. The approximations are parameterized by scale parameters $c_\alpha$, which are motivated to better model the effective interactions due to the effect of loops in the graph. The approximations are formulated in terms of approximating free energies. This family of approximating free energies includes as special cases the Bethe free energy, the mean field free energy, and also the free energy approximation that provides an upper bound on the log partition function, developed in [7]. In order to apply fractional belief propagation, the scale parameters have to be tuned. In this paper, we demonstrated in toy problems for Boltzmann machines that it is possible to tune the scale parameters using linear response theory. The results show that considerable improvements can be obtained, even when standard loopy belief propagation is of poor quality. In principle, the method is applicable to larger and more general graphical models.
However, how to make the tuning of the scale parameters practically feasible in such models is still to be explored.

Acknowledgements

We thank Bert Kappen for helpful comments and the Dutch Technology Foundation STW for support.

References
[1] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, Inc., 1988.
[2] J. Yedidia, W. Freeman, and Y. Weiss. Generalized belief propagation. In NIPS 13.
[3] A. Yuille. CCCP algorithms to minimize the Bethe and Kikuchi free energies: Convergent alternatives to belief propagation. Neural Computation, July 2002.
[4] T. Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, MIT Media Lab, 2001.
[5] S.L. Lauritzen and D.J. Spiegelhalter. Local computations with probabilities on graphical structures and their application to expert systems. J. Royal Statistical Society B, 50:154–227, 1988.
[6] F. Kschischang, B. Frey, and H. Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498–519, 2001.
[7] M. Wainwright, T. Jaakkola, and A. Willsky. A new class of upper bounds on the log partition function. In UAI 2002, pages 536–543.
[8] T. Heskes. Stable fixed points of loopy belief propagation are minima of the Bethe free energy. In NIPS 15.
[9] S. Amari, S. Ikeda, and H. Shimokawa. Information geometry of α-projection in mean field approximation. In M. Opper and D. Saad, editors, Advanced Mean Field Methods, pages 241–258, Cambridge, MA, 2001. MIT Press.
[10] G. Parisi. Statistical Field Theory. Addison-Wesley, Redwood City, CA, 1988.
[11] H.J. Kappen and F.B. Rodríguez. Efficient learning in Boltzmann machines using linear response theory. Neural Computation, 10:1137–1156, 1998.
[12] M. Welling and Y.W. Teh. Propagation rules for linear response estimates of joint pairwise probabilities. 2002. Submitted.
Stability-Based Model Selection

Tilman Lange, Mikio L. Braun, Volker Roth, Joachim M. Buhmann
(lange,braunm,roth,jb)@cs.uni-bonn.de
Institute of Computer Science, Dept. III, University of Bonn
Römerstraße 164, 53117 Bonn, Germany

Abstract

Model selection is linked to model assessment, which is the problem of comparing different models, or model parameters, for a specific learning task. For supervised learning, the standard practical technique is cross-validation, which is not applicable for semi-supervised and unsupervised settings. In this paper, a new model assessment scheme is introduced which is based on a notion of stability. The stability measure yields an upper bound to cross-validation in the supervised case, but extends to semi-supervised and unsupervised problems. In the experimental part, the performance of the stability measure is studied for model order selection in comparison to standard techniques in this area.

1 Introduction

One of the fundamental problems of learning theory is model assessment: given a specific data set, how can one practically measure the generalization performance of a model trained on the data? In supervised learning, the standard technique is cross-validation. It consists in using only a subset of the data for training, and then testing on the remaining data in order to estimate the expected risk of the predictor. For semi-supervised and unsupervised learning, there exist no standard techniques for estimating the generalization of an algorithm, since there is no expected risk. Furthermore, in unsupervised learning, the problem of model order selection arises, i.e., estimating the "correct" number of clusters. This number is part of the input data for supervised and semi-supervised problems, but it is not available for unsupervised problems. We present a common point of view, which provides a unified framework for model assessment in these seemingly unrelated areas of machine learning.
The main idea is that an algorithm generalizes well if the solution on one data set has small disagreement with the solution on another data set. This idea is independent of the amount of label information supplied to the problem, and the challenge is to define disagreement in a meaningful way, without relying on additional assumptions, e.g., mixture densities. The main emphasis lies on developing model assessment procedures for semi-supervised and unsupervised clustering, because a definitive answer to the question of model assessment has not been given in these areas. In section 3, we derive a stability measure for solutions to learning problems, which allows us to characterize generalization in terms of the stability of solutions on different sets. For supervised learning, this stability measure is an upper bound to the 2-fold cross-validation error, and can thus be understood as a natural extension of cross-validation to semi-supervised and unsupervised problems. For the experiments (section 4), we have chosen the model order selection problem in the unsupervised setting, which is one of the relevant areas of application, as argued above. We compare the stability measure to other techniques from the literature.

2 Related Work

For supervised learning problems, several notions of stability have been introduced ([10], [3]). The focus of these works lies on deriving theoretical generalization bounds for supervised learning. In contrast, this work aims at developing practical procedures for model assessment, which are also applicable in semi- and unsupervised settings. Furthermore, the definition of stability developed in this paper does not build upon the cited works. Several procedures have been proposed for inferring the number of clusters, of which we name a few here. Tibshirani et al. [14] propose the Gap Statistic, which is applicable to Euclidean data only. Given a clustering solution, the total sum of within-cluster dissimilarities is computed.
This quantity computed on the original data is compared with its average over data uniformly sampled from a hyper-rectangle containing the original data. The number of clusters which maximizes the gap between these two quantities is the estimated number of clusters. Recently, resampling-based approaches for model order selection have been proposed that perform model assessment in the spirit of cross-validation. These approaches share the idea of prediction strength or replicability as a common trait. The methods exploit the idea that a clustering solution can be used to construct a predictor, in order to compute a solution for a second data set and to compare the computed and predicted class memberships on the second data set. In an early study, Breckenridge [4] investigated the usefulness of this approach (called replication analysis there) for the purpose of cluster validation. Although his work does not lead to a directly applicable procedure, in particular not for model order selection, his study suggests the usefulness of such an approach for the purpose of validation. Our method can be considered a refinement of his approach. Fridlyand and Dudoit [6] propose a model order selection procedure, called Clest, that also builds upon Breckenridge's work. Their method employs the replication analysis idea by repeatedly splitting the available data into two parts. Free parameters of their method are the predictor, the measure of agreement between a computed and a predicted solution, and a baseline distribution similar to the Gap Statistic. Because these three parameters largely influence the assessment, we consider their proposal more as a conceptual framework than as a concrete model order estimation procedure. In particular, the predictor can be chosen independently of the clustering algorithm, which can lead to unreliable results (see section 3).
For the experiments in section 4, we used a linear discriminant analysis classifier, the Fowlkes–Mallows index for solution comparison (cf. [9, 6]) and the baseline distribution of the Gap Statistic. Tibshirani et al. [13] formulated a similar method (Prediction Strength) for inferring the number of clusters, which is based on using nearest centroid predictors. Roughly, their measure of agreement quantifies the similarity of two clusters in the computed and in the predicted solution. For inferring the number of clusters, the least similar pair of clusters is taken into consideration. The estimated $k$ is the largest $k$ for which the similarity is above some threshold value. Note that the similarity for $k = 1$ is always above this threshold.

3 The Stability Measure

We begin by introducing a stability measure for supervised learning. Then, the stability measure is generalized to semi-supervised and unsupervised settings. Necessary modifications for model order selection are discussed. Finally, a scheme for practical estimation of the stability is proposed.

Stability and Supervised Learning. The supervised learning problem is defined as follows. Let $(X_1, Y_1), (X_2, Y_2), \ldots$ be a sequence of random variables, where the $(X_i, Y_i)$ are drawn i.i.d. from some probability distribution $P(X, Y)$. The $X_i \in \mathcal{X}$ are the objects and the $Y_i \in \{1, \ldots, k\}$ are the labels. The task is to find a labeling function $f: \mathcal{X} \to \{1, \ldots, k\}$ which minimizes the expected risk $R(f) = E[\ell(f(X), Y)]$, using only a finite sample of data as input. Here $\ell$ is the so-called loss function. For classification, we take the 0–1 loss defined by $\ell(y, y') = 1$ iff $y \neq y'$, and $0$ else. A measure of the stability of the learned labeling function is derived as follows. Note that for three labels $y$, $y'$ and $y''$, it holds that $\ell(y, y'') \leq \ell(y, y') + \ell(y', y'')$, since $y \neq y''$ implies $y \neq y'$ or $y' \neq y''$. Now let $X$ and $X'$ be two data sets drawn independently from the same source, and denote the predictor $f$ trained on $X$ by $f_X$.
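This triangle inequality already contains the whole argument: pointwise, $\ell(f_X(x), y) \leq \ell(f_{X'}(x), y) + \ell(f_X(x), f_{X'}(x))$, so averaging over $X'$ bounds the test risk of $f_X$ by the empirical risk of $f_{X'}$ plus the disagreement of the two predictors. A small numerical check is below; the toy data and the nearest-centroid classifier are my own choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_centroids(X, y):
    """Nearest-centroid classifier: one mean vector per class."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict(model, X):
    classes, centroids = model
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return classes[d2.argmin(axis=1)]

def sample(n):
    """Two overlapping Gaussian classes in the plane."""
    y = rng.integers(0, 2, n)
    return rng.normal(size=(n, 2)) + 2.0 * y[:, None], y

# two independent samples X, X' from the same source
(X1, y1), (X2, y2) = sample(200), sample(200)
f_X = fit_centroids(X1, y1)
f_Xp = fit_centroids(X2, y2)

pred_X, pred_Xp = predict(f_X, X2), predict(f_Xp, X2)
test_risk = np.mean(pred_X != y2)           # risk of f_X on the fresh set X'
self_risk = np.mean(pred_Xp != y2)          # empirical risk of f_X' on its own set
disagreement = np.mean(pred_X != pred_Xp)   # the stability term
```

By construction, `test_risk <= self_risk + disagreement` holds for every realization, not only in expectation.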
Then, the test risk of $f_X$ on $X'$ can be bounded by introducing $f_{X'}$, the predictor trained on $X'$:

R_{X'}(f_X) = \frac{1}{n} \sum_{(x'_i, y'_i) \in X'} \ell\big( f_X(x'_i), y'_i \big) \leq R_{X'}(f_{X'}) + \frac{1}{n} \sum_{x'_i \in X'} \ell\big( f_X(x'_i), f_{X'}(x'_i) \big).   (1)

We call the second term the stability of the predictor $f$ and denote its expectation by $S(f)$:

S(f) = E\Big[ \frac{1}{n} \sum_{x'_i \in X'} \ell\big( f_X(x'_i), f_{X'}(x'_i) \big) \Big].   (2)

We call the value of $S(f)$ the stability cost to stress the fact that $S(f) = 0$ means perfect stability and large values of $S$ mean large instability. Taking expectations with respect to $X$ and $X'$ on both sides yields $E[R(f_X)] \leq E[R_X(f_X)] + S(f)$. If $f$ is obtained by empirical risk minimization over some hypothesis set $\mathcal{H}$, then $E[R_X(f_X)] = E[\inf_{f \in \mathcal{H}} R_X(f)] \leq \inf_{f \in \mathcal{H}} E[R_X(f)] = \inf_{f \in \mathcal{H}} R(f)$, and one obtains

E[R(f_X)] - \inf_{f \in \mathcal{H}} R(f) \leq E[R(f_X)] - E[R_X(f_X)] \leq S(f).   (3)

By eq. (3), the stability defined in (2) yields an upper bound on the generalization error. It can be shown that there exists a converse upper bound, if the minimum is unique and well-separated, such that $E[R(f_X)] \to \inf_{f \in \mathcal{H}} R(f)$ implies $S(f) \to 0$. Note that the stability measures the disagreement between the labels assigned to the test data by $f_X$ and by $f_{X'}$. This asymmetry arises naturally and directly measures the generalization performance of $f$. Furthermore, the stability can be interpreted as the expected empirical risk of $f$ with respect to the labels computed by itself (compare (1) and (2)). Therefore, stability measures the self-consistency of $f$. This interpretation is also valid in the semi-supervised and unsupervised settings. Practical evaluation of the stability amounts to 2-fold cross-validation. No improvement can therefore be expected in this area. However, unlike cross-validation, stability can also be defined in settings where no label information is available. This property of the method will be discussed in the remainder of this section.

Semi-supervised Learning. Semi-supervised learning problems are defined as follows.
The label $y_i$ of an object $x_i$ might not be known. This fact is encoded by setting $y_i = 0$, since $0$ is not a valid label. At least one labeled point must be given for every class. Furthermore, for the present discussion, we assume that we do not have a fully labeled data set for testing purposes. There exist two alternatives in defining the solution of a semi-supervised learning problem. In the first alternative, the solution is a labeling function $f$ defined on the whole object space $\mathcal{X}$, as in supervised learning. Then, the stability (eq. (2)) can be readily computed, and it measures the confidence for the (unknown) training error. The second alternative is that the solution is not given by a labeling function on the whole object space, but only by a labeling function on the training set. Labeling functions which are defined on the training set only will be denoted by $g$ to stress the difference. The labeling on $X$ will be denoted by $g_X$, which is only defined on $X$. As mentioned above, the stability compares labels given to the training data with predicted labels. In the current setting, there are no predicted labels, because $g$ is defined on the training set only. One possibility to obtain predicted labels is to introduce a predictor $\phi$, which is trained using $(X, g_X)$ to predict labels on the new set $X'$. Leaving $\phi$ as a free parameter, we define the stability for semi-supervised learning as

S_{\mathrm{semi}}(g) = E\Big[ \frac{1}{n} \sum_{x'_i \in X'} \ell\big( \phi_{(X, g_X)}(x'_i), g_{X'}(x'_i) \big) \Big].   (4)

Of course, the choice of $\phi$ influences the value of the stability. We need a condition on the prediction step in order to select $\phi$. First note that (4) is the expected empirical risk of $\phi$ with respect to the data source $(X', g_{X'})$. Analogously to supervised learning, the minimal attainable stability $\inf_\phi S_{\mathrm{semi}}(g)$ measures the extent to which the classes overlap, or how consistent the labels are. Therefore, $\phi$ should be chosen to minimize $S_{\mathrm{semi}}(g)$.
Unfortunately, the construction of non-asymptotically Bayes optimal learning algorithms is extremely difficult, and we should therefore not expect that there exists a universally applicable constructive procedure for automatically building $\phi$ given a $g$. In practice, some $\phi$ has to be chosen. This choice will yield larger stability costs, i.e., worse stability, and can therefore not fake stability. Furthermore, it is often possible to construct good predictors in practice. Note that (4) measures the mismatch between the label generator $g$ and the predictor $\phi$. Intuitively, $\phi$ can lead to good stability only if the strategies of $g$ and $\phi$ are similar. For unsupervised learning, as discussed in the next paragraph, the choices for various standard techniques are natural. For example, k-means clustering suggests to use nearest centroid classification. Minimum spanning tree type clustering algorithms suggest nearest neighbor classifiers, and finally, clustering algorithms which fit a parametric density model should use the class posteriors computed by the Bayes rule for prediction.

Unsupervised Learning. The unsupervised learning setting is given as the problem of labeling a finite data set $X = \{x_1, \ldots, x_n\}$. The solution $g_X$ is again a function defined only on $X$. From this definition, it becomes clear that we again need a predictor, as in the second alternative of semi-supervised learning. For unsupervised learning, another problem arises. Since no specific label values are prescribed for the classes, label indices might be permuted from one instance to another, even when the partitioning is identical. For example, keeping the same classes but exchanging two class labels leads to a new labeling, which is not structurally different. In other words, label values are only known up to a permutation.
In view of this non-uniqueness of the representation of a partitioning, we define the permutation relating the label indices on the first set to those on the second set as the one which maximizes the agreement between the classes. The stability then reads

S_{\mathrm{un}}(g) = E\Big[ \min_{\pi} \frac{1}{n} \sum_{x'_i \in X'} \ell\big( \pi\big( \phi_{(X, g_X)}(x'_i) \big), g_{X'}(x'_i) \big) \Big].   (5)

Note that the minimization over the permutations $\pi$ has to take place inside the expectation, because the permutation depends on the data $(X, X')$. In practice, it is not necessary to compute all $k!$ permutations, because the problem is solvable by the Hungarian method in $O(k^3)$ [11].

Model Order Selection. The problem of model order selection consists in determining the number of clusters $k$ to be estimated, and it exists only in unsupervised learning. The range of the stability $S_{\mathrm{un}}$ depends on $k$; therefore, stability values cannot be compared for different values of $k$. For unsupervised learning, the stability minimized over $\pi$ is bounded from above by $1 - 1/k$, since for a larger instability, there exists a relabeling which has smaller stability costs. This stability value is asymptotically achieved by the random predictor $r$, which assigns uniformly drawn labels to the objects. Normalizing $S_{\mathrm{un}}$ by the stability of the random predictor yields values independent of $k$. We thus define the re-normalized stability as

\bar{S}_{\mathrm{un}}(g) = \frac{S_{\mathrm{un}}(g)}{S_{\mathrm{un}}(r)}.   (6)

Resampling Estimate of the Stability. In practice, a finite data set $\{x_1, \ldots, x_n\}$ is given, and the best model should be estimated. The stability is defined in terms of an expectation, which has to be estimated for practical applications. Estimation of $S$ over a hypothesis set $\mathcal{H}$ is feasible if $\mathcal{H}$ has finite VC-dimension, since the VC-dimension for estimating $S$ is the same as for the empirical risk, a fact which is not proved here. In order to estimate the stability, we propose the following resampling scheme: iteratively split the data set into disjoint halves, and compare the solutions on these sets as defined above for the respective cases.
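The permutation minimization in (5) can be implemented exactly as the text suggests: build the k × k agreement-count matrix of the two labelings and solve a maximum-weight assignment with the Hungarian method. A minimal sketch, here via SciPy's `linear_sum_assignment` (the helper name is my own):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def min_disagreement(a, b, k):
    """Fraction of points on which labelings a and b disagree, minimized over
    all k! permutations of the label values, in O(k^3) via the Hungarian method."""
    agree = np.array([[np.sum((a == i) & (b == j)) for j in range(k)]
                      for i in range(k)])
    rows, cols = linear_sum_assignment(-agree)  # negate counts to maximize agreement
    return 1.0 - agree[rows, cols].sum() / len(a)
```

Two labelings describing the same partition, e.g. `[0,0,1,1,2,2]` and `[2,2,0,0,1,1]`, then have disagreement 0, as required by the permutation-invariance argument above.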
After the model having the smallest value of $\bar{S}$ is determined, this model is trained again on the whole data set to obtain the final result. Note that it is necessary to split into disjoint subsets, because common points potentially increase the stability artificially. Furthermore, unlike in cross-validation, both sets must have the same size, because both are used as inputs to training algorithms. For semi-supervised and unsupervised learning, the comparison might entail predicting labels on a new set, and for the latter also minimizing over the permutations of the labels.

4 Stability for Model Order Selection in Clustering: Experimental Results

We now provide experimental evidence for the usefulness of our approach to model order selection, which is one of the hardest model assessment problems. First, the algorithms are compared on toy data, in order to study the performance of the stability measure under well-controlled conditions. However, for real-world applications, it does not suffice to be better than competitors; one has to provide solutions which are reasonable within the framework of the application. Therefore, in a second experiment, the stability measure is compared to the other methods on the problem of clustering gene expression data. Experiments are conducted using a deterministic annealing variant of k-means [12] and Path-Based Clustering [5] optimized via an agglomerative heuristic. For all data sets, the stability is averaged over the resamples for each candidate number of clusters. For the Gap Statistic and Clest,1 random samples are drawn from the baseline distribution. For Clest and Prediction Strength, the number of resamples is chosen the same as for our method, and the threshold for Prediction Strength is set to a fixed value. As mentioned above, the nearest centroid classifier is employed for the purpose of prediction when using k-means, and a variant of the nearest neighbor classifier is used for Path-Based Clustering, which can be regarded as a combination of Minimum Spanning Tree clustering and Pairwise Clustering [5, 8].
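Putting the pieces together, the full resampling scheme — disjoint halves, clustering both halves, transferring labels with a nearest-centroid predictor, permutation matching, and normalization by the random-predictor baseline of eq. (6) — can be sketched end to end. Everything below is an illustrative stand-in: plain Lloyd k-means instead of the deterministic-annealing variant used in the paper, brute-force permutation matching, and toy data of my own choosing.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)

def kmeans(X, k, n_iter=25, n_restart=20):
    """Plain Lloyd k-means, best of several random restarts (a stand-in for the
    deterministic-annealing variant used in the paper)."""
    best, best_cost = None, np.inf
    for _ in range(n_restart):
        C = X[rng.choice(len(X), k, replace=False)].copy()
        for _ in range(n_iter):
            lab = ((X[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
            for j in range(k):
                if np.any(lab == j):
                    C[j] = X[lab == j].mean(0)
        cost = ((X - C[lab]) ** 2).sum()
        if cost < best_cost:
            best, best_cost = (lab, C), cost
    return best

def min_disagreement(a, b, k):
    # minimal label disagreement over all permutations (brute force, small k)
    return min(np.mean(a != np.asarray(p)[b]) for p in permutations(range(k)))

def normalized_stability(X, k, n_splits=12):
    s, s_rand = [], []
    for _ in range(n_splits):
        idx = rng.permutation(len(X))
        A, B = X[idx[:len(X) // 2]], X[idx[len(X) // 2:]]
        lab_A, cent_A = kmeans(A, k)
        lab_B, _ = kmeans(B, k)
        # transfer the solution on A to B with the nearest-centroid predictor
        pred_B = ((B[:, None, :] - cent_A[None]) ** 2).sum(-1).argmin(1)
        s.append(min_disagreement(pred_B, lab_B, k))
        # random-predictor baseline used for the normalization in eq. (6)
        s_rand.append(min_disagreement(rng.integers(0, k, len(B)), lab_B, k))
    return np.mean(s) / np.mean(s_rand)

# toy data: three tight, well-separated Gaussian blobs
X = np.concatenate([rng.normal(m, 0.3, size=(60, 2))
                    for m in [(0.0, 0.0), (6.0, 0.0), (0.0, 6.0)]])
scores = {k: normalized_stability(X, k) for k in (2, 3, 4, 5)}
best_k = min(scores, key=scores.get)
```

On such well-separated blobs the normalized stability cost should be smallest at k = 3, mirroring the qualitative behavior reported below for the Gaussian toy data.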
We compare the proposed stability index of section 3 with the Gap Statistic, Clest and Tibshirani's Prediction Strength method, using two toy data sets and a microarray data set taken from [7]. Table 1 summarizes the number of clusters estimated by each method.

Toy Data Sets. The first data set consists of three fairly well separated point clouds, generated from three Gaussian distributions. Note that for some values of $k$, the variance in the stability over the different resamples is quite high (figure 1(a)). This effect is due to model mismatch, since for such $k$ the clustering of the three classes depends highly on the subset selected in the resampling. This means that besides the absolute value of the stability

1See section 2 for a brief overview over these techniques.

Table 1: The estimated model orders for the two toy and the microarray data set.
Data Set | Stability Method | Gap Statistic | Clest | Prediction Strength | "true" number
3 Gaussians | 3 | 3 | 3 | 3 | 3
3 Rings (k-means) | 2 | 2 |  | 2 | 3
3 Rings (Path-Based) | 3 | 2 | 2 | 2 | 3
Golub et al. data | 3 |  | 3 |  | 2 or 3

costs, additional information about the fit can be obtained from the distribution of the stability costs over the resampled subsets. For this data set, all methods under comparison are able to infer the "true" number of clusters $k = 3$. Figures 1(d) and 1(a) show the clustered data set and the proposed stability index. For $k = 2$, the stability is relatively high, which is due to the hierarchical structure of the data set, which enables stable merging of the two smaller sub-clusters. In the ring data set (depicted in figures 1(e) and 1(f)), one can naturally distinguish three ring-shaped clusters that violate the modeling assumptions of k-means, since the clusters are not spherically distributed. Here, k-means identifies the inner circle as a cluster for $k = 2$. Thus, the stability for this number of clusters is highest (figure 1(b)).
All other methods except Clest also infer $k = 2$ for this data set with k-means. Applying the proposed stability estimator with Path-Based Clustering on the same data set yields the highest stability for $k = 3$, the "correct" number of clusters (figures 1(f) and 1(c)). Here, all other methods fail and estimate $k = 2$. The Gap Statistic fails because it directly incorporates the assumption of spherically distributed data. Similarly, the Prediction Strength measure and Clest (in the form we use here) rely on classifiers that only support linear decision boundaries, which obviously cannot discriminate between the three ring-shaped clusters. In all these cases, the basic requirement for a validation scheme is violated, namely that it must not incorporate additional assumptions about the group structure in a data set that go beyond the ones of the clustering principle employed. Apart from that, it is noteworthy that the stability achieved with k-means is significantly worse than the one achieved with Path-Based Clustering, which indicates that the latter is the better choice for this data set.

Application to Microarray Data. Recently, several authors have investigated the possibility of identifying novel tumor classes based solely on gene expression data [7, 2, 1]. Golub et al. [7] studied in their analysis the problem of classifying and clustering acute leukemias. The important question of inferring an appropriate model order remains unaddressed in their article, and prior knowledge is used instead. In practice, however, such knowledge is often not available. Acute leukemias can be roughly divided into two groups, acute myeloid leukemia (AML) and acute lymphoblastic leukemia (ALL), where the latter can furthermore be subdivided into B-cell ALL and T-cell ALL. Golub et al. used a data set of 72 leukemia samples (25 AML, 47 ALL, of which 38 are B-cell ALL samples).2 For each sample, gene expression was monitored using Affymetrix expression arrays. We apply the preprocessing steps as in Golub et al.,
resulting in a data set consisting of 3571 genes and 72 samples. For the purpose of cluster analysis, the feature set was additionally reduced by retaining only the 100 genes with the highest variance across samples; this step is adopted from [6]. The final data set thus consists of 100 genes and 72 samples. We have performed cluster analysis using k-means and the nearest centroid rule. Figure 2 shows the corresponding stability curve.

(2) Available at http://www-genome.wi.mit.edu/cancer/

Figure 1: Results of the stability index on the toy data (see section 4). (a) The stability index for the Gaussians data set with k-means. (b) The stability index for the three-ring data set with k-means clustering. (c) The stability index for the three-ring data set with Path-Based Clustering. (d) Clustering solution on the full data set for the Gaussians. (e), (f) Clustering solutions on the full data set for the three-ring data.

For k = 3, we estimate the highest stability. We expect that clustering with k = 3 separates the AML, B-cell ALL and T-cell ALL samples from each other. With respect to the known ground-truth labels, 66 of the 72 samples (91.7%) are correctly classified (the Hungarian method is used to map the clusters to the ground truth). Of the competitors, only Clest is able to infer the "correct" number of clusters k = 3, while the Gap Statistic largely overestimates the number of clusters. The Prediction Strength does not provide any reasonable result, as it estimates k = 1. Note that for k = 2 a similar stability is achieved.
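The gene-filtering step used for the cluster analysis (keeping the 100 genes with highest variance across samples) can be sketched as follows; the array sizes mimic the data set, and all names are illustrative.

```python
# Minimal sketch of the variance-based feature reduction: from an expression
# matrix (genes x samples), keep the 100 genes with the highest variance.
import numpy as np

def top_variance_genes(expr, n_genes=100):
    variances = expr.var(axis=1)             # variance of each gene across samples
    keep = np.argsort(variances)[-n_genes:]  # indices of the most variable genes
    return expr[np.sort(keep)]               # preserve original gene order

expr = np.random.default_rng(0).normal(size=(3571, 72))  # mimic 3571 genes, 72 samples
reduced = top_variance_genes(expr)
assert reduced.shape == (100, 72)
```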
We cluster the data set again for k = 2 and compare the result with the ALL-AML labeling of the data. Here, 62 of the samples (86.1%) are correctly identified. We conclude that our method is able to infer biologically relevant model orders. At the same time, a model order is suggested that leads to high accuracy w.r.t. the ground truth. Hence, our re-analysis demonstrates that we could have recovered a biologically meaningful grouping in a completely unsupervised manner.

5 Conclusion

The problem of model assessment was addressed in this paper. The goal was to derive a common framework for practical assessment of learning models. Starting from a stability measure defined in the context of supervised learning, this measure was generalized to semi-supervised and unsupervised learning. The experiments concentrated on model order selection for unsupervised learning, because this is the area where the need for widely applicable model assessment strategies is highest. On toy data, the stability measure outperforms other techniques when their respective modeling assumptions are violated. On real-world data, the stability measure compares favorably to the best of the competitors.

Figure 2: Resampled stability for the leukemia dataset vs. number of classes (see sec. 4).

Acknowledgments. This work has been supported by the German Research Foundation (DFG), grants #Buh 914/4, #Buh 914/5.

References
[1] A. A. Alizadeh et al. Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling. Nature, 403:503–511, 2000.
[2] M. Bittner et al. Molecular classification of cutaneous malignant melanoma by gene expression profiling. Nature, 406(3):536–540, 2000.
[3] O. Bousquet and A. Elisseeff. Stability and generalization. Journal of Machine Learning Research, 2:499–526, 2002.
[4] J. Breckenridge. Replicating cluster analysis: Method, consistency and validity.
Multivariate Behavioral Research, 1989.
[5] B. Fischer, T. Zöller, and J. M. Buhmann. Path based pairwise data clustering with application to texture segmentation. In Energy Minimization Methods in Computer Vision and Pattern Recognition, LNCS. Springer Verlag, 2001.
[6] J. Fridlyand and S. Dudoit. Applications of resampling methods to estimate the number of clusters and to improve the accuracy of a clustering method. Technical Report 600, Statistics Department, UC Berkeley, September 2001.
[7] T. R. Golub et al. Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science, 286:531–537, October 1999.
[8] T. Hofmann and J. M. Buhmann. Pairwise data clustering by deterministic annealing. IEEE PAMI, 19(1), January 1997.
[9] A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice-Hall, Inc., 1988.
[10] M. J. Kearns and D. Ron. Algorithmic stability and sanity-check bounds for leave-one-out cross-validation. In Computational Learning Theory, pages 152–162, 1997.
[11] H. W. Kuhn. The Hungarian method for the assignment problem. Naval Res. Logist. Quart., 2:83–97, 1955.
[12] K. Rose, E. Gurewitz, and G. C. Fox. A deterministic annealing approach to clustering. Pattern Recognition Letters, 11(9):589–594, 1990.
[13] R. Tibshirani, G. Walther, D. Botstein, and P. Brown. Cluster validation by prediction strength. Technical report, Statistics Department, Stanford University, September 2001.
[14] R. Tibshirani, G. Walther, and T. Hastie. Estimating the number of clusters via the gap statistic. Technical report, Statistics Department, Stanford University, March 2000.
2002
61
2,267
Hidden Markov Model of Cortical Synaptic Plasticity: Derivation of the Learning Rule Michael Eisele W. M. Keck Center for Integrative Neuroscience San Francisco, CA 94143-0444 eisele@phy.ucsf.edu Kenneth D. Miller W. M. Keck Center for Integrative Neuroscience San Francisco, CA 94143-0444 ken@phy.ucsf.edu Abstract Cortical synaptic plasticity depends on the relative timing of pre- and postsynaptic spikes and also on the temporal pattern of presynaptic spikes and of postsynaptic spikes. We study the hypothesis that cortical synaptic plasticity does not associate individual spikes, but rather whole firing episodes, and depends only on when these episodes start and how long they last, but as little as possible on the timing of individual spikes. Here we present the mathematical background for such a study. Standard methods from hidden Markov models are used to define what “firing episodes” are. Estimating the probability of being in such an episode requires not only the knowledge of past spikes, but also of future spikes. We show how to construct a causal learning rule, which depends only on past spikes, but associates pre- and postsynaptic firing episodes as if it also knew future spikes. We also show that this learning rule agrees with some features of synaptic plasticity in superficial layers of rat visual cortex (Froemke and Dan, Nature 416:433, 2002). 1 Introduction Cortical synaptic plasticity agrees with the Hebbian learning principle: Neurons that fire together, wire together. But many features of cortical plasticity go beyond this simple principle, such as the dependence on spike-timing or the nonlinear dependence on spike frequency (see [1] or [2] for review). Studying these features may produce a better understanding of which neurons wire together in the neocortex. Previous models of cortical synaptic plasticity [3]-[5] differed in their details, but they agreed that nonlinear learning rules are needed to model cortical plasticity. 
In linear learning rules, the weight change induced by a presynaptic spike would depend only on the postsynaptic spikes, but not on all the other presynaptic spikes. In the cortex, by contrast, the contribution from a presynaptic spike is stronger when it occurs alone than when it occurs right after another presynaptic spike [5]. Similar results hold for postsynaptic spikes. Consequently, the weight change depends in a complex way on the whole temporal pattern of pre- and postsynaptic spikes. Even though this nonlinear dependence can be modeled phenomenologically [3]-[5], its biological function remains unknown. We will not propose such a function here, but reduce this complex dependence to a few principles, whose function may be easier to understand in future studies.

Figure 1: A: Usually, models of cortical synaptic plasticity associate pre- and postsynaptic spikes directly. They produce long-term potentiation (LTP) when the presynaptic spike (pre) precedes the postsynaptic spike (post), and long-term depression (LTD) if the order is reversed. When several pre- and postsynaptic spikes are interleaved in time, the outcome depends in a complicated way on the whole spike pattern (LTP or LTD). B: In our model, pre- and postsynaptic spikes are paired only indirectly. Each spike train is used to estimate when firing episodes start and end. C: These firing episodes are then associated, with LTP being induced if the presynaptic firing episode starts before the postsynaptic one, and LTD if the order is reversed and if the episodes are short. D: Hidden Markov model used to estimate when firing episodes occur.

2 Basic learning principle

The basic principle behind our model is illustrated in fig. 1.
We propose that the learning rule does not associate pre- and postsynaptic spikes directly, but rather uses them to estimate whether the pre- or postsynaptic neuron is currently in a period of rapid firing ("firing episode") or a period of little or no firing. It then associates the firing episodes. When the pre- and postsynaptic firing episodes overlap, the synapse is strengthened or weakened depending on which one started first, but independent of the precise temporal pattern of spikes within a firing episode. As a consequence, the contribution of each spike to synaptic plasticity will depend on whether it occurs alone or surrounded by other spikes, and the learning rule will be nonlinear. For the right parameter choice, the nonlinear features of this rule will agree well with nonlinear features of cortical synaptic plasticity. Implementation of this rule will be done in two steps. Firstly, we will define what "firing episodes" are. Secondly, we will associate the pre- and postsynaptic firing episodes. The first step uses standard methods from hidden Markov models (see e.g. [6]). The pre- and postsynaptic neuron will each be described by a Markov model with three states (fig. 1D), which correspond to firing episodes (state 2; firing probability e_2(1) > 0), to the silence between responses (state 0; firing probability e_0(1) = 0), and to the first spike of a new firing episode (state 1; firing probability e_1(1) = 1; duration = 1 time step). As usual, the parameters of the Markov model are the transition probabilities a_ij, which determine how long firing episodes and silent periods are expected to last, and the emission rates e_i(y), which determine the firing rates. Here y_t is the binary observable at time step t (y_t = 1 at spikes and y_t = 0 otherwise), e_i(1) is the firing probability per time step in state i, and e_i(0) = 1 - e_i(1). In general, the pre- and postsynaptic neuron will have different parameters a^pre, e^pre and a^post, e^post.
Once the Markov model is defined, one can use the standard forward and backward algorithms to estimate, for any given spike sequence, the state probabilities over time. To model cortical synaptic plasticity, we will increase the synaptic weight whenever the pre- and the postsynaptic neuron have simultaneous firing episodes (both in state 2), and decrease the weight whenever the postsynaptic firing episode starts first (pre in state 1 while post already in state 2):

Δw(s^pre_t, s^post_t) =  A_+   for (s^pre_t, s^post_t) = (2, 2)
                      = -A_-   for (s^pre_t, s^post_t) = (1, 2)
                      =  0     otherwise    (1)

where A_+ and A_- are the amplitudes of synaptic potentiation and depression. In general, the states are not known with certainty, only their probabilities are, and the actual weight change is therefore defined as:

Δw_t = Σ_{s^pre, s^post} P( s^pre_t = s^pre, s^post_t = s^post | y^pre, y^post ) Δw(s^pre, s^post)    (2)

where the sum is over all possible pre- and postsynaptic states and P( · | y^pre, y^post ) is the probability given the whole spike sequences y^pre_0, y^pre_1, ... and y^post_0, y^post_1, .... As fig. 2 shows, this straightforward learning rule produces weight changes that are similar to those seen in cortex [5]. (One can show that this particular Markov model depends on the parameters a_ij and e_i only through two combinations per neuron, a time constant τ and a duration T; to fit the data on spike pairs and triplets [5], we set τ^pre = 15 ms, τ^post = 34 ms, T^pre = 20 ms, T^post = 70 ms, and a firing rate within episodes of 96 Hz.) This learning rule is, however, not biologically plausible, because it violates causality. The estimates of the state probabilities depend not only on past, but also on future observables, while real synaptic plasticity can depend only on past spikes. To solve this causality problem, we will rewrite the learning rule, essentially deriving a new algorithm in place of the familiar hidden Markov algorithms. We will derive this causal learning rule not only for this specific 3-state model, but for general Markov models.
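The acausal rule of eq. (2) can be sketched as follows: run a standard scaled forward-backward pass independently on the pre- and postsynaptic spike trains (the two chains are independent given their own observations, so the joint state posterior factorizes), then accumulate the expected weight change. The 3-state parameter values below are illustrative, not the fitted values.

```python
# Sketch of the acausal learning rule: forward-backward state posteriors for
# two binary spike trains, combined per eq. (2). States: 0 = silence,
# 1 = first spike of an episode, 2 = firing episode.
import numpy as np

def forward_backward(spikes, A, e1):
    """Posterior state probabilities for a binary spike train.
    A[i, j] = P(s_{t+1}=j | s_t=i); e1[i] = P(spike | state i)."""
    T, S = len(spikes), len(e1)
    em = np.where(np.array(spikes)[:, None] == 1, e1, 1.0 - e1)  # T x S emissions
    alpha = np.zeros((T, S)); beta = np.ones((T, S))
    alpha[0] = em[0] / S
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                       # scaled forward pass
        alpha[t] = em[t] * (alpha[t - 1] @ A)
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):              # scaled backward pass
        beta[t] = A @ (em[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

A = np.array([[0.95, 0.05, 0.00],   # silence -> silence or first spike
              [0.00, 0.00, 1.00],   # first spike -> episode
              [0.10, 0.00, 0.90]])  # episode persists or ends
e1 = np.array([0.0, 1.0, 0.5])      # firing probability per state

pre  = [0, 0, 1, 1, 0, 1, 0, 0, 0, 0]
post = [0, 0, 0, 1, 1, 0, 1, 0, 0, 0]
qpre, qpost = forward_backward(pre, A, e1), forward_backward(post, A, e1)

A_plus, A_minus = 1.0, 1.0
# eq. (2): E[dw_t] = A+ P(pre=2)P(post=2) - A- P(pre=1)P(post=2)
dw = A_plus * qpre[:, 2] * qpost[:, 2] - A_minus * qpre[:, 1] * qpost[:, 2]
total_change = dw.sum()
```

Because the backward pass looks at future spikes, this version is acausal, which is exactly the problem the remainder of the section solves.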
3 General form of the learning rule

3.1 Learning goal

To derive the general form of the learning rule for arbitrary pre- and postsynaptic Markov models, we assume that the transition probabilities a_ij and emission probabilities e_i(y) are given and that the weight change is some function

Δw(s^pre_t, s^post_t, t)    (3)

of the pre- and postsynaptic states at time t and the time t itself. If the pre- and postsynaptic state sequences s^pre and s^post were known, the weight at time t would simply be the initial weight w_0 plus all the previous weight changes:

W_t[s^pre, s^post] = w_0 + Σ_{t'=0}^{t-1} Δw(s^pre_{t'}, s^post_{t'}, t')    (4)

In the current context, the state sequences are unknown and have to be estimated from the spike trains y^pre and y^post. Ideally, we would like to set the weight at time t equal to the expectation value of W_t[s^pre, s^post], given the spike trains y^pre and y^post. But only part of these spike trains is known at time t. Of the sequence y^pre the synapse has already seen the past values y^pre_0, y^pre_1, ..., y^pre_{t-1}, which we will call y^pre_{<t}, and the present value y^pre_t. But

Figure 2: Weight change produced by spike triplets in various models.
Our learning rule (second column), which depends on the timing of firing episodes but only weakly on the timing of individual spikes, and which was implemented using hidden Markov models, agrees well with the phenomenological model (first column) that was used in [5, fig. 3b] to fit data from superficial layers of rat visual cortex. It certainly agrees better than a purely linear rule (third column). Parameters were set so that all three models produce the same results for spike pairs (1 presynaptic and 1 postsynaptic spike). Upper row: weight change produced by 2 presynaptic and 1 postsynaptic spikes (2/1 triplet). Lower row: 1 presynaptic and 2 postsynaptic spikes (1/2 triplet). t1 and t2 are the times between pre- and postsynaptic spikes. The small boxes on the right show examples of spike patterns for positive and negative t1 and t2.

it has not yet seen the future sequence y^pre_{t+1}, y^pre_{t+2}, ..., which we will call y^pre_{>t}. All one can do is to make some assumption about what the future spikes will be, set w_t accordingly, and correct the weight in the future, when the real spike sequence becomes known. Our algorithm assumes no future spikes and sets the weight at time t equal to:

w_t = E[ W_t | y^pre_{<=t}, y^post_{<=t}, y^pre_{>t} = 0, y^post_{>t} = 0 ]    (5)

where E[ · | · ] denotes the expectation value given the spike sequences. The condition that all future spikes are 0 is written as y^pre_{>t} = 0 and y^post_{>t} = 0. One could make other assumptions about the future spikes, but all these assumptions would affect only when the weight changes, not how much it changes in the long run. This is because the expectation value of a past weight change:

E[ Δw(s^pre_{t'}, s^post_{t'}, t') | y^pre_{<=t}, y^post_{<=t}, y^pre_{>t}, y^post_{>t} ]    (6)

will depend little on the future spikes y^pre_{>t} and y^post_{>t} if the time t' is much earlier than the time t. As t grows, most weight changes will lie in the distant past and depend only weakly on our assumptions about future spikes. Next we will show how to compute the expectation value in eq. (5) without having to store the past spike trains.
To simplify the notation, we will regard each pair of pre- and postsynaptic states (s^pre_t, s^post_t) as a state s_t of a combined pre- and postsynaptic Markov model. We will also combine the pre- and postsynaptic spikes (y^pre_t, y^post_t), each of which can take the two values 0 or 1, into a single observable y_t, which can take 4 values. The desired weight is then equal to:

w_t = E[ W_t | y_{<=t}, y_{>t} = 0 ]   with   W_t = w_0 + Σ_{t'=0}^{t-1} Δw(s_{t'}, t')    (7)

3.2 Running estimate of state probabilities

To compute w_t, it is helpful to first compute the probabilities

q_i(t) = P( s_t = i | y_{<=t}, y_{>t} = 0 )    (8)

of the states given the past and present spikes and assuming that there are no future spikes. The q_i(t) can be computed recursively, in terms of the q_j(t-1) (this is similar to the familiar forward algorithm for hidden Markov models). Write q_i(t) as:

q_i(t) = Σ_j P( s_t = i, s_{t-1} = j | y_{<t}, y_t, y_{>t} = 0 )    (9)
       = Σ_j P( y_t, y_{>t} = 0, s_t = i, s_{t-1} = j | y_{<t} ) / P( y_t, y_{>t} = 0 | y_{<t} )    (10)

Because of the Markov property, the future and present spikes y_{>=t} depend only on the present state s_t, but not on the past state s_{t-1} or on y_{<t}. Similarly, s_t depends only on s_{t-1} but not on y_{<t}. Thus the numerator of the last expression is equal to:

P( y_{>t} = 0 | s_t = i ) P( y_t | s_t = i ) P( s_t = i | s_{t-1} = j ) P( s_{t-1} = j | y_{<t} )    (11)
= β_i(t) e_i(y_t) a_{ji} P( s_{t-1} = j | y_{<t} )    (12)

with

β_i(t) = P( y_{>t} = 0 | s_t = i )    (13)

The probabilities β_i(t) of having no future spikes after state i can be computed by the backward algorithm:

β_i(t) = Σ_j P( s_{t+1} = j | s_t = i ) P( y_{t+1} = 0 | s_{t+1} = j ) β_j(t+1) = Σ_j a_{ij} e_j(0) β_j(t+1)    (14)

This is a linear equation with constant coefficients. As long as the end of the Markov chain is far enough in the future, this equation reduces to an eigenvalue problem with the solution β_i(t) = λ β_i(t+1), β_i(t) ∝ v_i, where λ is the largest eigenvalue of the matrix with elements a_{ij} e_j(0) and v is the corresponding eigenvector. As the matrix elements are positive, λ will be real, and the eigenvector will be unique up to a constant factor (except for quite exceptional, disconnected Markov chains, in which it may depend on the choice of end state). The last unknown factor in eq. (12) is P( s_{t-1} = j | y_{<t} )
, which can be expressed in terms of q_j(t-1):

P( s_{t-1} = j | y_{<t} ) = q_j(t-1) P( y_{>t-1} = 0 | y_{<t} ) / P( y_{>t-1} = 0 | s_{t-1} = j )    (15)

where the Markov property was used again. Putting everything together, one gets the update rule for q_i(t):

q_i(t) = (1 / N(y_t, t)) Σ_j T_ij(y_t) q_j(t-1)    (16)

with

T_ij(y_t) = β_i(t) e_i(y_t) a_{ji} / β_j(t-1)    (17)
N(y_t, t) = P( y_t, y_{>t} = 0 | y_{<t} ) / P( y_{>t-1} = 0 | y_{<t} )    (18)

The ratio β_i(t) / β_j(t-1) = v_i / (λ v_j) does not really depend on t but only on the eigenvalue λ and the relative size of the elements of the eigenvector v. If there is no pre- or postsynaptic spike at time t (y_t = 0), the normalization factor N(0, t) is equal to 1, and T_ij no longer depends on t or y_t. In this case, eq. (16) is a linear equation with constant coefficients, which can be integrated analytically from one spike to the next, thereby speeding up the numerical simulation. At pre- or postsynaptic spikes (y_t ≠ 0), N can be computed by summing eq. (16) over i and using Σ_i q_i(t) = 1:

N(y_t, t) = Σ_{ij} e_i(y_t) a_{ji} q_j(t-1) β_i(t) / β_j(t-1)    (19)

3.3 Running estimate of weights

Using the knowledge of the probabilities q_i(t), one can now compute the weight

w_t = E[ W_t | y_{<=t}, y_{>t} = 0 ]    (20)
    = E[ W_{t-1} | y_{<=t}, y_{>t} = 0 ] + Σ_i Δw(i, t-1) P( s_{t-1} = i | y_{<=t}, y_{>t} = 0 )    (21)

The expectation value E[ W_{t-1} | · ] in this equation will be equal to w_{t-1} if there is no pre- or postsynaptic spike at time t (y_t = 0), in which case P( s_{t-1} = i | · ) = q_i(t-1). In between spikes, the weight therefore changes as:

w_t = w_{t-1} + Σ_i Δw(i, t-1) q_i(t-1)    (22)

At the times of spikes, the weight change is more complex, because earlier weight changes have to be modified according to the new state information given by the spikes. To compute it, let us introduce the quantities

ω_i(t) = q_i(t) E[ W_t | y_{<=t}, y_{>t} = 0, s_t = i ]    (23)

The weight is equal to the sum of these ω_i:

w_t = Σ_i ω_i(t)    (24)

and, as we will see next, the ω_i(t) can be computed in a recursive way, even in the presence of spikes.
Start with:

ω_i(t) = q_i(t) Σ_j P( s_{t-1} = j | y_{<=t}, y_{>t} = 0, s_t = i ) E[ W_{t-1} + Δw(s_{t-1}, t-1) | y_{<=t}, y_{>t} = 0, s_t = i, s_{t-1} = j ]    (25)
       = Σ_j P( s_t = i, s_{t-1} = j | y_{<=t}, y_{>t} = 0 ) ( E[ W_{t-1} | y_{<=t}, y_{>t} = 0, s_t = i, s_{t-1} = j ] + Δw(j, t-1) )    (26)

Because of the Markov property, the last expectation value depends only on s_{t-1} = j and y_{<t}, but not on s_t = i or on the absence of future spikes, and it is thus equal to ω_j(t-1) / q_j(t-1). The other factor,

P( s_t = i, s_{t-1} = j | y_{<=t}, y_{>t} = 0 )    (27)

is the same expression that already occurred in equation (9). As shown above (eqs. (16)-(17)), it is equal to

T_ij(y_t) q_j(t-1) / N(y_t, t)    (28)

with the same T_ij as before. Putting everything together, one gets the update rule for ω_i(t):

ω_i(t) = (1 / N(y_t, t)) Σ_j T_ij(y_t) ( ω_j(t-1) + Δw(j, t-1) q_j(t-1) )    (29)

Together with eqs. (16), (17), (19), and (24) this constitutes our learning rule. It is causal, because it depends only on past, not on future signals, but in the long run it will give the same weight change as the standard hidden Markov rule (2). In between spikes, the q_i in eq. (16) and the ω_i in eq. (29) evolve according to linear rules, and the weight changes according to the simple rule (22). These simplifications are a consequence of assuming, in the definition of w_t, that there are no future spikes. Other assumptions are possible: one could, for example, set w_t equal to E[ W_t | y_{<=t} ], assuming that future spikes occur with the rate predicted by the Markov model, and one could also derive a causal learning rule for this choice (not shown), but then the evolution of q and ω between spikes would be nonlinear and the evolution of w would also be more complex. This learning rule still has a rather unusual form. Usually, one writes w_t as the sum of w_{t-1} plus some weight change. Our rule can also be written in this form, if the ω_i are replaced by:

δ_i(t) = ω_i(t) - q_i(t) w_t    (30)
       = q_i(t) ( E[ W_t | y_{<=t}, y_{>t} = 0, s_t = i ] - E[ W_t | y_{<=t}, y_{>t} = 0 ] )    (31)

δ_i(t) is a measure of how much the weight should be changed if one suddenly learned, with certainty, that the neurons are in state i. By definition, the δ_i sum to zero: Σ_i δ_i(t) = 0.
Inserting the update rule for ω_i(t) gives the update rule for δ_i(t):

δ_i(t) = (1 / N(y_t, t)) Σ_j T_ij(y_t) ( ω_j(t-1) + Δw(j, t-1) q_j(t-1) ) - q_i(t) w_t    (32)
       = (1 / N(y_t, t)) Σ_j T_ij(y_t) ( δ_j(t-1) + q_j(t-1) ( w_{t-1} + Δw(j, t-1) - w_t ) )    (33)

Summing eq. (29) over i gives the update rule for w_t:

w_t = w_{t-1} + Σ_j c_j(t) q_j(t-1) Δw(j, t-1) + Σ_j ( c_j(t) - 1 ) δ_j(t-1),   with   c_j(t) = Σ_i T_ij(y_t) / N(y_t, t)    (34)

The last, δ-dependent sum is nonzero only if spikes arrive, since between spikes c_j(t) = 1. It occurs because a new spike changes the probability estimates of previous states, and thereby the desired weight.

3.4 Summary of the learning algorithm

To simplify notation, we combined the pre- and postsynaptic Markov models into a single one. How does the learning rule look in terms of the original pre- and postsynaptic parameters? If the presynaptic model has n^pre states and the postsynaptic one n^post, then the combined model has n^pre · n^post states. At each time step, we have to update not only the weight w_t but also n^pre · n^post signal traces δ, which we will now write as δ_{μν}(t), where μ denotes the presynaptic and ν the postsynaptic state. However, one needs to update only n^pre + n^post of the signal traces q, because they factorize into a pre- and a postsynaptic part: q_{μν}(t) = q^pre_μ(t) q^post_ν(t). The learning algorithm is then given by:

• Initialization (t = 0): Define the states and the parameters a and e of the pre- and postsynaptic Markov model. Define the weight change Δw(μ, ν, t) for all possible state pairs. Find the leading eigenvector of both Markov chains in the absence of spikes:

λ^pre v^pre_μ = Σ_{μ'} a^pre_{μμ'} e^pre_{μ'}(0) v^pre_{μ'}    (35)

and analogously for the postsynaptic chain. Initialize w, δ, and q (δ_{μν} = 0; q_μ = 1 for an arbitrary start state and 0 otherwise).

• Recursion (t = 1, 2, ...): Update the presynaptic state estimate,

q̃^pre_μ(t) = Σ_{μ'} [ v^pre_μ e^pre_μ(y^pre_t) a^pre_{μ'μ} / ( λ^pre v^pre_{μ'} ) ] q^pre_{μ'}(t-1)    (36)
N^pre(t) = Σ_μ q̃^pre_μ(t)    (37)
q^pre_μ(t) = q̃^pre_μ(t) / N^pre(t)    (38)

and the analogous equations for q^post_ν(t) and N^post(t). Then update the signal traces δ_{μν}(t) according to eq. (33), with T and N factorizing into the pre- and postsynaptic expressions above, and the weight w_t according to eq. (34).

• Terminate at the end of the spike sequences y^pre and y^post.
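The causal forward part of the algorithm (the running state estimate of eqs. (16)-(19)) can be sketched as below. The concrete update formula follows the transfer-matrix form T_ij ∝ v_i e_i(y_t) a_ji / v_j and should be read as an illustrative reconstruction; the 3-state parameter values are made up.

```python
# Sketch of the causal running estimate q_i(t): the no-future-spike
# probabilities beta reduce to the leading eigenvector v (eigenvalue lam)
# of M[i, j] = a[i, j] * e_j(0), and q is updated with
# T[i, j] = v_i * e_i(y_t) * a[j, i] / (lam * v_j), then normalized.
import numpy as np

a = np.array([[0.95, 0.05, 0.00],
              [0.00, 0.00, 1.00],
              [0.10, 0.00, 0.90]])
e1 = np.array([0.0, 1.0, 0.5])          # P(spike | state)
M = a * (1.0 - e1)[None, :]             # M[i, j] = a[i, j] * e_j(0)

vals, vecs = np.linalg.eig(M)
lead = np.argmax(vals.real)
lam, v = vals[lead].real, np.abs(vecs[:, lead].real)

def q_update(q, y):
    """One causal filter step; y is 0 or 1 (combined spike observable)."""
    e = e1 if y == 1 else 1.0 - e1
    unnorm = (v * e) * (a.T @ (q / (lam * v)))   # sum_j T[i, j] q_j
    return unnorm / unnorm.sum()                 # divide by N(y_t, t)

q = np.array([1.0, 0.0, 0.0])                    # start in the silent state
for y in [0, 0, 1, 1, 0, 0]:
    q = q_update(q, y)
```

Between spikes the normalization factor is exactly 1 (the update is linear), which is the property the text exploits to integrate analytically from one spike to the next.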
4 Conclusion

This demonstrates that the basic principle of associating not individual spikes, but whole firing episodes, can be implemented in a causal learning rule, which depends only on past signals. This rule does not have to store the times of all past spikes, but only a few signal traces q and δ, and may thus be biologically plausible. For the right parameter choice, it agrees well with some nonlinear features of cortical synaptic plasticity (fig. 2). This does not imply that actual synaptic plasticity follows the same rule, but only that these particular features are consistent with our basic principle. Based on the predictions of this rule, one could design more precise experimental tests of whether cortical synaptic plasticity associates individual spikes or whole firing episodes.

Acknowledgments

This work was supported by R01-EY11001. We thank T. Sejnowski for his comments on a similar type of learning rule, which he suggested calling "hidden Hebbian learning". The second author (KM) would like to emphasize that his contribution to this paper was limited to assistance in writing.

References
[1] G.-Q. Bi and M.-M. Poo. Synaptic modification by correlated activity: Hebb's postulate revisited. Ann. Rev. Neurosci., 24:139–166, 2001.
[2] O. Paulsen and T. J. Sejnowski. Natural patterns of activity and long-term synaptic plasticity. Curr. Opin. Neurobiol., 10:172–179, 2000.
[3] W. Senn, H. Markram, and M. Tsodyks. An algorithm for modifying neurotransmitter release probability based on pre- and postsynaptic spike timing. Neural Comput., 13:35–67, 2001.
[4] P. J. Sjöström, G. G. Turrigiano, and S. B. Nelson. Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron, 32:1149–1164, 2001.
[5] R. C. Froemke and Y. Dan. Spike-timing-dependent synaptic modification induced by natural spike trains. Nature, 416:433–438, 2002.
[6] L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition.
Proceedings of the IEEE, 77:257–286, 1989.
2002
62
2,268
Automatic Alignment of Local Representations Yee Whye Teh and Sam Roweis Department of Computer Science, University of Toronto {ywteh,roweis}@cs.toronto.edu Abstract We present an automatic alignment procedure which maps the disparate internal representations learned by several local dimensionality reduction experts into a single, coherent global coordinate system for the original data space. Our algorithm can be applied to any set of experts, each of which produces a low-dimensional local representation of a high-dimensional input. Unlike recent efforts to coordinate such models by modifying their objective functions [1, 2], our algorithm is invoked after training and applies an efficient eigensolver to post-process the trained models. The post-processing has no local optima and the size of the system it must solve scales with the number of local models rather than the number of original data points, making it more efficient than model-free algorithms such as Isomap [3] or LLE [4]. 1 Introduction: Local vs. Global Dimensionality Reduction Beyond density modelling, an important goal of unsupervised learning is to discover compact, informative representations of high-dimensional data. If the data lie on a smooth low dimensional manifold, then an excellent encoding is the coordinates internal to that manifold. The process of determining such coordinates is dimensionality reduction. Linear dimensionality reduction methods such as principal component analysis and factor analysis are easy to train but cannot capture the structure of curved manifolds. Mixtures of these simple unsupervised models [5, 6, 7, 8] have been used to perform local dimensionality reduction, and can provide good density models for curved manifolds, but unfortunately such mixtures cannot do dimensionality reduction. They do not describe a single, coherent low-dimensional coordinate system for the data since there is no pressure for the local coordinates of each component to agree.
Roweis et al [1] recently proposed a model which performs global coordination of local coordinate systems in a mixture of factor analyzers (MFA). Their model is trained by maximizing the likelihood of the data, with an additional variational penalty term to encourage the internal coordinates of the factor analyzers to agree. While their model can trade off modelling the data and having consistent local coordinate systems, it requires a user-given trade-off parameter, training is quite inefficient (although [2] describes an improved training algorithm for a more constrained model), and it has quite serious local minima problems (methods like LLE [4] or Isomap [3] have to be used for initialization). In this paper we describe a novel, automatic way to align the hidden representations used by each component of a mixture of dimensionality reducers into a single global representation of the data throughout space. Given an already trained mixture, the alignment is achieved by applying an eigensolver to a matrix constructed from the internal representations of the mixture components. Our method is efficient, simple to implement, and has no local optima in its optimization nor any learning rates or annealing schedules.

2 The Locally Linear Coordination Algorithm

Suppose we have a set of data points given by the rows of X = [x_1; x_2; ...; x_N] from a D-dimensional space, which we assume are sampled from a d-dimensional manifold. We approximate the manifold coordinates using images Y = [y_1; y_2; ...; y_N] in a d-dimensional embedding space. Suppose also that we have already trained, or have been given, a mixture of K local dimensionality reducers. The k-th reducer produces a d_k-dimensional internal representation z_nk for data point x_n as well as a "responsibility" r_nk ≥ 0 describing how reliable the k-th reducer's representation of x_n is. These satisfy Σ_k r_nk = 1 and can be obtained, for example, using a gating network in a mixture of experts, or the posterior probabilities in a probabilistic network.
Notice that the manifold coordinates and internal representations need not have the same number of dimensions. Given the data, internal representations, and responsibilities, our algorithm automatically aligns the various hidden representations into a single global coordinate system. Two key ideas motivate the method. First, to use a convex cost function whose unique minimum is attained at the desired global coordinates. Second, to restrict the global coordinates y_n to depend on the data x_n only through the local representations z_nk and responsibilities r_nk, thereby leveraging the structure of the mixture model to regularize and reduce the effective size of the optimization problem. In effect, rather than working with individual data points, we work with large groups of points belonging to particular submodels. We first parameterize the global coordinates y_n in terms of r_nk and z_nk. Given an input x_n, each local model infers its internal coordinates z_nk and then applies a linear projection L_k and offset l_k0 to these to obtain its guess at the global coordinates. The final global coordinates y_n are obtained by averaging the guesses using the responsibilities as weights:

y_n = Σ_k r_nk ( L_k z_nk + l_k0 ) = Σ_k Σ_{i=0}^{d_k} r_nk z^i_nk l_ki = Σ_j u_nj l_j    (1)
Y = U L,   with   j = j(i, k),   u_nj = r_nk z^i_nk,   l_j = l_ki    (2)

where l_ki is the i-th column of L_k, z^i_nk is the i-th entry of z_nk, and z^0_nk = 1 is a bias. This process is described in figure 1. To simplify our calculations, we have vectorized the indices (i, k) into a single new index j = j(i, k), where j(i, k) is an invertible mapping from the domain of (i, k) to {1, 2, ..., K + Σ_k d_k}. For compactness, we will write j = j(i, k). Now define the matrix U by u_nj = r_nk z^i_nk and the j-th row of L as l_j = l_ki. Then (1) becomes a system of linear equations (2) with fixed U and unknown parameters L.
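The parameterization of eqs. (1)-(2) amounts to stacking the responsibility-weighted local coordinates (with a leading bias entry) into one row u_n per data point, so that the global coordinates are linear in the alignment parameters. A small sketch with made-up sizes and random values:

```python
# Sketch of building U from responsibilities r_nk and local coordinates z_nk,
# so that the global coordinates are Y = U @ L. Sizes are illustrative.
import numpy as np

N, K, d_local, d = 5, 2, 2, 2
rng = np.random.default_rng(0)

r = rng.dirichlet(np.ones(K), size=N)        # responsibilities, rows sum to 1
z = rng.normal(size=(N, K, d_local))         # local coordinates z_nk

# Row u_n concatenates, for each model k, r_nk * [1, z_nk] (the 1 is the bias).
U = np.concatenate(
    [r[:, k:k + 1] * np.hstack([np.ones((N, 1)), z[:, k, :]]) for k in range(K)],
    axis=1)                                  # N x K*(d_local + 1)

L = rng.normal(size=(K * (d_local + 1), d))  # alignment parameters (rows l_j)
Y = U @ L                                    # global coordinates, N x d
```

Note that U is fixed once the mixture has been trained; only L is optimized, which is why the eigenproblem later scales with the number of local models rather than with N.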
Figure 1: Obtaining global coordinates from data via responsibility-weighted local coordinates.

The key assumption, which we have emphasized by re-expressing $y_n$ above, is that the mapping between the local representations and the global coordinates $y_n$ is linear in each of $z_{nk}$, $r_{nk}$ and the unknown parameters $l_k^j$. Crucially, however, the mapping between the original data $x_n$ and the images $y_n$ is highly non-linear, since it depends on the multiplication of responsibilities and internal coordinates, which are in turn non-linearly related to the data $x_n$ through the inference procedure of the mixture model. We now consider determining $L$ according to some given cost function $\phi(Y)$. For this we advocate using a convex $\phi(Y)$. Notice that since $Y$ is linear in $L$, $\phi(Y(L))$ is convex in $L$ as well, and there is a unique optimum that can be computed efficiently using a variety of methods. This is still true if we also have feasible convex constraints on $Y$. The case where the cost and constraints are both quadratic is particularly appealing, since we can use an eigensolver to find the optimal $L$. In particular, suppose $Q$ and $R$ are matrices defining the cost and constraints, and let $A = U^\top Q U$ and $B = U^\top R U$. This gives:

$$\phi(Y) = \mathrm{tr}\!\left(Y^\top Q\, Y\right) = \mathrm{tr}\!\left(L^\top U^\top Q\, U L\right) = \mathrm{tr}\!\left(L^\top A L\right) \qquad (3)$$

with the constraint $\mathrm{tr}\!\left(Y^\top R\, Y\right) = \mathrm{tr}\!\left(L^\top B L\right) = I$, where $\mathrm{tr}(\cdot)$ is the trace operator. The matrices $Q$ and $R$ are typically obtained from the original data and summarize the essential geometries among them. The solution to the constrained minimization above is given by the $d$ smallest generalized eigenvectors $v$ with $Av = \lambda Bv$. In particular, the columns of $L$ are given by these generalized eigenvectors. Below, we investigate a cost function based on the Locally Linear Embedding (LLE) algorithm of Roweis and Saul [4].
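The quadratic-cost, quadratic-constraint step reduces to a generalized symmetric eigenproblem, which standard solvers handle directly. A minimal sketch (NumPy/SciPy; the matrices here are random positive-definite stand-ins for $A$ and $B$, not values from the paper):

```python
# Sketch: minimize tr(L^T A L) subject to L^T B L = I via the generalized
# eigenproblem A v = lambda B v. scipy.linalg.eigh(A, B) returns eigenvalues
# in ascending order with B-orthonormal eigenvectors.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
C = rng.standard_normal((6, 6))
A = M @ M.T + 1e-3 * np.eye(6)   # symmetric positive definite stand-in
B = C @ C.T + 1e-3 * np.eye(6)   # symmetric positive definite stand-in

evals, evecs = eigh(A, B)        # ascending generalized eigenvalues
d = 2
L = evecs[:, :d]                 # columns = d smallest generalized eigenvectors
```

Because `eigh` returns B-orthonormal eigenvectors, the constraint $L^\top B L = I$ holds by construction for any subset of columns.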
We call the resulting algorithm Locally Linear Coordination (LLC). The idea of LLE is to preserve the same locally linear relationships between the original data points $x_n$ and their counterparts $y_n$. We identify for each point $x_n$ its nearest neighbours and then minimize

$$\phi(X, W) = \sum_n \Big\| x_n - \sum_{n'} w_{nn'}\, x_{n'} \Big\|^2 \qquad (4)$$

with respect to $W$ subject to the constraints $\sum_{n'} w_{nn'} = 1$. The weights are unique¹ and can be solved for efficiently using constrained least squares (since solving for $w_{nn'}$ is decoupled across $n$). The weights summarize the local geometries relating the data points to their neighbours; hence, to preserve these relationships among the coordinates $y_n$, we arrange to minimize the same cost

$$\phi(Y, W) = \sum_n \Big\| y_n - \sum_{n'} w_{nn'}\, y_{n'} \Big\|^2 \qquad (5)$$

but with respect to $Y$ instead. This cost is invariant to translations and rotations of $Y$, and scales as we scale $Y$. In order to break these degeneracies we enforce the following constraints:

$$\frac{1}{N} \sum_n y_n = 0, \qquad \frac{1}{N} \sum_n y_n y_n^\top = I_d, \qquad (6)$$

where $I_d$ is the $d \times d$ identity. For this choice, the cost function and constraints above become:

$$\phi(Y, W) = \mathrm{tr}\!\left( L^\top U^\top (I - W)^\top (I - W)\, U L \right) = \mathrm{tr}\!\left( L^\top A L \right) \qquad (7)$$

$$\frac{1}{N} \mathbf{1}^\top U L = 0, \qquad \frac{1}{N} L^\top U^\top U L = I_d, \qquad (8)$$

with cost and constraint matrices

$$A = U^\top (I - W)^\top (I - W)\, U, \qquad B = \frac{1}{N}\, U^\top U. \qquad (9)$$

¹In the unusual case where the number of neighbours is larger than the dimensionality of the data $D$, simple regularization of the norm of the weights once again makes them unique.

As shown previously, the solution to this problem is given by the smallest generalized eigenvectors $v$ with $Av = \lambda Bv$. To satisfy $\frac{1}{N}\mathbf{1}^\top U L = 0$, we need to find eigenvectors that are orthogonal to the vector $v_0$ corresponding to the constant embedding. Fortunately, $v_0$ is the smallest generalized eigenvector, corresponding to an eigenvalue of 0. Hence the solution to the problem is given by the 2nd to $(d+1)$st smallest generalized eigenvectors instead.

LLC Alignment Algorithm:

- Using data $X$, compute local linear reconstruction weights $w_{nn'}$ using (4).
- Train or receive a pre-trained mixture of local dimensionality reducers. Apply this mixture to $X$, obtaining a local representation $z_{nk}$ and responsibility $r_{nk}$ for each submodel $k$ and each data point $x_n$.
- Form the matrix $U$ with $u_n^m = r_{nk} z_{nk}^j$ and calculate $A$ and $B$ from (9).
- Find the eigenvectors corresponding to the smallest $d+1$ eigenvalues of the generalized eigenvalue system $Av = \lambda Bv$.
- Let $L$ be the matrix whose columns are the 2nd through $(d+1)$st of these eigenvectors. Return the $m$th row of $L$ as alignment weight $l_k^j$. Return the global manifold coordinates as $Y = UL$.

Note that the edge size of the matrices $A$ and $B$ whose generalized eigenvectors we seek is $K + \sum_k d_k$, which scales with the number of components and dimensions of the local representations but not with the number of data points $N$. As a result, solving for the alignment weights is much more efficient than the original LLE computation (or those of Isomap), which requires solving an eigenvalue system of edge size $N$. In effect, we have leveraged the mixture of local models to collapse large groups of points together and worked only with those groups rather than the original data points. Notice, however, that the computation of the weights $W$ still requires determining the neighbours of the original data points, which scales as $O(N^2)$ in the worst case.

Coordination with LLC also yields a mixture of noiseless factor analyzers over the global coordinate space $y$, with the $k$th factor analyzer having mean $l_k^0$ and factor loading $L_k$. Given any global coordinates $y$, we can infer the responsibilities $r_k$ and the posterior means $z_k$ over the latent space of each factor analyzer. If our original local dimensionality reducers also support computing $x$ from $r_k$ and $z_k$, we can now infer the high-dimensional mean data point $x$ which corresponds to the global coordinates $y$. This allows us to perform operations like visualization and interpolation using the global coordinate system.
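The reconstruction-weight step of equation (4) is a constrained least squares that decouples across points. A sketch for a single point, using Roweis and Saul's closed form via the local Gram matrix, with the regularizer of footnote 1 (the function name and regularization constant are illustrative, not from the paper):

```python
# Sketch of eq. (4) for one point: min ||x - sum_j w_j eta_j||^2 subject to
# sum_j w_j = 1, solved via the local Gram matrix of the centred neighbours.
import numpy as np

def reconstruction_weights(x, neighbours, reg=1e-3):
    """x: (D,); neighbours: (J, D). Returns w: (J,) summing to 1."""
    diff = neighbours - x                        # shift so x is the origin
    G = diff @ diff.T                            # local Gram matrix, (J, J)
    G = G + reg * np.trace(G) * np.eye(len(G))   # regularize (cf. footnote 1)
    w = np.linalg.solve(G, np.ones(len(G)))      # solve G w = 1
    return w / w.sum()                           # enforce sum-to-one

# A point lying midway between two neighbours is reconstructed exactly.
x = np.array([0.5, 0.5])
nb = np.array([[0.0, 0.0], [1.0, 1.0]])
w = reconstruction_weights(x, nb)
```

Solving against the all-ones right-hand side and renormalizing is the standard way to impose the sum-to-one constraint without an explicit Lagrange multiplier.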
This is the method we used to infer the images in figures 4 and 5 in the next section.

3 Experimental Results using Mixtures of Factor Analyzers

The alignment computation we have described is applicable to any mixture of local dimensionality reducers. In our experiments, we have used the most basic such model: a mixture of factor analyzers (MFA) [8]. The $k$th factor analyzer in the mixture describes a probabilistic linear mapping from a latent variable $z_k$ to the data $x$ with additive Gaussian noise. The model assumes that the data manifold is locally linear, and it is this local structure that is captured by each factor analyzer. The non-linearity in the data manifold is handled by patching multiple factor analyzers together, each handling a locally linear region. MFAs are trained in an unsupervised way by maximizing the marginal log likelihood of the observed data, and parameter estimation is typically done using the EM algorithm².

²In our experiments, we initialized the parameters by drawing the means from the global covariance of the data and setting the factors to small random values. We also simplified the factor analyzers to share the same spherical noise covariance, although this is not essential to the process.

Figure 2: LLC on the S curve (A). There are 14 factor analyzers in the mixture (B), each with 2 latent dimensions. Each disk represents one of them, with the two black lines being the factor loadings. After alignment by LLC (C), the curve is successfully unrolled; it is also possible to retroactively align the original data space models (D).

Figure 3: Unknotting the trefoil curve. We generated 6000 noisy points from the curve. Then we fit an MFA with 30 components with 1 latent dimension each (A), but aligned them in a 2D space (B). We used 10 neighbours to reconstruct each data point.
Since there is no constraint relating the various hidden variables $z_k$, an MFA trained only to maximize likelihood cannot learn a global coordinate system for the manifold that is consistent across every factor analyzer. Hence this is a perfect model on which to apply automatic alignment. Naturally, we use the mean of $z_k$ conditioned on the data $x$ (assuming the $k$th factor analyzer generated $x$) as the $k$th local representation of $x$, while we use the posterior probability that the $k$th factor analyzer generated $x$ as the responsibility. We illustrate LLC on two synthetic toy problems to give some intuition about how it works. The first problem is the S curve given in figure 2(A). An MFA trained on 1200 points sampled uniformly from the manifold with added noise (B) is able to model the linear structure of the curve locally; however, the internal coordinates of the factor analyzers are not aligned properly. We applied LLC to the local representations and aligned them in a 2D space (C). When solving for local weights, we used 12 neighbours to reconstruct each data point. We see that LLC has successfully unrolled the S curve onto the 2D space. Further, given the coordinate transforms produced by LLC, we can retroactively align the latent spaces of the MFAs (D). This is done by determining directions in the various latent spaces which get transformed to the same direction in the global space. To emphasize the topological advantages of aligning representations into a space of higher dimensionality than the local coordinates used by each submodel, we also trained an MFA on data sampled from a trefoil curve, as shown in figure 3(A). The trefoil is a circle with a knot in 3D. As figure 3(B) shows, LLC connects these models into a ring whose local topology is faithful to the original data. We applied LLC to MFAs trained on sets of real images believed to come from a complex manifold with few degrees of freedom.
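For the MFA case, the local representation is the standard factor-analysis posterior mean, $E[z \mid x] = \Lambda^\top (\Lambda \Lambda^\top + \Psi)^{-1} (x - \mu)$ for a component with loading $\Lambda$, mean $\mu$, and noise covariance $\Psi$. A minimal sketch, assuming spherical noise as in footnote 2 (function and variable names are illustrative):

```python
# Sketch of the per-component local representation used for alignment:
# the posterior mean of the latent variable z given x under one factor
# analyzer, E[z|x] = Lam^T (Lam Lam^T + psi I)^{-1} (x - mu).
import numpy as np

def fa_posterior_mean(x, Lam, mu, psi):
    """x: (D,); Lam: (D, d) factor loading; mu: (D,) mean;
    psi: scalar spherical noise variance."""
    S = Lam @ Lam.T + psi * np.eye(len(x))     # marginal covariance of x
    return Lam.T @ np.linalg.solve(S, x - mu)  # posterior mean of z

# With nearly noiseless data generated from the model, the posterior mean
# recovers the latent coordinates that produced the observation.
rng = np.random.default_rng(2)
Lam = rng.standard_normal((5, 2))
mu = rng.standard_normal(5)
x = mu + Lam @ np.array([1.0, -2.0])
z = fa_posterior_mean(x, Lam, mu, 1e-6)
```

The responsibilities would come from the mixture posterior over components; only the per-component posterior mean (a standard FA identity) is shown here.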
We studied face images of a single person under varying pose and expression changes, and handwritten digits from the MNIST database. After training the MFAs, we applied LLC to align the models. The face models were aligned into a 2D space as shown in figure 4. The first dimension appears to describe changes in pose, while the second describes changes in expression.

Figure 4: A map of reconstructions of the faces when the global coordinates are specified. Contours describe the likelihood of the coordinates. Note that some reconstructions around the edge of the map are not good because the model is extrapolating from the training images to regions of low likelihood. An MFA with 20 components and 8 latent dimensions each is used. It is trained on 1965 images. The weights are calculated using 36 neighbours.

The digit models were aligned into a 3D space. Figure 5 (top) shows maps of reconstructions of the digits. The first dimension appears to describe the slant of each digit, the second the fatness of each digit, and the third the relative sizes of the upper to lower loops. Figure 5 (bottom) shows how LLC can smoothly interpolate between any two digits. In particular, the first row interpolates between left- and right-slanting digits, the second between fat and thin digits, the third between thick and thin line strokes, and the fourth between having a larger bottom loop and a larger top loop.

4 Discussion and Conclusions

Previous work on nonlinear dimensionality reduction has usually emphasized either a parametric approach, which explicitly constructs a (sometimes probabilistic) mapping between the high-dimensional and low-dimensional spaces, or a nonparametric approach which merely finds low-dimensional images corresponding to high-dimensional data points but without probabilistic models or hidden variables.
Compared to the global coordination model [1], the closest parametric approach to ours, our algorithm can be understood as post coordination, in which the latent spaces are coordinated after they have been fit to data. By decoupling the data fitting and coordination problems we gain efficiency and avoid local optima in the coordination phase. Further, since we are just maximizing likelihood when fitting the original mixture to data, we can use a whole range of known techniques to escape local minima and improve efficiency in the first phase as well.

Figure 5: Top: maps of reconstructions of digits when two global coordinates are specified and the third integrated out. Left: 1st and 2nd coordinates specified; right: 2nd and 3rd. Bottom: interpolating between two digits using LLC. In each row, we interpolate between the upper leftmost and rightmost digits. The LLC interpolants are spread out evenly along a line connecting the global coordinates of the two digits. For comparison, we show the 20 training images whose coordinates are closest to the line segment connecting those of the two digits at each side. An MFA with 50 components, each with 6 latent dimensions, is used. It is trained on 6000 randomly chosen digits from the combined training and test sets of 8's in MNIST. The weights were calculated using 36 neighbours.

On the nonparametric side, our approach can be compared to two recent algorithms, LLE [4] and Isomap [3]. The cost functions of LLE and Isomap are convex, so they do not suffer from the local minima problems of earlier methods [9, 10], but these methods must solve eigenvalue systems of size equal to the number of data points. (Although in LLE the systems are highly sparse.) Another problem is that neither LLE nor Isomap yields a probabilistic model or even a mapping between the data and embedding spaces.
Compared to these models (which are run on individual data points), LLC uses as its primitives descriptions of the data provided by the individual local models. This makes the eigenvalue system to be solved much smaller, and as a result the computational cost of the coordination phase of LLC is much less than that of LLE or Isomap. (Note that the construction of the eigenvalue system still requires finding nearest neighbours for each point, which is costly.) Furthermore, if each local model describes a complete (probabilistic) mapping from data space to its latent space, the final coordinated model will also describe a (probabilistic) mapping from the whole data space to the coordinated embedding space. Our alignment algorithm improves upon local embedding or density models by elevating their status to full global dimensionality reduction algorithms without requiring any modifications to their training procedures or cost functions. For example, using mixtures of factor analyzers (MFAs) as a test case, we show how our alignment method can allow a model previously suited only for density estimation to perform complex operations on high-dimensional data such as visualization and interpolation. Brand [11] has recently proposed an approach, similar to ours, that coordinates local parametric models to obtain a globally valid nonlinear embedding function. Like LLC, his "charting" method defines a quadratic cost function and finds the optimal coordination directly and efficiently. However, charting is based on a cost function much closer in spirit to the original global coordination model, and it instantiates one local model centred on each training point, so its scaling is the same as that of LLE and Isomap. In principle, Brand's method can be extended to work with fewer local models, and our alignment procedure can be applied using the charting cost rather than the LLE cost we have pursued here.
Automatic alignment procedures emphasize a powerful but often overlooked interpretation of local mixture models. Rather than considering the output of such systems to be a single quantity, such as a density estimate or an expert-weighted regression, it is possible to view them as networks which convert high-dimensional inputs into a vector of internal coordinates from each submodel, accompanied by responsibilities. As we have shown, this view can lead to efficient and powerful algorithms which allow separate local models to learn consistent global representations.

Acknowledgments

We thank Geoffrey Hinton for inspiration and interesting discussions, Brendan Frey and Yann LeCun for sharing their data sets, and the reviewers for helpful comments.

References

[1] S. Roweis, L. Saul, and G. E. Hinton. Global coordination of local linear models. In Advances in Neural Information Processing Systems, volume 14, 2002.
[2] J. J. Verbeek, N. Vlassis, and B. Kröse. Coordinating principal component analysers. In Proceedings of the International Conference on Artificial Neural Networks, 2002.
[3] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, December 2000.
[4] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, December 2000.
[5] K. Fukunaga and D. R. Olsen. An algorithm for finding intrinsic dimensionality of data. IEEE Transactions on Computers, 20(2):176–193, 1971.
[6] N. Kambhatla and T. K. Leen. Dimension reduction by local principal component analysis. Neural Computation, 9:1493–1516, 1997.
[7] M. E. Tipping and C. M. Bishop. Mixtures of probabilistic principal component analysers. Neural Computation, 11(2):443–482, 1999.
[8] Z. Ghahramani and G. E. Hinton. The EM algorithm for mixtures of factor analyzers. Technical Report CRG-TR-96-1, University of Toronto, Department of Computer Science, 1996.
[9] T. Kohonen. Self-organization and Associative Memory. Springer-Verlag, Berlin, 1988.
[10] C. Bishop, M. Svensen, and C. Williams. GTM: The generative topographic mapping. Neural Computation, 10:215–234, 1998.
[11] M. Brand. Charting a manifold. This volume, 2003.
Field-Programmable Learning Arrays

Seth Bridges, Miguel Figueroa, David Hsu, and Chris Diorio
Department of Computer Science and Engineering, University of Washington
114 Sieg Hall, Box 352350, Seattle, WA 98195-2350
{seth,miguel,hsud,diorio}@cs.washington.edu

Abstract

This paper introduces the Field-Programmable Learning Array, a new paradigm for rapid prototyping of learning primitives and machine-learning algorithms in silicon. The FPLA is a mixed-signal counterpart to the all-digital Field-Programmable Gate Array in that it enables rapid prototyping of algorithms in hardware. Unlike the FPGA, the FPLA is targeted directly at machine learning by providing local, parallel, online analog learning using floating-gate MOS synapse transistors. We present a prototype FPLA chip comprising an array of reconfigurable computational blocks and local interconnect. We demonstrate the viability of this architecture by mapping several learning circuits onto the prototype chip.

1 Introduction

Implementing machine-learning algorithms in VLSI is a logical step toward enabling real-time or mobile applications of these algorithms [1]. Several machine-learning architectures such as neural networks and Bayes nets map naturally to VLSI, because each uses many simple elements in parallel and computes using only local information. Such algorithms, when implemented in VLSI, can leverage the inherent parallelism offered by the millions of transistors on a single silicon die. Depending on the design technique, hardware implementations of learning algorithms can realize significant performance increases over standard computers in terms of speed or power consumption. Despite the benefits of implementing machine-learning algorithms in VLSI, several issues have kept hardware implementations from penetrating mainstream machine learning.
First, many previous hardware systems were not scalable due to the size of primary components such as digital multipliers or digital-to-analog converters [2, 3]. Second, many systems such as [4] have inflexible circuit topologies, allowing them to be used for only very specific problems. Third, many hardware learning systems did not comprise a complete solution with on-chip learning [5] and often required external weight updates [3, 6]. In addition to these problems of scalability and inflexibility, perhaps the biggest impediment to implementing learning in VLSI is that designing VLSI chips is a time-consuming and error-prone process. All current VLSI learning implementations required detailed knowledge of analog and digital circuit design. This prerequisite impedes hardware development by a hardware novice; indeed, the design process can challenge even the most experienced circuit designer. Because we make extensive use of floating-gate synapse transistors [1] in our learning circuits to enable local adaptation, the design process becomes even more difficult due to slow and inaccurate simulation of these devices. A reconfigurable learning system would solve these problems by allowing rapid prototyping and flexibility in learning-system hardware. Also, reconfigurability allows the system to adapt to changes in the problem definition. For example, a designer can trade input dimensionality for resolution by reallocating FPLA resources, even after the implementation is complete. A custom VLSI solution would not allow such tradeoffs after fabrication. When combined with a simple user interface, a reconfigurable learning system can enable anyone with a machine-learning background to express his/her ideas in hardware. In this paper, we propose a mixed analog-digital Field-Programmable Learning Array (FPLA), a reconfigurable system for rapid prototyping of machine-learning algorithms in hardware.
The FPLA enables the design cycle shown in Figure 1(a), in which the designer expresses a machine-learning problem as an algorithm, compiles that representation into an FPLA configuration, and prototypes the algorithm in an FPLA. The FPLA is similar in concept to all-digital Field-Programmable Gate Arrays (FPGAs), in that both enable reconfigurable computation and prototyping using arrays of simple elements and reconfigurable wiring. Unlike previous reconfigurable hardware learning solutions [3, 4, 6, 7], the FPLA is a general-purpose prototyping tool and does not target one specific architecture. Moreover, our FPLA supports on-chip adaptation and enables rapid prototyping of a large class of learning algorithms. We have implemented a prototype core for an FPLA. Our chip comprises a small (2×2) array of Programmable Learning Blocks (PLBs) as well as a simple interconnect structure that allows the PLBs to communicate in an all-to-all fashion. Our results show that this prototype system achieves its design goal of enabling rapid prototyping of floating-gate learning circuits: we have implemented learning circuits known in the literature as well as new circuits prototyped for the first time. The remainder of the paper proceeds as follows. In section 2, we discuss the proposed FPLA architecture, as well as the subset that is our prototype. Section 3 shows results from our test chip of the prototype design. Section 4 concludes with a discussion of improvements that we are making to the design and opportunities for future work.

2 FPLA Architecture

2.1 An FPLA Architecture

Our proposed FPLA architecture, shown in Figure 1(b), has four properties that enable machine learning: 1) a core comprising an array of Programmable Learning Blocks to compute machine-learning functions, 2) reconfigurable interconnect to enable inter-PLB communication, 3) the ability to compute with sufficient accuracy, and 4) a simple and well-defined user interface.
The first two properties are dimensions of the FPLA design space, where tradeoffs between them result in varying levels of flexibility and functionality at the cost of area and power. The FPLA core determines the system's functionality. For example, in a task-oriented FPLA, the PLBs that compose the core should provide high-level functions such as multiplication and outer-product learning. Likewise, to develop new learning algorithms in silicon, the PLBs should provide lower-level functions such as current mirrors, differential pairs, and current sources. In addition to a multi-functional core, a reconfigurable learning array requires flexible interconnect that provides good local connectivity between neighboring PLBs and global interconnect for long-range connections. The global interconnect must be sparse because of area constraints in VLSI chips, but flexible enough to allow a wide range of PLB connectivity.

Figure 1: (a) FPLA-Based Design Flow. A user programs a machine-learning algorithm and tests it using standard software tools (e.g. Matlab). The design compiler transforms this code into an FPLA configuration, which is then downloaded to the chip. At this point, the FPLA runs the algorithm on a training data set and performs on-chip learning. (b) Proposed FPLA Architecture. The architecture comprises an array of Programmable Learning Blocks (PLBs), a flexible interconnect, and support circuitry on the periphery. Local interconnect enables efficient, low-cost communication between adjacent PLBs. Global interconnect enables distant PLBs to communicate, albeit at a higher cost.
Local connectivity is critical to enable the creation of complex learning primitives from combinations of PLBs and the implementation of large classes of machine-learning algorithms that exhibit strong local computation. Analog and mixed-signal VLSI systems are typically plagued by offsets and device mismatch. Even though accurate systems are possible [8], the accuracy usually comes at the cost of increased power consumption and die area. The adaptive properties of floating-gate transistors can overcome these intrinsic accuracy limitations [9], enabling mixed analog-digital computation to obtain the best combination of power, area, scalability, and performance. A user interface for an FPLA comprises two different components: a design compilation and configuration tool, and a chip interface that provides both digital and analog I/O. An FPLA design compiler allows a user to compile an abstract expression of an algorithm (e.g. Matlab code) into an FPLA configuration. The chip interface provides digital I/O to interface with standard computers and surrounding digital circuitry, as well as analog I/O to interface with signals from sensors such as vision chips and implantable devices.

2.2 Prototype Chip

As a first step in designing an FPLA, we built a prototype focusing on the PLB design and local interconnect. Our design comprises a 2×2 array of PLBs interconnected in an all-to-all fashion. The system I/O comprises digital input for programming and bidirectional analog input/output for system operation. We show the prototype FPLA architecture and chip micrograph in Figure 2. We fabricated the chip in the TSMC 0.35 µm double-poly, four-metal process available from MOSIS. The FPLA includes two pFET PLBs and two nFET PLBs, each containing 8 uncommitted lines, 4 I/O blocks, and the computational primitives described below.
The FPLA occupies 2000 µm × 700 µm including the programming 4-to-16 decoder and 108-bit shift register. Through design optimization, we have recently reduced the size by more than 50%.

Figure 2: (a) Fabricated Chip Architecture. Our prototype FPLA comprises 4 PLBs that contain simple analog functional primitives. A set of interconnect switches connects the PLBs in an all-to-all fashion. (b) Chip Micrograph. The chip photograph shows the four PLBs, inter-PLB blocks, and programming circuitry. The chip was fabricated in the TSMC 0.35 µm double-poly, four-metal process from MOSIS.

Each of the four PLBs comprises computational circuitry and a large switching matrix built of pass-gates controlled by SRAM. There are two different types of PLBs, the pFET PLB and the nFET PLB, because nFETs and pFETs are the two flavors of transistors available in standard CMOS processes. The computational primitives that compose the PLBs are two floating-gate transistors, a differential pair, a current mirror, a diode-connected transistor, a bias current source, three transistors with configurable length and width, and two configurable capacitors. These circuit primitives can be wired into arbitrary configurations simply by changing the state of the PLB switch matrix. When deciding what functions to place in the PLBs, our starting point was the decomposition of known primitives [10, 11] for silicon learning as well as standard analog primitives such as those in Mead's book on silicon neural systems [12]. The circuits included in our PLBs are the most common subcircuits found when decomposing these primitives. Each of the four PLBs is independent of the others and can be programmed and operated independently. However, more useful circuits require resources from multiple PLBs.
Inter-PLB blocks provide local connectivity between PLBs, where each inter-PLB block is an array of SRAM pass-gate switches that can connect an uncommitted line in one PLB to an uncommitted line in another PLB. The six inter-PLB blocks provide a path from one PLB to any other PLB in the system. To interface with the external world, there are four I/O connections per PLB, each of which can be configured in one of two ways: as a bare connection to the pad for voltage inputs or current outputs, or as a voltage output through a unity-gain buffer. The user configures the FPLA by shifting the configuration bits into the configuration SRAM, located throughout the PLBs and interconnect.

3 Implementing Machine-Learning Primitives

To show the correct functionality of our chip, we implemented various circuits from the literature as well as new circuits developed entirely in the FPLA. In the following sections, we show results for three of these circuits.

Figure 3: (a) Schematic of the correlational-learning circuit described by Shon and Hsu in [11]. (b) Schematic of the same circuit as implemented in the FPLA. (c) Experimental results comparing the performance of the custom circuit against the reconfigurable circuit. We scaled the data to compensate for differences in operating point between the two implementations. The data reported by Shon and Hsu is smoother because it is averaged over a larger number of experiments.

3.1 Correlational-Learning Primitive

As a first test of our chip, we implemented the correlational-learning circuit described by Shon and Hsu in [11]. This circuit learns the conditional probability of a binary event $X$ given another binary event $Y$. We show the original circuit in Figure 3(a) and the FPLA implementation in Figure 3(b). We implemented this circuit using primitives from two PLBs.
We input the signals $X$ and $Y$ as voltage pulses. Figure 3(c) compares the results from the custom chip to the results from the FPLA. Both sets of data can be fit by the same saturating curve in $\Pr(X|Y)$ with four fit constants (equation 1). We conclude from this experiment that the correlational-learning circuit, when implemented in the FPLA, operates as the original circuit. SPICE simulations confirm that the interconnect switches have a negligible effect on circuit performance.

3.2 Regression-Learning Primitive

The regression-learning circuit described in this section is a new hardware learning primitive first implemented in the FPLA. The circuit performs regression learning on a set of 2-D input data. It comprises two correlational-learning circuits like the one shown in Figure 4(a) to encode a differential weight $W$. The two circuits learn $W^+$ and $W^-$ respectively, such that:

$$W = W^+ - W^- \qquad (2)$$

The circuit operates as follows. We apply a zero-mean input signal $x$, encoded as a varying current $I_{in}$ plus some DC bias current $I_b$, to the two inputs of the circuit. The output current of each half represents the product of its stored weight with the input current:

$$I_{out}^+ = (I_{in} + I_b)\, W^+ \qquad (3)$$

$$I_{out}^- = (I_{in} + I_b)\, W^- \qquad (4)$$

The difference of those output currents represents the total product of the input current and the weight stored on the floating gates:

$$I_{out} = I_{out}^+ - I_{out}^- = I_{in}\,(W^+ - W^-) + I_b\,(W^+ - W^-) \qquad (5)$$

Figure 4: (a) Regression Learning Circuit. This circuit is one-half of the regression-learning circuit and learns the positive weight $W^+$. The other half of the circuit is identical but used to represent the negative differential weight $W^-$. The difference between the learned weights $W^+$ and $W^-$ converges to the slope of the incoming data. (b) Experimental Data. This data is taken from the FPLA configured as the circuit on the left.
The circuit was shown 388 data points with a slope of 0.5 and zero-mean Gaussian noise of 5%; it learned a slope of 0.4924. The multiplication in Eq. 5 is performed by the current mirror formed by the input diode and the floating gate. The output prediction we seek is W·x, so we remove the scaled input offset current W·b with a high-pass filter implemented in the test computer:

    Îout = x(W+ − W−)    (6)

Circuit training occurs in a supervised manner. An input x is provided to the circuit, and the circuit predicts an output W·x. The computer running the test compares that predicted output with the target and feeds an error signal back to the chip. Based on the error signal, the circuit adapts the weight W: positive changes in W+ increase W, while positive changes in W− decrease W. We implement a small weight decay on both synapses. Results from this circuit are shown in Figure 4(b).

3.3 Clustering Primitive

We tested a new clustering primitive that is based on the adaptive bump circuit introduced in [10]. The circuit performs two functions: 1) it computes the similarity between an input and a stored value, and 2) it adapts the stored value to decrease its distance to the input. This adaptive bump circuit exhibits improved adaptation over previous versions [10, 13] due to the inclusion of the autonulling differential pair [14], shown in Figure 5(a) (top). The autonulling differential pair ensures that the adaptation process increases the similarity between the stored mean and the input. The data in Figure 5(b) show the clustering primitive adapting to an input that is initially distant from the stored value. The result of this adaptation is that, over time, the circuit learns to produce a maximal output response at the present input. This circuit was easily prototyped in the FPLA: creating the configuration file took less than one hour, experimental setup took another hour, and data was produced within two additional hours.
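As a software analogue of the clustering primitive, the following sketch pairs a bump-shaped similarity with a mean-adaptation step. The Gaussian response and the adaptation rate are illustrative assumptions, not the circuit's actual transfer function.

```python
import math

# Software analogue of the adaptive bump circuit: it reports a bump-shaped
# similarity between the input and a stored mean, and moves the mean toward
# each presented input. The Gaussian response curve and the adaptation rate
# are illustrative assumptions.

class AdaptiveBump:
    def __init__(self, mean=0.0, width=0.5, rate=0.2):
        self.mean, self.width, self.rate = mean, width, rate

    def similarity(self, x):
        return math.exp(-((x - self.mean) / self.width) ** 2)

    def adapt(self, x):
        # step toward the input; the absolute step size (adaptation strength)
        # shrinks as the stored value approaches the input
        self.mean += self.rate * (x - self.mean)

bump = AdaptiveBump(mean=-1.0)
before = bump.similarity(1.0)
for _ in range(30):
    bump.adapt(1.0)
after = bump.similarity(1.0)
assert after > before            # peak response moved toward the input
assert abs(bump.mean - 1.0) < 0.01
```

This mirrors the behavior in Figure 5(b): repeated adaptation moves the peak response toward the presented input, with decreasing strength as the stored value converges.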
Instead of waiting several months for chip fabrication, we were able to produce experimental results from a chip in under four hours. Moreover, the results are a more accurate model of actual circuit behavior than a SPICE simulation.

Figure 5: (a) Clustering primitive. This circuit can: 1) compute the similarity between the stored value and the input, and 2) adapt the stored value to decrease its distance to the input. (b) Experimental data. This plot shows that circuit adaptation moves the circuit's peak response toward the presented input. Adaptation strength decreases as the stored value approaches the input.

4 Future Work

The chip that we developed is effective for prototyping single learning primitives, but it is too small for solving real machine-learning problems. An FPLA that targets machine-learning algorithms requires PLBs that comprise higher-level functions, such as the primitives presented in the previous section. To scale up our design for machine-learning applications, we will make the following improvements to our prototype. First, to reduce the size of the PLBs, we will increase the ratio of computational circuitry to switching circuitry by replacing low-level functions such as current mirrors and synapse transistors with higher-level primitives such as those mentioned in the previous section. Second, we will increase the number of PLBs in the design, which will require an efficient and scalable global interconnect structure. We will base our revisions on commercial FPGA architectures and other well-known on-chip communication schemes. Third, we will improve the I/O structures to enable multichip systems. Finally, we have begun work on the design compiler, a software tool that maps machine-learning algorithms to an FPLA configuration.
5 Conclusions

Because of the match between the parallelism offered by hardware and the parallelism inherent in machine-learning algorithms, mixed analog-digital VLSI is a promising substrate for machine-learning implementations. However, custom VLSI solutions are costly, inflexible, and difficult to design. To overcome these limitations, we have proposed Field-Programmable Learning Arrays, a viable reconfigurable architecture for prototyping machine-learning algorithms in hardware. FPLAs combine elements of FPGAs, analog VLSI, and on-chip learning to provide a scalable and cost-effective solution for learning in silicon. Our results show that our prototype core and interconnect can effectively implement existing learning primitives and assist in the development of new circuits. An enhanced version of the FPLA, currently under development, will support complex learning algorithms.

Acknowledgments

This work was supported by ONR grant #N00014-01-1-0566 and an Intel Fellowship. Chips were fabricated by the MOSIS service.

References

[1] C. Diorio, D. Hsu, and M. Figueroa, "Adaptive CMOS: From biological inspiration to systems-on-a-chip," Proceedings of the IEEE, vol. 90, no. 3, pp. 345-357, 2002.
[2] J. B. Burr, "Digital Neural Network Implementations," in Neural Networks: Concepts, Applications, and Implementations, Volume 2 (P. Antognetti and V. Milutinovic, eds.), pp. 237-285, Prentice Hall, 1991.
[3] S. Satyanarayana, Y. Tsividis, and H. Graf, "A reconfigurable VLSI neural network," IEEE Journal of Solid-State Circuits, vol. 27, January 1992.
[4] R. Coggins, M. Jabri, B. Flower, and S. Pickard, "ICEG morphology classification using an analogue VLSI neural network," in Advances in Neural Information Processing Systems 7, pp. 731-738, MIT Press, 1995.
[5] M. Holler, S. Tam, H. Castro, and R. Benson, "An electrically trainable artificial neural network with 10240 'floating gate' synapses," in Proceedings of the International Joint Conference on Neural Networks (IJCNN89), vol.
2, (Washington, D.C.), pp. 191-196, 1989.
[6] E. K. F. Lee and P. G. Gulak, "A CMOS field programmable analog array," IEEE Journal of Solid-State Circuits, vol. 26, December 1991.
[7] A. Montalvo, R. Gyurcsik, and J. Paulos, "An analog VLSI neural network with on-chip learning," IEEE Journal of Solid-State Circuits, vol. 32, no. 4, 1997.
[8] R. Genov and G. Cauwenberghs, "Stochastic mixed-signal VLSI architecture for high-dimensional kernel machines," in Advances in Neural Information Processing Systems 14 (T. G. Dietterich, S. Becker, and Z. Ghahramani, eds.), (Cambridge, MA), MIT Press, 2002.
[9] J. Hyde, T. Humes, C. Diorio, M. Thomas, and M. Figueroa, "A floating-gate trimmed, 14-bit, 250 MS/s digital-to-analog converter in standard 0.25 µm CMOS," in Symposium on VLSI Circuits Digest of Technical Papers, pp. 328-331, 2002.
[10] D. Hsu, M. Figueroa, and C. Diorio, "A silicon primitive for competitive learning," in Advances in Neural Information Processing Systems 13 (T. K. Leen, T. G. Dietterich, and V. Tresp, eds.), pp. 713-719, MIT Press, 2001.
[11] A. P. Shon, D. Hsu, and C. Diorio, "Learning spike-based correlations and conditional probabilities in silicon," in Advances in Neural Information Processing Systems 14 (T. G. Dietterich, S. Becker, and Z. Ghahramani, eds.), (Cambridge, MA), MIT Press, 2002.
[12] C. Mead, Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley, 1989.
[13] P. Hasler, "Continuous-time feedback in floating-gate MOS circuits," IEEE Transactions on Circuits and Systems II, vol. 48, pp. 56-64, January 2001.
[14] D. Hsu, S. Bridges, and C. Diorio, "Adaptive quantization and density estimation in silicon," 2002. In submission.
2002
Boosting Density Estimation

Saharon Rosset, Department of Statistics, Stanford University, Stanford, CA 94305, saharon@stat.stanford.edu
Eran Segal, Computer Science Department, Stanford University, Stanford, CA 94305, eran@cs.stanford.edu

Abstract

Several authors have suggested viewing boosting as a gradient descent search for a good fit in function space. We apply gradient-based boosting methodology to the unsupervised learning problem of density estimation. We show convergence properties of the algorithm and prove that a strength of weak learnability property applies to this problem as well. We illustrate the potential of this approach through experiments with boosting Bayesian networks to learn density models.

1 Introduction

Boosting is a method for incrementally building linear combinations of "weak" models to generate a "strong" predictive model. Given data x_1, …, x_n, a basis (or dictionary) of weak learners F, and a loss function λ, a boosting algorithm sequentially finds models f_1, …, f_T ∈ F and constants α_1, …, α_T to minimize Σ_i λ(Σ_t α_t f_t(x_i)). AdaBoost [6], the original boosting algorithm, was specifically devised for the task of classification, where the data are pairs (x_i, y_i) with y_i ∈ {−1, 1} and λ(Σ_t α_t f_t(x_i)) = exp(−y_i Σ_t α_t f_t(x_i)). AdaBoost sequentially fits weak learners on re-weighted versions of the data, where the weights are determined according to the performance of the model so far, emphasizing the more "challenging" examples. Its inventors attribute its success to the "boosting" effect which the linear combination of weak learners achieves when compared to their individual performance. This effect manifests itself both in training data performance, where the boosted model can be shown to converge, under mild conditions, to ideal training classification, and in generalization error, where the success of boosting has been attributed to its "separating", or margin maximizing, properties [18].
It has been shown [8, 13] that AdaBoost can be described as a gradient descent algorithm, where the weights in each step of the algorithm correspond to the gradient of an exponential loss function at the "current" fit. In a recent paper, [17] show that the margin maximizing properties of AdaBoost can be derived in this framework as well. This view of boosting as gradient descent has allowed several authors [7, 13, 21] to suggest "gradient boosting machines" which apply to a wider class of supervised learning problems and loss functions than the original AdaBoost. Their results have been very promising.

In this paper we apply gradient boosting methodology to the unsupervised learning problem of density estimation, using the negative log-likelihood loss criterion λ(Σ_t α_t f_t(x)) = −log(Σ_t α_t f_t(x)). The density estimation problem has been studied extensively in many contexts using various parametric and non-parametric approaches [2, 5]. A particular framework which has recently gained much popularity is that of Bayesian networks [11], whose main strength stems from their graphical representation, allowing for highly interpretable models. More recently, researchers have developed methods for learning Bayesian networks from data, including learning in the context of incomplete data. We use Bayesian networks as our choice of weak learners, combining the models using the boosting methodology. We note that several researchers have considered learning weighted mixtures of networks [14], or ensembles of Bayesian networks combined by model averaging [9, 20].

We describe a generic density estimation boosting algorithm, following the approach of [13]. The main idea is to identify, at each boosting iteration, the basis function f_t which gives the largest "local" improvement in the loss at the current fit. Intuitively, f_t assigns higher probability to instances that received low probability under the current model.
A line search is then used to find an appropriate coefficient for the newly selected f_t, and it is added to the current model. We provide a theoretical analysis of our density estimation boosting algorithm, showing an explicit condition which, if satisfied, guarantees that adding a weak learner to the model improves the training set loss. We also prove a "strength of weak learnability" theorem which gives lower bounds on overall training loss improvement as a function of the individual weak learners' performance on re-weighted versions of the training data. We describe the instantiation of our generic boosting algorithm for the case of using Bayesian networks as our basis of weak learners and provide experimental results on two distinct data sets, showing that our algorithm achieves higher generalization on unseen data as compared to a single Bayesian network and one particular ensemble of Bayesian networks. We also show that our theoretical criterion for a weak learner to improve the overall model applies well in practice.

2 A density estimation boosting algorithm

At each step in a boosting algorithm, the model built so far is F_{t−1}(x) = Σ_{k<t} α_k f_k(x). If we now choose a weak learner f_t and add it to our model with a small coefficient ε, then expanding the training loss of the new model F_t = F_{t−1} + ε f_t in a Taylor series around the loss at F_{t−1} gives

Σ_i λ(F_t(x_i)) = Σ_i λ(F_{t−1}(x_i)) + ε Σ_i λ'(F_{t−1}(x_i)) f_t(x_i) + O(ε²),

which in the case of negative log-likelihood loss can be written as

Σ_i −log F_t(x_i) = Σ_i −log F_{t−1}(x_i) − ε Σ_i f_t(x_i)/F_{t−1}(x_i) + O(ε²).

Since ε is small, we can ignore the second-order term and choose the next boosting step f_t to maximize Σ_i f_t(x_i)/F_{t−1}(x_i). We are thus finding the first-order optimal weak learner, which gives the "steepest descent" in the loss at the current model predictions. However, we should note that once ε becomes non-infinitesimal, no "optimality" property can be claimed for this selected f_t.
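The first-order selection criterion can be checked numerically: for the convex-combination step F ← (1−ε)F + εf used for density estimation, the drop in negative log-likelihood for small ε is close to ε(Σ_i f(x_i)/F(x_i) − n). The two-point domain and candidate f below are illustrative choices.

```python
import math

# Numeric check of the first-order expansion: for a small step eps, the drop
# in negative log-likelihood under F <- (1-eps)F + eps*f is approximately
# eps * (sum_i f(x_i)/F(x_i) - n). Domain and candidate f are illustrative.

data = [0] * 7 + [1] * 3
F = {0: 0.5, 1: 0.5}                 # current model
f = {0: 0.7, 1: 0.3}                 # candidate weak learner
eps = 1e-4

nll_before = -sum(math.log(F[x]) for x in data)
mix = {x: (1 - eps) * F[x] + eps * f[x] for x in F}
nll_after = -sum(math.log(mix[x]) for x in data)

n = len(data)
first_order = eps * (sum(f[x] / F[x] for x in data) - n)
assert first_order > 0                                   # f improves the fit
assert abs((nll_before - nll_after) - first_order) < 1e-6
```

The agreement degrades as ε grows, which is exactly why no optimality claim survives a non-infinitesimal step.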
The main idea of gradient-based generic boosting algorithms, such as AnyBoost [13] and GradientBoost [7], is to utilize this first-order approach to find, at each step, the weak learner which gives a good improvement in the loss, and then follow the "direction" of this weak learner to augment the current model. The step size ε_t is determined in various ways in the different algorithms, the most popular choice being line search, which we adopt here.

When we consider applying this methodology to density estimation, where the basis F is comprised of probability distributions and the overall model F_t is a probability distribution as well, we cannot simply augment the model, since F_{t−1} + ε f_t will no longer be a probability distribution. Rather, we consider a step of the form F_t = (1 − ε_t) F_{t−1} + ε_t f_t, where 0 ≤ ε_t ≤ 1. It is easy to see that the first-order theory of gradient boosting and the line-search solution apply to this formulation as well.

If at some stage t, the current F_t cannot be improved by adding any of the weak learners as above, the algorithm terminates, and we have reached a global minimum. This can only happen if the derivative of the loss at the current model with respect to the coefficient of each weak learner is non-negative:

∂/∂ε [ Σ_i −log((1−ε) F_t(x_i) + ε f(x_i)) ] at ε = 0 equals n − Σ_i f(x_i)/F_t(x_i) ≥ 0 for all f ∈ F.

Thus, the algorithm terminates if no f ∈ F gives Σ_i f(x_i)/F_t(x_i) > n (see Section 3 for proof and discussion). The resulting generic gradient boosting algorithm for density estimation can be seen in Fig. 1. Implementation details for this algorithm include the choice of the family of weak learners F, and the method for searching for f_t at each boosting iteration. We address these details in Section 4.

1. Set F_0 to uniform on the domain of x
2. For t = 1 to T:
   (a) Set w_i = 1/F_{t−1}(x_i)
   (b) Find f_t ∈ F to maximize Σ_i w_i f_t(x_i)
   (c) If Σ_i w_i f_t(x_i) ≤ n, break
   (d) Find ε_t = argmin over ε of Σ_i −log((1−ε) F_{t−1}(x_i) + ε f_t(x_i))
   (e) Set F_t = (1−ε_t) F_{t−1} + ε_t f_t
3.
Output the final model F_T

Figure 1: Boosting density estimation algorithm

3 Training data performance

The concept of "strength of weak learnability" [6, 18] has been developed in the context of boosting classification models. Conceptually, this property can be described as follows: "if for any weighting w_1, …, w_n of the training data, there is a weak learner f which achieves weighted training error slightly better than random guessing on the re-weighted version of the data using these weights, then the combined boosted learner will have vanishing error on the training data". In classification, this concept is realized elegantly. At each step in the algorithm, the weighted error of the previous model, using the new weights, is exactly 1/2. Thus, the new weak learner doing "better than random" on the re-weighted data means it can improve the previous weak learner's performance at the current fit, by achieving weighted classification error better than 1/2. In fact, it is easy to show that the weak learnability condition, namely that at least one weak learner attains classification error less than 1/2 on the re-weighted data, fails to hold only if the current combined model is the optimal solution in the space of linear combinations of weak learners.

We now derive a similar formulation for our density estimation boosting algorithm. We start with a quantitative description of the performance of the previous weak learner f_t at the combined model F_t, given in the following lemma:

Lemma 1 Using the algorithm of section 2 we get Σ_i f_t(x_i)/F_t(x_i) = Σ_i F_{t−1}(x_i)/F_t(x_i) = n, where n is the number of training examples.
Proof: The line search (step 2(d) in the algorithm) implies

0 = ∂/∂ε [ Σ_i −log((1−ε) F_{t−1}(x_i) + ε f_t(x_i)) ] at ε = ε_t = −Σ_i (f_t(x_i) − F_{t−1}(x_i)) / F_t(x_i),

so Σ_i f_t(x_i)/F_t(x_i) = Σ_i F_{t−1}(x_i)/F_t(x_i). Since F_t = (1−ε_t) F_{t−1} + ε_t f_t, we also have n = Σ_i F_t(x_i)/F_t(x_i) = (1−ε_t) Σ_i F_{t−1}(x_i)/F_t(x_i) + ε_t Σ_i f_t(x_i)/F_t(x_i), and hence both sums equal n.

Lemma 1 allows us to derive the following stopping criterion (or optimality condition) for the boosting algorithm, illustrating that in order to improve training set loss, the new weak learner only has to exceed the previous one's performance at the current fit.

Theorem 1 If there does not exist a weak learner f ∈ F such that Σ_i f(x_i)/F_t(x_i) > n, then F_t is the global minimum of the training loss over the domain of normalized linear combinations of F: { Σ_k β_k g_k : g_k ∈ F, β_k ≥ 0, Σ_k β_k = 1 }.

Proof: This is a direct result of the optimality conditions for a convex function (in this case −log) in a compact domain.

So unless we have reached the global optimum in the simplex within F (which will generally happen quickly only if F is very small, i.e. the "weak" learners are very weak), we will have some weak learners doing better than "random", attaining Σ_i f(x_i)/F_t(x_i) > n. If this is indeed the case, we can derive an explicit lower bound for training set loss improvement as a function of the new weak learner's performance at the current model:

Theorem 2 Assume:
1. the sequence of selected weak learners in the algorithm of section 2 satisfies Σ_i f_t(x_i)/F_{t−1}(x_i) = n(1 + δ_t);
2. the ratios f_t(x_i)/F_{t−1}(x_i) are bounded below by a constant a > 0.
Then Σ_i log F_t(x_i) − Σ_i log F_{t−1}(x_i) ≥ c δ_t², for a constant c > 0 depending only on a.

Proof (sketch): By optimality of the line search, the loss at F_t is no larger than the loss at (1−ε) F_{t−1} + ε f_t for any fixed ε ∈ [0, 1]. Writing each term as −log((1−ε) + ε f_t(x_i)/F_{t−1}(x_i)) and bounding it using assumptions 1 and 2, for a choice of ε proportional to δ_t, yields the stated bound.

The second assumption of theorem 2 may not seem obvious, but it is actually quite mild. With a bit more notation we could get rid of the need to lower bound the ratios f_t(x_i)/F_{t−1}(x_i) completely.
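Lemma 1 and the Figure 1 loop can be exercised end to end on a toy discrete domain. The "overconfident histogram" weak learner (f proportional to the squared aggregated weights) and the bisection line search below are illustrative stand-ins for the Bayesian-network learners and search used in the paper; the asserts check that both sums in Lemma 1 equal n after each interior line search.

```python
from collections import Counter

# Toy run of the Figure 1 loop on a discrete domain, checking Lemma 1 after
# each interior line search: sum_i f_t(x_i)/F_t(x_i) and
# sum_i F_{t-1}(x_i)/F_t(x_i) both equal n. The weak learner and line search
# are illustrative stand-ins for those used in the paper.

def line_search(F, f, data, iters=80):
    """Minimize -sum log((1-eps)F + eps*f) over eps in [0,1] by bisecting on
    the derivative, which is increasing in eps because the loss is convex."""
    def dloss(eps):
        return -sum((f[x] - F[x]) / ((1 - eps) * F[x] + eps * f[x])
                    for x in data)
    if dloss(1.0) <= 0:                 # loss still decreasing at eps = 1
        return 1.0
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if dloss(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

def boost_density(data, domain, T=10):
    n = len(data)
    F = {x: 1.0 / len(domain) for x in domain}            # step 1: F_0 uniform
    for _ in range(T):
        W = Counter()
        for x in data:
            W[x] += 1.0 / F[x]                            # steps 2(a)-2(b)
        norm = sum(W[x] ** 2 for x in domain)
        f = {x: W[x] ** 2 / norm for x in domain}         # "overconfident" learner
        if sum(f[x] / F[x] for x in data) <= n:           # step 2(c)
            break
        eps = line_search(F, f, data)                     # step 2(d)
        F_new = {x: (1 - eps) * F[x] + eps * f[x] for x in domain}  # step 2(e)
        if 0 < eps < 1:                                   # Lemma 1 check
            assert abs(sum(f[x] / F_new[x] for x in data) - n) < 1e-6
            assert abs(sum(F[x] / F_new[x] for x in data) - n) < 1e-6
        F = F_new
    return F

counts_data = [0] * 60 + [1] * 30 + [2] * 10
model = boost_density(counts_data, domain=[0, 1, 2])
assert model[0] > model[1] > model[2]
assert abs(sum(model.values()) - 1.0) < 1e-9
```

The loop stops once no weak learner beats the threshold n, illustrating the stopping criterion of Theorem 1.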
Intuitively, a boosting algorithm will not let any observation have exceptionally low probability over time, since that would cause this observation to have overwhelming weight in the next boosting iteration, and hence the next selected f_t is certain to give it high probability. Thus, after some iterations we can assume that we would actually have a threshold a independent of the iteration number, and hence the loss would decrease at least as the sum of squares of the "weak learnability" quantities δ_t.

4 Boosting Bayesian Networks

We now focus our attention on a specific application of the boosting methodology for density estimation, using Bayesian networks as the weak learners. A Bayesian network is a graphical model for describing a joint distribution over a set of random variables. Recently, there has been much work on developing algorithms for learning Bayesian networks (both network structure and parameters) from data for the task of density estimation, and hence they seem appropriate as our choice of weak learners. Another advantage of Bayesian networks in our context is the ability to tune the strength of the weak learners using parameters such as the number of edges and the strength of the prior.

Assume we have categorical data x_1, …, x_n in a domain X, where each observation contains assignments to d variables. We rewrite step 2(b) of the boosting algorithm as:

(b) Find f_t ∈ F to maximize Σ_{x ∈ X} W_x f_t(x), where W_x = Σ_{i: x_i = x} w_i.

In this formulation, all possible values of x have weights, some of which may be 0. As mentioned above, the two main implementation-specific details in the generic density estimation algorithm are the set of weak models F and the method for searching for the "optimal" weak model f_t at each boosting iteration. When boosting Bayesian networks, a natural way of limiting the "strength" of the weak learners is to limit the complexity of the network structures in F.
This can be done, for instance, by bounding the number of edges in each "weak density estimator" learned during the boosting iterations. The problem of finding an "optimal" weak model at each boosting iteration (step 2(b) of the algorithm) is trickier. We first note that if we only impose an L1 constraint on the norm of f_t (specifically, the PDF constraint Σ_x f_t(x) = 1), then step 2(b) has a trivial solution, concentrating all the probability at the value of x with the highest weight: f_t(x) = 1 iff x = argmax_x W_x. This phenomenon is not limited to the density estimation case, and would appear in boosting for classification if the set of weak learners had a fixed L1 norm, rather than the fixed L∞ norm implicitly imposed by limiting F to contain classifiers. This consequence of limiting F to contain probability distributions is particularly problematic when boosting Bayesian networks, since such an f_t can be represented with a fully disconnected network. Thus, limiting F to "simple" structures does not by itself amend this problem.

However, the boosting algorithm does not explicitly require F to include only probability distributions. Let us consider instead a somewhat different family of candidate models, with an implicit L2 size constraint, rather than L1 as in the case of probability distributions (note that using an L∞ constraint as in AdaBoost is not possible, since the trivial optimal solution would be f ≡ 1). For the unconstrained "distribution" case (corresponding to a fully connected Bayesian network), this leads to re-writing step 2(b) of the boosting algorithm as:

(b') Find f to maximize Σ_x W_x f(x), subject to Σ_x f(x)² = 1.

By considering the Lagrange multiplier version of this problem, it is easy to see that the optimal solution is f*(x) = W_x / sqrt(Σ_x W_x²), which is proportional to the optimal solution of the log-likelihood maximization problem:

(b'') Find f to maximize Σ_x W_x log f(x), subject to Σ_x f(x) = 1,

given by f_MLE(x) = W_x / Σ_x W_x.
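The two constrained solutions just derived can be verified numerically: the L2-constrained maximizer of the linear objective and the simplex-constrained maximizer of the weighted log-likelihood are proportional to each other. The toy weights below are illustrative.

```python
import math

# The two "optimal" weak learners: under the L2 constraint the maximizer of
# sum_x W_x f(x) is W_x / sqrt(sum W^2); under the simplex constraint the
# maximizer of sum_x W_x log f(x) is W_x / sum W. Both are proportional to
# the aggregated weights W_x. The toy weights are illustrative.

W = {0: 4.0, 1: 2.0, 2: 1.0}
l2 = math.sqrt(sum(w * w for w in W.values()))
f_l2 = {x: w / l2 for x, w in W.items()}                  # L2 solution
f_mle = {x: w / sum(W.values()) for x, w in W.items()}    # weighted MLE

# proportionality: the ratio f_l2 / f_mle is the same constant for every x
ratios = [f_l2[x] / f_mle[x] for x in W]
assert max(ratios) - min(ratios) < 1e-12
assert abs(sum(f_mle.values()) - 1.0) < 1e-12             # MLE is a PDF
```

This proportionality is what licenses solving the weighted maximum-likelihood problem, which standard Bayesian-network learners already handle, in place of the L2-constrained linear one.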
This fact points to an interesting correspondence between solutions to L2-constrained linear optimization problems and L1-constrained log optimization problems, and leads us to believe that good solutions to step (b') of the boosting algorithm can be approximated by solving step (b'') instead. The formulation given in (b'') presents us with a problem that is natural for Bayesian network learning: maximizing the log-likelihood (in this case the weighted log-likelihood Σ_x W_x log f(x)) of the data given the structure. Our implementation of the boosting algorithm, therefore, does indeed limit F to include probability distributions only, in this case those that can be represented by "simple" Bayesian networks. It solves a constrained version of step (b'') instead of the original version (b). Note that this use of "surrogate" optimization tasks is not alien to other boosting applications as well. For example, AdaBoost calls for optimizing a re-weighted classification problem at each step; decision trees, the most popular boosting weak learners, search for "optimal" solutions using surrogate loss functions, such as the Gini index for CART [3] or information gain for C4.5 [16].

Figure 2: (a) Comparison of boosting, single Bayesian network and AutoClass performance on the genomic expression dataset. The average log-likelihood for each test set instance is plotted. (b) Same as (a) for the census dataset. Results for AutoClass were omitted as they were not competitive in this domain (see text). (c) The weak learnability condition is plotted along with training data performance for the genomic expression dataset. The plot is in log scale and also includes log(n) as a reference, where n is the number of training instances. (d) Same as (c) for the census dataset.

5 Experimental Results

We evaluated the performance of our algorithms on two distinct datasets: a genomic expression dataset and a US census dataset. In gene expression data, the level of mRNA transcript of every gene in the cell is measured simultaneously using DNA microarray technology, allowing researchers to detect functionally related genes based on the correlation of their expression profiles across the various experiments. We combined three yeast expression data sets [10, 12, 19] for a total of 550 expression experiments. To test our methods on a set of correlated variables, we selected 56 genes associated with the oxidative phosphorylation pathway in the KEGG database [1]. We discretized the expression measurements of each gene into three levels (down, same, up) as in [15]. We obtained the 1990 US census data set from the UC Irvine data repository (http://kdd.ics.uci.edu/databases/census1990/USCensus1990.html). The data set includes 68 discretized attributes such as age, income, occupation, work status, etc. We randomly selected 5k entries from the 2.5M available entries in the entire data set. Each of the data sets was randomly partitioned into 5 equally sized sets, and our boosting algorithm was learned from each of the 5 possible combinations of 4 partitions.
The performance of each boosting model was evaluated by measuring the log-likelihood achieved on the data instances in the left-out partition. We compared the performance achieved to that of a single Bayesian network learned using standard techniques (see [11] and references therein). To test whether our boosting approach gains its performance primarily by using an ensemble of Bayesian networks, we also compared the performance to that achieved by an ensemble of Bayesian networks learned using AutoClass [4], varying the number of classes from 2 to 100. We report results for the setting of AutoClass achieving the best performance. The results are reported as the average log-likelihood measured for each instance in the test data and are summarized in Fig. 2(a,b). We omit the results of AutoClass for the census data, as its average test-instance log-likelihood was substantially lower than that of boosting and of a single Bayesian network. As can be seen, our boosting algorithm performs significantly better, rendering each instance in the test data several times more likely than it is under the other approaches, in both the genomic and census datasets.

To illustrate the theoretical concepts discussed in Section 3, we recorded the performance of our boosting algorithm on the training set for both data sets. As shown in Section 3, if Σ_i f_t(x_i)/F_{t−1}(x_i) > n, then adding f_t to the model is guaranteed to improve our training set performance. Theorem 2 relates the magnitude of this difference to the amount of improvement in training set performance. Fig. 2(c,d) plots the weak learnability quantity Σ_i f_t(x_i)/F_{t−1}(x_i), the training set log-likelihood, and the threshold n for both data sets on a log scale. As can be seen, the theory matches nicely: the improvement is large when the weak learnability quantity is large, and stops entirely once it asymptotes to n. Finally, boosting theory tells us that the effect of boosting is more pronounced for "weaker" weak learners.
To that end, we experimented (data not shown) with various strength parameters for the family of weak learners (the number of allowed edges in each Bayesian network, and the strength of the prior). As expected, the overall effect of boosting was much stronger for weaker learners.

6 Discussion and future work

In this paper we extended the boosting methodology to the domain of density estimation and demonstrated its practical performance on real-world datasets. We believe that this direction shows promise and hope that our work will lead to other boosting implementations in density estimation, as well as in other function estimation domains. Our theoretical results include an exposition of the training data performance of the generic algorithm, proving results analogous to those in the case of boosting for classification. Of particular interest is theorem 1, implying that the idealized algorithm converges, asymptotically, to the global minimum. This result is interesting, as it implies that the greedy boosting algorithm converges to the exhaustive solution. However, this global minimum is usually not a good solution in terms of test-set performance, as it will tend to overfit (especially if F is not very small). Boosting can be described as generating a regularized path to this optimal solution [17], and thus we can assume that points along the path will usually have better generalization performance than the non-regularized optimum.

In Section 4 we described the theoretical and practical difficulties in solving the optimization step of the boosting iterations (step 2(b)). We suggested replacing it with a more easily solvable log-optimization problem, a replacement that can be partly justified by theoretical arguments. However, it will be interesting to formulate other cases where the original problem has non-trivial solutions, for instance by not limiting F to probability distributions only, and using non-density-estimation algorithms to generate the "weak" models f_t.
The popularity of Bayesian networks as density estimators stems from their intuitive interpretation as describing causal relations in data. However, when learning the network structure from data, a major issue is assigning confidence to the learned features. A potential use of boosting could be in improving interpretability and reducing instability in structure learning. If the weak models in F are limited to a small number of edges, we can collect and interpret the "total influence" of edges in the combined model. This seems like a promising avenue for future research, which we intend to pursue.

Acknowledgements

We thank Jerry Friedman, Daphne Koller and Christian Shelton for useful discussions. E. Segal was supported by a Stanford Graduate Fellowship (SGF).

References

[1] KEGG: Kyoto encyclopedia of genes and genomes. http://www.genome.ad.jp/kegg.
[2] C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, Oxford, U.K., 1995.
[3] L. Breiman, J. H. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth International Group, 1984.
[4] P. Cheeseman and J. Stutz. Bayesian Classification (AutoClass): Theory and Results. AAAI Press, 1995.
[5] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. John Wiley & Sons, New York, 1973.
[6] Y. Freund and R. E. Schapire. A decision theoretic generalization of on-line learning and an application to boosting. In the 2nd European Conference on Computational Learning Theory, 1995.
[7] J. H. Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, Vol. 29, No. 5, 2001.
[8] J. H. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Statistics, Vol. 28, pp. 337-407, 2000.
[9] N. Friedman and D. Koller. Being Bayesian about network structure: A Bayesian approach to structure discovery in Bayesian networks. Machine Learning Journal, 2002.
[10] A. P. Gasch, P. T. Spellman, C. M.
Kao, O. Carmel-Harel, M. B. Eisen, G. Storz, D. Botstein, and P. O. Brown. Genomic expression programs in the response of yeast cells to environmental changes. Mol. Biol. Cell, 11:4241-4257, 2000. [11] D. Heckerman. A tutorial on learning with Bayesian networks. In M. I. Jordan, editor, Learning in Graphical Models. MIT Press, Cambridge, MA, 1998. [12] T. R. Hughes et al. Functional discovery via a compendium of expression profiles. Cell, 102(1):109-26, 2000. [13] L. Mason, J. Baxter, P. Bartlett, and M. Frean. Boosting algorithms as gradient descent in function space. In Proc. NIPS, number 12, pages 512-518, 1999. [14] M. Meila and T. Jaakkola. Tractable Bayesian learning of tree belief networks. Technical Report CMU-RI-TR-00-15, Robotics Institute, Carnegie Mellon University, 2000. [15] D. Pe'er, A. Regev, G. Elidan, and N. Friedman. Inferring subnetworks from perturbed expression profiles. In ISMB'01, 2001. [16] J. R. Quinlan. C4.5 - Programs for Machine Learning. Morgan Kaufmann, 1993. [17] S. Rosset, J. Zhu, and T. Hastie. Boosting as a regularized path to a margin maximizer. Submitted to NIPS 2002. [18] R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee. Boosting the margin: a new explanation for the effectiveness of voting methods. Annals of Statistics, Vol. 26, No. 5, 1998. [19] P. T. Spellman et al. Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization. Mol. Biol. Cell, 9(12):3273-97, 1998. [20] B. Thiesson, C. Meek, and D. Heckerman. Learning mixtures of DAG models. Technical Report MSR-TR-98-12, Microsoft Research, 1997. [21] R. S. Zemel and T. Pitassi. A gradient-based boosting algorithm for regression problems. In Proc. NIPS, 2001.
2002
Support Vector Machines for Multiple-Instance Learning Stuart Andrews, Ioannis Tsochantaridis and Thomas Hofmann Department of Computer Science, Brown University, Providence, RI 02912 {stu,it,th}@cs.brown.edu Abstract This paper presents two new formulations of multiple-instance learning as a maximum margin problem. The proposed extensions of the Support Vector Machine (SVM) learning approach lead to mixed integer quadratic programs that can be solved heuristically. Our generalization of SVMs makes a state-of-the-art classification technique, including non-linear classification via kernels, available to an area that up to now has been largely dominated by special-purpose methods. We present experimental results on a pharmaceutical data set and on applications in automated image indexing and document categorization. 1 Introduction Multiple-instance learning (MIL) [4] is a generalization of supervised classification in which training class labels are associated with sets of patterns, or bags, instead of individual patterns. While every pattern may possess an associated true label, it is assumed that pattern labels are only indirectly accessible through labels attached to bags. The law of inheritance is such that a set receives a particular label if at least one of the patterns in the set possesses the label. In the important case of binary classification, this implies that a bag is "positive" if at least one of its member patterns is a positive example. MIL differs from the general set-learning problem in that the set-level classifier is by design induced by a pattern-level classifier. Hence the key challenge in MIL is to cope with the ambiguity of not knowing which of the patterns in a positive bag are the actual positive examples and which ones are not. The MIL setting has numerous interesting applications. One prominent application is the classification of molecules in the context of drug design [4].
Here, each molecule is represented by a bag of possible conformations. The efficacy of a molecule can be tested experimentally, but there is no way to control for individual conformations. A second application is in image indexing for content-based image retrieval. Here, an image can be viewed as a bag of local image patches [9] or image regions. Since annotating whole images is far less time-consuming than marking relevant image regions, the ability to deal with this type of weakly annotated data is very desirable. Finally, consider the problem of text categorization, for which we are the first to apply the MIL setting. Usually, documents which contain a relevant passage are considered to be relevant with respect to a particular category or topic, yet class labels are rarely available on the passage level and are most commonly associated with the document as a whole. Formally, all of the above applications share the same type of label ambiguity, which in our opinion makes a strong argument in favor of the relevance of the MIL setting. We present two approaches to modify and extend Support Vector Machines (SVMs) to deal with MIL problems. The first approach explicitly treats the pattern labels as unobserved integer variables, subject to constraints defined by the (positive) bag labels. The goal then is to maximize the usual pattern margin, or soft-margin, jointly over hidden label variables and a linear (or kernelized) discriminant function. The second approach generalizes the notion of a margin to bags and aims at maximizing the bag margin directly. The latter seems most appropriate in cases where we mainly care about classifying new test bags, while the first approach seems preferable whenever the goal is to derive an accurate pattern-level classifier. In the case of singleton bags, both methods are identical and reduce to the standard soft-margin SVM formulation. Algorithms for the MIL problem were first presented in [4, 1, 7].
These methods (and related analytical results) are based on hypothesis classes consisting of axis-aligned rectangles. Similarly, methods developed subsequently (e.g., [8, 12]) have focused on specially tailored machine learning algorithms that do not compare favorably in the limiting case of the standard classification setting. A notable exception is [10]. More recently, a kernel-based approach has been suggested which derives MI-kernels on bags from a given kernel defined on the pattern level [5]. While the MI-kernel approach treats the MIL problem merely as a representational problem, we strongly believe that a deeper conceptual modification of SVMs as outlined in this paper is necessary. However, we share the ultimate goal with [5], which is to make state-of-the-art kernel-based classification methods available for multiple-instance learning. 2 Multiple-Instance Learning In statistical pattern recognition, it is usually assumed that a training set of labeled patterns is available, where each pair (x_i, y_i) ∈ ℝ^d × Y has been generated independently from an unknown distribution. The goal is to induce a classifier, i.e., a function from patterns to labels f : ℝ^d → Y. In this paper, we will focus on the binary case Y = {-1, 1}. Multiple-instance learning (MIL) generalizes this problem by making significantly weaker assumptions about the labeling information. Patterns are grouped into bags and a label is attached to each bag and not to every pattern. More formally, given is a set of input patterns x_1, ..., x_n grouped into bags B_1, ..., B_m, with B_I = {x_i : i ∈ I} for given index sets I ⊆ {1, ..., n} (typically non-overlapping). With each bag B_I is associated a label Y_I. These labels are interpreted in the following way: if Y_I = -1, then y_i = -1 for all i ∈ I, i.e., no pattern in the bag is a positive example. If on the other hand Y_I = 1, then at least one pattern x_i ∈ B_I is a positive example of the underlying concept.
Notice that the information provided by the label is asymmetric in the sense that a negative bag label induces a unique label for every pattern in a bag, while a positive label does not. In general, the relation between pattern labels y_i and bag labels Y_I can be expressed compactly as Y_I = max_{i∈I} y_i, or alternatively as a set of linear constraints

∑_{i∈I} (y_i + 1)/2 ≥ 1  ∀I s.t. Y_I = 1,  and  y_i = -1  ∀i ∈ I, I s.t. Y_I = -1.   (1)

Finally, let us call a discriminant function f : X → ℝ MI-separating with respect to a multiple-instance data set if sgn max_{i∈I} f(x_i) = Y_I holds for all bags B_I.

Figure 1 (panels (a) and (b)): Large margin classifiers for MIL. Negative patterns are denoted by "-" symbols, positive bag patterns by numbers that encode the bag membership. The figure to the left sketches the mi-SVM solution while the figure to the right shows the MI-SVM solution.

3 Maximum Pattern Margin Formulation of MIL We omit an introduction to SVMs and refer the reader to the excellent books on this topic, e.g. [11]. The mixed integer formulation of MIL as a generalized soft-margin SVM can be written as follows in primal form

mi-SVM:  min_{y_i} min_{w,b,ξ} (1/2)‖w‖² + C ∑_i ξ_i   (2)
s.t. ∀i: y_i(⟨w, x_i⟩ + b) ≥ 1 - ξ_i,  ξ_i ≥ 0,  y_i ∈ {-1, 1},  and (1) holds.

Notice that in the standard classification setting, the labels y_i of training patterns x_i would simply be given, while in (2) labels y_i of patterns x_i not belonging to any negative bag are treated as unknown integer variables. In mi-SVM one thus maximizes a soft-margin criterion jointly over possible label assignments as well as hyperplanes. Figure 1 (a) illustrates this idea for the separable case: we are looking for an MI-separating linear discriminant such that there is at least one pattern from every positive bag in the positive halfspace, while all patterns belonging to negative bags are in the negative halfspace.
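The bag-label relation Y_I = max_{i∈I} y_i and its linear-constraint form in Eq. (1) are easy to illustrate in a few lines. The snippet below is a toy sketch for intuition only (the function names are ours, not from the paper):

```python
# Toy illustration of the MIL label semantics: a bag is positive iff at
# least one member pattern is positive (labels are in {-1, +1}).

def bag_label(pattern_labels):
    """Y_I = max over the pattern labels y_i in the bag."""
    return max(pattern_labels)

def satisfies_constraints(pattern_labels, bag_label_value):
    """Linear-constraint form of Eq. (1): a positive bag needs at least
    one y_i = +1; a negative bag needs all y_i = -1."""
    if bag_label_value == 1:
        return sum((y + 1) // 2 for y in pattern_labels) >= 1
    return all(y == -1 for y in pattern_labels)

print(bag_label([-1, -1, 1]))                   # 1: one positive pattern suffices
print(satisfies_constraints([-1, -1, 1], 1))    # True
print(satisfies_constraints([-1, -1], -1))      # True
```

The asymmetry mentioned in the text is visible here: the negative-bag branch pins down every pattern label, while the positive-bag branch only demands one witness.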
At the same time, we would like to achieve the maximal margin with respect to the (completed) data set obtained by imputing labels for patterns in positive bags in accordance with Eq. (1). This is similar to the approach pursued in [6] and [3] for transductive inference. In the latter case, patterns are either labeled or unlabeled. Unlabeled data points are utilized to refine the decision boundary by maximizing the margin on all data points. While the labeling for each unlabeled pattern can be carried out independently in transductive inference, labels of patterns in positive bags are coupled in MIL through the inequality constraints. The mi-SVM formulation leads to a mixed integer programming problem. One has to find both the optimal labeling and the optimal hyperplane. On a conceptual level this mixed integer formulation captures exactly what MIL is about, i.e. to recover the unobserved pattern labels and to simultaneously find an optimal discriminant. Yet, this poses a computational challenge, since the resulting mixed integer programming problem cannot be solved efficiently with state-of-the-art tools, even for moderate-size data sets. We will present an optimization heuristic in Section 5. 4 Maximum Bag Margin Formulation of MIL An alternative way of applying maximum margin ideas to the MIL setting is to extend the notion of a margin from individual patterns to sets of patterns. It is natural to define the functional margin of a bag with respect to a hyperplane by

γ_I := Y_I max_{i∈I} (⟨w, x_i⟩ + b).   (3)

This generalization reflects the fact that predictions for bag labels take the form Y_I = sgn max_{i∈I} (⟨w, x_i⟩ + b). Notice that for a positive bag the margin is defined by the margin of the "most positive" pattern, while the margin of a negative bag is defined by the "least negative" pattern. The difference between the two formulations of maximum-margin problems is illustrated in Figure 1.
For the pattern-centered mi-SVM formulation, the margin of every pattern in a positive bag matters, although one has the freedom to set their label variables so as to maximize the margin. In the bag-centered formulation, only one pattern per positive bag matters, since it will determine the margin of the bag. Once these "witness" patterns have been identified, the relative position of other patterns in positive bags with respect to the classification boundary becomes irrelevant. Using the above notion of a bag margin, we define an MIL version of the soft-margin classifier by

MI-SVM:  min_{w,b,ξ} (1/2)‖w‖² + C ∑_I ξ_I   (4)
s.t. ∀I: Y_I max_{i∈I} (⟨w, x_i⟩ + b) ≥ 1 - ξ_I,  ξ_I ≥ 0.

For negative bags one can unfold the max operation by introducing one inequality constraint per pattern, yet with a single slack variable ξ_I. Hence the constraints on negative bag patterns, where Y_I = -1, read as -⟨w, x_i⟩ - b ≥ 1 - ξ_I, ∀i ∈ I. For positive bags, we introduce a selector variable s(I) ∈ I which denotes the pattern selected as the positive "witness" in B_I. This will result in constraints ⟨w, x_{s(I)}⟩ + b ≥ 1 - ξ_I. Thus we arrive at the following equivalent formulation

min_s min_{w,b,ξ} (1/2)‖w‖² + C ∑_I ξ_I   (5)
s.t. ∀I: Y_I = -1 ∧ -⟨w, x_i⟩ - b ≥ 1 - ξ_I, ∀i ∈ I,  or  Y_I = 1 ∧ ⟨w, x_{s(I)}⟩ + b ≥ 1 - ξ_I,  and ξ_I ≥ 0.   (6)

In this formulation, every positive bag B_I is thus effectively represented by a single member pattern x̄_I := x_{s(I)}. Notice that "non-witness" patterns (x_i, i ∈ I with i ≠ s(I)) have no impact on the objective. For given selector variables, it is straightforward to derive the dual objective function, which is very similar to the standard SVM Wolfe dual. The only major difference is that the box constraints for the Lagrange parameters α are modified compared to the standard SVM solution, namely one gets

0 ≤ α_I ≤ C  for I s.t. Y_I = 1,  and  0 ≤ ∑_{i∈I} α_i ≤ C  for I s.t. Y_I = -1.   (7)

Hence, the influence of each bag is bounded by C.
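The bag margin of Eq. (3) and the role of the "witness" pattern can be sketched directly in code. This is an illustrative snippet with helper names of our own choosing, assuming a plain linear discriminant f(x) = ⟨w, x⟩ + b:

```python
# Sketch of the functional bag margin gamma_I = Y_I * max_i (<w, x_i> + b)
# and of the witness pattern that attains the max for a positive bag.

def f(w, b, x):
    return sum(wj * xj for wj, xj in zip(w, x)) + b

def bag_margin(w, b, bag, Y):
    """Eq. (3): the bag margin is determined by the 'most positive' pattern."""
    return Y * max(f(w, b, x) for x in bag)

def witness(w, b, bag):
    """Index of the pattern attaining the max, i.e. the selector s(I)."""
    outputs = [f(w, b, x) for x in bag]
    return outputs.index(max(outputs))

w, b = [1.0, -1.0], 0.0
pos_bag = [[0.2, 0.1], [2.0, 0.5]]   # outputs 0.1 and 1.5
print(bag_margin(w, b, pos_bag, 1))  # 1.5
print(witness(w, b, pos_bag))        # 1
```

As the text notes, once the witness is fixed, moving the other bag patterns around does not change `bag_margin` as long as their outputs stay below the witness's output.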
5 Optimization Heuristics As we have shown, both formulations, mi-SVM and MI-SVM, can be cast as mixed-integer programs. In deriving optimization heuristics, we exploit the fact that for given integer variables, i.e. the hidden labels in mi-SVM and the selector variables in MI-SVM, the problem reduces to a QP that can be solved exactly. Of course, all the derivations also hold for general kernel functions K.

Figure 2: Pseudo-code for mi-SVM optimization heuristics (synchronous update).
initialize y_i = Y_I for i ∈ I
REPEAT
  compute SVM solution w, b for data set with imputed labels
  compute outputs f_i = ⟨w, x_i⟩ + b for all x_i in positive bags
  set y_i = sgn(f_i) for every i ∈ I, Y_I = 1
  FOR (every positive bag B_I)
    IF (∑_{i∈I} (1 + y_i)/2 == 0)
      compute i* = argmax_{i∈I} f_i
      set y_{i*} = 1
    END
  END
WHILE (imputed labels have changed)
OUTPUT (w, b)

Figure 3: Pseudo-code for MI-SVM optimization heuristics (synchronous update).
initialize x̄_I = ∑_{i∈I} x_i / |I| for every positive bag B_I
REPEAT
  compute QP solution w, b for data set with positive examples {x̄_I : Y_I = 1}
  compute outputs f_i = ⟨w, x_i⟩ + b for all x_i in positive bags
  set x̄_I = x_{s(I)}, s(I) = argmax_{i∈I} f_i for every I, Y_I = 1
WHILE (selector variables s(I) have changed)
OUTPUT (w, b)

A general scheme for a simple optimization heuristic may be described as follows. Alternate the following two steps: (i) for given integer variables, solve the associated QP and find the optimal discriminant function; (ii) for a given discriminant, update one, several, or all integer variables in a way that (locally) minimizes the objective. The latter step may involve the update of a label variable y_i of a single pattern in mi-SVM, the update of a single selector variable s(I) in MI-SVM, or the simultaneous update of all integer variables. Since the integer variables are essentially decoupled given the discriminant (with the exception of the bag constraints in mi-SVM), this can be done very efficiently.
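To make the alternating scheme concrete, here is a stdlib-only toy in the spirit of Figure 2. The exact SVM/QP training step is replaced by a simple perceptron so the sketch stays self-contained; the paper solves a QP at that step, so this is only a rough stand-in, and all names are ours:

```python
def fit_linear(X, y, epochs=50):
    """Perceptron stand-in for the paper's exact SVM/QP training step."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b) <= 0:
                w = [wj + yi * xj for wj, xj in zip(w, xi)]
                b += yi
    return w, b

def mi_svm_heuristic(neg_patterns, pos_bags, max_iter=20):
    """Alternating label imputation in the spirit of Figure 2."""
    # Initialization: impute y_i = +1 for every pattern in a positive bag.
    labels = [[1] * len(bag) for bag in pos_bags]
    for _ in range(max_iter):
        X = neg_patterns + [x for bag in pos_bags for x in bag]
        y = [-1] * len(neg_patterns) + [l for ls in labels for l in ls]
        w, b = fit_linear(X, y)
        new_labels = []
        for bag in pos_bags:
            outs = [sum(wj * xj for wj, xj in zip(w, x)) + b for x in bag]
            ls = [1 if o > 0 else -1 for o in outs]
            if all(l == -1 for l in ls):       # enforce Eq. (1): every
                ls[outs.index(max(outs))] = 1  # positive bag keeps a witness
            new_labels.append(ls)
        if new_labels == labels:               # imputed labels are stable
            break
        labels = new_labels
    return w, b, labels

neg = [[-1.0, -1.0], [-2.0, -0.5]]
bags = [[[-1.5, -1.0], [2.0, 1.0]]]            # one positive bag
w, b, labels = mi_svm_heuristic(neg, bags)
print(all(any(l == 1 for l in ls) for ls in labels))  # constraint (1) holds: True
```

The witness-forcing branch mirrors the `IF (∑(1+y_i)/2 == 0)` test in Figure 2: whenever relabeling would leave a positive bag with no positive pattern, the pattern with the largest output is forced to +1.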
Also notice that we can re-initialize the QP solver at every iteration with the previously found solution, which will usually result in a significant speed-up. In terms of initialization of the optimization procedure, we suggest imputing positive labels for patterns in positive bags as the initial configuration in mi-SVM. In MI-SVM, x̄_I is initialized as the centroid of the bag patterns. Figures 2 and 3 summarize pseudo-code descriptions for the algorithms utilized in the experiments. There are many possibilities to refine the above heuristic strategy, for example, by starting from different initial conditions, by using branch-and-bound techniques to explore larger parts of the discrete part of the search space, by performing stochastic updates (simulated annealing), or by maintaining probabilities on the integer variables in the spirit of deterministic annealing. However, we have been able to achieve competitive results even with the simpler optimization heuristics, which validate the maximum margin formulation of SVM. We will address further algorithmic improvements in future work.

Table 1: Accuracy results for various methods on the MUSK data sets.
         EM-DD [12]  DD [9]  MI-NN [10]  IAPR [4]  mi-SVM  MI-SVM
MUSK1    84.8        88.0    88.9        92.4      87.4    77.9
MUSK2    84.9        84.0    82.5        89.2      83.6    84.3

6 Experimental Results We have performed experiments on various data sets to evaluate the proposed techniques and compare them to other methods for MIL. As a reference method we have implemented the EM Diverse Density (EM-DD) method [12], for which very competitive results have been reported on the MUSK benchmark.¹ 6.1 MUSK Data Set The MUSK data sets are the benchmark data sets used in virtually all previous approaches and have been described in detail in the landmark paper [4]. Both data sets, MUSK1 and MUSK2, consist of descriptions of molecules using multiple low-energy conformations. Each conformation is represented by a 166-dimensional feature vector derived from surface properties.
MUSK1 contains on average approximately 6 conformations per molecule, while MUSK2 has on average more than 60 conformations in each bag. The averaged results of ten 10-fold cross-validation runs are summarized in Table 1. The SVM results are based on an RBF kernel K(x, y) = exp(-γ‖x - y‖²) with coarsely optimized γ. For both the MUSK1 and MUSK2 data sets, mi-SVM achieves competitive accuracy values. While MI-SVM outperforms mi-SVM on MUSK2, it is significantly worse on MUSK1. Although both methods fail to achieve the performance of the best method (iterative APR),² they compare favorably with other approaches to MIL.

¹ However, the description of EM-DD in [12] seems to indicate that the authors used the test data to select the optimal solution obtained from multiple runs of the algorithm. In the pseudo-code formulation of EM-DD, D_i is used to compute the error for the i-th data fold, where it should in fact be D_t = D - D_i (using the notation of [12]). We have used the corrected version of the algorithm in our experiments and have obtained accuracy numbers using EM-DD that are more in line with previously published results.

6.2 Automatic Image Annotation We have generated new MIL data sets for an image annotation task. The original data are color images from the Corel data set that have been preprocessed and segmented with the Blobworld system [2]. In this representation, an image consists of a set of segments (or blobs), each characterized by color, texture and shape descriptors. We have utilized three different categories ("elephant", "fox", "tiger") in our experiments. In each case, the data sets have 100 positive and 100 negative example images. The latter have been randomly drawn from a pool of photos of other animals. Due to the limited accuracy of the image segmentation, the relatively small number of region descriptors and the small training set size, this ends up being quite a hard classification problem. We are currently investigating alternative image
representations in the context of applying MIL to content-based image retrieval and automated image indexing, for which we hope to achieve better (absolute) classification accuracies. However, these data sets seem legitimate for a comparative performance analysis.

² Since the IAPR (iterative axis-parallel rectangle) methods in [4] have been specifically designed and optimized for the MUSK classification task, the superiority of APR should not be interpreted as a failure.

Table 2: Classification accuracy of different methods on the Corel image data sets.
Category   inst/feat   EM-DD   mi-SVM                 MI-SVM
                               linear  poly   rbf     linear  poly   rbf
Elephant   1391/230    78.3    82.2    78.1   80.0    81.4    79.0   73.1
Fox        1320/230    56.1    58.2    55.2   57.9    57.8    59.4   58.8
Tiger      1220/230    72.1    78.4    78.1   78.9    84.0    81.6   66.6

Table 3: Classification accuracy of different methods on the TREC9 document categorization sets.
Category   inst/feat    EM-DD   mi-SVM                 MI-SVM
                                linear  poly   rbf     linear  poly   rbf
TST1       3224/6668    85.8    93.6    92.5   90.4    93.9    93.8   93.7
TST2       3344/6842    84.0    78.2    75.9   74.3    84.5    84.4   76.4
TST3       3246/6568    69.0    87.0    83.3   69.0    82.2    85.1   77.4
TST4       3391/6626    80.5    82.8    80.0   69.6    82.4    82.9   77.3
TST7       3367/7037    75.4    81.3    78.7   81.3    78.0    78.7   64.5
TST9       3300/6982    65.5    67.5    65.6   55.2    60.2    63.7   57.0
TST10      3453/7073    78.5    79.6    78.3   52.6    79.5    81.0   69.1

The results are summarized in Table 2. They show that both mi-SVM and MI-SVM achieve a similar accuracy and outperform EM-DD by a few percent. While MI-SVM performed marginally better than mi-SVM, both heuristic methods were susceptible to nearby local minima. Evidence of this effect was observed through experimentation with asynchronous updates, as described in Section 5, where we varied the number of integer variables updated at each iteration. 6.3 Text Categorization Finally, we have generated MIL data sets for text categorization.
Starting from the publicly available TREC9 data set, also known as OHSUMED, we have split documents into passages using overlapping windows of at most 50 words each. The original data set consists of several years of selected MEDLINE articles. We have worked with the 1987 data set used as training data in the TREC9 filtering task, which consists of approximately 54,000 documents. MEDLINE documents are annotated with MeSH terms (Medical Subject Headings), each defining a binary concept. The total number of MeSH terms in TREC9 was 4903. While we are currently performing a larger-scale evaluation of MIL techniques on the full data set, we report preliminary results here on a smaller, randomly subsampled data set. We have used the first seven categories of the pre-test portion with at least 100 positive examples. Compared to the other data sets, the representation is extremely sparse and high-dimensional, which makes this data an interesting additional benchmark. Again, using linear and polynomial kernel functions, which are generally known to work well for text categorization, both methods show improved performance over EM-DD in almost all cases. No significant difference between the two methods is clearly evident for the text classification task. 7 Conclusion and Future Work We have presented a novel approach to multiple-instance learning based on two alternative generalizations of the maximum margin idea used in SVM classification. Although these formulations lead to hard mixed integer problems, even simple local optimization heuristics already yield quite competitive results compared to the baseline approach. We conjecture that better optimization techniques, which can for example avoid unfavorable local minima, may further improve the classification accuracy. Ongoing work will also extend the experimental evaluation to include larger-scale problems.
As far as the MIL research problem is concerned, we have considered a wider range of data sets and applications than is usually done and have been able to obtain very good results across a variety of data sets. We strongly suspect that many MIL methods have been optimized to perform well on the MUSK benchmark, and we plan to make the data sets used in the experiments available to the public to encourage further empirical comparisons. Acknowledgments This work was sponsored by an NSF-ITR grant, award number IIS-0085836. References [1] P. Auer. On learning from multi-instance examples: Empirical evaluation of a theoretical approach. In Proc. 14th International Conf. on Machine Learning, pages 21-29. Morgan Kaufmann, San Francisco, CA, 1997. [2] C. Carson, M. Thomas, S. Belongie, J. M. Hellerstein, and J. Malik. Blobworld: A system for region-based image indexing and retrieval. In Proceedings Third International Conference on Visual Information Systems. Springer, 1999. [3] A. Demiriz and K. Bennett. Optimization approaches to semi-supervised learning. In M. Ferris, O. Mangasarian, and J. Pang, editors, Applications and Algorithms of Complementarity. Kluwer Academic Publishers, Boston, 2000. [4] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Perez. Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1-2):31-71, 1997. [5] T. Gärtner, P. A. Flach, A. Kowalczyk, and A. J. Smola. Multi-instance kernels. In Proc. 19th International Conf. on Machine Learning. Morgan Kaufmann, San Francisco, CA, 2002. [6] T. Joachims. Transductive inference for text classification using support vector machines. In Proceedings 16th International Conference on Machine Learning, pages 200-209. Morgan Kaufmann, San Francisco, CA, 1999. [7] P. M. Long and L. Tan. PAC learning axis-aligned rectangles with respect to product distributions from multiple-instance examples. In Proc. Computational Learning Theory, 1996. [8] O. Maron and T. Lozano-Perez.
A framework for multiple-instance learning. In Advances in Neural Information Processing Systems, volume 10. MIT Press, 1998. [9] O. Maron and A. L. Ratan. Multiple-instance learning for natural scene classification. In Proc. 15th International Conf. on Machine Learning, pages 341-349. Morgan Kaufmann, San Francisco, CA, 1998. [10] J. Ramon and L. De Raedt. Multi instance neural networks. In Proceedings of ICML-2000, Workshop on Attribute-Value and Relational Learning, 2000. [11] B. Schölkopf and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond. MIT Press, 2002. [12] Q. Zhang and S. A. Goldman. EM-DD: An improved multiple-instance learning technique. In Advances in Neural Information Processing Systems, volume 14. MIT Press, 2002.
2002
Bias-Optimal Incremental Problem Solving Jürgen Schmidhuber IDSIA, Galleria 2, 6928 Manno-Lugano, Switzerland juergen@idsia.ch Abstract Given is a problem sequence and a probability distribution (the bias) on programs computing solution candidates. We present an optimally fast way of incrementally solving each task in the sequence. Bias shifts are computed by program prefixes that modify the distribution on their suffixes by reusing successful code for previous tasks (stored in non-modifiable memory). No tested program gets more runtime than its probability times the total search time. In illustrative experiments, ours becomes the first general system to learn a universal solver for arbitrary n-disk Towers of Hanoi tasks (minimal solution size 2^n - 1). It demonstrates the advantages of incremental learning by profiting from previously solved, simpler tasks involving samples of a simple context-free language. 1 Brief Introduction to Optimal Universal Search Consider an asymptotically optimal method for tasks with quickly verifiable solutions: Method 1.1 (LSEARCH) View the n-th binary string (0, 1, 00, 01, 10, 11, 000, ...) as a potential program for a universal Turing machine. Given some problem, for all n do: every 2^n steps on average execute (if possible) one instruction of the n-th program candidate, until one of the programs has computed a solution. Given some problem class, if some unknown optimal program p requires f(k) steps to solve a problem instance of size k, and p happens to be the n-th program in the alphabetical list, then LSEARCH (for Levin Search) [6] will need at most O(2^n f(k)) steps — the constant factor 2^n may be huge but does not depend on k. Compare [11, 7, 3]. Recently Hutter developed a more complex asymptotically optimal search algorithm for all well-defined problems [3].
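The time allocation of Method 1.1 can be simulated with a toy scheduler. Here "programs" are abstracted to step counters, so the sketch (our construction, not code from the paper) only illustrates the 2^(-n) time-sharing, not actual program execution:

```python
# Toy LSEARCH scheduler: 'program' i advances one instruction step
# whenever the global tick count is a multiple of 2^i, so program i
# receives roughly a 2^-i fraction of the total time.

def lsearch(steps_needed, max_ticks=10**6):
    """steps_needed[i] is the number of steps program i needs to halt
    with a solution; returns (index of first solver, ticks used)."""
    progress = [0] * len(steps_needed)
    for tick in range(1, max_ticks + 1):
        for i in range(len(steps_needed)):
            if tick % (2 ** i) == 0:
                progress[i] += 1
                if progress[i] >= steps_needed[i]:
                    return i, tick
    return None, max_ticks

# Program 2 needs only 4 steps; it finishes after 4 * 2^2 = 16 ticks even
# though programs 0 and 1 never halt (simulated by huge step requirements).
print(lsearch([10**9, 10**9, 4]))   # (2, 16)
```

The multiplicative overhead visible here (2^i ticks per step of program i) is exactly the huge but k-independent constant factor the text refers to.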
HSEARCH (for Hutter Search) cleverly allocates part of the total search time to searching the space of proofs, to find provably correct candidate programs with provable upper runtime bounds, and at any given time focuses resources on those programs with the currently best proven time bounds. Unexpectedly, HSEARCH manages to reduce the constant slowdown factor to a value of 1 + ε, where ε is an arbitrary positive constant. Unfortunately, however, the search in proof space introduces an unknown additive problem-class-specific constant slowdown, which again may be huge. In the real world, constants do matter. In this paper we will use basic concepts of optimal search to construct an optimal incremental problem solver that at any given time may exploit experience collected in previous searches for solutions to earlier tasks, to minimize the constants ignored by nonincremental HSEARCH and LSEARCH. 2 Optimal Ordered Problem Solver (OOPS) Notation. Unless stated otherwise or obvious, to simplify notation, throughout the paper newly introduced variables are assumed to be integer-valued and to cover the range clear from the context. Given some finite or countably infinite alphabet Q = {Q_1, Q_2, ...}, let Q* denote the set of finite sequences or strings over Q, where λ is the empty string. We use the alphabet name's lower-case variant to introduce (possibly variable) strings such as q, q^1, q^2, ... ∈ Q*; l(q) denotes the number of symbols in string q, where l(λ) = 0; q_n is the n-th symbol of q; q_{m:n} = λ if m > n, and q_m q_{m+1} ... q_n otherwise. q^1 q^2 is the concatenation of q^1 and q^2 (e.g., if q^1 = abc and q^2 = dac then q^1 q^2 = abcdac). Consider countable alphabets S and Q. Strings s, s^1, s^2, ... ∈ S* represent possible internal states of a computer; strings q, q^1, q^2, ... ∈ Q* represent code or programs for manipulating states. We focus on S being the set of integers and Q := {1, 2, ..., n_Q} representing a set of n_Q instructions of some programming language (that is, substrings within states may also encode programs).
R is a set of currently unsolved tasks. Let the variable s(r) ∈ S* denote the current state of task r ∈ R, with i-th component s_i(r) on a computation tape r (think of a separate tape for each task). For convenience we combine current state s(r) and current code q in a single address space, introducing negative and positive addresses ranging from -l(s(r)) to l(q) + 1, defining the content of address i as z(i)(r) := q_i if 0 < i ≤ l(q) and z(i)(r) := s_{-i}(r) if -l(s(r)) ≤ i ≤ 0. All dynamic task-specific data will be represented at nonpositive addresses. In particular, the current instruction pointer ip(r) := z(a_ip(r))(r) of task r can be found at (possibly variable) address a_ip(r) ≤ 0. Furthermore, s(r) also encodes a modifiable probability distribution p(r) = {p_1(r), p_2(r), ..., p_{n_Q}(r)} (with ∑_i p_i(r) = 1) on Q. This variable distribution will be used to select a new instruction in case ip(r) points to the current topmost address right after the end of the current code q. a_frozen ≥ 0 is a variable address that cannot decrease. Once chosen, the code bias q_{1:a_frozen} will remain unchangeable forever — it is a (possibly empty) sequence of programs, some of them prewired by the user, others frozen after previous successful searches for solutions to previous tasks. Given R, the goal is to solve all tasks r ∈ R by a program that appropriately uses or extends the current code q_{1:a_frozen}. We will do this in a bias-optimal fashion, that is, no solution candidate will get much more search time than it deserves, given some initial probabilistic bias on program space: Definition 2.1 (BIAS-OPTIMAL SEARCHERS) Given is a problem class R, a search space C of solution candidates (where any problem r ∈ R should have a solution in C), a task-dependent bias in the form of conditional probability distributions P(q | r) on the candidates q ∈ C, and a predefined procedure that creates and tests any given q on any r ∈ R within time t(q, r) (typically unknown in advance).
A searcher is n-bias-optimal (n ≥ 1) if for any maximal total search time T_max > 0 it is guaranteed to solve any problem r ∈ R if it has a solution p ∈ C satisfying t(p, r) ≤ P(p | r) T_max / n. Unlike reinforcement learners [4] and heuristics such as Genetic Programming [2], OOPS (Section 2.2) will be n-bias-optimal, where n is a small and acceptable number, such as 8. 2.1 OOPS Prerequisites: Multitasking & Prefix Tracking Through Method "Try" The Turing machine-based setups for HSEARCH and LSEARCH assume potentially infinite storage. Hence they may largely ignore questions of storage management. In any practical system, however, we have to efficiently reuse limited storage. This, and multitasking, is what the present subsection is about. The recursive method Try below allocates time to program prefixes, each being tested on multiple tasks simultaneously, such that the sum of the runtimes of any given prefix, tested on all tasks, does not exceed the total search time multiplied by the prefix probability (the product of the tape-dependent probabilities of its previously selected components in Q). Try tracks effects of tested program prefixes, such as storage modifications (including probability changes) and partially solved task sets, to reset conditions for subsequent tests of alternative prefix continuations in an optimally efficient fashion (at most as expensive as the prefix tests themselves). Optimal backtracking requires that any prolongation of some prefix by some token gets immediately executed. To allow for efficient undoing of state changes, we use global Boolean variables marked_i(r) (initially FALSE) for all possible state components s_i(r). We initialize time t_0 := 0, probability P := 1, q-pointer q_p := a_frozen, and state s(r) (including ip(r) and p(r)) with task-specific information for all task names r in a ring R_0 of tasks. Here the expression "ring" indicates that the tasks are ordered in cyclic fashion; |R| denotes the number of tasks in ring R.
Given a global search time limit T, we Try to solve all tasks in R_0, by using existing code in q = q_{0:q_p} and / or by discovering an appropriate prolongation of q:

Method 2.1 (BOOLEAN Try (q_p, r_0, R_0, t_0, P_0)) (returns TRUE or FALSE; r_0 ∈ R_0).

1. Make an empty stack S; set local variables r := r_0; R := R_0; t := t_0; Done := FALSE. WHILE |R| > 0 and t ≤ P_0 T and instruction pointer valid (-l(s(r)) ≤ ip(r) ≤ q_p) and instruction valid (1 ≤ z(ip(r))(r) ≤ n_Q) and no halt condition (e.g., error such as division by 0) encountered (evaluate conditions in this order until first satisfied, if any) DO: If possible, interpret / execute token z(ip(r))(r) according to the rules of the given programming language (this may modify s(r) including instruction pointer ip(r) and distribution p(r), but not q), continually increasing t by the consumed time. Whenever the execution changes some state component s_i(r) whose mark_i(r) = FALSE, set mark_i(r) := TRUE and save the previous value of s_i(r) by pushing the triple (i, r, s_i(r)) onto S. Remove r from R if solved. IF |R| > 0, set r equal to the next task in ring R. ELSE set Done := TRUE; a_frozen := q_p (all tasks solved; new code frozen, if any).

2. Use S to efficiently reset only the modified mark_i(r) to FALSE (but do not pop S yet).

3. IF ip(r) = q_p + 1 (this means an online request for prolongation of the current prefix through a new token): WHILE Done = FALSE and there is some yet untested token Z ∈ Q (untried since time t_0 as value for q_{q_p+1}), set q_{q_p+1} := Z and Done := Try (q_p + 1, r, R, t, P_0 p_Z(r)), where p_Z(r) is Z's probability according to current p(r).

4. Use S to efficiently restore only those s_i(r) changed since t_0, thus also restoring instruction pointer ip(r_0) and original search distribution p(r_0). Return the value of Done.

It is important that instructions whose runtimes are not known in advance can be interrupted by Try at any time.
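The backtracking pattern of Try can be sketched at toy scale. The fragment below is a hypothetical Python rendering under strong simplifications: a single task, a fixed token distribution in place of the modifiable p(r), and Python's own call stack standing in for the tape-marking/undo machinery; a prefix is abandoned once its runtime exceeds its probability times the budget.

```python
# Hypothetical toy instruction set: each token maps to a state update.
TOKENS = {"inc": lambda s: s + 1, "dbl": lambda s: s * 2}
PROBS = {"inc": 0.5, "dbl": 0.5}  # fixed; the real p(r) is modifiable

def try_prefix(prefix, state, t, P, T, target, found):
    """Depth-first search over program prefixes; backtrack once the
    runtime t exceeds the prefix probability P times the budget T."""
    if state == target:
        found.append(list(prefix))
        return True
    if t > P * T:                     # the abandonment condition of Try
        return False
    for tok, p in PROBS.items():      # request a prolongation token
        prefix.append(tok)
        ok = try_prefix(prefix, TOKENS[tok](state), t + 1, P * p, T,
                        target, found)
        prefix.pop()                  # restore state, as Try's stack does
        if ok:
            return True
    return False

found = []
try_prefix([], 0, 0, 1.0, 64, 3, found)   # finds ["inc", "inc", "inc"]
```

With a budget too small for any solution, the search exhausts its prefixes and returns FALSE, mirroring a failed Try invocation.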
Essentially, Try conducts a depth-first search in program space, where the branches of the search tree are program prefixes, and backtracking is triggered once the sum of the runtimes of the current prefix on all current tasks exceeds the prefix probability multiplied by the total time limit. A successful Try will solve all tasks, possibly increasing a_frozen. In any case Try will completely restore all states of all tasks. Tracking / undoing effects of prefixes essentially does not cost more than their execution. So the n in Def. 2.1 of n-bias-optimality is not greatly affected by backtracking: ignoring hardware-specific overhead, we lose at most a factor 2. An efficient iterative (non-recursive) version of Try for a broad variety of initial programming languages was implemented in C.

2.2 OOPS For Finding Universal Solvers

Now suppose there is an ordered sequence of tasks r_1, r_2, .... Task r_n may or may not depend on solutions for r_i (i = 1, 2, ..., n-1). For instance, task r_n may be to find a faster way through a maze than the one found during the search for a solution to task r_{n-1}. We are searching for a single program solving all tasks encountered so far (see [9] for variants of this setup). Inductively suppose we have solved the first n tasks through programs stored below address a_frozen, and that the most recently found program starting at address a_last ≤ a_frozen actually solves all of them, possibly using information conveyed by earlier programs. To find a program solving the first n+1 tasks, OOPS invokes Try as follows (using set notation for ring R):

Method 2.2 (OOPS (n+1)) Initialize T := 2; q_p := a_frozen.

1. Set R := {r_{n+1}} and ip(r_{n+1}) := a_last. IF Try (q_p, r_{n+1}, R, 0, 1/2) then exit.

2. IF n+1 > T go to 3. Set R := {r_1, r_2, ..., r_{n+1}}; set local variable a := a_frozen + 1; for all r ∈ R set ip(r) := a. IF Try (q_p, r_1, R, 0, 1/2) set a_last := a and exit.

3. Set T := 2T, and go to 1.

That is, we spend roughly equal time on two simultaneous searches.
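The time-allocation pattern of Method 2.2 can be illustrated with a small sketch (hypothetical Python; the helper name and log format are invented). Each round doubles the limit T and splits it evenly between the two searches; summing over rounds shows why the doubling costs at most a factor 2 overall.

```python
# Sketch of Method 2.2's schedule: double T each round, give half the
# budget to prolonging the most recent program (step 1) and half to a
# fresh search over all tasks (step 2).
def oops_schedule(rounds):
    T, log = 2, []
    for _ in range(rounds):
        log.append({"T": T,
                    "prolong_recent": T / 2,    # step 1: task n+1 only
                    "fresh_all_tasks": T / 2})  # step 2: all n+1 tasks
        T *= 2                                  # step 3: double and retry
    return log

sched = oops_schedule(5)
# Total time over all rounds (2 + 4 + ... ) is below twice the final
# limit, which is why incremental doubling costs at most a factor 2.
total = sum(entry["T"] for entry in sched)
```

For five rounds the limits are 2, 4, 8, 16, 32; their sum, 62, stays below twice the last limit, 64.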
The second (step 2) considers all tasks and all prefixes. The first (step 1), however, focuses only on task n+1 and the most recent prefix and its possible continuations. In particular, start address a_last does not increase as long as new tasks can be solved by prolonging q_{a_last:a_frozen}. Why is this justified? A bit of thought shows that it is impossible for the most recent code starting at a_last to request any additional tokens that could harm its performance on previous tasks. We already inductively know that all of its prolongations will solve all tasks up to n. Therefore, given tasks r_1, r_2, ... we first initialize a_last; then for i := 1, 2, ... invoke OOPS(i) to find programs starting at (possibly increasing) address a_last, each solving all tasks so far, possibly eventually discovering a universal solver for all tasks in the sequence. As address a_last increases for the n-th time, q^n is defined as the program starting at a_last's old value and ending right before its new value. Clearly, q^m (m > n) may exploit q^n.

Optimality. OOPS not only is asymptotically optimal in Levin's sense [6] (see Method 1.1), but also near bias-optimal (Def. 2.1). To see this, consider a program p solving problem r within t_p steps, given current code bias q_{0:a_frozen} and a_last. Denote p's probability by P(p). A bias-optimal solver would solve r within at most t_p / P(p) steps. We observe that OOPS will solve r within at most 8 t_p / P(p) steps, ignoring overhead: a factor 2 might get lost for allocating half the search time to prolongations of the most recent code, another factor 2 for the incremental doubling of T (necessary because we do not know in advance the best value of T), and another factor 2 for Try's resets of states and tasks. So the method is 8-bias-optimal (ignoring hardware-specific overhead) with respect to the current task. Our only bias shifts are due to freezing programs once they have solved a problem.
That is, unlike the learning rate-based bias shifts of ADAPTIVE LSEARCH [10], those of OOPS do not reduce probabilities of programs that were meaningful and executable before the addition of any new q^i. Only formerly meaningless, interrupted programs trying to access code for earlier solutions when there weren't any suddenly may become prolongable and successful, once some solutions to earlier tasks have been stored. Hopefully we have P(p) ≫ P(p'), where p' is among the most probable fast solvers of r that do not use previously found code. For instance, p may be rather short and likely because it uses information conveyed by earlier found programs stored below a_frozen. E.g., p may call an earlier stored q^i as a subprogram. Or maybe p is a short and fast program that copies q^i into state s(r), then modifies the copy just a little bit, then successfully applies the modified copy to r. If p' is not many times faster than p, then OOPS will in general suffer from a much smaller constant slowdown factor than LSEARCH, reflecting the extent to which solutions to successive tasks do share useful mutual information. Unlike nonincremental LSEARCH and HSEARCH, which do not require online-generated programs for their asymptotic optimality properties, OOPS does depend on such programs: The currently tested prefix may temporarily rewrite the search procedure by invoking previously frozen code that redefines the probability distribution on its suffixes, based on experience ignored by LSEARCH & HSEARCH (metasearching & metalearning!). As we are solving more and more tasks, thus collecting and freezing more and more q^i, it will generally become harder and harder to identify and address and copy-edit particular useful code segments within the earlier solutions. As a consequence we expect that much of the knowledge embodied by certain q^j actually will be about how to access and edit and use programs q^i (i < j) previously stored below q^j.
3 A Particular Initial Programming Language

The efficient search and backtracking mechanism described in Section 2.1 is not aware of the nature of the particular programming language given by Q, the set of initial instructions for modifying states. The language could be list-oriented such as LISP, or based on matrix operations for neural network-like parallel architectures, etc. For the experiments we wrote an interpreter for an exemplary, stack-based, universal programming language inspired by FORTH [8], whose disciples praise its beauty and the compactness of its programs. Each task's tape holds its state: various stack-like data structures represented as sequences of integers, including a data stack ds (with stack pointer dp) for function arguments, an auxiliary data stack Ds, a function stack fns of entries describing (possibly recursive) functions defined by the system itself, a callstack cs (with stack pointer cp and top entry cs(cp)) for calling functions, where local variable cs(cp).ip is the current instruction pointer, and base pointer cs(cp).base points into ds below the values considered as arguments of the most recent function call: Any instruction of the form inst(x_1, ..., x_m) expects its arguments on top of ds, and replaces them by its return values. Illegal use of any instruction will cause the currently tested program prefix to halt. In particular, it is illegal to set variables (such as stack pointers or instruction pointers) to values outside their prewired ranges, or to pop empty stacks, or to divide by 0, or to call nonexistent functions, or to change probabilities of nonexistent tokens, etc. Try (Section 2.1) will interrupt prefixes as soon as their runtime t exceeds P T. Instructions. We defined 68 instructions, such as oldq(n) for calling the n-th previously found program q^n, or getq(n) for making a copy of q^n on stack ds (e.g., to edit it with additional instructions).
Lack of space prohibits explaining all instructions (see [9]); we have to limit ourselves to the few appearing in solutions found in the experiments, using readable names instead of their numbers: Instruction c1() returns constant 1. Similarly for c2(), ..., c5(). dec(x) returns x - 1; by2(x) returns 2x; grt(x,y) returns 1 if x > y, otherwise 0; delD() decrements stack pointer Dp of Ds; fromD() returns the top of Ds; toD() pushes the top entry of ds onto Ds; cpn(n) copies the n topmost ds entries onto the top of ds, increasing dp by n; cpnb(n) copies ds entries above the cs(cp).base-th ds entry onto the top of ds; exec(n) interprets n as the number of an instruction and executes it; bsf(n) considers the entries on stack ds above its (cs(cp).base + n)-th entry as code and uses callstack cs to call this code (code is executed by step 1 of Try (Section 2.1), one instruction at a time; the instruction ret() causes a return to the address of the next instruction right after the calling instruction). Given n input arguments on ds, instruction defnp() pushes onto ds the begin of a definition of a procedure with n inputs; this procedure returns if its topmost input is 0, otherwise decrements it. calltp() pushes onto ds code for a call of the most recently defined function / procedure. Both defnp and calltp also push code for making a fresh copy of the inputs of the most recently defined code, expected on top of ds. endnp() pushes code for returning from the current call, then calls the code generated so far on stack ds above the inputs, applying the code to a copy of the inputs on top of ds. boostq(i) sequentially goes through all tokens of the i-th self-discovered frozen program, boosting each token's probability by adding n_Q to its numerator and also to the denominator shared by all instruction probabilities (denominator and all numerators are stored on tape, defining distribution p(r)). Initialization. Given any task, we add task-specific instructions.
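The numerator/denominator bookkeeping behind boostq can be sketched numerically. This is a hypothetical Python fragment: the token numbers and the frozen program are invented for illustration, and it assumes the boost increment equals the instruction count n_Q = 68 mentioned above.

```python
n_Q = 68                                           # number of instructions
numerators = dict.fromkeys(range(1, n_Q + 1), 1)   # maximum entropy start
denominator = n_Q                                  # p_i = numerators[i] / denominator

def boostq_update(frozen_program):
    """Add n_Q to each token's numerator and to the shared denominator."""
    global denominator
    for token in frozen_program:
        numerators[token] += n_Q
        denominator += n_Q

boostq_update([10, 20, 10])       # invented 3-token frozen program
p10 = numerators[10] / denominator
```

Token 10's probability jumps from 1/68 to 137/272, while the probabilities still sum to one since numerator increments and denominator increments match.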
We start with a maximum entropy distribution on the instructions (all numerators set to 1), then insert substantial prior bias by assigning the lowest (easily computable) instruction numbers to the task-specific instructions, and by boosting (see above) the initial probabilities of appropriate "small number pushers" (such as c1, c2, c3) that push onto ds the numbers of the task-specific instructions, such that they become executable as part of code on ds. We also boost the probabilities of the simple arithmetic instructions by2 and dec, such that the system can easily create other integers from the probable ones (e.g., code sequence (c3 by2 by2 dec) will return integer 11). Finally we also boost boostq.

4 Experiments: Towers of Hanoi and Context-Free Symmetry

Given are n disks of different sizes, stacked in decreasing size on the first of three pegs. Moving some peg's top disk to the top of another (possibly empty) peg, one disk at a time, but never a larger disk onto a smaller, transfer all disks to the third peg. Remarkably, the fastest way of solving this famous problem requires 2^n - 1 moves. Untrained humans find it hard to solve larger instances. Anderson [1] applied traditional reinforcement learning methods and was able to solve instances up to n = 3, solvable within at most 7 moves. Langley [5] used learning production systems and was able to solve Hanoi instances up to n = 5, solvable within at most 31 moves. Traditional nonlearning planning procedures systematically explore all possible move combinations. They also fail to solve Hanoi problem instances with large n, due to the exploding search space (Jana Koehler, IBM Research, personal communication, 2002). OOPS, however, is searching in program space instead of raw solution space.
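The arithmetic fragment just mentioned is easy to check with a minimal sketch (hypothetical Python; only a handful of the 68 instructions, with no probabilities or tape machinery): the sequence (c3 by2 by2 dec) indeed leaves 11 on the data stack.

```python
# Minimal sketch of a few instructions of the FORTH-inspired language.
def run(tokens):
    ds = []  # data stack
    ops = {
        "c1": lambda: ds.append(1), "c2": lambda: ds.append(2),
        "c3": lambda: ds.append(3), "c4": lambda: ds.append(4),
        "c5": lambda: ds.append(5),
        "by2": lambda: ds.append(2 * ds.pop()),   # by2(x) returns 2x
        "dec": lambda: ds.append(ds.pop() - 1),   # dec(x) returns x - 1
        "grt": lambda: ds.append(1 if ds.pop(-2) > ds.pop() else 0),
    }
    for t in tokens:
        ops[t]()
    return ds

run(["c3", "by2", "by2", "dec"])   # leaves [11]: ((3 * 2) * 2) - 1
```

The same pattern (push a small constant, then double and decrement) reaches any integer cheaply, which is exactly why by2 and dec get boosted.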
Therefore, in principle it should be able to solve arbitrary instances by discovering the problem's elegant recursive solution: given n and three pegs S, A, D (source peg, auxiliary peg, destination peg), define procedure

Method 4.1 (HANOI(S,A,D,n)) IF n = 0 exit. Call HANOI(S, D, A, n-1); move top disk from S to D; call HANOI(A, S, D, n-1).

The n-th task is to solve all Hanoi instances up to instance n. We represent the dynamic environment for task n on the n-th task tape, allocating n + 1 addresses for each peg, to store its current disk positions and a pointer to its top disk (0 if there isn't any). We represent pegs S, A, D by numbers 1, 2, 3, respectively. That is, given an instance of size n, we push onto ds the values 1, 2, 3, n. By doing so we insert substantial, nontrivial prior knowledge about problem size and the fact that it is useful to represent each peg by a symbol. We add three instructions to the 68 instructions of our FORTH-like programming language: mvdsk() assumes that S, A, D are represented by the first three elements on ds above the current base pointer cs(cp).base, and moves a disk from peg S to peg D. Instruction xSA() exchanges the representations of S and A, xAD() those of A and D (combinations may create arbitrary peg patterns). Illegal moves cause the current program prefix to halt. Overall success is easily verifiable since our objective is achieved once the first two pegs are empty. Within reasonable time (a week) on an off-the-shelf personal computer (1.5 GHz) the system was not able to solve instances involving more than 3 disks. This gives us a welcome opportunity to demonstrate its incremental learning abilities: we first trained it on an additional, easier task, to teach it something about recursion, hoping that this would help to solve the Hanoi problem as well.
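Method 4.1 translates directly into ordinary recursion; a quick check in Python (our sketching language here, not the paper's stack machine) confirms the 2^n - 1 move count.

```python
# Method 4.1 in plain recursion: move n disks from peg S to peg D via A.
def hanoi(S, A, D, n, moves):
    if n == 0:
        return
    hanoi(S, D, A, n - 1, moves)   # park the n-1 smaller disks on A
    moves.append((S, D))           # move top disk from S to D
    hanoi(A, S, D, n - 1, moves)   # bring the n-1 disks onto it

moves = []
hanoi(1, 2, 3, 4, moves)           # 4 disks: 2**4 - 1 = 15 moves
```

The move list starts by shifting the smallest disk from peg 1 to peg 2 and ends by completing the tower on peg 3.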
For this purpose we used a seemingly unrelated symmetry problem based on the context-free language {1^n 2^n}: given input n on the data stack ds, the goal is to place symbols on the auxiliary stack Ds such that the 2n topmost elements are n 1's followed by n 2's. We add two more instructions to the initial programming language: instruction 1toD() pushes 1 onto Ds, instruction 2toD() pushes 2. Now we have a total of five task-specific instructions (including those for Hanoi), with instruction numbers 1, 2, 3, 4, 5, for 1toD, 2toD, mvdsk, xSA, xAD, respectively. So we first boost (Section 3) instructions c1, c2 for the first training phase where the n-th task (n = 1, ..., 30) is to solve all symmetry problem instances up to n. Then we undo the symmetry-specific boosts of c1, c2 and boost instead the Hanoi-specific "instruction number pushers" c3, c4, c5 for the subsequent training phase where the n-th task (again n = 1, ..., 30) is to solve all Hanoi instances up to n.

Results. Within roughly 0.3 days, OOPS found and froze code solving the symmetry problem. Within 2 more days it also found a universal Hanoi solver, exploiting the benefits of incremental learning ignored by nonincremental HSEARCH and LSEARCH. It is instructive to study the sequence of intermediate solutions. In what follows we will transform integer sequences discovered by OOPS back into readable programs (to fully understand them, however, one needs to know all side effects, and which instruction has got which number). For the symmetry problem, within less than a second, OOPS found silly but working code for n = 1. Within less than 1 hour it had solved the 2nd, 3rd, 4th, and 5th instances, always simply prolonging the previous code without changing the start address a_last. The code found so far was unelegant: (defnp 2toD grt c2 c2 endnp boostq delD delD bsf 2toD fromD delD delD delD fromD bsf by2 bsf by2 fromD delD delD fromD cpnb bsf). But it does solve all of the first 5 instances.
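The symmetry task itself has a compact recursive solution. A hypothetical Python rendering (1toD and 2toD become plain list pushes) looks like this:

```python
# Solve the 1^n 2^n symmetry task by recursion: push a 1, recurse with
# a decremented argument, then push a 2.
def symmetry(n, Ds):
    if n == 0:
        return
    Ds.append(1)        # 1toD
    symmetry(n - 1, Ds)
    Ds.append(2)        # 2toD

Ds = []
symmetry(3, Ds)         # Ds becomes [1, 1, 1, 2, 2, 2]
```

The point of this training phase is precisely that such a short recursion exists, so discovering it teaches the system a reusable recursive pattern.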
Finally, after 0.3 days, OOPS had created and tested a new, elegant, recursive program (no prolongation of the previous one) with a new increased start address a_last, solving all instances up to 6: (defnp c1 calltp c2 endnp). That is, it was cheaper to solve all instances up to 6 by discovering and applying this new program to all instances so far, than just prolonging old code on instance 6 only. In fact, the program turns out to be a universal symmetry problem solver. On the stack, it constructs a 1-argument procedure that returns nothing if its input argument is 0, otherwise calls the instruction 1toD whose code is 1, then calls itself with a decremented input argument, then calls 2toD whose code is 2, then returns. Using this program, within an additional 20 milliseconds, OOPS had also solved the remaining 24 symmetry tasks up to n = 30. Then OOPS switched to the Hanoi problem. 1 ms later it had found trivial code for n = 1: (mvdsk). After a day or so it had found fresh yet bizarre code (new start address a_last) for n = 2: (c4 c3 cpn c4 by2 c3 by2 exec). Finally, after 3 days it had found fresh code (new a_last) for n = 3: (c3 dec boostq defnp c4 calltp c3 c5 calltp endnp). This already is an optimal universal Hanoi solver. Therefore, within 1 additional day OOPS was able to solve the remaining 27 tasks for n up to 30, reusing the same program q_{a_last:a_frozen} again and again. Recall that the optimal solution for n = 30 takes 2^30 - 1 mvdsk operations, and that for each mvdsk several other instructions need to be executed as well! The final Hanoi solution profits from the earlier recursive solution to the symmetry problem. How? The prefix (c3 dec boostq) (probability 0.003) temporarily rewrites the search procedure (this illustrates the benefits of metasearching!)
by exploiting previous code: Instruction c3 pushes 3; dec decrements this; boostq takes the result 2 as an argument and thus boosts the probabilities of all components of the 2nd frozen program, which happens to be the previously found universal symmetry solver. This leads to an online bias shift that greatly increases the probability that defnp, calltp, endnp will appear in the suffix of the online-generated program. These instructions in turn are helpful for building (on the data stack ds) the double-recursive procedure generated by the suffix (defnp c4 calltp c3 c5 calltp endnp), which essentially constructs a 4-argument procedure that returns nothing if its input argument is 0, otherwise decrements the top input argument, calls the instruction xAD whose code is 4, then calls itself, then calls mvdsk whose code is 5, then calls xSA whose code is 3, then calls itself again, then returns (compare the standard Hanoi solution). The total probability of the final solution, given the previous codes, is roughly 1000 times larger than the probability of the essential Hanoi code (defnp c4 calltp c3 c5 calltp endnp) given nothing, which explains why the latter was not quickly found without the help of an easier task. So in this particular setup the incremental training due to the simple recursion for the symmetry problem indeed provided useful training for the more complex Hanoi recursion, speeding up the search by a factor of roughly 1000. The entire 4 day search tested 93,994,568,009 prefixes corresponding to 345,450,362,522 instructions costing 678,634,413,962 time steps (some instructions cost more than 1 step, in particular, those making copies of long strings, or those increasing the probabilities of more than one instruction). Search time of an optimal solver is a natural measure of initial bias. Clearly, most tested prefixes are short: they either halt or get interrupted soon.
Still, some programs do run for a long time; the longest measured runtime exceeded 30 billion steps. The stacks S of recursive invocations of Try for storage management (Section 2.1) collectively never held more than 20,000 elements though. Different initial bias will yield different results. E.g., we could set to zero the initial probabilities of most of the 73 initial instructions (most are unnecessary for our two problem classes), and then solve all tasks more quickly (at the expense of obtaining a nonuniversal initial programming language). The point of this experimental section, however, is not to find the most reasonable initial bias for particular problems, but to illustrate the general functionality of the first general near-bias-optimal incremental learner. In ongoing research we are equipping OOPS with neural network primitives and are applying it to robotics. Since OOPS will scale to larger problems in essentially unbeatable fashion, the hardware speed-up expected for the next 30 years appears promising.

References

[1] C. W. Anderson. Learning and Problem Solving with Multilayer Connectionist Systems. PhD thesis, University of Massachusetts, Dept. of Comp. and Inf. Sci., 1986.
[2] N. L. Cramer. A representation for the adaptive generation of simple sequential programs. In J. J. Grefenstette, editor, Proceedings of an International Conference on Genetic Algorithms and Their Applications, Carnegie-Mellon University, July 24-26, 1985, Hillsdale NJ, 1985. Lawrence Erlbaum Associates.
[3] M. Hutter. The fastest and shortest algorithm for all well-defined problems. International Journal of Foundations of Computer Science, 13(3):431–443, 2002.
[4] L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: a survey. Journal of AI Research, 4:237–285, 1996.
[5] P. Langley. Learning to search: from weak methods to domain-specific heuristics. Cognitive Science, 9:217–260, 1985.
[6] L. A. Levin. Universal sequential search problems.
Problems of Information Transmission, 9(3):265–266, 1973.
[7] M. Li and P. M. B. Vitányi. An Introduction to Kolmogorov Complexity and its Applications (2nd edition). Springer, 1997.
[8] C. H. Moore and G. C. Leach. FORTH - a language for interactive computing, 1970. http://www.ultratechnology.com.
[9] J. Schmidhuber. Optimal ordered problem solver. Technical Report IDSIA-12-02, arXiv:cs.AI/0207097 v1, IDSIA, Manno-Lugano, Switzerland, July 2002.
[10] J. Schmidhuber, J. Zhao, and M. Wiering. Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement. Machine Learning, 28:105–130, 1997.
[11] R. J. Solomonoff. An application of algorithmic probability to problems in artificial intelligence. In L. N. Kanal and J. F. Lemmer, editors, Uncertainty in Artificial Intelligence, pages 473–491. Elsevier Science Publishers, 1986.
Rate Distortion Function in the Spin Glass State: a Toy Model

Tatsuto Murayama and Masato Okada
Laboratory for Mathematical Neuroscience
RIKEN Brain Science Institute
Saitama, 351-0198, JAPAN
{murayama,okada}@brain.riken.go.jp

Abstract

We applied statistical mechanics to an inverse problem of linear mapping to investigate the physics of optimal lossy compressions. We used the replica symmetry breaking technique with a toy model to demonstrate Shannon's result. The rate distortion function, which is widely known as the theoretical limit of the compression with a fidelity criterion, is derived. Numerical study shows that sparse constructions of the model provide suboptimal compressions.

1 Introduction

Many information-science studies are very similar to those of statistical physics. Statistical physics and information science may have been expected to be directed towards common objectives since Shannon formulated an information theory based on the concept of entropy. However, it would have been difficult to envisage how this actually happened: the physics of disordered systems, and spin glass theory in particular, at its maturity naturally came to include some important aspects of information sciences, thus reuniting the two disciplines. This cross-disciplinary field can thus be expected to develop much further beyond current perspectives in the future [1]. The areas where these relations are particularly strong are Shannon's coding theory [2] and classical spin systems with quenched disorder, i.e., the replica theory of disordered statistical systems [3]. Triggered by the work of Sourlas [4], these links have recently been examined in the area of matrix-based error corrections [5, 6], network-based compressions [7], and turbo decoding [8]. Recent results of these topics are mostly based on the replica technique.
Without exception, their basic characteristics (such as channel capacity, entropy rate, or achievable rate region) are only captured by the concept of a phase transition with a first-order jump between the optimal and the other solutions arising in the scheme. However, the research in the cross-disciplinary field so far can be categorized as a so-called 'zero-distortion' decoding scheme in terms of information theory: the system requires perfect reproduction of the input alphabets [2]. Here, the same spin glass techniques should be useful to describe the physics of systems with a fidelity criterion; i.e., a certain degree of information distortion is tolerated when reproducing the alphabets. This framework is called the rate distortion theory [9, 10]. Though practical information processing, where input alphabets are mostly represented by continuous variables, requires taking the concept of distortion into account, statistical physics has so far employed only a few such approaches [11, 12]. In this paper, we introduce a prototype that is suitable for cross-disciplinary study. We analyze how information distortion can be described by the concepts of statistical physics. More specifically, we study the inverse problem of a Sourlas-type decoding problem by using the framework of replica symmetry breaking (RSB) of diluted disordered systems [13]. According to our analysis, this simple model provides an optimal compression scheme for an arbitrary fidelity-criterion degree, though the encoding procedure remains an NP-complete problem without any practical encoders. The paper is organized as follows. In Section 2, we briefly review the concept of the rate distortion theory as well as the main results related to our purpose. In Section 3, we introduce a toy model. In Section 4, we obtain results consistent with information theory. Conclusions are given in the last section. Detailed derivations will be reported elsewhere.
2 Review: Rate Distortion Theory

We briefly recall the definitions of the concepts of the rate distortion theory and state the simplest version of the main result at the end of this section. Let J be a discrete random variable with alphabet J. Assume that we have a source that produces a sequence J1, J2, · · · , JM, where each symbol is randomly drawn from a distribution. We will assume that the alphabet is finite. Throughout this paper we use vector notation to represent sequences for convenience of explanation: J = (J1, J2, · · · , JM)T ∈ J^M. Here, the encoder describes the source sequence J ∈ J^M by a codeword ξ = f(J) ∈ X^N. The decoder represents J by an estimate ˆJ = g(ξ) ∈ ˆJ^M, as illustrated in Figure 1. Note that M represents the length of a source sequence, while N represents the length of a codeword. Here, the rate is defined by R = N/M. Note that the relation N < M always holds when a compression is considered; therefore, R < 1 also holds.

Definition 2.1 A distortion function is a mapping d : J × ˆJ → R+ (1) from the set of source alphabet-reproduction alphabet pairs into the set of non-negative real numbers.

Intuitively, the distortion d(J, ˆJ) is a measure of the cost of representing the symbol J by the symbol ˆJ. This definition is quite general. In most cases, however, the reproduction alphabet ˆJ is the same as the source alphabet J. Hereafter, we set ˆJ = J and the following distortion measure is adopted as the fidelity criterion:

Definition 2.2 The Hamming distortion is given by d(J, ˆJ) = 0 if J = ˆJ, and d(J, ˆJ) = 1 if J ≠ ˆJ, (2) which results in a probable error distortion, since the relation E[d(J, ˆJ)] = P[J ≠ ˆJ] holds, where E[·] represents the expectation and P[·] the probability of its argument.

The distortion measure is so far defined on a symbol-by-symbol basis. We extend the definition to sequences:

Definition 2.3 The distortion between sequences J, ˆJ ∈ J^M is defined by d(J, ˆJ) = (1/M) Σ_{j=1}^{M} d(Jj, ˆJj) .
(3)

Therefore, the distortion for a sequence is the average distortion per symbol of the elements of the sequence.

Definition 2.4 The distortion associated with the code is defined as D = E[d(J, ˆJ)] , (4) where the expectation is with respect to the probability distribution on J.

A rate distortion pair (R, D) is said to be achievable if a sequence of rate distortion codes (f, g) exists with E[d(J, ˆJ)] ≤ D in the limit M → ∞. Moreover, the closure of the set of achievable rate distortion pairs is called the rate distortion region for a source. Finally, we can define a function to describe the boundary:

Definition 2.5 The rate distortion function R(D) is the infimum of rates R, so that (R, D) is in the rate distortion region of the source for a given distortion D.

As in [7], we restrict ourselves to a binary source J with a Hamming distortion measure for simplicity. We assume that binary alphabets are drawn randomly, i.e., the source is not biased, ruling out the possibility of compression due merely to redundancy. We now find the description rate R(D) required to describe the source with an expected proportion of errors less than or equal to D. In this simplified case, according to Shannon, the boundary can be written as follows.

Theorem 2.1 The rate distortion function for a binary source with Hamming distortion is given by R(D) = 1 − H(D) for 0 ≤ D ≤ 1/2, and R(D) = 0 for 1/2 < D, (5) where H(·) represents the binary entropy function.

J −→ f (encoder) −→ ξ −→ g (decoder) −→ ˆJ
Figure 1: Rate distortion encoder and decoder

3 General Scenario

In this section, we introduce a toy model for lossy compression. We use the inverse problem of Sourlas-type decoding to realize the optimal encoding scheme [4]. As in the previous section, we assume that binary alphabets are drawn randomly from a non-biased source and that the Hamming distortion measure is selected for the fidelity criterion. We take the Boolean representation of the binary alphabet J, i.e., we set J = {0, 1}.
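Theorem 2.1's boundary is easy to evaluate numerically. The following sketch (hypothetical Python, used here only for illustration) computes R(D) for the non-biased binary source:

```python
import math

# Theorem 2.1: R(D) = 1 - H(D) for 0 <= D <= 1/2, and 0 beyond D = 1/2,
# where H is the binary entropy function.
def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def rate_distortion(D):
    return 1.0 - binary_entropy(D) if D <= 0.5 else 0.0

rate_distortion(0.0)   # 1.0: zero distortion costs the full rate R = 1
rate_distortion(0.5)   # 0.0: guessing alone already achieves D = 1/2
```

The curve interpolates smoothly between these endpoints; e.g., tolerating D = 0.25 cuts the required rate to about 0.19.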
We also set X = {0, 1} to represent the codewords throughout the rest of this paper. Let J be an M-bit source sequence, ξ an N-bit codeword, and ˆJ an M-bit reproduction sequence. Here, the encoding problem can be written as follows. Given a distortion D and a randomly-constructed Boolean matrix A of dimensionality M × N, we find the N-bit codeword sequence ξ which satisfies ˆJ = Aξ (mod 2) , (6) where the fidelity criterion D = E[d(J, ˆJ)] (7) holds, according to every M-bit source sequence J. Note that we applied modulo-2 arithmetic for the additive operations in (6). In our framework, decoding will just be a linear mapping ˆJ = Aξ, while encoding remains an NP-complete problem. Kabashima and Saad recently expanded on the work of Sourlas, which focused on the zero-rate limit, to an arbitrary-rate case [5]. We follow their construction of the matrix A, so we can treat practical cases. Let the Boolean matrix A be characterized by K ones per row and C per column. The finite, and usually small, numbers K and C define a particular code. The rate of our codes can be set to an arbitrary value by selecting the combination of K and C. We also use K and C as control parameters to define the rate R = K/C. If the value of K is small, i.e., the relation K ≪ N holds, the Boolean matrix A results in a very sparse matrix. By contrast, when we consider densely constructed cases, K must be extensively large, of O(N). We can also assume that K is not O(1) but K ≪ N holds. The codes within any parameter region, including the sparsely-constructed cases, will result in optimal codes, as we will conclude in the following section. This is one new finding of our analysis using statistical physics.
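The scheme of Eqs. (6)-(7) can be sketched at toy size. This hypothetical Python fragment uses a small dense random A in place of the paper's (K, C)-regular sparse construction, and an exhaustive encoder, since encoding is NP-complete in general:

```python
import itertools
import random

# Toy lossy compressor: decoding is the linear map Jhat = A xi (mod 2);
# the encoder searches all 2**N codewords for the one minimizing the
# Hamming distortion to the source sequence J.
M, N = 6, 3                               # rate R = N/M = 1/2
random.seed(0)                            # arbitrary reproducible A
A = [[random.randint(0, 1) for _ in range(N)] for _ in range(M)]

def decode(xi):
    return [sum(a * x for a, x in zip(row, xi)) % 2 for row in A]

def distortion(J, Jhat):
    return sum(j != jh for j, jh in zip(J, Jhat)) / len(J)

def encode(J):                            # brute force; NP-complete in general
    return min(itertools.product([0, 1], repeat=N),
               key=lambda xi: distortion(J, decode(xi)))

J = [1, 0, 1, 1, 0, 0]
D = distortion(J, decode(encode(J)))      # achieved distortion
```

Since the all-zero codeword already yields distortion at most 1/2 for any J, the brute-force encoder never does worse than D = 1/2, matching the trivial end of the rate distortion function.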
To facilitate the current investigation, we first map the problem to that of an Ising model with finite connectivity, following Sourlas' method. We use the Ising representation {1, −1} of the alphabets J and X rather than the Boolean one {0, 1}; the elements of the source J and the codeword sequence ξ are rewritten in Ising values by this mapping only, and the reproduction sequence ˆJ is generated by taking products of the relevant codeword sequence elements in the Ising representation, ˆJ⟨i1,i2,··· ,iK⟩ = ξi1 ξi2 · · · ξiK, where the indices i1, i2, · · · , iK correspond to the positions of the ones in a row of A, producing an Ising version of ˆJ. Note that the additive operation in the Boolean representation is translated into multiplication in the Ising one. Hereafter, we set Jj, ˆJj, ξi = ±1 while we do not change the notation, for simplicity. As we use statistical-mechanics techniques, we consider the source and codeword-sequence dimensionalities (M and N, respectively) to be infinite, keeping the rate R = N/M finite. To explore the system's capabilities, we examine the Hamiltonian

H(S) = − Σ_{⟨i1,··· ,iK⟩} A⟨i1,··· ,iK⟩ J⟨i1,··· ,iK⟩ Si1 · · · SiK , (8)

where we have introduced the dynamical variables Si to find the most suitable Ising codeword sequence ξ to provide the reproduction sequence ˆJ in the decoding stage. Elements of the sparse connectivity tensor A⟨i1,··· ,iK⟩ take the value one if the corresponding indices of codeword bits are chosen (i.e., if all corresponding indices of the matrix A are one) and zero otherwise; C ones per index i represent the system's degree of connectivity. To calculate the partition function Z(A, J) = Tr_{S} exp[−βH(S)], we apply the replica method following the calculation of Kabashima and Saad [5]. To calculate the replica free energy, we have to calculate the annealed average of the n-th power of the partition function by preparing n replicas.
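The translation of modulo-2 addition in the Boolean representation into multiplication in the Ising representation, under the mapping S = (−1)^x, can be checked exhaustively; a small sketch:

```python
import itertools

def to_ising(x):
    """Boolean {0,1} -> Ising {+1,-1} via S = (-1)**x = 1 - 2x."""
    return 1 - 2 * x

# Modulo-2 addition of Boolean bits corresponds to multiplication
# of the corresponding Ising spins; check all 8 three-bit cases.
for x1, x2, x3 in itertools.product((0, 1), repeat=3):
    boolean_sum = (x1 + x2 + x3) % 2
    ising_product = to_ising(x1) * to_ising(x2) * to_ising(x3)
    assert to_ising(boolean_sum) == ising_product

print("mapping verified for all 8 cases")
```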
Here we introduce the inverse temperature β, which can be interpreted as a measure of the system's sensitivity to distortions. As we will see in the following calculation, the optimal value of β is naturally determined when the consistency of the replica symmetry breaking scheme is considered [13, 3]. We use integral representations of the Dirac δ function to enforce the restriction of C bonds per index on A [14]:

δ( Σ_{⟨i2,i3,··· ,iK⟩} A⟨i,i2,··· ,iK⟩ − C ) = ∮ (dZ/2π) Z^{−(C+1)} Z^{Σ_{⟨i2,i3,··· ,iK⟩} A⟨i,i2,··· ,iK⟩} , (9)

giving rise to a set of order parameters

q_{α,β,··· ,γ} = (1/N) Σ_{i=1}^{N} Z_i S_i^α S_i^β · · · S_i^γ , (10)

where α, β, · · · , γ represent replica indices, and the average over J is taken with respect to the probability distribution

P[J⟨i1,i2,··· ,iK⟩] = (1/2) δ(J⟨i1,i2,··· ,iK⟩ − 1) + (1/2) δ(J⟨i1,i2,··· ,iK⟩ + 1) , (11)

as we consider non-biased source sequences for simplicity. Assuming replica symmetry, we use a different representation for the order parameters and the related conjugate variables [14]:

q_{α,β,··· ,γ} = q ∫ dx π(x) tanh^l(βx) , (12)
ˆq_{α,β,··· ,γ} = ˆq ∫ dˆx ˆπ(ˆx) tanh^l(βˆx) , (13)

where q = [(K − 1)! NC]^{1/K} and ˆq = [(K − 1)!]^{−1/K} [NC]^{(K−1)/K} are normalization constants, and π(x) and ˆπ(ˆx) represent probability distributions related to the integration variables. Here l denotes the number of related replica indices. Throughout this paper, integrals with unspecified limits denote integrals over the range (−∞, +∞). We then obtain an expression for the free energy per source bit in terms of the probability distributions π(x) and ˆπ(ˆx):

−βf = (1/M) ⟨⟨ln Z(A, J)⟩⟩
    = ln cosh β + ∫ [Π_{l=1}^{K} dx_l π(x_l)] ⟨ ln( 1 + tanh βJ Π_{l=1}^{K} tanh βx_l ) ⟩_J
      − K ∫ dx π(x) ∫ dˆx ˆπ(ˆx) ln( 1 + tanh βx tanh βˆx )
      + (K/C) ∫ [Π_{l=1}^{C} dˆx_l ˆπ(ˆx_l)] ln[ Tr_S Π_{l=1}^{C} ( 1 + S tanh βˆx_l ) ] , (14)

where ⟨⟨· · ·⟩⟩ denotes the average over the quenched randomness of A and J.
The saddle point equations with respect to the probability distributions provide a set of relations between π(x) and ˆπ(ˆx):

π(x) = ∫ [Π_{l=1}^{C−1} dˆx_l ˆπ(ˆx_l)] δ( x − Σ_{l=1}^{C−1} ˆx_l ) ,
ˆπ(ˆx) = ∫ [Π_{l=1}^{K−1} dx_l π(x_l)] ⟨ δ[ ˆx − (1/β) tanh^{−1}( tanh βJ Π_{l=1}^{K−1} tanh βx_l ) ] ⟩_J . (15)

Using the result obtained for the free energy, we can easily perform further straightforward calculations to find all other observable thermodynamical quantities, including the internal energy

e = (1/M) ⟨⟨ Tr_S H(S) e^{−βH(S)} / Z(A, J) ⟩⟩ = −(1/M) (∂/∂β) ⟨⟨ln Z(A, J)⟩⟩ , (16)

which records the reproduction errors. Therefore, within the considered replica symmetric ansatz, a complete solution of the problem seems easily obtainable; unfortunately, it is not. The set of equations (15) may be solved numerically for general β, K, and C. However, there exists an analytical solution of these equations, which we consider first. Two dominant solutions emerge, corresponding to the paramagnetic and the spin glass phases. The paramagnetic solution, which is also valid for general β, K, and C, is of the form π(x) = δ(x) and ˆπ(ˆx) = δ(ˆx); it has the lowest possible free energy per bit, fPARA = −1, although its entropy sPARA = (R − 1) ln 2 is positive only for R ≥ 1. This means that the true solution must lie somewhere beyond the replica symmetric ansatz. As a first step, in the so-called one-step replica symmetry breaking (RSB) scheme, the n replicas are divided into n/m groups, each containing m replicas. Pathological aspects due to replica symmetry may be avoided by making use of the newly-defined degree of freedom m. This one-step RSB scheme is known to provide the exact solution in the random energy model limit [15], while our analysis is not restricted to this case. The spin glass solution can be calculated for both the replica symmetric and the one-step RSB ansatz.
The former reduces to the paramagnetic solution (fRS = fPARA), which is unphysical for R < 1, while the latter yields π1RSB(x) = δ(x) and ˆπ1RSB(ˆx) = δ(ˆx), with m = βg(R)/β and βg obtained as the root of the equation enforcing a non-negative replica symmetric entropy,

sRS = ln cosh βg − βg tanh βg + R ln 2 = 0 , (17)

with a free energy

f1RSB = −(1/βg) ln cosh βg − (R/βg) ln 2 . (18)

Since the target bit of the estimation in this model is J⟨i1,··· ,iK⟩ and its estimator is the product Si1 · · · SiK, a performance measure for the information corruption is the per-bond energy e. According to the one-step RSB framework, the lowest free energy can be calculated from the probability distributions π1RSB(x) and ˆπ1RSB(ˆx) satisfying the saddle point equations (15) at the characteristic inverse temperature βg, where the replica symmetric entropy sRS vanishes. Therefore, f1RSB equals e1RSB. Let the Hamming distortion be our fidelity criterion. The distortion D associated with this code is given by the fraction of the free energies that arise in the spin glass phase:

D = (f1RSB − fRS) / (2 |fRS|) = (1 − tanh βg) / 2 . (19)

Here, we substitute the spin glass solutions into the expression, making use of the fact that the replica symmetric entropy sRS vanishes at a consistent βg, which is determined by (17). Using (17) and (19), simple algebra gives the relation between the rate R = N/M and the distortion D in the form R = 1 − H(D), which coincides with the rate distortion function, recovering Theorem 2.1. Surprisingly, we do not observe any first-order jumps between analytical solutions. Recently, we have seen that many approaches to the family of codes characterized by linear encoding operations result in a quite different picture: the optimal boundary is constructed in the random energy model limit and is well captured by the concept of a first-order jump. Our analysis of this model, viewed as a kind of inverse problem, provides an exception.
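The chain from (17) through (19) to R = 1 − H(D) can be verified numerically; the sketch below (a check I added, using simple bisection for βg, which is valid because sRS decreases monotonically in β) recovers the rate distortion function to high precision:

```python
import math

def rs_entropy(beta, R):
    """Replica symmetric entropy s_RS of eq. (17), in nats."""
    return math.log(math.cosh(beta)) - beta * math.tanh(beta) + R * math.log(2)

def beta_g(R, lo=1e-9, hi=50.0):
    """Root of s_RS(beta) = 0 by bisection; s_RS is positive at beta -> 0
    and negative for large beta when R < 1."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if rs_entropy(mid, R) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def binary_entropy(p):
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

for R in (0.25, 0.5, 0.75):
    bg = beta_g(R)
    D = 0.5 * (1.0 - math.tanh(bg))              # eq. (19)
    print(R, round(1.0 - binary_entropy(D), 6))  # recovers R = 1 - H(D)
```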
Many optimal conditions in textbook information theory may be well described without the concept of a first-order phase transition from the viewpoint of statistical physics. We now investigate the possibility of other solutions satisfying (15) in the case of finite K and C. Since the saddle point equations (15) appear difficult to treat by analytical arguments, we resort to numerical evaluation, representing the probability distributions π1RSB(x) and ˆπ1RSB(ˆx) by up to 10^5 bin models and carrying out the integrations using Monte Carlo methods. Note that the characteristic inverse temperature βg is also evaluated numerically by using (17). We set K = 2 and selected various values of C to demonstrate the performance of stable solutions. The numerical results obtained by the one-step RSB scenario show suboptimal properties [Figure 2]. This strongly implies that the analytical solution is not the only stable solution. This conjecture might be verified elsewhere by carrying out large-scale simulations.

5 Conclusions

Two points should be noted. Firstly, we found consistency between rate distortion theory and the Parisi one-step RSB scheme. Secondly, we conjectured that the analytical solution, which is consistent with Shannon's result, is not the only stable solution in some situations. We are currently working on the verification.

Acknowledgments

We thank Yoshiyuki Kabashima and Shun-ichi Amari for their comments on the manuscript. We also thank Hiroshi Nagaoka and Te Sun Han for giving us valuable references. This research is supported by the Special Postdoctoral Researchers Program at RIKEN.

References

[1] H. Nishimori. Statistical Physics of Spin Glasses and Information Processing. Oxford University Press, 2001. [2] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, 1991. [3] V. Dotsenko. Introduction to the Replica Theory of Disordered Statistical Systems. Cambridge University Press, 2001. [4] N. Sourlas.
Spin-glass models as error-correcting codes. Nature, 339:693-695, 1989. [5] Y. Kabashima and D. Saad. Statistical mechanics of error-correcting codes. Europhys. Lett., 45:97-103, 1999. [6] Y. Kabashima, T. Murayama, and D. Saad. Typical performance of Gallager-type error-correcting codes. Phys. Rev. Lett., 84:1355-1358, 2000.

Figure 2: Numerically-constructed stable solutions: stable solutions of (15) for finite values of K and C are calculated by using Monte Carlo methods. We use 10^5 bin models to approximate the probability distributions π1RSB(x) and ˆπ1RSB(ˆx), starting from various initial conditions. The distributions converge to continuous ones, giving suboptimal performance. (◦) K = 2 and C = 3, 4, · · · , 12; the solid line indicates the rate distortion function R(D). Inset: snapshots of the distributions, where C = 3 and βg = 2.35.

[7] T. Murayama. Statistical mechanics of the data compression theorem. J. Phys. A, 35:L95-L100, 2002. [8] A. Montanari and N. Sourlas. The statistical mechanics of turbo codes. Eur. Phys. J. B, 18:107-119, 2000. [9] C. E. Shannon. Coding theorems for a discrete source with a fidelity criterion. IRE National Convention Record, Part 4, pages 142-163, 1959. [10] T. Berger. Rate Distortion Theory: A Mathematical Basis for Data Compression. Prentice-Hall, 1971. [11] T. Hosaka, Y. Kabashima, and H. Nishimori. Statistical mechanics of lossy data compression using a non-monotonic perceptron. cond-mat/0207356. [12] Y. Matsunaga and H. Yamamoto. A coding theorem for lossy data compression by LDPC codes. In Proceedings 2002 IEEE International Symposium on Information Theory, page 461, 2002. [13] M. Mezard, G. Parisi, and M. Virasoro. Spin-Glass Theory and Beyond. World Scientific, 1987. [14] K. Y. M. Wong and D. Sherrington. Graph bipartitioning and spin glasses on a random network of fixed finite valence. J. Phys. A, 20:L793-L799, 1987.
[15] B. Derrida. The random energy model, an exactly solvable model of disordered systems. Phys. Rev. B, 24:2613–2626, 1981.
Adaptive Nonlinear System Identification with Echo State Networks Herbert Jaeger International University Bremen D-28759 Bremen, Germany h.jaeger@iu-bremen.de Abstract Echo state networks (ESN) are a novel approach to recurrent neural network training. An ESN consists of a large, fixed, recurrent "reservoir" network, from which the desired output is obtained by training suitable output connection weights. Determination of optimal output weights becomes a linear, uniquely solvable task of MSE minimization. This article reviews the basic ideas and describes an online adaptation scheme based on the RLS algorithm known from adaptive linear systems. As an example, a 10th-order NARMA system is adaptively identified. The known benefits of the RLS algorithm carry over from linear systems to nonlinear ones; specifically, the convergence rate and misadjustment can be determined at design time. 1 Introduction It is fair to say that difficulties with existing algorithms have so far precluded supervised training techniques for recurrent neural networks (RNNs) from widespread use. Echo state networks (ESNs) provide a novel and easier-to-manage approach to supervised training of RNNs. A large (order of 100s of units) RNN is used as a "reservoir" of dynamics which can be excited by suitably presented input and/or fed-back output. The connection weights of this reservoir network are not changed by training. In order to compute a desired output dynamics, only the weights of connections from the reservoir to the output units are calculated. This boils down to a linear regression. The theory of ESNs, references and many examples can be found in [5] [6]. A tutorial is [7]. A similar idea has recently been independently investigated in a more biologically oriented setting under the name of "liquid state networks" [8] [9].
In this article I describe how ESNs can be conjoined with the "recursive least squares" (RLS) algorithm, a method for fast online adaptation known from linear systems. The resulting RLS-ESN is capable of tracking a 10th-order nonlinear system with high quality in convergence speed and residual error. Furthermore, the approach yields a priori estimates of tracking performance parameters and thus allows one to design nonlinear trackers according to specifications.¹

¹ All algorithms and calculations described in this article are contained in a tutorial Mathematica notebook which can be fetched from http://www.ais.fraunhofer.de/INDY/ESNresources.html.

Article organization. Section 2 recalls the basic ideas and definitions of ESNs and introduces an augmentation of the basic technique. Section 3 demonstrates ESN offline learning on the 10th-order system identification task. Section 4 describes the principles of using the RLS algorithm with ESN networks and presents a simulation study. Section 5 wraps up.

2 Basic ideas of echo state networks

For the sake of a simple notation, in this article I address only single-input, single-output systems (general treatment in [5]). We consider a discrete-time "reservoir" RNN with N internal network units, a single extra input unit, and a single extra output unit. The input at time n ≥ 1 is u(n), activations of internal units are x(n) = (x1(n), ..., xN(n)), and the activation of the output unit is y(n). Internal connection weights are collected in an N × N matrix W = (wij), weights of connections going from the input unit into the network in an N-element (column) weight vector w^in = (w_i^in), and the N + 1 (input-and-network)-to-output connection weights in an (N + 1)-element (row) vector w^out = (w_i^out). The output weights w^out will be learned; the internal weights W and input weights w^in are fixed before learning, typically in a sparse random connectivity pattern. Figure 1 sketches the setup used in this article.

Figure 1: Basic setup of ESN. Solid arrows: fixed weights; dashed arrows: trainable weights.
The activation of internal units and the output unit is updated according to

x(n + 1) = f(W x(n) + w^in u(n + 1) + v(n + 1)) , (1)
y(n + 1) = f^out(w^out (u(n + 1), x(n + 1))) , (2)

where f stands for an element-wise application of the unit nonlinearity, for which we here use tanh; v(n + 1) is an optional noise vector; (u(n + 1), x(n + 1)) is a vector concatenated from u(n + 1) and x(n + 1); and f^out is the output unit's nonlinearity (tanh will be used here, too). Training data is a stationary I/O signal (u_teach(n), y_teach(n)). When the network is updated according to (1), then under certain conditions the network state becomes asymptotically independent of initial conditions. More precisely, if the network is started from two arbitrary states x(0), x̃(0) and is run with the same input sequence in both cases, the resulting state sequences x(n), x̃(n) converge to each other. If this condition holds, the reservoir network state asymptotically depends only on the input history, and the network is said to be an echo state network (ESN). A sufficient condition for the echo state property is contractivity of W. In practice it was found that a weaker condition suffices, namely, to ensure that the spectral radius |λmax| of W is less than unity. [5] gives a detailed account. Consider the task of computing the output weights such that the teacher output is approximated by the network. In the ESN approach, this task is spelled out concretely as follows: compute w^out such that the training error

((f^out)^{-1} y_teach(n) − w^out (u_teach(n), x(n)))² (3)

is minimized in the mean square sense. Note that the effect of the output nonlinearity is undone by (f^out)^{-1} in this error definition. We dub (f^out)^{-1} y_teach(n) the teacher pre-signal and w^out (u_teach(n), x(n)) the network's pre-output. The computation of w^out is a linear regression.
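The echo state property is easy to observe numerically. The sketch below (my construction; the seed, 5% connectivity and spectral radius 0.8 are illustrative choices, not prescribed by the paper) drives one reservoir from two arbitrary initial states with the same input sequence and shows the states converging:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100

# Sparse random reservoir, rescaled so the spectral radius is 0.8
# (the practical condition for the echo state property mentioned above).
W = rng.uniform(-1, 1, (N, N)) * (rng.random((N, N)) < 0.05)
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-0.1, 0.1, N)

def final_state(x, u_seq):
    """Iterate x(n+1) = tanh(W x(n) + w_in u(n+1)) and return the last x."""
    for u in u_seq:
        x = np.tanh(W @ x + w_in * u)
    return x

u_seq = rng.uniform(0, 0.5, 500)
x_a = final_state(rng.normal(size=N), u_seq)  # two arbitrary initial states,
x_b = final_state(rng.normal(size=N), u_seq)  # same input sequence
print(f"state distance after 500 steps: {np.linalg.norm(x_a - x_b):.2e}")
```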
Here is a sketch of an offline algorithm for the entire learning procedure:

1. Fix an RNN with a single input and a single output unit, scaling the weight matrix W such that |λmax| < 1 obtains.
2. Run this RNN by driving it with the teaching input signal. Dismiss data from the initial transient and collect the remaining input+network states (u_teach(n), x_teach(n)) row-wise into a matrix M. Simultaneously, collect the remaining training pre-signals (f^out)^{-1} y_teach(n) into a column vector r.
3. Compute the pseudo-inverse M^{-1}, and put w^out = (M^{-1} r)^T (where T denotes transpose).
4. Write w^out into the output connections; the ESN is now trained.

The modeling power of an ESN grows with network size. A cheaper way to increase the power is to use additional nonlinear transformations of the network state x(n) for computing the network output in (2). We use here a squared version of the network state. Let w^out_squares denote a length-(2N+2) output weight vector and x_squares(n) the length-(2N+2) (column) vector (u(n), x1(n), ..., xN(n), u²(n), x1²(n), ..., xN²(n)). Keep the network update (1) unchanged, but compute outputs with the following variant of (2):

y(n + 1) = f^out(w^out_squares x_squares(n + 1)). (4)

The "reservoir" and the input are now tapped by linear and quadratic connections. The learning procedure remains linear and now goes like this:

1. (unchanged)
2. Drive the ESN with the training input. Dismiss the initial transient and collect the remaining augmented states x_squares(n) row-wise into M. Simultaneously, collect the training pre-signals (f^out)^{-1} y_teach(n) into a column vector r.
3. Compute the pseudo-inverse M^{-1}, and put w^out_squares = (M^{-1} r)^T.
4. The ESN is now ready for exploitation, using output formula (4).

3 Identifying a 10th-order system: offline case

In this section the workings of the augmented algorithm will be demonstrated on a nonlinear system identification task. The system was introduced in a survey-and-unification paper [1].
It is a 10th-order NARMA system:

d(n + 1) = 0.3 d(n) + 0.05 d(n) [Σ_{i=0}^{9} d(n − i)] + 1.5 u(n − 9) u(n) + 0.1. (5)

Network setup. An N = 100 ESN was prepared by fixing a random, sparse connection weight matrix W (connectivity 5%, non-zero weights sampled from a uniform distribution over [−1, 1]; the resulting raw matrix was re-scaled to a spectral radius of 0.8, thus ensuring the echo state property). An input unit was attached with a random weight vector w^in sampled from a uniform distribution over [−0.1, 0.1].

Training data and training. An I/O training sequence was prepared by driving the system (5) with an i.i.d. input sequence sampled from the uniform distribution over [0, 0.5], as in [1]. The network was run according to (1) with the training input for 1200 time steps, with uniform noise v(n) of size 0.0001. Data from the first 200 steps were discarded. The remaining 1000 network states were entered into the augmented training algorithm, and a length-202 augmented output weight vector w^out_squares was calculated.

Testing. The learnt output vector was installed and the network was run from a zero starting state with newly created testing input for 2200 steps, of which the first 200 were discarded. From the remaining 2000 steps, the test error NMSE_test = E[(y(n) − d(n))²] / E[(d(n) − E[d(n)])²] was estimated. A value of NMSE_test ≈ 0.032 was found.

Comments. (1) The noise term v(n) functions as a regularizer, slightly compromising the training error but improving the test error. (2) Generally, the larger an ESN, the more training data is required and the more precise the learning. Set up exactly like the described 100-unit example, an augmented 20-unit ESN trained on 500 data points gave NMSE_test ≈ 0.31, a 50-unit ESN trained on 1000 points gave NMSE_test ≈ 0.084, and a 400-unit ESN trained on 4000 points gave NMSE_test ≈ 0.0098.

Comparison. The best NMSE training [!]
error obtained in [1] on a length-200 training sequence was NMSE_train ≈ 0.241.² However, the level of precision reported in [1] and many other published papers about RNN training appears to be based on suboptimal training schemes. After submission of this paper I went into a friendly modeling competition with Danil Prokhorov, who expertly applied EKF-BPPT techniques [3] to the same tasks. His results improve on the [1] results by an order of magnitude and reach a slightly better precision than the results reported here.

² The authors miscalculated their NMSE because they used a formula for zero-mean signals. I re-calculated the value NMSE_train ≈ 0.241 from their reported best (miscalculated) NMSE of 0.015. The larger value agrees with the plots supplied in that paper.

4 Online adaptation of ESN networks

Because the determination of optimal (augmented) output weights is a linear task, standard recursive algorithms for MSE minimization known from adaptive linear signal processing can be applied to online ESN estimation. I assume that the reader is familiar with the basic idea of FIR tap-weight (Wiener) filters: i.e., that N input signals x1(n), ..., xN(n) are transformed into an output signal y(n) by an inner product with a tap-weight vector (w1, ..., wN): y(n) = w1 x1(n) + ... + wN xN(n). In the ESN context, the input signals are the 2N + 2 components of the augmented input+network state vector, the tap-weight vector is the augmented output weight vector, and the output signal is the network pre-output (f^out)^{-1} y(n).

4.1 A refresher on adaptive linear system identification

For a recursive online estimation of tap-weight vectors, "recursive least squares" (RLS) algorithms are widely used in linear signal processing when fast convergence is of prime importance. A good introduction to RLS is given in [2], whose notation I follow.
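For concreteness, one exponentially-weighted RLS update can be sketched as follows; this is the generic textbook recursion (cf. [2]), not necessarily the exact variant used in the paper, demonstrated here on a toy tap-weight tracking problem of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)

def rls_step(w, P, x, d, lam=0.995):
    """One exponentially-weighted RLS update: returns the new tap-weight
    vector w and inverse-correlation matrix P."""
    Px = P @ x
    k = Px / (lam + x @ Px)            # gain vector
    w = w + k * (d - w @ x)            # correct with the a-priori error
    P = (P - np.outer(k, Px)) / lam    # Riccati-type update of P
    return w, P

# Toy tracking problem: identify a tap-weight vector that switches halfway.
n_taps, T = 10, 4000
w_true_a = rng.normal(size=n_taps)
w_true_b = rng.normal(size=n_taps)
w, P = np.zeros(n_taps), np.eye(n_taps) * 100.0
errs = []
for t in range(T):
    x = rng.normal(size=n_taps)
    w_true = w_true_a if t < T // 2 else w_true_b
    d = w_true @ x + 1e-3 * rng.normal()
    errs.append((d - w @ x) ** 2)
    w, P = rls_step(w, P, x, d)

# With lambda = 0.995 the time constant is roughly 1/(1-lambda) = 200 steps,
# so the filter reconverges quickly after the switch at t = 2000.
print(f"MSE just after switch: {np.mean(errs[2000:2050]):.4f}")
print(f"MSE after reconvergence: {np.mean(errs[-500:]):.6f}")
```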
An online algorithm in the augmented ESN setting should do the following: given an open-ended, typically non-stationary training I/O sequence (u_teach(n), y_teach(n)), at each time n ≥ 1 determine an augmented output weight vector w^out_squares(n) which yields a good model of the current teacher system. Formally, an RLS algorithm for the ESN output weight update minimizes the exponentially discounted squared "pre-error"

Σ_{k=1}^{n} λ^{n−k} ((f^out)^{-1} y_teach(k) − (f^out)^{-1} y_[n](k))² , (6)

where λ < 1 is the forgetting factor and y_[n](k) is the model output that would be obtained at time k if a network with the current output weights w^out_squares(n) were employed at all times k = 1, ..., n. There are many variants of RLS algorithms minimizing (6), differing in their tradeoffs between computational cost, simplicity, and numerical stability. I use a "vanilla" version, which is detailed in Table 12.1 of [2] and in the web tutorial package accompanying this paper. Two parameters characterise the tracking performance of an RLS algorithm: the misadjustment M and the convergence time constant τ. The misadjustment gives the ratio between the excess MSE (or excess NMSE) incurred by the fluctuations of the adaptation process and the optimal steady-state MSE that would be obtained in the limit of offline training on infinite stationary training data. For instance, a misadjustment of M = 0.3 means that the tracking error of the adaptive algorithm in a steady-state situation exceeds the theoretically achievable optimum (with the same tap-weight vector length) by 30%. The time constant τ associated with an RLS algorithm determines the exponent of the MSE convergence, e^{−n/τ}. For example, τ = 200 would imply an excess MSE reduction by 1/e every 200 steps. Misadjustment and convergence exponent are related to the forgetting factor λ and the tap-vector length N through

M ≈ N (1 − λ) / (1 + λ) and τ ≈ 1 / (1 − λ) . (7)

4.2 Case study: RLS-ESN for our 10th-order system

Eqns.
(7) can be used to predict/design the tracking characteristics of an RLS-powered ESN. I will demonstrate this with the 10th-order system (5). I re-use the same augmented 100-unit ESN, but now determine its 2N + 2 output weights online with RLS. Setting λ = 0.995 and considering N = 202, Eqns. (7) yield a misadjustment of M = 0.5 and a time constant τ ≈ 200. Since the asymptotically optimal NMSE is approximately the NMSE of the offline-trained network, namely NMSE ≈ 0.032, the misadjustment M = 0.5 lets us expect an NMSE of 0.032 × 150% ≈ 0.048 for the online adaptation after convergence. The time constant τ ≈ 200 makes us expect NMSE convergence to the expected asymptotic NMSE by a factor of 1/e every 200 steps.

Training data. Experiments with the system (5) revealed that the system sometimes explodes when driven with i.i.d. input from [0, 0.5]. To bound outputs, I wrapped the r.h.s. of (5) with a tanh. Furthermore, I replaced the original constants 0.3, 0.05, 1.5, 0.1 by free parameters α, β, γ, δ, to obtain

d(n + 1) = tanh(α d(n) + β d(n) [Σ_{i=0}^{9} d(n − i)] + γ u(n − 9) u(n) + δ). (8)

This system was run for 10000 steps with an i.i.d. teacher input from [0, 0.5]. Every 2000 steps, α, β, γ, δ were assigned new random values taken from a ±50% interval around the respective original constants. Fig. 2A shows the resulting teacher output sequence, which clearly shows transitions between different "episodes" every 2000 steps.

Running the RLS-ESN algorithm. The ESN was started from a zero state and with a zero augmented output weight vector. It was driven by the teacher input, and a noise of size 0.0001 was inserted into the state update, as in the offline training. The RLS algorithm (with forgetting factor 0.995) was initialized according to the prescriptions given in [2] and then run together with the network updates, to compute from the augmented input+network states x(n) = (u(n), x1(n), ..., xN(n), u²(n), x1²(n), ..., xN²(n)) a sequence of augmented output weight vectors w^out_squares(n). These output weight vectors were used to calculate a network output y(n) = tanh(w^out_squares(n) x(n)).

Results. From the resulting length-10000 sequences of desired outputs d(n) and network productions y(n), NMSE's were numerically estimated by averaging within subsequent length-100 blocks. Fig. 2B gives a logarithmic plot. In the last three episodes, the exponential NMSE convergence after each episode-onset disruption is clearly recognizable. Also, the convergence speed matches the predicted time constant, as revealed by the τ = 200 slope line inserted in Fig. 2B. The dotted horizontal line in Fig. 2B marks the NMSE of the offline-trained ESN described in the previous section. Surprisingly, after convergence, the online NMSE is lower than the offline NMSE. This can be explained through the IIR (autoregressive) nature of the systems (5) and (8), which incurs long-term correlations in the signal d(n), or in other words, a nonstationarity of the signal on the timescale of the correlation lengths, even with fixed parameters α, β, γ, δ. This medium-term nonstationarity compromises the performance of the offline algorithm, whereas the online adaptation can to a certain degree follow it.

Fig. 2C is a logarithmic plot of the development of the mean absolute output weight size. It is apparent that after starting from zero, there is an initial exponential growth of the absolute values of the output weights, until a stabilization at a size of about 1000, whereafter the NMSE develops a regular pattern (Fig. 2B). Finally, Fig. 2D shows an overlay of d(n) (solid) with y(n) (dotted) of the last 100 steps in the experiment, visually demonstrating the precision after convergence.

A note on noise and stability.
Standard offline training of ESNs yields output weights whose absolute size depends on the noise inserted into the network during training: the larger the noise, the smaller the mean output weights (extensive discussion in [5]). In online training, a similar inverse correlation between output weight size (after settling on the plateau) and noise size can be observed. When the online learning experiment was run otherwise identically but without noise insertion, weights grew so large that the RLS algorithm entered a region of numerical instability. Thus, the noise term is crucial here for numerical stability, a condition familiar from EKF-based RNN training schemes [3], which are computationally closely related to RLS.

Figure 2: A. Teacher output. B. NMSE with predicted baseline and slope line. C. Development of weights. D. Last 100 steps: desired (solid) and network-predicted (dashed) signal. For details see text.

5 Discussion

Several of the well-known error-gradient-based RNN training algorithms can be used for online weight adaptation. The update costs per time step in the most efficient of those algorithms (overview in [1]) are O(N²), where N is the network size. Typically, standard approaches train small networks (order of N = 20), whereas ESNs typically rely on large networks for precision (order of N = 100). Thus, the RLS-based ESN online learning algorithm is typically more expensive than standard techniques. However, this drawback might be compensated by the following properties of RLS-ESN:
• Simplicity of design and implementation; robust behavior with little need for learning-parameter hand-tuning.
• Custom design of RLS-ESNs with prescribed tracking parameters, transferring well-understood linear-systems methods to nonlinear systems.
• Systems with long-lasting short-term memory can be learnt. Exploitable ESN memory spans grow with network size (analysis in [6]). Consider the 30th-order system d(n + 1) = tanh(0.2 d(n) + 0.04 d(n) [Σ_{i=0}^{29} d(n − i)] + 1.5 u(n − 29) u(n) + 0.001). It was learnt by a 400-unit augmented adaptive ESN with a test NMSE of 0.0081. The 51st (!) order system y(n + 1) = u(n − 10) u(n − 50) was learnt offline by a 400-unit augmented ESN with an NMSE of 0.213.³

All in all, on the kind of tasks considered above, adaptive (augmented) ESNs reach a similar level of precision as today's most refined gradient-based techniques. A given level of precision is attained in ESN vs. gradient-based techniques with a similar number of trainable weights (D. Prokhorov, private communication). Because gradient-based techniques train every connection weight in the RNN, whereas ESNs train only the output weights, the numbers of units of similarly performing standard RNNs vs. ESNs relate as N to N². Thus, RNNs are more compact than equivalent ESNs. However, when working with ESNs, for each new trained output signal one can re-use the same "reservoir", adding only N new connections and weights. This has for instance been exploited for robots at the AIS institute by simultaneously training multiple feature detectors from a single "reservoir" [4]. In this circumstance, with a growing number of simultaneously required outputs, the requisite net model sizes for ESNs vs. traditional RNNs become asymptotically equal. The size disadvantage of ESNs is further balanced by much faster offline training, greater simplicity, and the general possibility to exploit linear-systems expertise for nonlinear adaptive modeling.

³ See Mathematica notebook for details.

Acknowledgments

The results described in this paper were obtained while I worked at the Fraunhofer AIS Institute. I am greatly indebted to Thomas Christaller for unfaltering support.
Wolfgang Maass and Danil Prokhorov contributed motivating discussions and valuable references. An international patent application for the ESN technique was filed on October 13, 2000 (PCT/EP01/11490).

References

[1] A.F. Atiya and A.G. Parlos. New results on recurrent network training: Unifying the algorithms and accelerating convergence. IEEE Trans. Neural Networks, 11(3):697-709, 2000.

[2] B. Farhang-Boroujeny. Adaptive Filters: Theory and Applications. Wiley, 1998.

[3] L.A. Feldkamp, D.V. Prokhorov, C.F. Eagen, and F. Yuan. Enhanced multi-stream Kalman filter training for recurrent neural networks. In J.A.K. Suykens and J. Vandewalle, editors, Nonlinear Modeling: Advanced Black-Box Techniques, pages 29-54. Kluwer, 1998.

[4] J. Hertzberg, H. Jaeger, and F. Schönherr. Learning to ground fact symbols in behavior-based robots. In F. van Harmelen, editor, Proc. 15th Europ. Conf. on Art. Int. (ECAI 02), pages 708-712. IOS Press, Amsterdam, 2002.

[5] H. Jaeger. The "echo state" approach to analysing and training recurrent neural networks. GMD Report 148, GMD - German National Research Institute for Computer Science, 2001. http://www.gmd.de/People/Herbert.Jaeger/Publications.html.

[6] H. Jaeger. Short term memory in echo state networks. GMD Report 152, GMD - German National Research Institute for Computer Science, 2002. http://www.gmd.de/People/Herbert.Jaeger/Publications.html.

[7] H. Jaeger. Tutorial on training recurrent neural networks, covering BPTT, RTRL, EKF and the echo state network approach. GMD Report 159, Fraunhofer Institute AIS, 2002.

[8] W. Maass, T. Natschläger, and H. Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. http://www.cis.tugraz.at/igi/maass/psfiles/LSM-v106.pdf, 2002.

[9] W. Maass, T. Natschläger, and H. Markram. A model for real-time computation in generic neural microcircuits. In S. Becker, S. Thrun, and K.
Obermayer, editors, Advances in Neural Information Processing Systems 15 (Proc. NIPS 2002). MIT Press, 2002.
String Kernels, Fisher Kernels and Finite State Automata

Craig Saunders, John Shawe-Taylor, Alexei Vinokourov
Department of Computer Science, Royal Holloway, University of London
Email: {craig, jst, alexei}@cs.rhul.ac.uk

Abstract

In this paper we show how the generation of documents can be thought of as a k-stage Markov process, which leads to a Fisher kernel from which the n-gram and string kernels can be re-constructed. The Fisher kernel view gives a more flexible insight into the string kernel and suggests how it can be parametrised in a way that reflects the statistics of the training corpus. Furthermore, the probabilistic modelling approach suggests extending the Markov process to consider sub-sequences of varying length, rather than the standard fixed-length approach used in the string kernel. We give a procedure for determining which sub-sequences are informative features and hence generate a Finite State Machine model, which can again be used to obtain a Fisher kernel. By adjusting the parametrisation we can also influence the weighting received by the features. In this way we are able to obtain a logarithmic weighting in a Fisher kernel. Finally, experiments are reported comparing the different kernels using the standard Bag of Words kernel as a baseline.

1 Introduction

Recently the string kernel [6] has been shown to achieve good performance on text-categorisation tasks. The string kernel projects documents into a feature space indexed by all k-tuples of symbols for some fixed k. The strength of the feature indexed by the k-tuple u = (u_1, ..., u_k) for a document d is the sum over all occurrences of u as a subsequence (not necessarily contiguous) in d, where each occurrence is weighted by an exponentially decaying function of its length in d. This naturally extends the idea of an n-gram feature space where the only occurrences considered are contiguous ones.
The dimension of the feature space and the non-sparsity of even modestly sized documents make a direct computation of the feature vector for the string kernel infeasible. There is, however, a dynamic programming recursion that enables the semi-efficient evaluation of the kernel [6]. String kernels apparently make no use of the semantic prior knowledge that the structure of words can give, and yet they have been used with considerable success. The aim of this paper is to place the n-gram and string kernels in the context of probabilistic modelling of sequences, showing that they can be viewed as Fisher kernels of a Markov generation process. This immediately suggests ways of introducing weightings derived from refining the model based on the training corpus. Furthermore, this view also suggests extending consideration to subsequences of varying lengths in the same model. This leads to a Finite State Automaton again inferred from the data. The refined probabilistic model that this affords gives rise to two Fisher kernels depending on the parametrisation that is chosen, if we take the Fisher information matrix to be the identity. We give experimental evidence suggesting that the new kernels are capturing useful properties of the data while overcoming the computational difficulties of the original string kernel.

2 The Fisher View of the n-gram and String kernels

In this section we show how the string kernel can be thought of as a type of Fisher kernel [2], where the fixed-length subsequences used as the features in the string kernel correspond to the parameters for building the model. In order to give some insight into the kernel we first give a Fisher formulation of the n-gram kernel (i.e. the string kernel which considers only contiguous sequences), and then extend this to the full string kernel. Let us assume that we have some document d of length s which is a sequence of symbols belonging to some alphabet A, i.e. d_i ∈ A, i = 1, ..., s.
We can consider document d as being generated by a k-stage Markov process. According to this view, for sequences u ∈ A^{k−1} we can define the probability of observing a symbol x after a sequence u as p_{u→x}. Sequences of k symbols therefore index the parameters of our model. The probability of a document d being generated by the model is therefore

P(d) = ∏_{j=k}^{|d|} p_{d[j−k+1:j−1]→d_j},

where we use the notation d[i:j] to denote the sequence d_i d_{i+1} ... d_j. Now taking the derivative of the log-probability:

∂ln P(d)/∂p_{u→x} = ∂ln ∏_{j=k}^{|d|} p_{d[j−k+1:j−1]→d_j} / ∂p_{u→x} = ∑_{j=k}^{|d|} ∂ln p_{d[j−k+1:j−1]→d_j} / ∂p_{u→x} = tf(ux, d) / p_{u→x},    (1)

where tf(ux, d) is the term frequency of ux in d, that is, the number of times the string ux occurs in d.¹

¹Since the p_{u→x} are not independent it is not possible to take the partial derivative of one parameter without affecting others. However we can approximate our approach: we introduce an extra character c. For each (n−1)-gram u we assign a sufficiently small probability to p_{u→c} and change the other p_{u→x} to p̂_{u→x} = p_{u→x}(1 − p_{u→c}). We now replace each occurrence of p_{u→c} in P(d) by 1 − ∑_{a∈A\{c}} p̂_{u→a}. Thus, since uc never occurs in d and p̂_{u→x} ≈ p_{u→x}, the u→x Fisher score entry for a document d becomes tf(ux, d)/p̂_{u→x} − tf(uc, d)/p_{u→c} ≈ tf(ux, d)/p_{u→x}.

The Fisher kernel is subsequently defined to be k(d, d′) = U_d^T I^{−1} U_{d′}, where U_d is the Fisher score vector with ux-component ∂ln P(d)/∂p_{u→x} and I = E_d[U_d U_d^T]. It has become traditional to set the matrix I to be the identity when defining a Fisher kernel, though this undermines the very satisfying property of the pure definition that it is independent of the parametrisation. We will follow this same route, mainly to reduce the complexity of the computation. We will, however, subsequently consider alternative parametrisations. Different choices of the parameters p_{u→x} give rise to different models and hence different kernels.
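Concretely, the score vector of eq. (1) only requires counting the k-grams of the document. The following sketch (function and argument names are ours; the model is passed in as a dictionary mapping each k-gram ux to its probability p_{u→x}) computes the non-zero entries:

```python
from collections import Counter

def fisher_ngram_scores(d, k, p):
    """Fisher score vector of the k-stage Markov model, eq. (1):
    the entry for (u, x) is tf(ux, d) / p_{u->x}, where ux ranges
    over the k-grams that actually occur in document d."""
    tf = Counter(d[j - k:j] for j in range(k, len(d) + 1))
    return {ux: count / p[ux] for ux, count in tf.items()}
```

With the uniform setting p_{u→x} = 1/|A| every entry is simply |A| · tf(ux, d), which is the observation behind the recovery of the n-gram kernel discussed next.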
It is perhaps surprising that the n-gram kernel is recovered (up to a constant factor) if we set p_{u→x} = |A|^{−1} for all u ∈ A^{n−1} and x ∈ A, that is, the least informative parameter setting. This follows since the feature vector of a document d then has entries

φ_{ux}(d) = tf(ux, d) / p_{u→x} = |A| · tf(ux, d).

We therefore recover the n-gram kernel as the Fisher kernel of a model which uses a uniform distribution for generating documents. Before considering how the p_{u→x} might be chosen non-uniformly, we turn our attention briefly to the string kernel. We have shown that we can view the n-gram kernel as a Fisher kernel. A little more work is needed in order to place the full string kernel (which considers non-contiguous subsequences) in the same framework. First we define an index set S_{k−1,q} over all (possibly non-contiguous) subsequences of length k which finish in position q,

S_{k−1,q} = {i : 1 ≤ i_1 < i_2 < ... < i_{k−1} < i_k = q}.

We now define a probability distribution P_{S_{k−1,q}} over S_{k−1,q} by weighting sequence i by λ^{l(i)}, where l(i) = i_k − i_1 + 1 is the length of i, and normalising with a fixed constant C. This may leave some probability unaccounted for, which can be assigned to generating a spurious symbol. We denote by d[i] the sequence of characters d_{i_1} d_{i_2} ... d_{i_k}. We now define a text generation model that generates the symbol for position q by first selecting a sequence i from S_{k−1,q} according to the fixed distribution P_{S_{k−1,q}} and then generating the next symbol based on p_{d[i′]→d_{i_k}} for all possible values of d_q, where i′ = (i_1, i_2, ..., i_{k−1}) is the vector i without its last component. We will refer to this model as the generalised k-stage Markov model with decay factor λ. Hence, if we assume that the distributions are uniform,

∂ln P(d)/∂p_{u→x} = ∑_{j=k}^{|d|} ∂ln ∑_{i∈S_{k−1,j}} P_{S_{k−1,j}}(i) p_{d[i′]→d_{i_k}} / ∂p_{u→x} = |A| ∑_{j=k}^{|d|} ∑_{i∈S_{k−1,j}} P_{S_{k−1,j}}(i) χ_{ux}(d[i]) = |A| C^{−1} ∑_{j=k}^{|d|} ∑_{i∈S_{k−1,j}} λ^{l(i)} χ_{ux}(d[i]),

where χ_{ux} is the indicator function for string ux.
It follows that the corresponding Fisher features will be the weighted sum over all subsequences with decay factor λ. In other words, we recover the string kernel.

Proposition 1 The Fisher kernel of the generalised k-stage Markov model with decay factor λ and constant p_{u→x} is the string kernel of length k and decay factor λ.

3 The Finite State Machine Model

Viewing the n-gram and string kernels as Fisher kernels of Markov models means we can view the different sequences of k−1 symbols as defining states, with the next symbol controlling the transition to the next state. We therefore arrive at a finite state automaton with states indexed by A^{k−1} and transitions labelled by the elements of A. Hence, if u ∈ A^{k−1} the symbol x ∈ A causes the transition to state v[2:k], where v = ux. One drawback of the string kernel is that the value of k has to be chosen a priori and is then fixed. A more flexible approach would be to consider different length subsequences as features, depending on their frequency. Subsequences that occur very frequently should be given a low weighting, as they do not contain much information, in the same way that stop words are often removed from the bag of words representation. Rather than downweight such sequences, an alternative strategy is to extend their length. Hence, the 3-gram com could be very frequent and hence not a useful discriminator. By extending it either backwards or forwards we would arrive at subsequences that are less frequent and so potentially carry useful information. Clearly, extending a sequence will always reduce its frequency, since the extension could have been made in many distinct ways, all of which contribute to the frequency of the root n-gram. As this derivation follows more naturally from the analysis of the n-gram kernel described in Section 2, we will only consider contiguous subsequences, also known as substrings.
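As a sanity check of Proposition 1, the string kernel feature map can be evaluated by brute force on tiny strings, enumerating every index vector i and weighting it by λ^{l(i)}. This is a naive sketch of our own (only feasible for very short strings), not the dynamic-programming evaluation of [6]:

```python
from itertools import combinations

def string_kernel(s, t, k, lam):
    """Brute-force string kernel of length k with decay factor lam."""
    def phi(d):
        # phi_u(d) = sum over index vectors i with d[i] = u of lam**(i_k - i_1 + 1)
        feats = {}
        for idx in combinations(range(len(d)), k):
            u = ''.join(d[i] for i in idx)
            feats[u] = feats.get(u, 0.0) + lam ** (idx[-1] - idx[0] + 1)
        return feats
    ps, pt = phi(s), phi(t)
    return sum(v * pt.get(u, 0.0) for u, v in ps.items())
```

For example, 'cat' contains the length-2 subsequences ca (weight λ²), ct (λ³) and at (λ²), so K('cat', 'cat') = 2λ⁴ + λ⁶.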
We begin by introducing the general Finite State Machine (FSM) model and the corresponding Fisher kernel.

Definition 2 A Finite State Machine model over an alphabet A is a triple F = (Σ, δ, p) where

1. the non-empty set Σ of states is a finite subset of A* closed under taking substrings,

2. the transition function δ : Σ × A → Σ is defined by δ(u, x) = v[j : l(v)], where v = ux and j = min{j : v[j : l(v)] ∈ Σ}, if the minimum is defined, and otherwise the empty sequence ε,

3. for each state u the function p gives a function p_u, which is either a distribution over next symbols p_u(x) or the all-one function p_u(x) = 1, for u ∈ Σ and x ∈ A.

Given an FSM model F = (Σ, δ, p), to process a document d we start at the state corresponding to the empty sequence ε (guaranteed to be in Σ as it is non-empty and closed under taking substrings) and follow the transitions dictated by the symbols of the document. The probability of a document in the model is the product of the values on all of the transitions used:

P_F(d) = ∏_{j=1}^{|d|} p_{d[i_j : j−1]}(d_j),

where i_j = min{i : d[i : j−1] ∈ Σ}. Note that requiring the set Σ to be closed under taking substrings ensures that the minimum in the definition of i_j is always defined and that d[i_j : j] does indeed define the state at stage j (this follows from a simple inductive argument on the sequence of states). If we follow a similar derivation to that given in equation (1), we arrive at the corresponding feature for document d and transition on x from u of

φ_{u,x}(d) = tf((u, x), d) / p_u(x),

where we use tf((u, x), d) to denote the frequency of the transition on symbol x from a state u with non-unity p_u in document d. Hence, given an FSM model we can construct the corresponding Fisher kernel feature vector by simply processing the document through the FSM and recording the counts for each transition.
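Definition 2 translates almost directly into code. The sketch below (names are ours) implements the transition function δ as the longest suffix of ux that is itself a state, together with the transition-count feature extraction just described:

```python
def delta(u, x, states):
    """Transition function of Definition 2: the longest suffix of v = ux
    that is itself a state, else the empty sequence."""
    v = u + x
    for j in range(len(v)):
        if v[j:] in states:
            return v[j:]
    return ''   # the empty sequence epsilon

def fsm_transition_counts(d, states):
    """Process document d through the FSM and return tf((u, x), d)."""
    counts = {}
    state = ''  # start at the state for the empty sequence
    for x in d:
        counts[(state, x)] = counts.get((state, x), 0) + 1
        state = delta(state, x, states)
    return counts
```

Since only transitions actually used are recorded, the resulting dictionary is exactly the sparse feature representation described above (before dividing each count by p_u(x)).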
The corresponding feature vector will be sparse relative to the dimension of the feature space (the total number of transitions in the FSM), since only those transitions actually used will have non-zero entries. Hence, as for the bag of words, we can create feature vectors by listing the indices of transitions used followed by their frequency. The number of non-zero features will be at most equal to the number of symbols in the document. Consider taking Σ = ∪_{i=0}^{k−1} A^i with all the distributions p_u uniform for u ∈ A^{k−1} and p_u ≡ 1 for other u. In this case we recover the k-gram model and corresponding kernel. A problem that we have observed when experimenting with the n-gram model is that if we estimate the frequencies of transitions from the corpus, certain transitions can become very frequent while others from the same state occur only rarely. In such cases the rare states will receive a very high weighting in the Fisher score vector. One would like to use the strategy adopted for the idf weighting for the bag of words kernel, which is often taken to be ln(m/m_i), where m is the number of documents and m_i the number containing term i. The ln ensures that the contrast in weighting is controlled. We can obtain this effect in the Fisher kernel if we reparametrise the transition probabilities as

p_u(x) = exp(−exp(−t_u(x))),

where t_u(x) is the new parameter. With this parametrisation the derivative of the ln probabilities becomes

∂ln p_u(x)/∂t_u(x) = exp(−t_u(x)) = −ln p_u(x),

as required. Although this improves performance, the problem of frequent substrings being uninformative remains. We now consider the idea outlined above of moving to longer subsequences in order to ensure that transitions are informative.

4 Choosing Features

There is a critical frequency at which the most information is conveyed by a feature. If it is ubiquitous, as we observed above, it gives little or no information for analysing documents.
If on the other hand it is very infrequent, it again will not be useful, since we are only rarely able to use it. The usefulness is maximal at the threshold between these two extremes. Hence, we would like to create states that occur not too frequently and not too infrequently. A natural way to infer the set of such states is from the training corpus. We select all substrings that have occurred at least t times in the document corpus, where t is a small but statistically visible number. In our experiments we took t = 10. Hence, given a corpus S we create the FSM model F_t(S) with

Σ_t(S) = {u ∈ A* : u occurs at least t times in the corpus S}.

Taking this definition of Σ_t(S), we construct the corresponding finite state machine model as described in Definition 2. We will refer to the model F_t as the frequent set FSM at threshold t. We now construct the transition probabilities by processing the corpus through F_t(S), keeping a tally of the number of times each transition is actually used. Typically we initialise the counts to some constant value c and convert the resulting counts into probabilities for the model. Hence, if f_{u,x} is the number of times we leave state u processing symbol x, the corresponding probabilities will be

p_u(x) = (f_{u,x} + c) / (|A|c + ∑_{x′∈A} f_{u,x′}).    (2)

Note that we will usually exclude from the count the transitions at the beginning of a document d that start from states d[1:j] for some j ≥ 0. The following proposition demonstrates that the model has the desired frequency properties for the transitions. We use the notation u →x v to indicate the transition from state u to state v on processing symbol x.

Proposition 3 Given a corpus S, the FSM model F_t(S) satisfies the following property. Ignoring transitions from states indexed by d[1:i] for some document d of the corpus, the frequency counts f_{u,x} for transitions u →x v in the corpus S satisfy

∑_{x∈A} f_{u,x} < t|A|

for all u ∈ Σ_t(S).

Proof.
Suppose that for some state u ∈ Σ_t(S)

∑_{x∈A} f_{u,x} ≥ t|A|.    (3)

This implies that the string u has occurred at least t|A| times at the head of a transition not at the beginning of a document. Hence, by the pigeonhole principle there is a y ∈ A such that y has occurred t times immediately before one of the transitions in the sum of (3). Note that this also implies that yu occurs at least t times in the corpus and therefore will be in Σ_t(S). Consider one of the transitions that occurs after yu on some symbol x. This transition will not be of the form u →x v but rather yu →x v, contradicting its inclusion in the sum (3). Hence, the proposition holds. ∎

Note that the proposition implies that no individual transition can be more frequent than the full sum. The proposition also has useful consequences for the maximum weighting for any Fisher score entries, as the next corollary demonstrates.

Corollary 4 Given a corpus S, if we construct the FSM model F_t(S) and compute the probabilities by counting transitions, ignoring those from states indexed by d[1:i] for some document d of the corpus, the probabilities on the transitions will satisfy

p_u(x) > c / (|A|(t + c)).

Proof. We substitute the bound given in the proposition into the formula (2). ∎

The proposition and corollary demonstrate that the choice of F_t(S) as an FSM model has the desirable property that all of the states are meaningfully frequent, while none of the transitions is too frequent, and furthermore the Fisher weighting cannot grow too large for any individual transition. In the next section we will present experimental results testing the kernels we have introduced, using the standard and logarithmic weightings. The baseline for the experiments will always be the bag of words kernel using the TFIDF weighting scheme. It is perhaps worth noting that though the IDF weighting appears similar to those described above, it makes critical use of the distribution of terms across documents, something that is incompatible with the Fisher approach that we have adopted.
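The construction of Σ_t(S) and of the smoothed transition probabilities of eq. (2) can be sketched as follows (our own naive enumeration of substrings, adequate only for small corpora; raw transition counts f are assumed given, and transitions at document beginnings are not excluded here for simplicity):

```python
from collections import Counter

def frequent_states(corpus, t):
    """Sigma_t(S): the empty string plus every substring that occurs
    at least t times in the corpus (naive enumeration)."""
    counts = Counter()
    for doc in corpus:
        for i in range(len(doc)):
            for j in range(i + 1, len(doc) + 1):
                counts[doc[i:j]] += 1
    return {''} | {u for u, n in counts.items() if n >= t}

def transition_probs(f, alphabet, c=1.0):
    """Eq. (2): p_u(x) = (f_{u,x} + c) / (|A| c + sum_{x'} f_{u,x'})."""
    probs = {}
    for u in {u for (u, _) in f}:
        total = sum(f.get((u, x), 0) for x in alphabet)
        for x in alphabet:
            probs[(u, x)] = (f.get((u, x), 0) + c) / (len(alphabet) * c + total)
    return probs
```

The constant c plays the role of an additive smoother, so every transition keeps a non-zero probability even when f_{u,x} = 0, which is what bounds the Fisher weighting 1/p_u(x) in Corollary 4.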
It is therefore very exciting to see the results that we are able to obtain using these syntactic features and sub-document level weightings.

5 Experimental Results

Our experiments were conducted on the top 10 categories of the standard Reuters-21578 data set using the "ModApte" split. We compared the standard n-gram kernel with a uniform, non-uniform and ln weighting scheme, and the variable-length FSM model described in Section 4, both with uniform weighting and a ln weighting scheme. As mentioned in Section 4, the parameter t was set to 10. In order to keep the comparison fair, the n-gram kernel features were also pruned from the feature vector if they occurred less than 10 times. For our experiments we used 5-gram features, which have previously been reported to give the best results [5]. The standard bag of words model using the normal tfidf weighting scheme is used as a baseline. Once feature vectors had been created they were normalised, and the SVMlight software package [3] was used with the default parameter settings to obtain outputs for the test examples. In order to compare algorithms, we used the average precision measure commonly used in Information Retrieval (see e.g. [4]). This is the average of the precision values obtained when thresholding at each positively classified document. If all positive documents in the corpus are ranked higher than any negative documents, then the average precision is 100%. Average precision incorporates both precision and recall measures and is highly sensitive to document ranking, so it can therefore be used to obtain a fair comparison between methods. The results are shown in Table 1. As can be seen from the table, the variable-length subsequence method performs as well as or better than all other methods and achieves a perfect ranking for documents in one of the categories.
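The average precision measure just described is simple to compute from a ranked list of relevance flags (a minimal implementation of the description above; the function name is ours):

```python
def average_precision(ranked_relevance):
    """Mean of the precision values obtained at the rank of each
    positively classified (relevant) document, as a percentage.
    ranked_relevance: 1/0 flags in ranked order, best first."""
    hits, precisions = 0, []
    for rank, relevant in enumerate(ranked_relevance, start=1):
        if relevant:
            hits += 1
            precisions.append(hits / rank)
    return 100.0 * sum(precisions) / len(precisions)
```

A ranking that places every positive document above every negative one scores exactly 100%.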
Table 1: Average precision results comparing TFIDF, n-gram and FSM features on the top 10 categories of the Reuters data set.

Category  | BoW   | n-grams                   | FSA
          | TFIDF | Uniform  non-unif.  ln    | Uniform  ln
earn      | 99.86 | 99.91    96.4       99.9  | 99.9     99.9
acq       | 99.62 | 99.61    99.7       99.5  | 99.7     99.7
money-fx  | 80.54 | 82.43    84.9       83.4  | 86.5     85.8
grain     | 99.69 | 99.67    99.9       99.4  | 97.8     97.5
crude     | 98.52 | 98.23    99.9       97.2  | 100.0    100.0
trade     | 95.29 | 95.53    94.6       95.6  | 94.6     91.3
interest  | 91.61 | 98.83    96.6       95.4  | 94.0     88.8
ship      | 96.84 | 99.42    91.7       98.9  | 92.7     98.4
wheat     | 98.52 | 98.7     97.2       99.3  | 95.3     98.4
corn      | 98.95 | 98.2     99.3       99.0  | 97.5     98.1

6 Discussion

In this paper we have shown how the string kernel can be thought of as a k-stage Markov process, and as a result interpreted as a Fisher kernel. Using this new insight we have shown how the features of a Fisher kernel can be constructed using a Finite State Model parameterisation which reflects the statistics of the frequency of occurrence of features within the corpus. This model has then been extended further to incorporate sub-sequences of varying length, which is a great deal more flexible than the fixed-length approach. A procedure for determining informative sub-sequences (states in the FSM model) has also been given. Experimental results have shown that this model outperforms the standard tfidf bag of words model on a well known data set. Although the experiments in this paper are not extensive, they show that the approach of using a Finite State Model to generate a Fisher kernel gives new insights and more flexibility over the string kernel, and performs well. Future work would include determining the optimum value for the threshold t (the maximum frequency of a sub-string occurring within the FSM before a state is expanded), as this currently has to be set a priori.

References

[1] D. Haussler. Convolution kernels on discrete structures. Technical Report UCSC-CRL-99-10, University of California, Santa Cruz, July 1999.

[2] T. Jaakkola, M. Diekhans, and D. Haussler.
Using the Fisher kernel method to detect remote protein homologies. In Proc. 7th Int. Conf. on Intelligent Systems for Molecular Biology, pages 149-158, 1999.

[3] T. Joachims. Making large-scale SVM learning practical. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning. MIT Press, 1999.

[4] Y. Li, H. Zaragoza, R. Herbrich, J. Shawe-Taylor, and J. Kandola. The perceptron algorithm with uneven margins. In Proceedings of the Nineteenth International Conference on Machine Learning (ICML '02), 2002.

[5] H. Lodhi, C. Saunders, J. Shawe-Taylor, N. Cristianini, and C. Watkins. Text classification using string kernels. Journal of Machine Learning Research, (2):419-444, 2002.

[6] H. Lodhi, J. Shawe-Taylor, N. Cristianini, and C. Watkins. Text classification using string kernels. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13, pages 563-569. MIT Press, 2001.

[7] C. Watkins. Dynamic alignment kernels. Technical Report CSD-TR-98-11, Royal Holloway, University of London, January 1999.
Real Time Voice Processing with Audiovisual Feedback: Toward Autonomous Agents with Perfect Pitch Lawrence K. Saul1, Daniel D. Lee2, Charles L. Isbell3, and Yann LeCun4 1 Department of Computer and Information Science 2Department of Electrical and System Engineering University of Pennsylvania, 200 South 33rd St, Philadelphia, PA 19104 3Georgia Tech College of Computing, 801 Atlantic Drive, Atlanta, GA 30332 4NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 lsaul@cis.upenn.edu, ddlee@ee.upenn.edu, isbell@cc.gatech.edu, yann@research.nj.nec.com Abstract We have implemented a real time front end for detecting voiced speech and estimating its fundamental frequency. The front end performs the signal processing for voice-driven agents that attend to the pitch contours of human speech and provide continuous audiovisual feedback. The algorithm we use for pitch tracking has several distinguishing features: it makes no use of FFTs or autocorrelation at the pitch period; it updates the pitch incrementally on a sample-by-sample basis; it avoids peak picking and does not require interpolation in time or frequency to obtain high resolution estimates; and it works reliably over a four octave range, in real time, without the need for postprocessing to produce smooth contours. The algorithm is based on two simple ideas in neural computation: the introduction of a purposeful nonlinearity, and the error signal of a least squares fit. The pitch tracker is used in two real time multimedia applications: a voice-to-MIDI player that synthesizes electronic music from vocalized melodies, and an audiovisual Karaoke machine with multimodal feedback. Both applications run on a laptop and display the user’s pitch scrolling across the screen as he or she sings into the computer. 1 Introduction The pitch of the human voice is one of its most easily and rapidly controlled acoustic attributes. It plays a central role in both the production and perception of speech[17]. 
In clean speech, and even in corrupted speech, pitch is generally perceived with great accuracy[2, 6] at the fundamental frequency characterizing the vibration of the speaker's vocal cords. There is a large literature on machine algorithms for pitch tracking[7], as well as applications to speech synthesis, coding, and recognition. Most algorithms have one or more of the following components. First, sliding windows of speech are analyzed at 5-10 ms intervals, and the results concatenated over time to obtain an initial estimate of the pitch contour. Second, within each window (30-60 ms), the pitch is deduced from peaks in the windowed autocorrelation function[13] or power spectrum[9, 10, 15], then refined by further interpolation in time or frequency. Third, the estimated pitch contours are smoothed by a postprocessing procedure[16], such as dynamic programming or median filtering, to remove octave errors and isolated glitches. In this paper, we describe an algorithm for pitch tracking that works quite differently and—based on our experience—quite well as a real time front end for interactive voice-driven agents. Notably, our algorithm does not make use of FFTs or autocorrelation at the pitch period; it updates the pitch incrementally on a sample-by-sample basis; it avoids peak picking and does not require interpolation in time or frequency to obtain high resolution estimates; and it works reliably over a four octave range—in real time—without any postprocessing. We have implemented the algorithm in two real-time multimedia applications: a voice-to-MIDI player and an audiovisual Karaoke machine. More generally, we are using the algorithm to explore novel types of human-computer interaction, as well as studying extensions of the algorithm for handling corrupted speech and overlapping speakers.
2 Algorithm

A pitch tracker performs two essential functions: it labels speech as voiced or unvoiced, and throughout segments of voiced speech, it computes a running estimate of the fundamental frequency. Pitch tracking thus depends on the running detection and identification of periodic signals in speech. We develop our algorithm for pitch tracking by first examining the simpler problem of detecting sinusoids. For this simpler problem, we describe a solution that does not involve FFTs or autocorrelation at the period of the sinusoid. We then extend this solution to the more general problem of detecting periodic signals in speech.

2.1 Detecting sinusoids

A simple approach to detecting sinusoids is based on viewing them as the solution of a second order linear difference equation[12]. A discretely sampled sinusoid has the form:

s_n = A sin(ωn + θ).    (1)

Sinusoids obey a simple difference equation such that each sample s_n is proportional to the average of its neighbors, ½(s_{n−1} + s_{n+1}), with the constant of proportionality given by:

s_n = (cos ω)^{−1} (s_{n−1} + s_{n+1})/2.    (2)

Eq. (2) can be proved using trigonometric identities to expand the terms on the right hand side. We can use this property to judge whether an unknown signal x_n is approximately sinusoidal. Consider the error function:

E(α) = Σ_n [x_n − α (x_{n−1} + x_{n+1})/2]².    (3)

If the signal x_n is well described by a sinusoid, then the right hand side of this error function will achieve a small value when the coefficient α is tuned to match its frequency, as in eq. (2). The minimum of the error function is found by solving a least squares problem:

α* = 2 Σ_n x_n(x_{n−1} + x_{n+1}) / Σ_n (x_{n−1} + x_{n+1})².    (4)

Thus, to test whether a signal x_n is sinusoidal, we can minimize its error function by eq. (4), then check two conditions: first, that E(α*) ≪ E(0), and second, that |α*| ≥ 1.
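The least squares fit and the two tests amount to a few lines of code. The following is a batch sketch of our own (the incremental, sample-by-sample variant described in the text instead maintains running lagged autocorrelations):

```python
import numpy as np

def detect_sinusoid(x, max_nerr=0.01):
    """Fit x_n ~ alpha * (x_{n-1} + x_{n+1}) / 2 by least squares
    (eqs. 3-4) and apply the two tests: E(alpha*) << E(0) and
    |alpha*| >= 1.  Returns omega* from eq. (5), or None."""
    s = x[:-2] + x[2:]                 # x_{n-1} + x_{n+1}
    mid = x[1:-1]                      # x_n
    alpha = 2.0 * np.dot(mid, s) / np.dot(s, s)             # eq. (4)
    nerr = np.sum((mid - 0.5 * alpha * s) ** 2) / np.sum(mid ** 2)
    if nerr < max_nerr and abs(alpha) >= 1.0:
        return float(np.arccos(1.0 / alpha))                # eq. (5)
    return None
```

For a clean sinusoid the recovered frequency is exact up to floating point, with no dependence on FFT bin resolution; the threshold max_nerr is an assumed tolerance, not a value from the paper.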
The first condition establishes that the mean squared error is small relative to the mean squared amplitude of the signal, while the second establishes that the signal is sinusoidal (as opposed to exponential), with estimated frequency:

ω* = cos^{−1}(1/α*).    (5)

This procedure for detecting sinusoids (known as Prony's method[12]) has several notable features. First, it does not rely on computing FFTs or autocorrelation at the period of the sinusoid, but only on computing the zero-lagged and one-sample-lagged autocorrelations that appear in eq. (4), namely Σ_n x_n² and Σ_n x_n x_{n±1}. Second, the frequency estimates are obtained from the solution of a least squares problem, as opposed to the peaks of an autocorrelation or FFT, where the resolution may be limited by the sampling rate or signal length. Third, the method can be used in an incremental way to track the frequency of a slowly modulated sinusoid. In particular, suppose we analyze sliding windows—shifted by just one sample at a time—of a longer, nonstationary signal. Then we can efficiently update the windowed autocorrelations that appear in eq. (4) by adding just those terms generated by the rightmost sample of the current window and dropping just those terms generated by the leftmost sample of the previous window. (The number of operations per update is constant and does not depend on the window size.) We can extract more information from the least squares fit besides the error in eq. (3) and the estimate in eq. (5). In particular, we can characterize the uncertainty in the estimated frequency. The normalized error function N(α) = log[E(α)/E(0)] evaluates the least squares fit on a dimensionless logarithmic scale that does not depend on the amplitude of the signal. Let µ = log(cos^{−1}(1/α)) denote the log-frequency implied by the coefficient α, and let ∆µ* denote the uncertainty in the estimated log-frequency µ* = log ω*.
(By working in the log domain, we measure uncertainty in the same units as the distance between notes on the musical scale.) A heuristic measure of uncertainty is obtained by evaluating the sharpness of the least squares fit, as characterized by the second derivative:

∆µ* = [ ∂²N/∂µ² |_{µ=µ*} ]^{−1/2} = (1/ω*) (cos²ω*/sin ω*) [ (1/E) ∂²E/∂α² |_{α=α*} ]^{−1/2}.    (6)

Eq. (6) relates sharper fits to lower uncertainty, or higher precision. As we shall see, it provides a valuable criterion for comparing the results of different least squares fits.

2.2 Detecting voiced speech

Our algorithm for detecting voiced speech is a simple extension of the algorithm described in the previous section. The algorithm operates on the time domain waveform in a number of stages, as summarized in Fig. 1. The analysis is based on the assumption that the low frequency spectrum of voiced speech can be modeled as a sum of (noisy) sinusoids occurring at integer multiples of the fundamental frequency, f0.

Stage 1. Lowpass filtering

The first stage of the algorithm is to lowpass filter the speech, removing energy at frequencies above 1 kHz. This is done to eliminate the aperiodic component of voiced fricatives[17], such as /z/. The signal can be aggressively downsampled after lowpass filtering, though the sampling rate should remain at least twice the maximum allowed value of f0. The lower sampling rate determines the rate at which the estimates of f0 are updated, but it does not limit the resolution of the estimates themselves. (In our formal evaluations of the algorithm, we downsampled from 20 kHz to 4 kHz after lowpass filtering; in the real-time multimedia applications, we downsampled from 44.1 kHz to 3675 Hz.)

Stage 2. Pointwise nonlinearity

The second stage of the algorithm is to pass the signal through a pointwise nonlinearity, such as squaring or half-wave rectification (which clips negative samples to zero).
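The way such a nonlinearity concentrates energy at f0 can be checked numerically: squaring a signal that contains harmonics at 2 f0 and 3 f0 but no fundamental creates a difference-frequency component at exactly f0. This is a toy check of our own (squaring is used rather than half-wave rectification because its spectrum is exactly predictable, and a 1 s window makes FFT bin k correspond to k Hz):

```python
import numpy as np

fs, f0 = 1000, 50                     # sample rate (Hz), missing fundamental (Hz)
t = np.arange(fs) / fs                # one second of signal
x = np.sin(2 * np.pi * 2 * f0 * t) + np.sin(2 * np.pi * 3 * f0 * t)

spec_before = np.abs(np.fft.rfft(x))
spec_after = np.abs(np.fft.rfft(x ** 2))

assert spec_before[f0] < 1e-8         # no energy at f0 in the original signal
assert spec_after[f0] > 100.0         # squaring creates energy at 3*f0 - 2*f0 = f0
```

The cross term 2 sin(2π·2f0 t) sin(2π·3f0 t) expands to cos(2π f0 t) − cos(2π·5f0 t), which is where the new fundamental comes from.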
Figure 1: Estimating the fundamental frequency f0 of voiced speech without FFTs or autocorrelation at the pitch period. The speech is lowpass filtered (and optionally downsampled) to remove fricative noise, then transformed by a pointwise nonlinearity that concentrates additional energy at f0. The resulting signal is analyzed by a bank of bandpass filters that are narrow enough to resolve the harmonic at f0, but too wide to resolve higher-order harmonics. A resolved harmonic at f0 (essentially, a sinusoid) is detected by a running least squares fit, and its frequency recovered as the pitch. If more than one sinusoid is detected at the outputs of the filterbank, the one with the sharpest fit is used to estimate the pitch; if no sinusoid is detected, the speech is labeled as unvoiced. (The two octave filterbank in the figure is an idealization. In practice, a larger bank of narrower filters is used.)

The purpose of the nonlinearity is to concentrate additional energy at the fundamental, particularly if such energy was missing or only weakly present in the original signal. In voiced speech, pointwise nonlinearities such as squaring or half-wave rectification tend to create energy at f0 by virtue of extracting a crude representation of the signal's envelope. This is particularly easy to see for the operation of squaring, which—applied to the sum of two sinusoids—creates energy at their sum and difference frequencies, the latter of which characterizes the envelope. In practice, we use half-wave rectification as the nonlinearity in this stage of the algorithm; though less easily characterized than squaring, it has the advantage of preserving the dynamic range of the original signal.

Stage 3.
Filterbank The third stage of the algorithm is to analyze the transformed speech by a bank of bandpass filters. These filters are designed to satisfy two competing criteria. On one hand, they are sufficiently narrow to resolve the harmonic at f0; on the other hand, they are sufficiently wide to integrate higher-order harmonics. An idealized two octave filterbank that meets these criteria is shown in Fig. 1. The result of this analysis—for voiced speech—is that the output of the filterbank consists either of sinusoids at f0 (and not any other frequency), or signals that do not resemble sinusoids at all. Consider, for example, a segment of voiced speech with fundamental frequency f0 = 180 Hz. For such speech, only the second filter from 50-200 Hz will resolve the harmonic at 180 Hz. On the other hand, the first filter from 25-100 Hz will pass low frequency noise; the third filter from 100-400 Hz will pass the first and second harmonics at 180 Hz and 360 Hz, and the fourth filter from 200-800 Hz will pass the second through fourth harmonics at 360, 540, and 720 Hz. Thus, the output of the filterbank will consist of a sinusoid at f0 and three other signals that are random or periodic, but definitely not sinusoidal. In practice, we do not use the idealized two octave filterbank shown in Fig. 1, but a larger bank of narrower filters that helps to avoid contaminating the harmonic at f0 by energy at 2f0. The bandpass filters in our experiments were 8th order Chebyshev (type I) filters with 0.5 dB of ripple in 1.6 octave passbands, and signals were doubly filtered to obtain sharp frequency cutoffs. Stage 4. Sinusoid detection The fourth stage of the algorithm is to detect sinusoids at the outputs of the filterbank. Sinusoids are detected by the adaptive least squares fits described in section 2.1. Running estimates of sinusoid frequencies and their uncertainties are obtained from eqs. (5–6) and updated on a sample by sample basis for the output of each filter. 
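The per-filter tracking feeds a simple voiced/unvoiced decision rule, sketched below with illustrative channel outputs and threshold (the numbers are invented, not measurements from the paper): each filter channel reports a frequency estimate and its uncertainty, the frame is voiced if any uncertainty falls below the threshold, and the pitch is taken from the sharpest fit.

```python
def pitch_decision(channels, threshold=0.10):
    """channels: list of (f0_estimate_hz, uncertainty) pairs, one per filter.
    Returns (True, f0) if any fit is sharp enough, else (False, None)."""
    voiced = [(unc, f0) for f0, unc in channels if unc < threshold]
    if not voiced:
        return False, None
    best_unc, best_f0 = min(voiced)  # sharpest fit wins
    return True, best_f0

# Hypothetical channel outputs for one analysis frame:
print(pitch_decision([(60.0, 0.45), (180.0, 0.03), (185.0, 0.07), (360.0, 0.30)]))
# -> (True, 180.0)
```

Ranking channels by the uncertainty of eq. (6), rather than by raw least squares error, is what makes estimates from different filters comparable.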
If the uncertainty in any filter's estimate is less than a specified threshold, then the corresponding sample is labeled as voiced, and the fundamental frequency f0 is determined by whichever filter's estimate has the least uncertainty. (For sliding windows of length 40–60 ms, the thresholds typically fall in the range 0.08–0.12, with higher thresholds required for shorter windows.) Empirically, we have found the uncertainty in eq. (6) to be a better criterion than the error function itself for evaluating and comparing the least squares fits from different filters. A possible explanation for this is that the expression in eq. (6) was derived by a dimensional analysis, whereas the error functions of different filters are not even computed on the same signals. Overall, the four stages of the algorithm are well suited to a real time implementation. The algorithm can also be used for batch processing of waveforms, in which case startup and ending transients can be minimized by zero-phase forward and reverse filtering.

3 Evaluation

The algorithm was evaluated on a small database of speech collected at the University of Edinburgh[1]. The Edinburgh database contains about 5 minutes of speech consisting of 50 sentences read by one male speaker and one female speaker. The database also contains reference f0 contours derived from simultaneously recorded laryngograph signals. The sentences in the database are biased to contain difficult cases for f0 estimation, such as voiced fricatives, nasals, liquids, and glides. The results of our algorithm on the first three utterances of each speaker are shown in Fig. 2. A formal evaluation was made by accumulating errors over all utterances in the database, using the reference f0 contours as ground truth[1]. Comparisons between estimated and reference f0 values were made every 6.4 ms, as in previous benchmarks. Also, in these evaluations, the estimates of f0 from eqs.
(4–5) were confined to the range 50–250 Hz for the male speaker and the range 120–400 Hz for the female speaker; this was done for consistency with previous benchmarks, which enforced these limits. Note that our estimated f0 contours were not postprocessed by a smoothing procedure, such as median filtering or dynamic programming. Error rates were computed for the fraction of unvoiced (or silent) speech misclassified as voiced and for the fraction of voiced speech misclassified as unvoiced. Additionally, for the fraction of speech correctly identified as voiced, a gross error rate was computed measuring the percentage of comparisons for which the reference and estimated f0 differed by more than 20%. Finally, for the fraction of speech correctly identified as voiced and in which the estimated f0 was not in gross error, a root mean square (rms) deviation was computed between the reference and estimated f0. The original study on this database published results for a number of approaches to pitch tracking. Earlier results, as well as those derived from the algorithm in this paper, are shown in Table 1. The overall results show our algorithm—indicated as the adaptive least squares (ALS) approach to pitch tracking—to be extremely competitive in all respects. The only anomaly in these results is the slightly larger rms deviation produced by ALS estimation compared to other approaches. The discrepancy could be an artifact of the filtering operations in Fig. 1, resulting in a slight desynchronization of the reference and estimated f0 contours. On the other hand, the discrepancy could indicate that for certain voiced sounds, a more robust estimation procedure[12] would yield better results than the simple least squares fits in section 2.1.

Figure 2: Reference and estimated f0 contours for the first three utterances of the male (left) and female (right) speaker in the Edinburgh database[1]. Mismatches between the contours reveal voiced and unvoiced errors. [Panels show the utterances "Where can I park my car?", "I'd like to leave this in your safe.", and "How much are my telephone charges?"]

4 Agents

We have implemented our pitch tracking algorithm as a real time front end for two interactive voice-driven agents. The first is a voice-to-MIDI player that synthesizes electronic music from vocalized melodies[4]. Over one hundred electronic instruments are available. The second (see the storyboard in Fig. 3) is a multimedia Karaoke machine with audiovisual feedback, voice-driven key selection, and performance scoring. In both applications, the user's pitch is displayed in real time, scrolling across the screen as he or she sings into the computer. In the Karaoke demo, the correct pitch is also simultaneously displayed, providing an additional element of embarrassment when the singer misses a note. Both applications run on a laptop with an external microphone. Interestingly, the real time audiovisual feedback provided by these agents creates a profoundly different user experience than current systems in automatic speech recognition[14]. Unlike dictation programs or dialog managers, our more primitive agents—which only attend to pitch contours—are not designed to replace human operators, but to entertain and amuse in a way that humans cannot.
The effect is to enhance the medium of voice, as opposed to highlighting the gap between human and machine performance.

            unvoiced   voiced     gross errors         rms
algorithm   in error   in error   high      low        deviation
            (%)        (%)        (%)       (%)        (Hz)
CPD         18.11      19.89      4.09      0.64       3.60
FBPT         3.73      13.90      1.27      0.64       2.89
HPS         14.11       7.07      5.34     28.15       3.21
IPTA         9.78      17.45      1.40      0.83       3.37
PP           7.69      15.82      0.22      1.74       3.01
SRPD         4.05      15.78      0.62      2.01       2.46
eSRPD        4.63      12.07      0.90      0.56       1.74
ALS          4.20      11.00      0.05      0.20       3.24

CPD         31.53      22.22      0.61      3.97       7.61
FBPT         3.61      12.16      0.60      3.55       7.03
HPS         19.10      21.06      0.46      1.61       5.31
IPTA         5.70      15.93      0.53      3.12       5.35
PP           6.15      13.01      0.26      3.20       6.45
SRPD         2.35      12.16      0.39      5.56       5.51
eSRPD        2.73       9.13      0.43      0.23       5.13
ALS          4.92       5.58      0.33      0.04       6.91

Table 1: Evaluations of different pitch tracking algorithms on male speech (top) and female speech (bottom). The algorithms in the table are cepstrum pitch determination (CPD)[9], feature-based pitch tracking (FBPT)[11], harmonic product spectrum (HPS) pitch determination[10, 15], parallel processing (PP) of multiple estimators in the time domain[5], integrated pitch tracking (IPTA)[16], super resolution pitch determination (SRPD)[8], enhanced SRPD (eSRPD)[1], and adaptive least squares (ALS) estimation, as described in this paper. The benchmarks other than ALS were previously reported[1]. The best results in each column are indicated in boldface.

Figure 3: Screen shots from the multimedia Karaoke machine with voice-driven key selection, audiovisual feedback, and performance scoring. From left to right: splash screen; singing "happy birthday"; machine evaluation.

5 Future work

Voice is the most natural and expressive medium of human communication. Tapping the full potential of this medium remains a grand challenge for researchers in artificial intelligence (AI) and human-computer interaction. In most situations, a speaker's intentions are derived not only from the literal transcription of his speech, but also from prosodic cues, such as pitch, stress, and rhythm.
The real time processing of such cues thus represents a fundamental challenge for autonomous, voice-driven agents. Indeed, a machine that could learn from speech as naturally as a newborn infant—responding to prosodic cues but recognizing in fact no words—would constitute a genuine triumph of AI. We are pursuing the ideas in this paper with this vision in mind, looking beyond the immediate applications to voice-to-MIDI synthesis and audiovisual Karaoke. The algorithm in this paper was purposefully limited to clean speech from non-overlapping speakers. While the algorithm works well in this domain, we view it mainly as a vehicle for experimenting with non-traditional methods that avoid FFTs and autocorrelation and that (ultimately) might be applied to more complicated signals. We have two main goals for future work: first, to add more sophisticated types of human-computer interaction to our voice-driven agents, and second, to incorporate the novel elements of our pitch tracker into a more comprehensive front end for auditory scene analysis[2, 3]. The agents need to be sufficiently complex to engage humans in extended interactions, as well as sufficiently robust to handle corrupted speech and overlapping speakers. From such agents, we expect interesting possibilities to emerge.

References

[1] P. C. Bagshaw, S. M. Hiller, and M. A. Jack. Enhanced pitch tracking and the processing of f0 contours for computer aided intonation teaching. In Proceedings of the 3rd European Conference on Speech Communication and Technology, volume 2, pages 1003–1006, 1993.
[2] A. S. Bregman. Auditory Scene Analysis: The Perceptual Organization of Sound. M.I.T. Press, Cambridge, MA, 1994.
[3] M. Cooke and D. P. W. Ellis. The auditory organization of speech and other sources in listeners and computational models. Speech Communication, 35:141–177, 2001.
[4] P. de la Cuadra, A. Master, and C. Sapp. Efficient pitch detection techniques for interactive music.
In Proceedings of the 2001 International Computer Music Conference, La Habana, Cuba, September 2001.
[5] B. Gold and L. R. Rabiner. Parallel processing techniques for estimating pitch periods of speech in the time domain. Journal of the Acoustical Society of America, 46(2,2):442–448, August 1969.
[6] W. M. Hartmann. Pitch, periodicity, and auditory organization. Journal of the Acoustical Society of America, 100(6):3491–3502, 1996.
[7] W. Hess. Pitch Determination of Speech Signals: Algorithms and Devices. Springer, 1983.
[8] Y. Medan, E. Yair, and D. Chazan. Super resolution pitch determination of speech signals. IEEE Transactions on Signal Processing, 39(1):40–48, 1991.
[9] A. M. Noll. Cepstrum pitch determination. Journal of the Acoustical Society of America, 41(2):293–309, 1967.
[10] A. M. Noll. Pitch determination of human speech by the harmonic product spectrum, the harmonic sum spectrum, and a maximum likelihood estimate. In Proceedings of the Symposium on Computer Processing in Communication, pages 779–798, April 1969.
[11] M. S. Phillips. A feature-based time domain pitch tracker. Journal of the Acoustical Society of America, 79:S9–S10, 1985.
[12] J. G. Proakis, C. M. Rader, F. Ling, M. Moonen, I. K. Proudler, and C. L. Nikias. Algorithms for Statistical Signal Processing. Prentice Hall, 2002.
[13] L. R. Rabiner. On the use of autocorrelation analysis for pitch determination. IEEE Transactions on Acoustics, Speech, and Signal Processing, 25:22–33, 1977.
[14] L. R. Rabiner and B. H. Juang. Fundamentals of Speech Recognition. Prentice Hall, Englewood Cliffs, NJ, 1993.
[15] M. R. Schroeder. Period histogram and product spectrum: new methods for fundamental frequency measurement. Journal of the Acoustical Society of America, 43(4):829–834, 1968.
[16] B. G. Secrest and G. R. Doddington. An integrated pitch tracking algorithm for speech systems.
In Proceedings of the 1983 IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 1352–1355, Boston, 1983.
[17] K. Stevens. Acoustic Phonetics. M.I.T. Press, Cambridge, MA, 1999.
An Impossibility Theorem for Clustering Jon Kleinberg Department of Computer Science Cornell University Ithaca NY 14853 Abstract Although the study of clustering is centered around an intuitively compelling goal, it has been very difficult to develop a unified framework for reasoning about it at a technical level, and profoundly diverse approaches to clustering abound in the research community. Here we suggest a formal perspective on the difficulty in finding such a unification, in the form of an impossibility theorem: for a set of three simple properties, we show that there is no clustering function satisfying all three. Relaxations of these properties expose some of the interesting (and unavoidable) trade-offs at work in well-studied clustering techniques such as single-linkage, sum-of-pairs, k-means, and k-median. 1 Introduction Clustering is a notion that arises naturally in many fields; whenever one has a heterogeneous set of objects, it is natural to seek methods for grouping them together based on an underlying measure of similarity. A standard approach is to represent the collection of objects as a set of abstract points, and define distances among the points to represent similarities — the closer the points, the more similar they are. Thus, clustering is centered around an intuitively compelling but vaguely defined goal: given an underlying set of points, partition them into a collection of clusters so that points in the same cluster are close together, while points in different clusters are far apart. The study of clustering is unified only at this very general level of description, however; at the level of concrete methods and algorithms, one quickly encounters a bewildering array of different clustering techniques, including agglomerative, spectral, information-theoretic, and centroid-based, as well as those arising from combinatorial optimization and from probabilistic generative models. 
These techniques are based on diverse underlying principles, and they often lead to qualitatively different results. A number of standard textbooks [1, 4, 6, 9] provide overviews of a range of the approaches that are generally employed. Given the scope of the issue, there has been relatively little work aimed at reasoning about clustering independently of any particular algorithm, objective function, or generative data model. But it is not clear that this needs to be the case. To take a motivating example from a technically different but methodologically similar setting, research in mathematical economics has frequently formalized broad intuitive notions (how to fairly divide resources, or how to achieve consensus from individual preferences) in what is often termed an axiomatic framework — one enumerates a collection of simple properties that a solution ought to satisfy, and then studies how these properties constrain the solutions one is able to obtain [10]. In some striking cases, as in Arrow’s celebrated theorem on social choice functions [2], the result is impossibility — there is no solution that simultaneously satisfies a small collection of simple properties. In this paper, we develop an axiomatic framework for clustering. First, as is standard, we define a clustering function to be any function f that takes a set S of n points with pairwise distances between them, and returns a partition of S. (The points in S are not assumed to belong to any ambient space; the pairwise distances are the only data one has about them.) We then consider the effect of requiring the clustering function to obey certain natural properties. Our first result is a basic impossibility theorem: for a set of three simple properties — essentially scale-invariance, a richness requirement that all partitions be achievable, and a consistency condition on the shrinking and stretching of individual distances — we show that there is no clustering function satisfying all three. 
None of these properties is redundant, in the sense that it is easy to construct clustering functions satisfying any two of the three. We also show, by way of contrast, that certain natural relaxations of this set of properties are satisfied by versions of well-known clustering functions, including those derived from single-linkage and sum-of-pairs. In particular, we fully characterize the set of possible outputs of a clustering function that satisfies the scale-invariance and consistency properties. How should one interpret an impossibility result in this setting? The fact that it arises directly from three simple constraints suggests a technical underpinning for the difficulty in unifying the initial, informal concept of “clustering.” It indicates a set of basic trade-offs that are inherent in the clustering problem, and offers a way to distinguish between clustering methods based not simply on operational grounds, but on the ways in which they resolve the choices implicit in these trade-offs. Exploring relaxations of the properties helps to sharpen this type of analysis further — providing a perspective, for example, on the distinction between clustering functions that fix the number of clusters a priori and those that do not; and between clustering functions that build in a fundamental length scale and those that do not. Other Axiomatic Approaches. As discussed above, the vast majority of approaches to clustering are derived from the application of specific algorithms, the optima of specific objective functions, or the consequences of particular probabilistic generative models for the data. Here we briefly review work seeking to examine properties that do not overtly impose a particular objective function or model. Jardine and Sibson [7] and Puzicha, Hofmann, and Buhmann [12] have considered axiomatic approaches to clustering, although they operate in formalisms quite different from ours, and they do not seek impossibility results. 
Jardine and Sibson are concerned with hierarchical clustering, where one constructs a tree of nested clusters. They show that a hierarchical version of single-linkage is the unique function consistent with a collection of properties; however, this is primarily a consequence of the fact that one of their properties is an implicit optimization criterion that is uniquely optimized by single-linkage. Puzicha et al. consider properties of cost functions on partitions; these implicitly define clustering functions through the process of choosing a minimum-cost partition. They investigate a particular class of clustering functions that arises if one requires the cost function to decompose into a certain additive form. Recently, Kalai, Papadimitriou, Vempala, and Vetta have also investigated an axiomatic framework for clustering [8]; like the approach of Jardine and Sibson [7], and in contrast to our work here, they formulate a collection of properties that are sufficient to uniquely specify a particular clustering function. Axiomatic approaches have also been applied in areas related to clustering — particularly in collaborative filtering, which harnesses similarities among users to make recommendations, and in discrete location theory, which focuses on the placement of “central” facilities among distributed collections of individuals. For collaborative filtering, Pennock et al. [11] show how results from social choice theory, including versions of Arrow’s Impossibility Theorem [2], can be applied to characterize recommendation systems satisfying collections of simple properties. In discrete location theory, Hansen and Roberts [5] prove an impossibility result for choosing a central facility to serve a set of demands on a graph; essentially, given a certain collection of required properties, they show that any function that specifies the resulting facility must be highly sensitive to small changes in the input. 
2 The Impossibility Theorem A clustering function operates on a set S of n ≥2 points and the pairwise distances among them. Since we wish to deal with point sets that do not necessarily belong to an ambient space, we identify the points with the set S = {1, 2, . . ., n}. We then define a distance function to be any function d : S × S →R such that for distinct i, j ∈S, we have d(i, j) ≥0, d(i, j) = 0 if and only if i = j, and d(i, j) = d(j, i). One can optionally restrict attention to distance functions that are metrics by imposing the triangle inequality: d(i, k) ≤d(i, j) + d(j, k) for all i, j, k ∈S. We will not require the triangle inequality in the discussion here, but the results to follow — both negative and positive — still hold if one does require it. A clustering function is a function f that takes a distance function d on S and returns a partition Γ of S. The sets in Γ will be called its clusters. We note that, as written, a clustering function is defined only on point sets of a particular size (n); however, all the specific clustering functions we consider here will be defined for all values of n larger than some small base value. Here is a first property one could require of a clustering function. If d is a distance function, we write α·d to denote the distance function in which the distance between i and j is αd(i, j). Scale-Invariance. For any distance function d and any α > 0, we have f(d) = f(α · d). This is simply the requirement that the clustering function not be sensitive to changes in the units of distance measurement — it should not have a built-in “length scale.” A second property is that the output of the clustering function should be “rich” — every partition of S is a possible output. To state this more compactly, let Range(f) denote the set of all partitions Γ such that f(d) = Γ for some distance function d. Richness. Range(f) is equal to the set of all partitions of S. In other words, suppose we are given the names of the points only (i.e. 
the indices in S) but not the distances between them. Richness requires that for any desired partition Γ, it should be possible to construct a distance function d on S for which f(d) = Γ. Finally, we discuss a Consistency property that is more subtle than the first two. We think of a clustering function as being "consistent" if it exhibits the following behavior: when we shrink distances between points inside a cluster and expand distances between points in different clusters, we get the same result. To make this precise, we introduce the following definition. Let Γ be a partition of S, and d and d′ two distance functions on S. We say that d′ is a Γ-transformation of d if (a) for all i, j ∈S belonging to the same cluster of Γ, we have d′(i, j) ≤d(i, j); and (b) for all i, j ∈S belonging to different clusters of Γ, we have d′(i, j) ≥d(i, j). Consistency. Let d and d′ be two distance functions. If f(d) = Γ, and d′ is a Γ-transformation of d, then f(d′) = Γ. In other words, suppose that the clustering Γ arises from the distance function d. If we now produce d′ by reducing distances within the clusters and enlarging distances between the clusters, then the same clustering Γ should arise from d′. We can now state the impossibility theorem very simply.

Theorem 2.1 For each n ≥2, there is no clustering function f that satisfies Scale-Invariance, Richness, and Consistency.

We will prove Theorem 2.1 in the next section, as a consequence of a more general statement. Before doing this, we reflect on the relation of these properties to one another by showing that there exist natural clustering functions satisfying any two of the three properties. To do this, we describe the single-linkage procedure (see e.g. [6]), which in fact defines a family of clustering functions.
Intuitively, single-linkage operates by initializing each point as its own cluster, and then repeatedly merging the pair of clusters whose distance to one another (as measured from their closest points of approach) is minimum. More concretely, single-linkage constructs a weighted complete graph Gd whose node set is S and for which the weight on edge (i, j) is d(i, j). It then orders the edges of Gd by non-decreasing weight (breaking ties lexicographically), and adds edges one at a time until a specified stopping condition is reached. Let Hd denote the subgraph consisting of all edges that are added before the stopping condition is reached; the connected components of Hd are the clusters. Thus, by choosing a stopping condition for the single-linkage procedure, one obtains a clustering function, which maps the input distance function to the set of connected components that results at the end of the procedure. We now show that for any two of the three properties in Theorem 2.1, one can choose a single-linkage stopping condition so that the resulting clustering function satisfies these two properties. Here are the three types of stopping conditions we will consider.

• k-cluster stopping condition. Stop adding edges when the subgraph first consists of k connected components. (We will only consider this condition to be well-defined when the number of points is at least k.)
• distance-r stopping condition. Only add edges of weight at most r.
• scale-α stopping condition. Let ρ∗ denote the maximum pairwise distance; i.e. ρ∗ = max_{i,j} d(i, j). Only add edges of weight at most αρ∗.

It is clear that these various stopping conditions qualitatively trade off certain of the properties in Theorem 2.1. Thus, for example, the k-cluster stopping condition does not attempt to produce all possible partitions, while the distance-r stopping condition builds in a fundamental length scale, and hence is not scale-invariant.
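The single-linkage procedure and the three stopping conditions can be sketched as follows (a toy implementation for illustration; the four-point example, its labels, and its distances are invented):

```python
from itertools import combinations

def single_linkage(points, d, may_add):
    """points: list of labels; d: dict mapping frozenset({i, j}) -> weight;
    may_add(weight, n_components) decides whether the next edge is added."""
    parent = {p: p for p in points}
    def find(p):
        while parent[p] != p:
            p = parent[p]
        return p
    for i, j in sorted(combinations(points, 2), key=lambda e: d[frozenset(e)]):
        n_components = len({find(p) for p in points})
        if not may_add(d[frozenset((i, j))], n_components):
            break
        parent[find(i)] = find(j)  # merge the two connected components
    clusters = {}
    for p in points:
        clusters.setdefault(find(p), set()).add(p)
    return sorted(map(frozenset, clusters.values()), key=min)

# The three stopping conditions from the text:
def k_cluster(k):
    return lambda w, ncomp: ncomp > k   # stop at k components

def distance_r(r):
    return lambda w, ncomp: w <= r      # only edges of weight <= r

def scale_alpha(alpha, d):
    rho = max(d.values())               # maximum pairwise distance
    return lambda w, ncomp: w <= alpha * rho

points = ['a', 'b', 'c', 'd']
d = {frozenset(('a', 'b')): 1.0, frozenset(('c', 'd')): 1.0,
     frozenset(('a', 'c')): 10.0, frozenset(('a', 'd')): 10.0,
     frozenset(('b', 'c')): 10.0, frozenset(('b', 'd')): 11.0}
result = single_linkage(points, d, k_cluster(2))
print(result)  # the two tight pairs: {a, b} and {c, d}
```

Consistent with the trade-offs discussed above, rescaling all distances leaves the k-cluster output unchanged, whereas the distance-r condition is sensitive to the choice of units.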
However, by the appropriate choice of one of these stopping conditions, one can achieve any two of the three properties in Theorem 2.1.

Theorem 2.2 (a) For any k ≥1, and any n ≥k, single-linkage with the k-cluster stopping condition satisfies Scale-Invariance and Consistency. (b) For any positive α < 1, and any n ≥3, single-linkage with the scale-α stopping condition satisfies Scale-Invariance and Richness. (c) For any r > 0, and any n ≥2, single-linkage with the distance-r stopping condition satisfies Richness and Consistency.

3 Antichains of Partitions

We now state and prove a strengthening of the impossibility result. We say that a partition Γ′ is a refinement of a partition Γ if for every set C′ ∈Γ′, there is a set C ∈Γ such that C′ ⊆C. We define a partial order on the set of all partitions by writing Γ′ ⪯Γ if Γ′ is a refinement of Γ. Following the terminology of partially ordered sets, we say that a collection of partitions is an antichain if it does not contain two distinct partitions such that one is a refinement of the other. For a set of n ≥2 points, the collection of all partitions does not form an antichain; thus, Theorem 2.1 follows from:

Theorem 3.1 If a clustering function f satisfies Scale-Invariance and Consistency, then Range(f) is an antichain.

Proof. For a partition Γ, we say that a distance function d (a, b)-conforms to Γ if, for all pairs of points i, j that belong to the same cluster of Γ, we have d(i, j) ≤a, while for all pairs of points i, j that belong to different clusters, we have d(i, j) ≥b. With respect to a given clustering function f, we say that a pair of positive real numbers (a, b) is Γ-forcing if, for all distance functions d that (a, b)-conform to Γ, we have f(d) = Γ. Let f be a clustering function that satisfies Consistency. We claim that for any partition Γ ∈Range(f), there exist positive real numbers a < b such that the pair (a, b) is Γ-forcing.
To see this, we first note that since Γ ∈Range(f), there exists a distance function d such that f(d) = Γ. Now, let a′ be the minimum distance among pairs of points in the same cluster of Γ, and let b′ be the maximum distance among pairs of points that do not belong to the same cluster of Γ. Choose numbers a < b so that a ≤a′ and b ≥b′. Clearly any distance function d′ that (a, b)-conforms to Γ must be a Γ-transformation of d, and so by the Consistency property, f(d′) = Γ. It follows that the pair (a, b) is Γ-forcing. Now suppose further that the clustering function f satisfies Scale-Invariance, and that there exist distinct partitions Γ0, Γ1 ∈Range(f) such that Γ0 is a refinement of Γ1. We show how this leads to a contradiction. Let (a0, b0) be a Γ0-forcing pair, and let (a1, b1) be a Γ1-forcing pair, where a0 < b0 and a1 < b1; the existence of such pairs follows from our claim above. Let a2 be any number less than or equal to a1, and choose ε so that 0 < ε < a0a2/b0. It is now straightforward to construct a distance function d with the following properties: For pairs of points i, j that belong to the same cluster of Γ0, we have d(i, j) ≤ε; for pairs i, j that belong to the same cluster of Γ1 but not to the same cluster of Γ0, we have a2 ≤d(i, j) ≤a1; and for pairs i, j that do not belong to the same cluster of Γ1, we have d(i, j) ≥b1. The distance function d (a1, b1)-conforms to Γ1, and so we have f(d) = Γ1. Now set α = b0/a2, and define d′ = α·d. By Scale-Invariance, we must have f(d′) = f(d) = Γ1. But for points i, j in the same cluster of Γ0 we have d′(i, j) ≤εb0/a2 < a0, while for points i, j that do not belong to the same cluster of Γ0 we have d′(i, j) ≥ a2b0/a2 = b0. Thus d′ (a0, b0)-conforms to Γ0, and so we must have f(d′) = Γ0. As Γ0 ≠ Γ1, this is a contradiction. The proof above uses our assumption that the clustering function f is defined on the set of all distance functions on n points.
However, essentially the same proof yields a corresponding impossibility result for clustering functions f that are defined only on metrics, or only on distance functions arising from n points in a Euclidean space of some dimension. To adapt the proof, one need only be careful to choose the constant a2 and distance function d to satisfy the required properties. We now prove a complementary positive result; together with Theorem 3.1, this completely characterizes the possible values of Range(f) for clustering functions f that satisfy Scale-Invariance and Consistency.

Theorem 3.2 For every antichain of partitions A, there is a clustering function f satisfying Scale-Invariance and Consistency for which Range(f) = A.

Proof. Given an arbitrary antichain A, it is not clear how to produce a stopping condition for the single-linkage procedure that gives rise to a clustering function f with Range(f) = A. (Note that the k-cluster stopping condition yields a clustering function whose range is the antichain consisting of all partitions into k sets.) Thus, to prove this result, we use a variant of the sum-of-pairs clustering function (see e.g. [3]), adapted to general antichains. We focus on the case in which |A| > 1, since the case of |A| = 1 is trivial. For a partition Γ ∈ A, we write (i, j) ∼ Γ if both i and j belong to the same cluster in Γ. The A-sum-of-pairs function f seeks the partition Γ ∈ A that minimizes the sum of all distances between pairs of points in the same cluster; in other words, it seeks the Γ ∈ A minimizing the objective function Φd(Γ) = Σ(i,j)∼Γ d(i, j). (Ties are broken lexicographically.) It is crucial that the minimization is only over partitions in A; clearly, if we wished to minimize this objective function over all partitions, we would choose the partition in which each point forms its own cluster. It is clear that f satisfies Scale-Invariance, since Φα·d(Γ) = αΦd(Γ) for any partition Γ.
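The A-sum-of-pairs function can be made concrete by brute force. The sketch below is our own illustrative implementation (partitions represented as lists of sets, names ours); it also verifies the antichain condition using the refinement relation defined above.

```python
from itertools import combinations

def is_refinement(gamma_prime, gamma):
    """Gamma' refines Gamma iff every cluster of Gamma' lies inside some
    cluster of Gamma."""
    return all(any(c_p <= c for c in gamma) for c_p in gamma_prime)

def sum_of_pairs(A, d):
    """A-sum-of-pairs: among the partitions in the antichain A, return one
    minimizing Phi_d(Gamma) = sum of d(i, j) over within-cluster pairs."""
    assert not any(is_refinement(p, q) for p in A for q in A if p is not q), \
        "A must be an antichain"
    def phi(partition):
        return sum(d(i, j) for cluster in partition
                   for i, j in combinations(sorted(cluster), 2))
    return min(A, key=phi)

# All partitions of {0, 1, 9, 10} into exactly two clusters form an antichain.
A = [[{0, 1}, {9, 10}], [{0, 9}, {1, 10}], [{0, 10}, {1, 9}],
     [{0}, {1, 9, 10}], [{1}, {0, 9, 10}], [{9}, {0, 1, 10}], [{10}, {0, 1, 9}]]
best = sum_of_pairs(A, lambda i, j: abs(i - j))
```

On this example the well-separated partition {0, 1}, {9, 10} has the smallest within-cluster sum.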
By definition we have Range(f) ⊆ A, and we argue that Range(f) ⊇ A as follows. For any partition Γ ∈ A, construct a distance function d with the following properties: d(i, j) < n⁻³ for every pair of points i, j belonging to the same cluster of Γ, and d(i, j) ≥ 1 for every pair of points i, j belonging to different clusters of Γ. We have Φd(Γ) < 1; and moreover Φd(Γ′) < 1 only for partitions Γ′ that are refinements of Γ. Since A is an antichain, it follows that Γ must minimize Φd over all partitions in A, and hence f(d) = Γ. It remains only to verify Consistency. Suppose that for the distance function d, we have f(d) = Γ; and let d′ be a Γ-transformation of d. For any partition Γ′, let ∆(Γ′) = Φd(Γ′) − Φd′(Γ′). It is enough to show that for any partition Γ′ ∈ A, we have ∆(Γ) ≥ ∆(Γ′). But this follows simply because ∆(Γ) = Σ(i,j)∼Γ [d(i, j) − d′(i, j)], while

∆(Γ′) = Σ(i,j)∼Γ′ [d(i, j) − d′(i, j)] ≤ Σ(i,j)∼Γ′ and (i,j)∼Γ [d(i, j) − d′(i, j)] ≤ ∆(Γ),

where both inequalities follow because d′ is a Γ-transformation of d: first, only terms corresponding to pairs in the same cluster of Γ are non-negative; and second, every term corresponding to a pair in the same cluster of Γ is non-negative.

4 Centroid-Based Clustering and Consistency

In a widely-used approach to clustering, one selects k of the input points as centroids, and then defines clusters by assigning each point in S to its nearest centroid. The goal, intuitively, is to choose the centroids so that each point in S is close to at least one of them. This overall approach arises both from combinatorial optimization perspectives, where it has roots in facility location problems [9], and in maximum-likelihood methods, where the centroids may represent centers of probability density functions [4, 6]. We show here that for a fairly general class of centroid-based clustering functions, including k-means and k-median, none of the functions in the class satisfies the Consistency property.
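For small inputs, this class of centroid-based functions can be realized by exhaustive search. The sketch below is our own illustrative implementation (names ours): taking g as the identity gives the k-median objective, and g(d) = d² gives the objective underlying k-means.

```python
from itertools import combinations

def kg_centroid(points, d, k, g):
    """(k, g)-centroid clustering: choose k centroid points T minimizing
    sum over i of g(d(i, T)), where d(i, T) = min over t in T of d(i, t);
    then assign each point to its nearest centroid.  Brute-force sketch."""
    def cost(T):
        return sum(g(min(d(i, t) for t in T)) for i in points)
    T = min(combinations(points, k), key=cost)
    clusters = {t: set() for t in T}
    for i in points:
        clusters[min(T, key=lambda t: d(i, t))].add(i)
    return list(clusters.values())
```

For example, on six points forming two tight groups on a line, both the k-median and k-means instances recover the two groups.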
This suggests an interesting tension between Consistency and the centroid-based approach to clustering, and forms a contrast with the results for single-linkage and sum-of-pairs in previous sections. Specifically, for any natural number k ≥ 2, and any continuous, non-decreasing, and unbounded function g : R+ → R+, we define the (k, g)-centroid clustering function as follows. First, we choose the set of k “centroid” points T ⊆ S for which the objective function Λgd(T) = Σi∈S g(d(i, T)) is minimized. (Here d(i, T) = minj∈T d(i, j).) Then we define a partition of S into k clusters by assigning each point to the element of T closest to it. The k-median function [9] is obtained by setting g to be the identity function, while the objective function underlying k-means clustering [4, 6] is obtained by setting g(d) = d².

Theorem 4.1 For every k ≥ 2 and every function g chosen as above, and for n sufficiently large relative to k, the (k, g)-centroid clustering function does not satisfy the Consistency property.

Proof Sketch. We describe the proof for k = 2 clusters; the case of k > 2 is similar. We consider a set of points S that is divided into two subsets: a set X consisting of m points, and a set Y consisting of γm points, for a small number γ > 0. The distance between points in X is r, the distance between points in Y is ε < r, and the distance from a point in X to a point in Y is r + δ, for a small number δ > 0. By choosing γ, r, ε, and δ appropriately, the optimal choice of k = 2 centroids will consist of one point from X and one from Y, and the resulting partition Γ will have clusters X and Y. Now, suppose we divide X into sets X0 and X1 of equal size, and reduce the distances between points in the same Xi to be r′ < r (keeping all other distances the same). This can be done, for r′ small enough, so that the optimal choice of two centroids will now consist of one point from each Xi, yielding a different partition of S.
As our second distance function is a Γ-transformation of the first, this violates Consistency.

5 Relaxing the Properties

In addition to looking for clustering functions that satisfy subsets of the basic properties, we can also study the effect of relaxing the properties themselves. Theorem 3.2 is a step in this direction, showing that the sum-of-pairs function satisfies Scale-Invariance and Consistency, together with a relaxation of the Richness property. As another example, it is interesting to note that single-linkage with the distance-r stopping condition satisfies a natural relaxation of Scale-Invariance: if α > 1, then f(α · d) is a refinement of f(d). We now consider some relaxations of Consistency. Let f be a clustering function, and d a distance function such that f(d) = Γ. If we reduce distances within clusters and expand distances between clusters, Consistency requires that f output the same partition Γ. But one could imagine requiring something less: perhaps changing distances this way should be allowed to create additional sub-structure, leading to a new partition in which each cluster is a subset of one of the original clusters. Thus, we can define Refinement-Consistency, a relaxation of Consistency, to require that if d′ is an f(d)-transformation of d, then f(d′) should be a refinement of f(d). We can show that the natural analogue of Theorem 2.1 still holds: there is no clustering function that satisfies Scale-Invariance, Richness, and Refinement-Consistency. However, there is a crucial sense in which this result “just barely” holds, rendering it arguably less interesting to us here. Specifically, let Γ∗n denote the partition of S = {1, 2, . . . , n} in which each individual element forms its own cluster. Then there exist clustering functions f that satisfy Scale-Invariance and Refinement-Consistency, and for which Range(f) consists of all partitions except Γ∗n.
(One example is single-linkage with the distance-(αδ) stopping condition, where δ = min_{i,j} d(i, j) is the minimum inter-point distance, and α ≥ 1.) Such functions f, in addition to Scale-Invariance and Refinement-Consistency, thus satisfy a kind of Near-Richness property: one can obtain every partition as output except for a single, trivial partition. It is in this sense that our impossibility result for Refinement-Consistency, unlike Theorem 2.1, is quite “brittle.” To relax Consistency even further, we could say simply that if d′ is an f(d)-transformation of d, then one of f(d) or f(d′) should be a refinement of the other. In other words, f(d′) may be either a refinement or a “coarsening” of f(d). It is possible to construct clustering functions f that satisfy this even weaker variant of Consistency, together with Scale-Invariance and Richness.

Acknowledgements. I thank Shai Ben-David, John Hopcroft, and Lillian Lee for valuable discussions on this topic. This research was supported in part by a David and Lucile Packard Foundation Fellowship, an ONR Young Investigator Award, an NSF Faculty Early Career Development Award, and NSF ITR Grant IIS-0081334.

References
[1] M. Anderberg, Cluster Analysis for Applications, Academic Press, 1973.
[2] K. Arrow, Social Choice and Individual Values, Wiley, New York, 1951.
[3] M. Bern, D. Eppstein, “Approximation algorithms for geometric problems,” in Approximation Algorithms for NP-Hard Problems (D. Hochbaum, Ed.), PWS Publishing, 1996.
[4] R. Duda, P. Hart, D. Stork, Pattern Classification (2nd edition), Wiley, 2001.
[5] P. Hansen, F. Roberts, “An impossibility result in axiomatic location theory,” Mathematics of Operations Research, 21 (1996).
[6] A. Jain, R. Dubes, Algorithms for Clustering Data, Prentice-Hall, 1981.
[7] N. Jardine, R. Sibson, Mathematical Taxonomy, Wiley, 1971.
[8] A. Kalai, C. Papadimitriou, S. Vempala, A. Vetta, personal communication, June 2002.
[9] P. Mirchandani, R. Francis, Discrete Location Theory, Wiley, 1990.
[10] M. Osborne, A. Rubinstein, A Course in Game Theory, MIT Press, 1994.
[11] D. Pennock, E. Horvitz, C.L. Giles, “Social choice theory and recommender systems: Analysis of the axiomatic foundations of collaborative filtering,” Proc. 17th AAAI, 2000.
[12] J. Puzicha, T. Hofmann, J. Buhmann, “A Theory of Proximity Based Clustering: Structure Detection by Optimization,” Pattern Recognition, 33 (2000).
Visual Development Aids the Acquisition of Motion Velocity Sensitivities

Robert A. Jacobs, Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, robbie@bcs.rochester.edu
Melissa Dominguez, Department of Computer Science, University of Rochester, Rochester, NY 14627, melissad@cs.rochester.edu

Abstract

We consider the hypothesis that systems learning aspects of visual perception may benefit from the use of suitably designed developmental progressions during training. Four models were trained to estimate motion velocities in sequences of visual images. Three of the models were “developmental models” in the sense that the nature of their input changed during the course of training. They received a relatively impoverished visual input early in training, and the quality of this input improved as training progressed. One model used a coarse-to-multiscale developmental progression (i.e. it received coarse-scale motion features early in training and finer-scale features were added to its input as training progressed), another model used a fine-to-multiscale progression, and the third model used a random progression. The final model was nondevelopmental in the sense that the nature of its input remained the same throughout the training period. The simulation results show that the coarse-to-multiscale model performed best. Hypotheses are offered to account for this model’s superior performance. We conclude that suitably designed developmental sequences can be useful to systems learning to estimate motion velocities. The idea that visual development can aid visual learning is a viable hypothesis in need of further study.

1 Introduction

With relatively few exceptions, relationships between development and learning have largely been ignored by the neural computation community. This is surprising because development may be nature’s way of biasing biological learning systems so that they achieve better performance.
Development may also represent an effective means for engineers to bias machine learning systems. Learning systems are inherently faced with the bias-variance dilemma [1]. Systems with little or no bias tend to interpolate in unpredictable ways and, thus, have highly variable generalization performance. Systems with larger bias, in contrast, tend to show better generalization performance when exposed to those training sets that they can adequately learn. Development may be an effective means of adding suitable bias to a system, thereby enhancing the generalization performance of that system. In previous work, we studied the effects of different types of developmental sequences on the performances of systems trained to estimate the binocular disparities present in pairs of visual images [2]. Systems consisted of three components. The first component was a pair of right-eye and left-eye images. For example, the images may have depicted a light or dark object against a gray background. The second component was a set of binocular energy filters. These filters are widely used to model the binocular sensitivities of simple and complex cells in primary visual cortex of primates [3]. Based on local patches of the right-eye and left-eye images, each filter acted as a disparity feature detector at a coarse, medium, or fine scale depending on whether the filter was tuned to a low, medium, or high spatial frequency, respectively. The third component was an artificial neural network. The outputs of the binocular energy filters were the inputs to this network. The network was trained to estimate the disparity of the object which was defined as the amount that the object was shifted between the right-eye and left-eye images. A non-developmental system was compared to three developmental systems. The network of the non-developmental system received the outputs of all binocular energy filters throughout the entire training period.
The networks of the developmental systems, in contrast, were trained in three stages. The network of the coarse-to-multiscale system received the outputs of binocular energy filters tuned to a low spatial frequency during the first training stage. It received the outputs of filters tuned to low and medium spatial frequencies during the second training stage, and it received the outputs of all filters during the third training stage. The network of the fine-to-multiscale system was trained in an analogous way, though its filters were added in the opposite order. This network received the outputs of filters tuned to a high frequency during the first training stage, and the outputs of medium and then low frequency filters were added during subsequent stages. The network of the random developmental model was also trained in stages, though its inputs were chosen at random at each stage and, thus, were not organized by spatial frequency content. The results show that the coarse-to-multiscale and fine-to-multiscale systems consistently outperformed the non-developmental and random developmental systems. The fact that they outperformed the non-developmental system is important because this demonstrates that models that undergo a developmental maturation can acquire a more advanced perceptual ability than one that does not. The fact that they outperformed the random developmental system is important because this demonstrates that not all developmental sequences can be expected to provide performance benefits. To the contrary, only sequences whose characteristics are matched to the task should lead to superior performance. 
In conjunction with other results not described here, these findings suggest that the most successful systems at learning to detect binocular disparities are systems that are exposed to visual inputs at a single scale early in training, and for which the resolution of their inputs progresses in an orderly fashion from one scale to a neighboring scale during the course of training. At a more general level, these results suggest that the idea that visual development aids visual learning is a viable hypothesis in need of further study. This paper studies this hypothesis in the context of visual motion velocity estimation. Our simulations show that the tasks of disparity estimation and velocity estimation yield similar, though not identical, patterns of results. Although a developmental approach to the velocity estimation task is shown to be beneficial, it is not the case that all developmental progressions that lead to performance advantages on the disparity estimation task also lead to advantages on the velocity estimation task. In particular, a coarse-to-multiscale developmental system outperformed non-developmental and random developmental systems on the velocity estimation task, but a fine-to-multiscale system did not. We hypothesize that the performance advantage of the coarse-to-multiscale system relative to the fine-to-multiscale system is due to the fact that the coarse-to-multiscale system learned to make greater use of motion energy filters tuned to a low spatial frequency. Analyses suggest that coarse-scale motion features are more informative for the velocity estimation task than fine-scale features.

2 Developmental and Non-developmental Systems

The structure of the developmental and non-developmental systems was as follows. The input to each system was a sequence of 88 retinal images where each image was a one-dimensional array 40 pixels in length.
As described below, this sequence depicted an object moving at a constant velocity in front of a stationary background. The retinal array was treated as if it were shaped like a circle in the sense that the leftmost and rightmost pixels were regarded as neighbors. This wraparound of the left and right edges was done to avoid edge artifacts in the spatial dimension. Although a one-dimensional retina is a simplification, its use is justified by the need to keep the simulation times within reason. The sequence of retinal images was filtered using motion energy filters. Based on neurophysiological results, Adelson and Bergen [4] proposed motion energy filters as a way of modeling the motion sensitivities of simple and complex cells in primary visual cortex. A sequence of one-dimensional images can be represented using a two-dimensional array where one dimension encodes space and the other dimension encodes time. In this case, motion energy filters are two-dimensional filters which extract motion information in local patches of the spatiotemporal space. The receptive field profile of a simple cell can be described mathematically as a Gabor function which is a sinusoid multiplied by a Gaussian envelope. A quadrature pair of such functions with even and odd phases tuned to leftward (−) and rightward (+) directions of motion is given by

f_even(x, t) = exp(−x²/(2σx²) − t²/(2σt²)) cos(2π(ωx x ± ωt t))    (1)
f_odd(x, t) = exp(−x²/(2σx²) − t²/(2σt²)) sin(2π(ωx x ± ωt t))    (2)

where x and t are the spatial and temporal distances to the center of the Gaussian, σx² and σt² are the spatial and temporal variances of the Gaussian, and ωx and ωt are the spatial and temporal frequencies of the sinusoids. The ratio ωt/ωx determines the orientation of a Gabor function in the spatiotemporal space which, in turn, determines the velocity sensitivity of the function. The activity of a simple cell is given by the square of the convolution of the cell’s receptive field profile with the spatiotemporal pattern.
The activities of simple cells with even and odd phases are summed in order to form the activity of a complex cell. This activity is known as a motion energy. In our simulations, we used a subset of the possible receptive-field locations in the two-dimensional (40 pixels × 88 time frames) spatiotemporal space. This subset formed a 20 × 4 uniform grid such that receptive fields were centered on odd-numbered pixels and odd-numbered time frames. This grid was located in the center of the space with respect to the temporal dimension. An advantage of this choice of locations was that edge artifacts were avoided because all receptive-fields fell entirely within the spatiotemporal space. Fifteen complex cells corresponding to three spatial frequencies and five temporal frequencies were centered at each receptive-field location. The spatial and temporal frequencies were each separated by an octave. Temporal frequencies were chosen so that the set of cells at each spatial frequency had the same pattern of velocity tunings. Specifically, the sets tuned to low (0.0625 cycles/pixel), medium (0.125 cycles/pixel), and high (0.25 cycles/pixel) spatial frequencies had velocity tunings of 0.25, 0.5, 1.0, 2.0, and 4.0 pixels per time frame. All cells were tuned to rightward motion because we restricted our data sets to only include objects that were moving to the right. A cell’s spatial and temporal standard deviations were set to be inversely proportional to its spatial and temporal frequencies, respectively. The outputs of the complex cells within each spatial frequency band were normalized using a softmax nonlinearity. Consequently, complex cells tended to respond to relative contrast in the image sequence rather than absolute contrast [5] [6]. The normalized outputs of the complex cells were the inputs to an artificial neural network. The network had 1200 input units (the complex cells had 80 receptive-field locations and there were 15 cells at each location).
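The complex-cell computation described above can be sketched compactly: the squared even- and odd-phase Gabor responses are summed into a motion energy, and energies within a frequency band are softmax-normalized. This is a minimal sketch of the standard Adelson-Bergen construction; the parameter names and the exact softmax form are our assumptions, not taken from the paper.

```python
import numpy as np

def motion_energy(patch, wx, wt, sx, st):
    """Motion energy of a space-time patch (rows = time, cols = space):
    squared responses of an even/odd Gabor quadrature pair, summed."""
    X, T = np.meshgrid(np.arange(patch.shape[1]) - patch.shape[1] // 2,
                       np.arange(patch.shape[0]) - patch.shape[0] // 2)
    envelope = np.exp(-X**2 / (2 * sx**2) - T**2 / (2 * st**2))
    carrier = 2 * np.pi * (wx * X + wt * T)   # '+' picks one motion direction
    even = np.sum(envelope * np.cos(carrier) * patch)
    odd = np.sum(envelope * np.sin(carrier) * patch)
    return even**2 + odd**2

def softmax_normalize(energies):
    """Softmax normalization within one spatial frequency band, so cells
    respond to relative rather than absolute contrast."""
    e = np.asarray(energies, dtype=float)
    z = np.exp(e - e.max())                   # subtract max for stability
    return z / z.sum()
```

A drifting grating matched to the filter's space-time frequencies produces a positive energy, while a blank patch produces zero.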
The network’s hidden layer contained 18 hidden units which were organized into 3 groups of 6 units each. The connectivity of the hidden units was set so that each group had a limited receptive field, and so that neighboring groups had overlapping receptive fields. A group of hidden units received inputs from thirty-two receptive field locations at the complex cell level, and the receptive fields of neighboring groups overlapped by eight receptive-field locations. The hidden units used a logistic activation function. The output layer consisted of a single linear unit; this unit’s output was an estimate of the object velocity depicted in the sequence of retinal images. The weights of an artificial neural network were initialized to small random values, and were adjusted during the course of training to minimize a sum of squared error cost function using a conjugate gradient optimization procedure [7]. Weight sharing was implemented at the hidden unit level so that corresponding units within each group of hidden units had the same incoming and outgoing weight values, and so that a hidden unit had the same set of weight values from each receptive field location at the complex unit level. This provided the network with a degree of translation invariance, and also dramatically decreased the number of modifiable weight values in the network. It therefore decreased the number of data items needed to train the network, and the amount of time needed to train the network. Models were trained and tested using separate sets of training and test data items. Each set contained 250 randomly generated items. Training was terminated after 100 iterations through the training set. The results reported below are based on the data items from the test set. Three developmental systems and one non-developmental system were simulated. The coarse-to-multiscale system, or model C2M, was trained using a coarse-to-multiscale developmental sequence which was implemented as follows. 
The training period was divided into three stages. During the first stage, the neural network portion of the model only received the outputs of complex cells tuned to the low spatial frequency (the outputs of other complex cells were set to zero). During the second stage, the network received the outputs of complex cells tuned to low and medium spatial frequencies; it received the outputs of all complex cells during the third stage. The training of the fine-to-multiscale system, or model F2M, was identical to that of model C2M except that its training used a fine-to-multiscale developmental sequence. During the first stage of training, its network received the outputs of complex cells tuned to the high spatial frequency. This network received the outputs of complex cells tuned to high and medium spatial frequencies during the second stage, and received the outputs of all complex cells during the third stage. The training of the random developmental system, or model RD, also used a developmental sequence, though this sequence was generated randomly and, thus, was not based on the spatial frequency tunings of the complex cells. The collection of complex cells was randomly partitioned into three equal-sized subsets with the constraint that each subset included one-third of the cells at each receptive-field location. During the first stage of training, the neural network portion of the model only received the outputs of the complex cells in the first subset. It received the outputs of the cells in the first and second subsets during the second stage of training, and received the outputs of all complex cells during the third stage. In contrast, the training period of the non-developmental system, or model ND, was not divided into separate stages; its neural network received the outputs of all complex cells throughout the entire training period. 
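The staged regimes above amount to masking the complex-cell inputs by spatial frequency band. The sketch below is our own rendering of that staging (the paper states only that withheld outputs are set to zero; the variable names are ours).

```python
import numpy as np

def developmental_mask(inputs, stage, band_index, order):
    """Zero out the complex-cell bands not yet 'developed'.

    inputs      -- vector of complex-cell outputs
    band_index  -- band_index[i] is cell i's spatial frequency band
                   (0 = low, 1 = medium, 2 = high)
    order       -- bands listed in the sequence they come online
    stage       -- 1-based training stage; stage s exposes the first s bands
    """
    allowed = set(order[:stage])
    mask = np.array([b in allowed for b in band_index])
    return np.where(mask, inputs, 0.0)

C2M = [0, 1, 2]   # coarse-to-multiscale: low, then +medium, then +high
F2M = [2, 1, 0]   # fine-to-multiscale: high, then +medium, then +low
```

A random developmental sequence corresponds to partitioning the cells into three random subsets rather than frequency bands, and the non-developmental model simply uses stage 3 throughout.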
Figure 1: Ten frames of an image sequence from the solid object data set (top) and ten frames of an image sequence from the noisy object data set (bottom).

3 Data Sets and Simulation Results

The performances of the four models were evaluated on two data sets. In all cases the images were gray scale with luminance values between 0 and 1, and motion velocities were rightward with magnitudes between 0 and 4 pixels per time frame. Fifteen simulations of each model on each data set were conducted. In the solid object data set, images consisted of a moving light or dark object in front of a stationary gray background. The object’s gray-scale values were randomly chosen to either be in the range from 0.0 to 0.1 or from 0.9 to 1.0, whereas the gray-scale value of the background was always 0.5. The size of the object was randomly chosen to be an integer between 6 and 12 pixels, its initial location was a randomly chosen pixel on the retina, and its velocity was randomly chosen to be a real value between 0 and 4 pixels per time frame. Given a sequence of images, the task of a model was to estimate the object’s velocity. The top portion of Figure 1 gives an example of ten frames of an image sequence from the solid object data set. The bar graph in Figure 2 illustrates the results. The horizontal axis gives the model, and the vertical axis gives the root mean squared error (RMSE) on the data items from the test set at the end of training (the error bars give the standard error of the mean). The labels for the developmental models C2M, F2M, and RD include a number. Recall that the training of these models was divided into three training stages (or developmental stages). The number in the label gives the length of developmental stages 1 and 2 (the length of developmental stage 3 can be calculated using the fact that the entire training period lasted 100 iterations).
For example, the label ‘C2M-5’ corresponds to a version of model C2M in which the first stage was 5 iterations, the second stage was 5 iterations, and the third stage was 90 iterations. In regard to model RD, we simulated four versions of this model (RD-5, RD-10, RD-20, and RD-30). For the sake of brevity, only the version that performed best is included in the graph.

Figure 2: The root mean squared errors (RMSE) on the test set data items for model ND, the best performing version of model RD, and different versions of models C2M and F2M after training on the solid object data set (the error bars give the standard error of the mean).

Model C2M significantly outperformed all other models. The version of this model which performed best was version C2M-20 which had an 11.5% smaller generalization error than model ND (t = 2.50, p < 0.02). In addition, C2M-20 had a 9.6% smaller error than the best version of model F2M (t = 3.57, p < 0.01), and a 7.2% smaller error than the best version of model RD (t = 2.30, p < 0.05). The images in the second data set, referred to as the noisy object data set, were meant to resemble random-dot kinematograms frequently used in behavioral experiments. Images contained a noisy object which was moving to the right and a noisy background which was stationary. The gray-scale values of the object pixels and the background pixels were set to random numbers between 0 and 1. The size of the object was randomly chosen to be an integer between 6 and 12 pixels, its initial location was a randomly chosen pixel on the retina, and its velocity was randomly chosen to be an integer between 0 and 4 pixels per time frame. As before, the task was to map an image sequence to an estimate of an object velocity. The bottom portion of Figure 1 gives an example of ten frames of an image sequence from the noisy object data set. The results are shown in Figure 3.
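The two kinds of data items described above could be generated as follows. This is our own implementation of the stated ranges (object size 6-12 pixels, real-valued velocities for solid objects, integer velocities for noisy objects, wraparound retina); the paper does not give its sampling code.

```python
import numpy as np

def make_sequence(kind, n_frames=88, retina=40, seed=None):
    """Generate one data item: an object drifting rightward on a circular
    (wraparound) retina.  kind is 'solid' (light or dark object on a gray
    background) or 'noisy' (random-texture object and background)."""
    rng = np.random.default_rng(seed)
    size = int(rng.integers(6, 13))                    # 6..12 pixels
    start = int(rng.integers(0, retina))
    if kind == 'solid':
        velocity = rng.uniform(0.0, 4.0)               # real-valued velocity
        frames = np.full((n_frames, retina), 0.5)      # gray background
        shade = rng.uniform(0.0, 0.1) if rng.random() < 0.5 \
            else rng.uniform(0.9, 1.0)
        obj = np.full(size, shade)
    else:
        velocity = float(rng.integers(0, 5))           # integer velocity 0..4
        frames = np.tile(rng.uniform(0.0, 1.0, retina), (n_frames, 1))
        obj = rng.uniform(0.0, 1.0, size)              # noisy texture
    for t in range(n_frames):
        left = start + int(round(velocity * t))
        for k in range(size):
            frames[t, (left + k) % retina] = obj[k]    # wraparound retina
    return frames, velocity
```

Each call returns an 88 × 40 spatiotemporal array together with the target velocity the network is trained to estimate.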
Model C2M, once again, outperformed the other models. Relative to model ND, all versions of model C2M showed superior performance (ND vs. C2M-5: t = 2.69, p < 0.02; ND vs. C2M-10: t = 2.78, p < 0.01; ND vs. C2M-20: t = 3.03, p < 0.01; ND vs. C2M-30: t = 4.14, p < 0.001). The version of model C2M which performed best was version C2M-30. On average, this version had an 8.9% smaller generalization error than model ND, a 6.1% smaller error than the best version of model F2M, and a 4.3% smaller error than the best version of model RD.

Figure 3: The root mean squared errors (RMSE) on the test set data items for model ND, the best performing version of model RD, and different versions of models C2M and F2M after training on the noisy object data set (the error bars give the standard error of the mean).

Why did model C2M show the best performance? Simulation results described in Jacobs and Dominguez [8] suggest that coarse-scale motion features are more informative for the velocity estimation task than fine-scale features. For example, networks that received only the outputs of complex cells tuned to a low spatial frequency consistently outperformed networks that received only the outputs of mid frequency complex cells or only the outputs of high frequency complex cells. We speculate that coarse-scale motion features are more informative for a number of reasons. First, complex cells tuned to the lowest spatial frequency have the largest receptive fields. As discussed by Weiss and Adelson [9], motion signals tend to be less ambiguous when the stimulus is viewed for a long duration and more ambiguous when the stimulus is viewed for a short duration. This type of reasoning also applies to the activities of complex cells with receptive fields in the spatiotemporal domain.
That is, there is comparatively less ambiguity in the activities of complex cells with larger receptive fields than in the activities of cells with smaller receptive fields. Because cells tuned to a low spatial frequency tend to have larger receptive fields than cells tuned to a high spatial frequency, low frequency tuned cells tend to be more reliable for the purposes of motion velocity estimation. Second, model C2M may have benefited from the fact that complex cells with large, overlapping receptive fields provide a high resolution coarse-code of the spatiotemporal space [10]-[12]. This code could provide model C2M with accurate information as to the location of the moving object at each moment in time. For example, the activities of the population of these cells may have coded with high accuracy the fact that the moving object was at location x1 at time t1 and at location x2 at time t2. If so, the model’s neural network could have easily learned to accurately estimate the object velocity by calculating (x2 − x1)/(t2 − t1). Model C2M would have an advantage over other models because it received this high resolution coarse-code throughout training. In contrast, model F2M, for example, received early in training only the outputs of complex cells with smaller, less-overlapping receptive fields. The activities of a population of these cells form a lower resolution coarse-code of the spatiotemporal space. As described above, in earlier work we found that the most successful systems at learning a binocular disparity estimation task were those that: (1) received inputs at a single frequency scale early in training, and (2) for which the resolution of their inputs progressed in an orderly fashion from one scale to a neighboring scale during the course of training [2]. Condition (1) allowed a system to combine and compare input features at an early training stage without the need to compensate for the fact that these features could be at different spatial scales.
If condition (2) was satisfied, when a system received inputs at a new spatial scale, it was close to a scale with which the system was already familiar. Although not described here (see Jacobs and Dominguez [8]), we also tested, on the motion velocity estimation task, the importance of having the resolution of a system's inputs progress in an orderly fashion from one scale to a neighboring scale. The results suggest that this factor is moderately important, but not highly important, for a developmental system learning to estimate motion velocities. Overall, it is more important for a system to receive the outputs of the low spatial frequency complex cells as early in training as possible. Based on the entire set of simulations, we conclude that suitably designed developmental sequences can be useful to systems learning to estimate motion velocities. The idea that visual development can aid visual learning is a viable hypothesis in need of further study.

Acknowledgments

This work was supported by NIH research grant RO1-EY13149.

References

[1] Geman, S., Bienenstock, E., and Doursat, R. (1992) Neural networks and the bias/variance dilemma. Neural Computation, 4, 1-58.
[2] Dominguez, M. and Jacobs, R.A. (2003) Developmental constraints aid the acquisition of binocular disparity sensitivities. Neural Computation, in press.
[3] Ohzawa, I., DeAngelis, G.C., and Freeman, R.D. (1990) Stereoscopic depth discrimination in the visual cortex: Neurons ideally suited as disparity detectors. Science, 249, 1037-1041.
[4] Adelson, E.H. and Bergen, J.R. (1985) Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America A, 2, 284-299.
[5] Heeger, D.J. (1992) Normalization of cell responses in cat striate cortex. Visual Neuroscience, 9, 181-197.
[6] Nowlan, S.J. and Sejnowski, T.J. (1994) Filter selection model for motion segmentation and velocity integration. Journal of the Optical Society of America A, 11, 3177-3200.
[7] Press, W.H., Teukolsky, S.A., Vetterling, W.T., and Flannery, B.P. (1992) Numerical Recipes in C: The Art of Scientific Computing. Cambridge, UK: Cambridge University Press.
[8] Jacobs, R.A. and Dominguez, M. (2003) Visual development and the acquisition of motion velocity sensitivities. Neural Computation, in press.
[9] Weiss, Y. and Adelson, E.H. (1998) Slow and smooth: A Bayesian theory for the combination of local motion signals in human vision. Center for Biological and Computational Learning Paper Number 158, Massachusetts Institute of Technology, Cambridge, MA.
[10] Milner, P.M. (1974) A model for visual shape recognition. Psychological Review, 81, 521-535.
[11] Hinton, G.E. (1981) Shape representation in parallel systems. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence.
[12] Ballard, D.H. (1986) Cortical connections and parallel processing: Structure and function. Behavioral and Brain Sciences, 9, 67-120.
On the Complexity of Learning the Kernel Matrix

Olivier Bousquet, Daniel J. L. Herrmann
MPI for Biological Cybernetics
Spemannstr. 38, 72076 Tübingen, Germany
{olivier.bousquet, daniel.herrmann}@tuebingen.mpg.de

Abstract

We investigate data based procedures for selecting the kernel when learning with Support Vector Machines. We provide generalization error bounds by estimating the Rademacher complexities of the corresponding function classes. In particular we obtain a complexity bound for function classes induced by kernels with given eigenvectors, i.e., we allow the spectrum to vary while keeping the eigenvectors fixed. This bound is only a logarithmic factor bigger than the complexity of the function class induced by a single kernel. However, optimizing the margin over such classes leads to overfitting. We thus propose a suitable way of constraining the class. We use an efficient algorithm to solve the resulting optimization problem, present preliminary experimental results, and compare them to an alignment-based approach.

1 Introduction

Ever since the introduction of the Support Vector Machine (SVM) algorithm, the question of choosing the kernel has been considered crucial. Indeed, the success of SVM can be attributed to the joint use of a robust classification procedure (large margin hyperplane) and of a convenient and versatile way of pre-processing the data (kernels). It turns out that with such a decomposition of the learning process into preprocessing and linear classification, the performance depends highly on the preprocessing and much less on the linear classification algorithm to be used (e.g. the kernel perceptron has been shown to have comparable performance to SVM with the same kernel). It is thus of high importance to have a criterion to choose the suitable kernel for a given problem. Ideally, this choice should be dictated by the data itself and the kernel should be 'learned' from the data.
The simplest way of doing so is to choose a parametric family of kernels (such as polynomial or Gaussian) and to choose the values of the parameters by cross-validation. However this approach is clearly limited to a small number of parameters and requires the use of extra data. Chapelle et al. [1] proposed a different approach. They used a bound on the generalization error and computed the gradient of this bound with respect to the kernel parameters. This makes it possible to perform a gradient descent optimization and thus to effectively handle a large number of parameters. More recently, the idea of using non-parametric classes of kernels has been proposed by Cristianini et al. [2]. They work in a transduction setting where the test data is known in advance. In that setting, the kernel reduces to a positive definite matrix of fixed size (Gram matrix). They consider the set of kernel matrices with given eigenvectors and choose the eigenvalues using the 'alignment' between the kernel and the data. This criterion has the advantage of being easily computed and optimized. However it has no direct connection to the generalization error. Lanckriet et al. [5] derived a generalization bound in the transduction setting and proposed to use this bound to choose the kernel. Their parameterization is based on a linear combination of given kernel matrices and their bound has the advantage of leading to a convex criterion. They thus proposed to use semidefinite programming for performing the optimization. Actually, if one wants to have a feasible optimization, one needs the criterion to be nice (e.g. differentiable) and the parameterization to be nice (e.g. the criterion is convex with respect to the parameters). The criterion and parameterization proposed by Lanckriet et al. satisfy these requirements. We shall use their approach and develop it further. In this paper, we try to combine the advantages of previous approaches.
In particular we propose several classes of kernels and give bounds on their Rademacher complexity. Instead of using semidefinite programming we propose a simple, fast and efficient gradient-descent algorithm. In section 2 we calculate the complexity of different classes of kernels. This yields a convex optimization problem. In section 3 we propose to restrict the optimization of the spectrum such that the order of the eigenvalues is preserved. This convex constraint is implemented by using polynomials of the kernel matrix with non-negative coefficients only. In section 4 we use gradient descent to implement the optimization algorithm. Experimental results on standard data sets (UCI Machine Learning Repository) show in section 5 that indeed overfitting happens if we do not keep the order of the eigenvalues.

2 Bounding the Rademacher Complexity of Matrix Classes

Let us introduce some notation. Let $\mathcal{X}$ be a measurable space (the instance space) and $\mathcal{Y} = \{-1, +1\}$. We consider here the setting of transduction where the data is generated as follows. A fixed sample of size $2n$ is given, $(x_1, y_1), \ldots, (x_{2n}, y_{2n})$, and a permutation $\pi$ of $\{1, \ldots, 2n\}$ is chosen at random (uniformly). The algorithm is given $x_{\pi(1)}, \ldots, x_{\pi(2n)}$ and $y_{\pi(1)}, \ldots, y_{\pi(n)}$, i.e. it has access to all instances but to the labels of the first $n$ instances only. The algorithm picks some classifier $f : \mathcal{X} \to \mathbb{R}$ and the goal is to minimize the error of this classifier on the test instances. Let us denote by $\mathrm{er}(f)$ the error of $f$ on the testing instances,
$$\mathrm{er}(f) = \frac{1}{n} \sum_{i=n+1}^{2n} \mathbf{1}\big[y_{\pi(i)} f(x_{\pi(i)}) < 0\big].$$
The empirical Rademacher complexity of a set $F$ of functions from $\mathcal{X}$ to $\mathbb{R}$ is defined as
$$\hat{R}(F) = \mathbb{E}_\sigma\Big[\sup_{f \in F} \frac{2}{n} \sum_{i=1}^n \sigma_i f(x_i)\Big],$$
where the expectation is taken with respect to the independent Rademacher random variables $\sigma_i$ ($\Pr[\sigma_i = 1] = \Pr[\sigma_i = -1] = 1/2$). For a vector $v$, $v \ge 0$ means that all the components of $v$ are non-negative. For a matrix $K$, $K \succeq 0$ means that $K$ is positive definite.
2.1 General Bound

We denote by $\phi$ the function defined as $\phi(t) = 1$ for $t \le 0$, $\phi(t) = 0$ for $t \ge 1$, and $\phi(t) = 1 - t$ otherwise. From the proof of Theorem 1 in [5] we obtain the lemma below.

Lemma 1 Let $F$ be a set of real-valued functions. For any $\delta > 0$, with probability at least $1 - \delta$, for all $f \in F$ we have
$$\mathrm{er}(f) \le \frac{1}{n}\sum_{i=1}^n \phi(y_i f(x_i)) + \mathbb{E}_\sigma\Big[\sup_{f\in F}\frac{2}{n}\sum_{i=1}^n \sigma_i\,\phi(y_i f(x_i))\Big] + \sqrt{\frac{\ln(1/\delta)}{2n}}.$$

Using the comparison inequality for Rademacher processes in [6] we immediately obtain the following corollary.

Corollary 1 Let $F$ be a set of real-valued functions. For any $\delta > 0$, with probability at least $1 - \delta$, for all $f \in F$ we have
$$\mathrm{er}(f) \le \frac{1}{n}\sum_{i=1}^n \phi(y_i f(x_i)) + \hat{R}(F) + \sqrt{\frac{\ln(1/\delta)}{2n}}.$$

Now we will apply this bound to several different classes of functions. We will thus compute $\hat{R}(F)$ for each of those classes. For a positive definite kernel $k : \mathcal{X}\times\mathcal{X}\to\mathbb{R}$ one usually considers the RKHS formed by the closure of $\mathrm{span}\{k(x,\cdot) : x\in\mathcal{X}\}$ with respect to the inner product defined by $\langle k(x,\cdot), k(x',\cdot)\rangle = k(x,x')$. Since we will vary the kernel it is convenient to distinguish between the vectors in the RKHS and their geometric relation. We first define the abstract real vector space $V = \mathrm{span}\{\Phi_x : x\in\mathcal{X}\}$, where $\Phi_x$ is the evaluation functional at point $x$. Then we define for a given kernel $k$ the Hilbert space $V_k$ as the closure of $V$ with respect to the scalar product given by $\langle\Phi_x, \Phi_{x'}\rangle_k = k(x,x')$. In this way we can vary $k$, i.e. the geometry, without changing the vector space structure of any finite dimensional subspace of the form $\mathrm{span}\{\Phi_{x_1},\ldots,\Phi_{x_n}\}$. We can identify the RKHS above with $V_k$ via $\Phi_x \mapsto k(x,\cdot)$ and $\langle\Phi_x,\Phi_{x'}\rangle_k = k(x,x')$.

Lemma 2 Let $k$ be a kernel on $\mathcal{X}$, let $x_1,\ldots,x_n\in\mathcal{X}$ and $K = (k(x_i,x_j))_{i,j}$. For all $\gamma > 0$ we have
$$\mathbb{E}_\sigma\Big[\sup_{\|w\|_k \le 1/\gamma}\ \sum_{i=1}^n \sigma_i\,\langle w, \Phi_{x_i}\rangle_k\Big] = \frac{1}{\gamma}\,\mathbb{E}_\sigma\Big[\sqrt{\sigma^\top K\sigma}\Big].$$

Proof: We have
$$\sup_{\|w\|_k\le 1/\gamma}\ \sum_{i=1}^n \sigma_i\,\langle w,\Phi_{x_i}\rangle_k = \sup_{\|w\|_k\le 1/\gamma}\Big\langle w, \sum_{i=1}^n \sigma_i\Phi_{x_i}\Big\rangle_k = \frac{1}{\gamma}\Big\|\sum_{i=1}^n \sigma_i\Phi_{x_i}\Big\|_k = \frac{1}{\gamma}\sqrt{\sigma^\top K\sigma}.$$
The second equality holds due to the Cauchy–Schwarz inequality, which becomes here an equality because of the supremum.
Notice that $\sigma^\top K\sigma$ is always non-negative since $K$ is positive definite. Taking expectations concludes the proof. □

The expression in Lemma 2 is, up to a factor $2/n$, the Rademacher complexity of the class of functions in $V_k$ with margin $\gamma$. It is important to notice that this is equal to the Rademacher complexity of the subspace of $V_k$ which is spanned by the data. Indeed, let us consider the space $V_n = \mathrm{span}\{\Phi_{x_1},\ldots,\Phi_{x_n}\}$; this is a Hilbert subspace of $V_k$. Moreover, we have
$$\sup_{\|w\|_k\le 1/\gamma,\ w\in V_n}\ \sum_{i=1}^n \sigma_i\,\langle w,\Phi_{x_i}\rangle_k = \frac{1}{\gamma}\Big\|\sum_{i=1}^n \sigma_i\Phi_{x_i}\Big\|_k = \frac{1}{\gamma}\sqrt{\sigma^\top K\sigma}.$$
This proves that we are actually capturing the right complexity, since we are computing the complexity of the set of hyperplanes whose normal vector can be expressed as a linear combination of the data points.

Now, let us assume that we allow the kernel to change, that is, we have a set $\mathcal{C}$ of possible kernels, or equivalently a set of possible kernel matrices. Let $V_K$ be $V$ with the inner product induced by $K$, let $F_\gamma(K)$ denote the class of hyperplanes with margin $\gamma$ in the space $V_K$, and let $F_\gamma(\mathcal{C}) = \bigcup_{K\in\mathcal{C}} F_\gamma(K)$. Using Lemma 2 we have
$$\hat{R}(F_\gamma(\mathcal{C})) = \frac{2}{n}\,\mathbb{E}_\sigma\Big[\sup_{K\in\mathcal{C}}\ \sup_{\|w\|_K\le 1/\gamma}\ \sum_{i=1}^n \sigma_i\,\langle w,\Phi_{x_i}\rangle_K\Big] = \frac{2}{n\gamma}\,\mathbb{E}_\sigma\Big[\sup_{K\in\mathcal{C}}\ \sqrt{\sigma^\top K\sigma}\Big]. \qquad (1)$$

Let $\|K\|_F$ denote the Frobenius norm of $K$, i.e. $\|K\|_F = \big(\sum_{i,j} K_{ij}^2\big)^{1/2}$. Recall that for symmetric positive definite matrices, the Frobenius norm is equal to the 2-norm of the spectrum, i.e. $\|K\|_F = \big(\sum_{i=1}^n \lambda_i^2\big)^{1/2}$. Also, recall that the trace of such a matrix is equal to the 1-norm of its spectrum, i.e. $\mathrm{tr}\,K = \sum_{i=1}^n K_{ii} = \sum_{i=1}^n \lambda_i$. Finally, recall that for a positive definite matrix $K$ the operator norm $\|K\|$ is given by $\|K\| = \sup_{\|v\|\le 1} v^\top K v = \max(\lambda_1,\ldots,\lambda_n)$. We will denote $\|K\|_\infty := \|K\|$ and $\|K\|_1 := \mathrm{tr}\,K$. It is easy to see that for a fixed kernel matrix $K$, we have $\hat{R}(F_\gamma(K)) \le \frac{2}{n\gamma}\sqrt{\mathrm{tr}\,K}$.
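The fixed-kernel bound can be checked numerically. The following sketch uses an assumed illustrative setup (a small RBF-style Gram matrix and a margin value picked for the example, not from the paper): it estimates the expectation in Lemma 2 by Monte Carlo and compares it to the trace bound.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative setup: n points, an RBF-style Gram matrix K, margin gamma.
n, gamma = 50, 1.0
X = rng.normal(size=(n, 3))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)                        # Gram matrix (positive definite)

# By Lemma 2, the sup over the margin-gamma ball equals (1/gamma)*sqrt(s^T K s),
# so  R_hat = (2/(n*gamma)) * E_s[sqrt(s^T K s)] <= (2/(n*gamma)) * sqrt(tr K),
# the last step by Jensen, since E_s[s^T K s] = tr K for Rademacher signs s.
draws = rng.choice([-1.0, 1.0], size=(5000, n))
quad = np.einsum('ij,jk,ik->i', draws, K, draws)   # s^T K s for each draw
r_hat = 2.0 / (n * gamma) * np.mean(np.sqrt(quad))
bound = 2.0 / (n * gamma) * np.sqrt(np.trace(K))

assert r_hat <= bound
```

The Monte Carlo estimate sits strictly below the trace bound, with the gap coming from the concavity of the square root.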
Also, it is useful to keep in mind that for certain kernels like the RBF kernel, the trace of the kernel matrix grows approximately linearly in the number of examples, while even if the problem is linearly separable, the margin decreases in the best case to a fixed strictly positive constant. This means that we have $\frac{2}{n\gamma}\sqrt{\mathrm{tr}\,K} = \Theta(1/\sqrt{n})$.

2.2 Complexity of $p$-balls of kernel matrices

The first class that one may consider is the class of all positive definite matrices with $p$-norm bounded by some constant.

Theorem 1 Let $c > 0$ and $p \in [1,\infty]$. Define $\mathcal{B}_p = \{K \succeq 0 : \|K\|_p \le c\}$. Then
$$\hat{R}(F_\gamma(\mathcal{B}_p)) = \frac{2}{n\gamma}\sqrt{cn} = \frac{2}{\gamma}\sqrt{\frac{c}{n}}.$$

Proof: Using (1) we thus have to compute $\mathbb{E}_\sigma\big[\sup_{K\in\mathcal{B}_p}\sqrt{\sigma^\top K\sigma}\big]$. Since for any $\sigma$ we can always find some $K \in \mathcal{B}_p$ having eigenvector $\sigma$ with eigenvalue $c$ (e.g. $K = c\,\sigma\sigma^\top/n$), we obtain
$$\sup_{K\in\mathcal{B}_p}\ \sigma^\top K\sigma = c\,\|\sigma\|^2 = cn,$$
which concludes the proof. □

Remark: Observe that $\mathcal{B}_1 \subset \mathcal{B}_2 \subset \mathcal{B}_\infty$ for the same value of $c$. However they all have the same Rademacher complexity. From the proof we see that for the calculation of the complexity only the contribution of $K$ in the direction of $\sigma$ matters. Therefore for every $\sigma$ the worst case element is contained in all three classes. Recall that in the case of the RBF kernel we have to take $c \ge \mathrm{tr}\,K = \Theta(n)$, which means that we would obtain in this case a Rademacher complexity $\frac{2}{\gamma}\sqrt{c/n} = \Omega(1)$ which does not decrease with $n$. It seems clear that proper learning is not possible in such a class, at least from the viewpoint of this way of measuring the complexity.

2.3 Complexity of the convex hull of kernel matrices

Lanckriet et al. [5] considered positive definite linear combinations of $m$ kernel matrices, i.e. the class
$$\mathcal{C}' = \Big\{K = \sum_{i=1}^m \mu_i K_i : \mathrm{tr}\,K \le c,\ K \succeq 0\Big\}. \qquad (2)$$
We rather consider the (smaller) class
$$\mathcal{C} = \Big\{K = \sum_{i=1}^m \mu_i K_i : \mathrm{tr}\,K \le c,\ \mu \ge 0\Big\}, \qquad (3)$$
which has simple linear constraints on the feasible parameter set and allows us to use a straightforward gradient descent algorithm. Notice that $\mathcal{C}$ is the convex hull of the matrices $\tilde K_1, \ldots, \tilde K_m$ where $\tilde K_i = c\,K_i/\mathrm{tr}\,K_i$.
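Since the class (3) is the convex hull of the trace-normalized matrices c*K_i/tr(K_i), the supremum of the quadratic form over the class is attained at one of these vertices. A small numerical check (the random PSD matrices and all constants below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

n, c, m = 15, 3.0, 4
Ks = []
for _ in range(m):
    A = rng.normal(size=(n, n))
    Ks.append(A @ A.T)                      # random PSD kernel matrices (assumed)
s = rng.choice([-1.0, 1.0], size=n)         # one Rademacher sign vector

# value of s^T K s at each vertex c * K_i / tr(K_i)
vertex_vals = [c * (s @ K @ s) / np.trace(K) for K in Ks]
sup_vertices = max(vertex_vals)

# any convex combination of the vertices is dominated by the best vertex
mu = rng.dirichlet(np.ones(m))
K_mix = sum(w * c * K / np.trace(K) for w, K in zip(mu, Ks))
assert s @ K_mix @ s <= sup_vertices + 1e-9

# operator-norm bound: s^T K_i s <= ||K_i|| * ||s||^2 = n * ||K_i||
bound = c * n * max(np.linalg.norm(K, 2) / np.trace(K) for K in Ks)
assert sup_vertices <= bound + 1e-9
```

The two assertions mirror the two steps in the argument: linearity in mu places the supremum at a vertex, and Cauchy–Schwarz bounds each vertex value by the operator norm.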
We obtain the following bound on the Rademacher complexity of this class.

Theorem 2 Let $K_1,\ldots,K_m$ be some fixed kernel matrices and $\mathcal{C}$ as defined in (3). Then
$$\hat{R}(F_\gamma(\mathcal{C})) \le \frac{2}{n\gamma}\sqrt{c\,n\,\max_{1\le i\le m}\frac{\|K_i\|_\infty}{\mathrm{tr}\,K_i}}.$$

Proof: Applying the Jensen inequality to equation (1) we calculate first
$$\sup_{K\in\mathcal{C}}\ \sigma^\top K\sigma = c\,\max_{1\le i\le m}\frac{\sigma^\top K_i\sigma}{\mathrm{tr}\,K_i} \le c\,n\,\max_{1\le i\le m}\frac{\|K_i\|_\infty}{\mathrm{tr}\,K_i}.$$
Indeed, consider the sum $\sum_i \mu_i\,\sigma^\top K_i\sigma$ as a dot product and identify the domain of $\mu$. Then one recognizes that the first equality holds since the supremum is attained with $\mu$ at one of the vertices $(0,\ldots,0,\,c/\mathrm{tr}\,K_i,\,0,\ldots,0)$. The second part is due to the fact that $\sigma^\top K_i\sigma \le \|K_i\|_\infty\,\|\sigma\|^2 = n\,\|K_i\|_\infty$. □

Remark: For a large class of kernel functions the trace of the induced kernel matrix scales linearly with the sample size $n$. Therefore we have to scale $c$ linearly with $n$. On the other hand, the operator norm of the induced kernel matrix grows sublinearly in $n$. If the margin is bounded we can therefore ensure learning. In other words, if the kernels inducing $K_1,\ldots,K_m$ are consistent, then the convex hull of the kernels is also consistent.

Remark: The bound on the complexity for this class is smaller than the one obtained by Lanckriet et al. [5] for their class. Furthermore, it contains only easily computable quantities. Notice that in the proof of the above theorem there appears a quantity similar to the maximal alignment of a kernel to arbitrary labels. It is interesting to notice also that the Rademacher complexity somehow measures the average alignment of a kernel to random labels.

2.4 Complexity of spectral classes of kernels

Although the class defined in (3) has smaller complexity than the one in (2), we may want to restrict it further. One way of doing so is to consider a set of matrices which all have the same eigenvectors. Generally speaking, the kernel encodes some prior about the data, and we may want to retain part of this prior and allow the rest to be tuned from the data.
A kernel matrix can be decomposed into two parts: its set of eigenvectors and its spectrum (set of eigenvalues). We will fix the eigenvectors and tune the spectrum from the data. For a kernel matrix $K_0 = U D U^\top$ (with $U$ unitary and $D$ diagonal) and $c > 0$ we consider the spectral class of $K_0$, given by
$$\mathcal{S} = \big\{K = U\Lambda U^\top : \mathrm{tr}\,K \le c,\ \Lambda\ \text{diagonal},\ \Lambda \ge 0\big\}. \qquad (4)$$
Notice that this class can be considered as the convex hull of the matrices $c\,v_i v_i^\top$, where the $v_i$ are the eigenvectors of $K_0$ (columns of $U$).

Remark: We assume that all eigenvalues are different, otherwise the above sets do not agree. Note that Cristianini et al. proposed to optimize the alignment over this class.

We obtain the following bound on the complexity of such a class.

Theorem 3 Let $c > 0$, let $U$ be some fixed unitary matrix and $\mathcal{S}$ as defined in (4). Then for all $\gamma > 0$
$$\hat{R}(F_\gamma(\mathcal{S})) \le \frac{2}{n\gamma}\sqrt{2c\,\ln(2en)}.$$

Proof: As before we start with Equation (1). If we denote $u = U^\top\sigma$ we obtain
$$\sup_{K\in\mathcal{S}}\ \sigma^\top K\sigma = \sup_{\lambda\ge 0,\ \sum_i\lambda_i\le c}\ \sum_{i=1}^n \lambda_i u_i^2 = c\,\max_{1\le i\le n} u_i^2,$$
hence by Jensen
$$\mathbb{E}_\sigma\Big[\sup_{K\in\mathcal{S}}\sqrt{\sigma^\top K\sigma}\Big] \le \sqrt{c\,\mathbb{E}_\sigma\big[\max_{1\le i\le n} u_i^2\big]}.$$
Note that $u_i = \sum_{j=1}^n U_{ji}\sigma_j$, so that, using Lemma 2.2 in [3] and the fact that $\sum_{j=1}^n U_{ji}^2 = 1$, we obtain the result. □

Remark: As a corollary, we obtain that for any number $m$ of kernel matrices $K_1,\ldots,K_m$ which commute, the same bound holds on the complexity of their convex hull.

3 Optimizing the Kernel

In order to choose the right kernel, we will now consider the bound of Corollary 1. For a fixed kernel, the complexity term in this bound is proportional to $\sqrt{\mathrm{tr}\,K}/(n\gamma)$. We will consider a class of kernels and pick the one that minimizes this bound. This suggests keeping the trace fixed and maximizing the margin. Using Corollary 1 with the bounds derived in Section 2 we immediately obtain a generalization bound for such a procedure. Theorem 3 suggests that optimizing the whole spectrum of the kernel matrix does not significantly increase the complexity. However, experiments (see Section 5) show that overfitting occurs. We present here a possible explanation for this phenomenon.
Loosely speaking, the kernel encodes some prior information about how the labels of two data points should be coupled. Most often this prior corresponds to the knowledge that two similar data points should have a similar label. Now, when optimizing over the spectrum of a kernel matrix, we replace the prior of the kernel function by information given by the data points. It turns out that this leads to overfitting in practical experiments. In Section 2.4 we have shown that the complexity of the spectral class is not significantly bigger than the complexity for a fixed kernel; thus the complexity is not a sufficient explanation for this phenomenon. It is likely that when optimizing the spectrum, some crucial part of the prior knowledge is lost. To verify this assumption, we ran some experiments on the real line. We have to separate two clouds of points in $\mathbb{R}$. When the clouds are well separated, a Gaussian kernel easily deals with the task, while if we optimize the spectrum of this kernel with respect to the margin criterion, the classification has arbitrary jumps in the middle of the clouds. A possible way of retaining more of the spatial information contained in the kernel is to keep the order of the eigenvalues fixed. It turns out that in the same experiments, when the eigenvalues are optimized keeping their original order, no spurious jumps occur. We thus propose to add the extra constraint of keeping the order of the eigenvalues fixed. This constraint is fulfilled by restricting the matrices in (4) to polynomials of the kernel matrix of a given degree $d$ with non-negative coefficients, i.e. we consider spectral optimization by convex, non-decreasing functions. For a given kernel matrix $K_0$, we thus define
$$\mathcal{S}' = \Big\{K = \sum_{j=1}^{d} \mu_j K_0^j : \mathrm{tr}\,K \le c,\ \mu \ge 0\Big\}. \qquad (5)$$
Indeed, recent results show that the Rademacher complexity is reduced in this way [7].

4 Implementation

Following Lanckriet et al.
[5] one can formulate the problem of optimizing the margin error bound as a semidefinite programming problem. Here we considered classes of kernels that can be written as linear combinations of kernel matrices with non-negative coefficients and fixed trace. In that case, one obtains the following problem (the subscript $\mathrm{tr}$ indicates that we keep the block of the matrix corresponding to the training data only):
$$\min_{\mu,\,t,\,\lambda,\,\nu}\ t \quad \text{subject to} \quad \sum_{i=1}^m \mu_i\,\mathrm{tr}\,K_i = c,\quad \mu \ge 0,\quad \nu \ge 0,\quad \begin{pmatrix} G\big(\sum_{i=1}^m \mu_i (K_i)_{\mathrm{tr}}\big) & e+\nu+\lambda y\\[2pt] (e+\nu+\lambda y)^\top & t \end{pmatrix} \succeq 0,$$
where $G(K) = \mathrm{diag}(y)\,K\,\mathrm{diag}(y)$ and $e$ is the all-ones vector. It turns out that implementing this semidefinite program is computationally quite expensive. We thus propose a different approach based on the work of [1]. Indeed, the goal is to minimize a bound of the form $\sqrt{\mathrm{tr}\,K}\,\|w\|$, so that if we fix the trace, we simply have to minimize the squared norm of the solution vector $w$. It has been proven in [1] that the gradient of $\|w\|^2$ with respect to the kernel matrix can be computed as
$$\frac{\partial\|w\|^2}{\partial K} = -(\alpha\circ y)(\alpha\circ y)^\top, \qquad (6)$$
where $\alpha$ denotes the vector of dual variables of the trained SVM and $\circ$ the elementwise product. The algorithm we suggest can thus be described as follows:

1. Train an SVM to find the optimal value of $\alpha$ with the current kernel matrix.
2. Make a gradient step according to (6). Here, $\partial\|w\|^2/\partial\mu_i = -(\alpha\circ y)^\top K_i\,(\alpha\circ y)$.
3. Enforce the constraints on the coefficients (normalization and non-negativity).
4. Return to 1 unless a termination criterion is reached.

It turns out that this algorithm is very efficient and much simpler to implement than semidefinite programming. Moreover, the semidefinite programming formulations involve a large amount of (redundant) variables, so that a typical SDP solver will take 10 to 100 times longer to perform the same task since it will not use the specific symmetries of the problem.

5 Experiments

In order to compare our results we use the same setting as in [5]: we consider the Breast cancer and Sonar databases from the UCI repository and perform 30 random splits with 60% of the data for training and 40% for testing.
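A minimal sketch of one outer iteration of the algorithm above, applied to the polynomial class (5). Everything here is an illustrative assumption: the base kernel is random, and `alpha` merely stands in for the dual variables a real SVM solver (step 1) would return.

```python
import numpy as np

rng = np.random.default_rng(5)

n, c, lr = 8, 2.0, 0.1
A = rng.normal(size=(n, n))
K0 = A @ A.T                                   # random PSD base kernel (assumed)
Ks = [np.linalg.matrix_power(K0, j + 1) for j in range(3)]
Ks = [c * K / np.trace(K) for K in Ks]         # trace-normalized powers of K0
y = rng.choice([-1.0, 1.0], size=n)
alpha = rng.random(n)                          # placeholder for the SVM solution

mu = np.ones(len(Ks)) / len(Ks)
ay = alpha * y
grad = np.array([-(ay @ K @ ay) for K in Ks])  # d||w||^2 / d mu_j, as in (6)
mu = np.maximum(mu - lr * grad, 0.0)           # gradient step, then mu >= 0
K_new = sum(w * K for w, K in zip(mu, Ks))
K_new *= c / np.trace(K_new)                   # renormalize the trace to c

# A non-negative combination of powers of K0 applies a non-decreasing function
# to the spectrum, so the eigenvalue order of K0 is preserved in K_new:
lam0, V = np.linalg.eigh(K0)                   # lam0 in ascending order
lam_new = np.einsum('ji,jk,ki->i', V, K_new, V)  # K_new's spectrum, in K0's order
assert np.all(mu >= 0) and abs(np.trace(K_new) - c) < 1e-8
assert np.all(np.diff(lam_new) >= -1e-8)
```

The projection step (clipping mu at zero and rescaling the trace) is one simple way to realize step 3; the final assertion checks the order-preservation property that motivates the class (5).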
$K_1$ denotes the matrix induced by the polynomial kernel $k_1(x,x') = (\langle x,x'\rangle + 1)^d$, $K_2$ the matrix induced by the Gaussian kernel $k_2(x,x') = \exp(-\|x-x'\|^2/\sigma^2)$, and $K_3$ the matrix induced by the linear kernel $k_3(x,x') = \langle x,x'\rangle$. First we compare two classes of kernels: linear combinations defined by (2) and convex combinations defined by (3). Figure 1 shows that optimizing the margin on both classes yields roughly the same performance, while optimizing the alignment with the ideal kernel is worse. Furthermore, considering the class defined in (3) yields a large improvement in computational efficiency. Next, we compare the optimization of the margin over the classes (3), (4) and (5) with degree-$d$ polynomials. Figure 1 indicates that tuning the full spectrum leads to overfitting, while keeping the order of the eigenvalues gives reasonable performance (this performance is retained when the degree of the polynomial is increased).

                                 K1     K2     K3     K(2)   K(3)a  K(3)m  K(4)   K(5)
Breast cancer  sqrt(tr K)/gamma  25.1   1.09   -      0.54   0.55   0.53   0.42   0.9
               test error (%)    7.1    10.8   -      4.2    3.8    3.3    30.8   10.9
Sonar          sqrt(tr K)/gamma  9.65   1.34   49.0   1.14   1.22   1.17   0.92   1.23
               test error (%)    18.8   25.1   27.4   16.4   24.4   18.0   33.0   21.4

Figure 1: Performance of optimized kernels for different kernel classes and optimization procedures (methods proposed in the present paper are typeset in bold face). K1, K2 and K3 indicate fixed kernels, see text. K(2): given by (2), margin maximized, cf. [5]; K(3)a: given by (3), alignment with the ideal kernel maximized, cf. [2]; K(3)m: given by (3), margin maximized; K(4): given by (4), i.e. the whole spectral class, margin maximized; K(5): given by (5), i.e. keeping the order of the eigenvalues in the spectral class, margin maximized. The performance of K(5) is much better than that of K(4).

6 Conclusion

We have derived new bounds on the Rademacher complexity of classes of kernels. These bounds give guarantees for the generalization error when optimizing the margin over a function class induced by several kernel matrices.
We propose a general methodology for implementing the optimization procedure for such classes which is simpler and faster than semidefinite programming while retaining the performance. Although the bound for spectral classes is quite tight, we encountered overfitting in the experiments. We overcome this problem by keeping the order of the eigenvalues fixed. The motivation of this additional convex constraint is to maintain more information about the similarity measure. The condition to fix the order of the eigenvalues is a new type of constraint. More work is needed to understand this constraint and its relation to the prior knowledge contained in the corresponding class of similarity measures. The complexity of such classes also seems to be much smaller. Therefore we will investigate the generalization behavior on different natural and artificial data sets in future work. Another direction for further investigation is to refine the bounds we obtained, using for instance local Rademacher complexities.

References

[1] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee. Choosing multiple parameters for support vector machines. Machine Learning, 46(1):131-159, 2002.
[2] N. Cristianini, J. Kandola, A. Elisseeff, and J. Shawe-Taylor. On optimizing kernel alignment. Journal of Machine Learning Research, 2002. To appear.
[3] L. Devroye and G. Lugosi. Combinatorial Methods in Density Estimation. Springer-Verlag, New York, 2000.
[4] J. Kandola, J. Shawe-Taylor, and N. Cristianini. Optimizing kernel alignment over combinations of kernels. In Int. Conf. Machine Learning, 2002. In press.
[5] G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. In Int. Conf. Machine Learning, 2002. In press.
[6] M. Ledoux and M. Talagrand. Probability in Banach Spaces. Springer-Verlag, 1991.
[7] O. Bousquet and D. J. L. Herrmann. Towards structured kernel machines. Work in progress.
Clustering with the Fisher Score

Koji Tsuda, Motoaki Kawanabe and Klaus-Robert Müller
AIST CBRC, 2-41-6, Aomi, Koto-ku, Tokyo, 135-0064, Japan
Fraunhofer FIRST, Kekuléstr. 7, 12489 Berlin, Germany
Dept. of CS, University of Potsdam, A.-Bebel-Str. 89, 14482 Potsdam, Germany
koji.tsuda@aist.go.jp, {nabe,klaus}@first.fhg.de

Abstract

Recently the Fisher score (or the Fisher kernel) is increasingly used as a feature extractor for classification problems. The Fisher score is a vector of parameter derivatives of the loglikelihood of a probabilistic model. This paper gives a theoretical analysis of how class information is preserved in the space of the Fisher score; it turns out that the Fisher score consists of a few important dimensions with class information and many nuisance dimensions. When we perform clustering with the Fisher score, K-Means type methods are obviously inappropriate because they make use of all dimensions. So we will develop a novel but simple clustering algorithm specialized for the Fisher score, which can exploit the important dimensions. This algorithm is successfully tested in experiments with artificial data and real data (amino acid sequences).

1 Introduction

Clustering is widely used in exploratory analysis for various kinds of data [6]. Among them, discrete data such as biological sequences [2] are especially challenging, because efficient clustering algorithms, e.g. K-Means [6], cannot be used directly. In such cases, one naturally considers mapping the data to a vector space and performing clustering there. We call the mapping a "feature extractor". Recently, the Fisher score has been successfully applied as a feature extractor in supervised classification [5, 15, 14, 13, 16]. The Fisher score is derived as follows: Let us assume that a probabilistic model $p(x\,|\,\theta)$ is available. Given a parameter estimate $\hat\theta$ from training samples, the Fisher score vector is obtained as
$$f_{\hat\theta}(x) = \Big(\frac{\partial\log p(x\,|\,\theta)}{\partial\theta_1}\Big|_{\theta=\hat\theta},\ \ldots,\ \frac{\partial\log p(x\,|\,\theta)}{\partial\theta_d}\Big|_{\theta=\hat\theta}\Big)^\top.$$
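To make the definition concrete, here is a small sketch that computes the Fisher score of a one-dimensional Gaussian model (an illustrative choice of model, not one used in the paper) and verifies the analytic gradient against finite differences:

```python
import numpy as np

def fisher_score(x, mu, s):
    """Fisher score of N(mu, s^2): gradient of log-likelihood w.r.t. (mu, s)."""
    d_mu = (x - mu) / s ** 2
    d_s = (x - mu) ** 2 / s ** 3 - 1.0 / s
    return np.array([d_mu, d_s])

def loglik(x, mu, s):
    """Log-density of N(mu, s^2) at x."""
    return -0.5 * np.log(2 * np.pi * s ** 2) - (x - mu) ** 2 / (2 * s ** 2)

# check the analytic score against central finite differences
x, mu, s, eps = 1.3, 0.5, 2.0, 1e-6
g = fisher_score(x, mu, s)
num = np.array([
    (loglik(x, mu + eps, s) - loglik(x, mu - eps, s)) / (2 * eps),
    (loglik(x, mu, s + eps) - loglik(x, mu, s - eps)) / (2 * eps),
])
assert np.allclose(g, num, atol=1e-5)
```

Any parametric model can be plugged in the same way; the feature map is simply the per-example gradient of the log-likelihood evaluated at the fitted parameters.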
The Fisher kernel refers to the inner product in this space [5]. When combined with high performance classifiers such as SVMs, the Fisher kernel often shows superb results [5, 14]. For successful clustering with the Fisher score, one has to investigate how original classes are mapped into the feature space, and select a proper clustering algorithm. In this paper, it will be claimed that the Fisher score has only a few dimensions which contain the class information and a lot of unnecessary nuisance dimensions. So K-Means type clustering [6] is obviously inappropriate because it takes all dimensions into account. We will propose a clustering method specialized for the Fisher score, which exploits the important dimensions with class information. This method has an efficient EM-like alternating procedure to learn, and has the favorable property that the resultant clusters are invariant to any invertible linear transformation. Two experiments with artificial data and biological sequence data will be shown to illustrate the effectiveness of our approach.

2 Preservation of Cluster Structure

Before starting, let us fix several notations. Denote by $\mathcal{X}$ the domain of objects (discrete or continuous) and by $\{1,\ldots,C\}$ the set of class labels. The feature extraction is denoted as $f : \mathcal{X} \to \mathbb{R}^d$. Let $P(x, y)$ be the underlying joint distribution and assume that the class distributions $P(x\,|\,y)$ are well separated, i.e. $P(y\,|\,x)$ is close to 0 or 1. First of all, let us assume that the marginal distribution $P(x)$ is known. Then the problem is how to find a good feature extractor, which can preserve class information, based on the prior knowledge of $P(x)$. In the Fisher score, it amounts to finding a good parametric model $p(x\,|\,\theta)$. This problem is by no means trivial, since it is in general hard to infer anything about the possible $P(y\,|\,x)$ from the marginal $P(x)$ without additional assumptions [12].
A loss function for feature extraction. In order to investigate how the cluster structure is preserved, we first have to define what the class information is. We regard the class information as completely preserved if a set of predictors in the feature space can recover the true posterior probabilities $P(y\,|\,x)$. This view makes sense, because it is impossible to recover the posteriors when classes are totally mixed up. As a predictor of the posterior probability in the feature space, we adopt the simplest one, i.e. a linear estimator:
$$\hat P(y\,|\,x) = w_y^\top f(x) + b_y, \qquad w_y \in \mathbb{R}^d,\ b_y \in \mathbb{R}.$$
The prediction accuracy of $\hat P(y\,|\,x)$ for $P(y\,|\,x)$ is difficult to formulate, because the parameters $w_y$ and $b_y$ are learned from samples. To make the theoretical analysis possible, we consider the best possible linear predictors. Thus the loss of feature extractor $f$ for the $y$-th class is defined as
$$L_y(f) = \min_{w_y,\,b_y}\ \mathbb{E}_X\big[(w_y^\top f(x) + b_y - P(y\,|\,x))^2\big], \qquad (2.1)$$
where $\mathbb{E}_X$ denotes the expectation with the true marginal distribution $P(x)$. The overall loss is just the sum over all classes, $L(f) = \sum_{y=1}^{C} L_y(f)$.

Even when the full class information is preserved, i.e. $L(f) = 0$, clustering in the feature space may not be easy, because of nuisance dimensions which do not contribute to clustering at all. The posterior predictors make use of an at most $C$-dimensional subspace out of the $d$-dimensional Fisher score, and the complementary subspace may not have any information about classes. K-means type methods [6] assume a cluster to be hyperspherical, which means that every dimension should contribute to cluster discrimination. For such methods, we have to try to minimize the dimensionality $d$ while keeping $L(f)$ small. When nuisance dimensions cannot be excluded, we will need a different clustering method that is robust to nuisance dimensions. This issue will be discussed in Sec. 3.

Optimal Feature Extraction. In the following, we will discuss how to determine $p(x\,|\,\theta)$.
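As a concrete illustration of a feature extractor attaining zero loss under (2.1), consider a two-class toy problem (the Gaussian class densities and the mixing weight below are illustrative assumptions): with the score of the two-component mixture taken at the true mixing weight, the true posterior is exactly an affine function of the one-dimensional Fisher score.

```python
import numpy as np

def gauss(x, mu, s):
    """Density of N(mu, s^2) at x."""
    return np.exp(-(x - mu) ** 2 / (2 * s ** 2)) / np.sqrt(2 * np.pi * s ** 2)

t = 0.3                                    # true P(y = 1), assumed
xs = np.linspace(-4.0, 6.0, 101)
p1, p2 = gauss(xs, 0.0, 1.0), gauss(xs, 3.0, 1.0)   # class densities, assumed
q = t * p1 + (1 - t) * p2                  # marginal q(x | t)

score = (p1 - p2) / q                      # d/dt log q(x | t), taken at the true t
posterior = t * p1 / q                     # true P(1 | x)

# the affine readout  a * score + b  with a = t(1-t), b = t recovers the posterior
recovered = t * (1 - t) * score + t
assert np.allclose(recovered, posterior, atol=1e-10)
```

The identity holds pointwise and exactly: t(1-t)*(p1-p2)/q + t = t*p1/q, so a single linear predictor on the score achieves zero squared loss against the posterior.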
First, a simple but unrealistic example is shown to achieve $L(f) = 0$ without producing nuisance dimensions at all. Let us assume that $p(x\,|\,\theta)$ is determined as a mixture model of the true class distributions:
$$q(x\,|\,\theta) = \sum_{y=1}^{C-1}\theta_y\,P(x\,|\,y) + \Big(1 - \sum_{y=1}^{C-1}\theta_y\Big)P(x\,|\,C), \qquad \theta \in \Theta, \qquad (2.2)$$
where $\Theta = \{\theta : \sum_{y=1}^{C-1}\theta_y \le 1,\ \theta_y \ge 0,\ y = 1,\ldots,C-1\}$. Obviously this model realizes the true marginal distribution $P(x)$ when
$$\theta_y = \theta_y^* := P(y), \qquad y = 1,\ldots,C-1.$$
When the Fisher score is derived at the true parameter, it achieves $L(f) = 0$.

Lemma 1. The Fisher score $f(x) = \partial_\theta\log q(x\,|\,\theta^*)$ achieves $L(f) = 0$.

(proof) To prove the lemma, it is sufficient to show the existence of a $(C-1)\times(C-1)$ matrix $A$ and a $(C-1)$-dimensional vector $b$ such that
$$A\,\partial_\theta\log q(x\,|\,\theta^*) + b = \big(P(1\,|\,x),\ldots,P(C-1\,|\,x)\big)^\top. \qquad (2.3)$$
The Fisher score for $q(x\,|\,\theta)$ is
$$\frac{\partial\log q(x\,|\,\theta^*)}{\partial\theta_y} = \frac{P(y\,|\,x)}{\theta_y^*} - \frac{P(C\,|\,x)}{1 - \sum_{c=1}^{C-1}\theta_c^*}, \qquad y = 1,\ldots,C-1.$$
Let $T = T_0 + \frac{1}{1-\sum_c\theta_c^*}\,\mathbf{1}\mathbf{1}^\top$, where $T_0 = \mathrm{diag}(1/\theta_1^*,\ldots,1/\theta_{C-1}^*)$ and $\mathbf{1}$ denotes the $(C-1)$-dimensional vector of ones. Then
$$\partial_\theta\log q(x\,|\,\theta^*) = T\,\big(P(1\,|\,x),\ldots,P(C-1\,|\,x)\big)^\top - \frac{1}{1-\sum_c\theta_c^*}\,\mathbf{1}.$$
When we set $A = T^{-1}$ and $b = \frac{1}{1-\sum_c\theta_c^*}\,T^{-1}\mathbf{1}$, (2.3) holds. □

Loose Models and Nuisance Dimensions. We assumed that $P(x)$ is known, but still we do not know the true class distributions $P(x\,|\,y)$. Thus the model $q(x\,|\,\theta)$ in Lemma 1 is never available. In the following, the result of Lemma 1 is relaxed to a more general class of probability models by means of the chain rule of derivatives. However, in this case, we have to pay a price: nuisance dimensions. Denote by $Q$ the set of probability distributions $Q = \{q(x\,|\,\theta) : \theta\in\Theta\}$. According to information geometry [1], $Q$ is regarded as a manifold in a Riemannian space. Let $M$ denote the manifold of $p(x\,|\,\beta)$:
$$M = \{p(x\,|\,\beta) : \beta\in\mathbb{R}^d\}.$$
Now the question is how to determine a manifold $M$ such that $L(f) = 0$, which is answered by the following theorem.

Theorem 1. Assume that the true distribution $P(x)$ is contained in $M$: $P(x) = p(x\,|\,\beta^*) = q(x\,|\,\theta^*)$, where $\beta^*$ is the true parameter. If the tangent space of $M$ at $P(x)$ contains the tangent space of $Q$ at the same point (Fig.
1), then the Fisher score $f(x) = \partial_\theta \log q(x \mid \theta)\,|_{\theta = \theta^*}$ satisfies $R(f) = 0$.

(Proof) To prove the theorem, it is sufficient to show the existence of a $(C-1) \times H$ matrix $A$ and a $(C-1)$-dimensional vector $b$ such that

$$ A f(x) + b = \big( P(y=1 \mid x), \dots, P(y=C-1 \mid x) \big)^\top. \qquad (2.4) $$

When the tangent space of $\mathcal{M}$ is contained in that of $\mathcal{Q}$ around $q(x)$, there is a smooth parametrization $\theta(\alpha)$ with $\theta(\alpha^*) = \theta^*$, and we have the following by the chain rule:

$$ \frac{\partial \log q(x \mid \alpha)}{\partial \alpha_k} \bigg|_{\alpha = \alpha^*} = \sum_{j=1}^{H} \frac{\partial \log q(x \mid \theta)}{\partial \theta_j} \bigg|_{\theta = \theta^*} \frac{\partial \theta_j}{\partial \alpha_k} \bigg|_{\alpha = \alpha^*}. \qquad (2.5) $$

Let $J$ be the $(C-1) \times H$ matrix with $J_{kj} = \partial \theta_j / \partial \alpha_k\,|_{\alpha = \alpha^*}$. With this notation, (2.5) is rewritten as
$$ J f(x) = M \big( P(y=1 \mid x), \dots, P(y=C-1 \mid x) \big)^\top - \alpha_C^{-1} \mathbf{1}, $$
with $M$ and $\mathbf{1}$ as in Lemma 1. Equation (2.4) holds by setting $A = M^{-1} J$ and $b = \alpha_C^{-1} M^{-1} \mathbf{1}$.

Figure 1: Information geometric picture of a probabilistic model whose Fisher score can fully extract the class information. When the tangent space of $\mathcal{M}$ is contained in that of $\mathcal{Q}$, the Fisher score can fully extract the class information, i.e. $R(f) = 0$. Details are explained in the text.

Figure 2: Feature space constructed by the Fisher score from samples with two distinct clusters. The horizontal and vertical axes correspond to a nuisance and an important dimension, respectively. When the Euclidean metric is used, as in K-means, it is difficult to recover the two "lines" as clusters.

In determining $q(x \mid \theta)$, we face the following dilemma: to capture the important dimensions (i.e. the tangent space of $\mathcal{M}$), the number of parameters $H$ should be sufficiently larger than $C$. But a large $H$ leads to many nuisance dimensions, which are harmful for clustering in the feature space. In typical supervised classification experiments with hidden Markov models [5, 15, 14], the number of parameters is much larger than the number of classes. In supervised scenarios, however, the existence of nuisance dimensions is not a serious problem, because advanced supervised classifiers such as the support vector machine have a built-in feature selector [7]. In unsupervised scenarios without class labels, it is much more difficult to ignore nuisance dimensions.
Fig. 2 shows what the feature space looks like when the number of clusters is two and only one nuisance dimension is involved. Projected onto the important dimension, the clusters concentrate on two distinct points. However, when the Euclidean distance is adopted, as in K-means, it is difficult to recover the true clusters because the two "lines" are close to each other.

3 Clustering Algorithm for the Fisher Score

In this section, we will develop a new clustering algorithm for the Fisher score. Let $\{y_i\}_{i=1}^n$ be the set of class labels assigned to the samples $\{x_i\}_{i=1}^n$, respectively. The purpose of clustering is to obtain $\{y_i\}_{i=1}^n$ from the samples $\{x_i\}_{i=1}^n$ alone. As mentioned before, in clustering with the Fisher score it is necessary to capture the important dimensions. So far, this has been implemented with projection pursuit methods [3], which use general measures of interestingness, e.g. non-Gaussianity. However, from the last section's analysis we know more than non-Gaussianity about the important dimensions of the Fisher score, so we will construct a method specially tuned for the Fisher score.

Let us assume that the underlying classes are well separated, i.e. $P(y \mid x_i)$ is close to 0 or 1 for each sample $x_i$. When the class information is fully preserved, i.e. $R(f) = 0$, there are $C$ bases in the space of the Fisher score such that the samples of the $k$-th cluster are projected close to 1 on the $k$-th basis and the other samples are projected close to 0. The objective function of our clustering algorithm is designed to detect such bases:

$$ \min_{\{y_i\}_{i=1}^n}\ \min_{\{w_k\}_{k=1}^C}\ \min_{\{b_k\}_{k=1}^C}\ \sum_{k=1}^{C} \sum_{i=1}^{n} \big( w_k^\top f(x_i) + b_k - [\,y_i = k\,] \big)^2, \qquad (3.1) $$

where $[\cdot]$ is the indicator function, which is 1 if the condition holds and 0 otherwise. Notice that the optimal assignment of (3.1) is invariant to any invertible linear transformation $f(x) \to T f(x) + c$. In contrast, K-means type methods are quite sensitive to linear transformations or data normalization [6].
With an unfavourable linear transformation, K-means can end up with a false result which does not reflect the underlying structure.¹ The objective function (3.1) can be minimized by the following EM-like alternating procedure:

1. Initialization: Set $\{y_i\}_{i=1}^n$ to initial values. Compute $\mu = \frac{1}{n} \sum_{i=1}^n f(x_i)$ and $\Phi^{-1}$, where $\Phi = \frac{1}{n} \sum_{i=1}^n f(x_i) f(x_i)^\top - \mu \mu^\top$, for later use.

2. Repeat 3. and 4. until the convergence of $\{y_i\}_{i=1}^n$.

3. Fix $\{y_i\}_{i=1}^n$ and minimize with respect to $\{w_k\}_{k=1}^C$ and $\{b_k\}_{k=1}^C$. Each $(w_k, b_k)$ is obtained as the solution of the following problem:
$$ (w_k, b_k) = \operatorname*{argmin}_{w, b} \sum_{i=1}^n \big( w^\top f(x_i) + b - [\,y_i = k\,] \big)^2. $$
This problem is analytically solved as
$$ w_k = \Phi^{-1} \Big( \frac{1}{n} \sum_{i=1}^n [\,y_i = k\,]\, f(x_i) - m_k \mu \Big), \qquad b_k = m_k - w_k^\top \mu, $$
where $m_k = \frac{1}{n} \sum_{i=1}^n [\,y_i = k\,]$.

4. Fix $\{w_k\}_{k=1}^C$, $\{b_k\}_{k=1}^C$ and minimize with respect to $\{y_i\}_{i=1}^n$. Each $y_i$ is obtained by solving the following problem:
$$ y_i = \operatorname*{argmin}_{y \in \{1, \dots, C\}} \sum_{k=1}^C \big( w_k^\top f(x_i) + b_k - [\,y = k\,] \big)^2. $$
The solution can be obtained by exhaustive search over the $C$ candidates.

Steps 1, 3 and 4 take $O(nH^2)$, $O(CnH)$ and $O(nCH)$ time, respectively. Since the computational cost of the algorithm is linear in $n$, it can be applied to problems with large sample sizes. In addition, the algorithm requires $O(H^3)$ time for inverting the matrix $\Phi$, which may only be an obstacle in extremely high-dimensional settings.

4 Clustering Artificial Data

We perform a clustering experiment with artificially generated data (Fig. 3). Since this data has a complicated structure, a Gaussian mixture with 8 components is used as the probabilistic model for the Fisher score: $q(x \mid \theta) = \sum_{j=1}^{8} \xi_j\, g(x; \mu_j, \Sigma_j)$, where $g(x; \mu, \Sigma)$ denotes the Gaussian density with mean $\mu$ and covariance matrix $\Sigma$. The parameters are learned with the EM algorithm, and the marginal distribution is accurately estimated, as shown in Fig. 3 (upper left). We applied the proposed algorithm and K-means to the Fisher score calculated by taking derivatives with respect to the means $\mu_j$.
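The alternating procedure above can be sketched compactly. The code below is our own minimal implementation, assuming the Fisher-score vectors are already computed and stacked row-wise in a matrix `F`; all function and variable names are ours, not the authors'.

```python
import numpy as np

def fisher_score_clustering(F, C, y_init, n_iter=100):
    """Alternating least-squares clustering of Fisher-score vectors F (n x H).

    Step 3 fits one linear posterior predictor (w_k, b_k) per cluster in
    closed form; step 4 reassigns each sample to the label whose 0/1
    indicator target its C predictions match best.
    """
    n, H = F.shape
    y = y_init.copy()
    mu = F.mean(axis=0)                                  # step 1: mean
    Phi = F.T @ F / n - np.outer(mu, mu)                 # covariance Phi
    Phi_inv = np.linalg.pinv(Phi)                        # pseudo-inverse for safety
    for _ in range(n_iter):
        # step 3: closed-form least-squares predictors per cluster
        T = np.eye(C)[y]                                 # n x C indicator targets
        m = T.mean(axis=0)                               # cluster fractions m_k
        W = Phi_inv @ (F.T @ T / n - np.outer(mu, m))    # H x C, columns w_k
        b = m - W.T @ mu                                 # C biases b_k
        # step 4: reassign labels by exhaustive search over the C candidates
        P = F @ W + b                                    # n x C predictions
        # squared error of predictions against each candidate indicator vector
        D2 = ((P[:, None, :] - np.eye(C)[None, :, :]) ** 2).sum(axis=2)
        y_new = D2.argmin(axis=1)
        if np.array_equal(y_new, y):                     # converged
            break
        y = y_new
    return y
```

Because the predictors are refit at every iteration, any invertible linear transformation of `F` is absorbed into `W` and `b`, which is the invariance property noted above.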
To obtain an initial partition, we first divide the points into 8 subclusters according to the posterior probability of each Gaussian component. Both for K-means and for the method of Sec. 3, initial clusters are constructed by randomly combining these subclusters. For each method, we chose the best result, i.e. the one achieving the minimum loss among the local minima obtained from 100 clustering runs. As a result, the proposed method obtained clearly separated clusters (Fig. 3, upper right), but K-means failed to recover the "correct" clusters, which we attribute to the effect of nuisance dimensions (Fig. 3, lower left). When the Fisher score is whitened (i.e. linearly transformed to have mean 0 and unit covariance matrix), the result of K-means changed to Fig. 3 (lower right), but the solution of our method stayed the same, as discussed in Sec. 3. Of course, this kind of problem can be solved by many state-of-the-art methods (e.g. [9, 8]) because it is only two dimensional. However, these methods typically do not scale to high-dimensional or discrete problems. Standard mixture modeling methods also have difficulties with such complicated cluster shapes [9, 10]. One straightforward way is to model each cluster itself as a Gaussian mixture: $p(x) = \sum_{k=1}^{C} \alpha_k \sum_{j} \beta_{kj}\, g(x; \mu_{kj}, \Sigma_{kj})$. However, special care needs to be taken for such a "mixture of mixtures" problem.

¹When the covariance matrix of each cluster is allowed to differ in K-means, the method becomes invariant to normalization. However, it then suffers from singularities, where a cluster shrinks to a delta distribution, and is difficult to learn in high-dimensional spaces.

Figure 3: (Upper left) Toy data set used for clustering. Contours show the estimated density of the mixture of 8 Gaussians. (Upper right) Clustering result of the proposed algorithm. (Lower left) Result of K-means with the Fisher score. (Lower right) Result of K-means with the whitened Fisher score.
When the parameters $\alpha_k$, $\beta_{kj}$, $\mu_{kj}$ and $\Sigma_{kj}$ are jointly optimized by maximum likelihood, the solution is not unique. To obtain meaningful results, e.g. on our data set, one has to constrain the parameters such that the 8 Gaussians form 2 groups. In the Bayesian framework, this can be done by specifying appropriate prior distributions on the parameters, which can become rather involved. Roberts et al. [10] tackled this problem by means of the minimum entropy principle using MCMC, which is considerably more complicated than our approach.

5 Clustering Amino Acid Sequences

In this section, we apply our method to cluster bacterial gyrB amino acid sequences, where a hidden Markov model (HMM) is used to derive the Fisher score. gyrB (gyrase subunit B) is a DNA topoisomerase (type II) which plays essential roles in fundamental mechanisms of living organisms, such as DNA replication, transcription, recombination and repair. Another important feature of gyrB is its capability of serving as an evolutionary and taxonomic marker, as an alternative to the popular 16S rRNA [17]. Our data set consists of 55 amino acid sequences containing three clusters (of sizes 9, 32 and 14). The three clusters correspond to three genera of high-GC-content gram-positive bacteria: Corynebacteria, Mycobacteria and Rhodococcus, respectively. Each sequence is a string over an alphabet of 20 characters, each of which represents an amino acid. The sequence lengths vary from 408 to 442, which makes it difficult to convert a sequence into a vector of fixed dimensionality.

In order to evaluate the partitions we use the Adjusted Rand Index (ARI) [4, 18]. Let $C_1, \dots, C_K$ be the obtained clusters and $G_1, \dots, G_L$ the ground-truth clusters. Let $n_{ij}$ be the number of samples belonging to both $C_i$ and $G_j$, and let $n_{i\cdot}$ and $n_{\cdot j}$ be the numbers of samples in $C_i$ and $G_j$, respectively. The ARI is then defined in terms of these pair counts as follows.
$$ \mathrm{ARI} = \frac{\sum_{ij} \binom{n_{ij}}{2} - \Big[ \sum_i \binom{n_{i\cdot}}{2} \sum_j \binom{n_{\cdot j}}{2} \Big] \big/ \binom{n}{2}}{\frac{1}{2} \Big[ \sum_i \binom{n_{i\cdot}}{2} + \sum_j \binom{n_{\cdot j}}{2} \Big] - \Big[ \sum_i \binom{n_{i\cdot}}{2} \sum_j \binom{n_{\cdot j}}{2} \Big] \big/ \binom{n}{2}}. $$

The attractive point of the ARI is that it can measure the difference of two partitions even when the numbers of clusters are different. When the two partitions are exactly the same, the ARI is 1, and the expected value of the ARI over random partitions is 0 (see [4] for details).

In order to derive the Fisher score, we trained fully connected HMMs with the Baum-Welch algorithm, where the number of states $s$ is varied from 2 to 5, and each state emits one of 20 characters. Such an HMM has $s$ initial-state probabilities, $s$ terminal-state probabilities, $s^2$ transition probabilities and $20s$ emission probabilities. Thus, for $s = 3$ for example, the HMM has 75 parameters in total, which is much larger than the number of potential classes (i.e. 3). The derivative is taken with respect to all parameters, as described in detail in [15]. Notice that we did not perform any normalization of the Fisher score vectors. To avoid local minima, we tried 1000 different initial values and chose the solution achieving the minimum loss, both for K-means and for our method. In K-means, initial centers are sampled from the uniform distribution over the smallest hypercube containing all samples. In the proposed method, every $w_k$ is sampled from the normal distribution with mean 0 and standard deviation 0.001, and every $b_k$ is initially set to zero.

Figure 4: Adjusted Rand indices of K-means and the proposed method in a sequence classification experiment, as the number of HMM states varies from 2 to 5.

Fig. 4 shows the ARIs of the two methods against the number of HMM states. Our method achieves the highest ARI (0.754) when the number of HMM states is 3, which shows that the important dimensions are successfully discovered in the "sea" of nuisance dimensions. In contrast, the ARI of K-means decreases monotonically as the number of HMM states increases, which shows that K-means is not robust against nuisance dimensions.
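The ARI is straightforward to compute from the contingency counts. A minimal sketch (our own helper, not the authors' code):

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand Index between two flat partitions of the same samples.

    Follows the Hubert & Arabie formula quoted in the text: pair counts
    from the contingency table, corrected for the chance agreement of
    random partitions with the same marginals.
    """
    n = len(labels_a)
    assert n == len(labels_b)
    contingency = Counter(zip(labels_a, labels_b))   # n_ij
    a = Counter(labels_a)                            # n_i.
    b = Counter(labels_b)                            # n_.j
    index = sum(comb(nij, 2) for nij in contingency.values())
    sum_a = sum(comb(ni, 2) for ni in a.values())
    sum_b = sum(comb(nj, 2) for nj in b.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (index - expected) / (max_index - expected)
```

As noted above, identical partitions (even under label permutation) score 1, and independent partitions score about 0.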
However, when there are too many nuisance dimensions (i.e. when the number of HMM states exceeds 3), our method was also caught in false clusters which happened to appear in the nuisance dimensions. This result suggests that prior dimensionality reduction may be effective (cf. [11]), but this is beyond the scope of this paper.

6 Concluding Remarks

In this paper, we illustrated how the class information is encoded in the Fisher score: most of the information is packed into a few dimensions, and there are many nuisance dimensions. Advanced supervised classifiers such as the support vector machine have a built-in feature selector [7], so they can detect important dimensions automatically. In unsupervised learning, however, it is not easy to detect the important dimensions because of the lack of class labels. We proposed a novel, very simple clustering algorithm that can ignore nuisance dimensions and tested it in artificial and real data experiments. An interesting aspect of our gyrB experiment is that the ideal scenario assumed in the theory section is no longer fulfilled, as clusters might overlap. Nevertheless, our algorithm is robust in this respect and achieves highly promising results.

The Fisher score derives features using prior knowledge of the marginal distribution. In general, it is impossible to infer anything about the conditional distribution $P(y \mid x)$ from the marginal $q(x)$ [12] without further assumptions. However, when one knows the directions in which the marginal distribution can move (i.e. a model of the marginal distribution), it is possible to extract information about $P(y \mid x)$, even though it may be corrupted by many nuisance dimensions. Our method is straightforwardly applicable to the objects to which the Fisher kernel has been applied (e.g. speech signals [13] and documents [16]).

Acknowledgement The authors gratefully acknowledge that the bacterial gyrB amino acid sequences were provided by courtesy of the Identification and Classification of Bacteria (ICB) database team [17].
K.-R.M. acknowledges partial support by DFG grant MU 987/1-1.

References
[1] S. Amari and H. Nagaoka. Methods of Information Geometry, volume 191 of Translations of Mathematical Monographs. American Mathematical Society, 2001.
[2] R. Durbin, S. Eddy, A. Krogh, and G. Mitchison. Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. Cambridge University Press, 1998.
[3] P. J. Huber. Projection pursuit. Annals of Statistics, 13:435–475, 1985.
[4] L. Hubert and P. Arabie. Comparing partitions. Journal of Classification, pages 193–218, 1985.
[5] T. S. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11, pages 487–493. MIT Press, 1999.
[6] A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.
[7] K.-R. Müller, S. Mika, G. Rätsch, K. Tsuda, and B. Schölkopf. An introduction to kernel-based learning algorithms. IEEE Transactions on Neural Networks, 12(2):181–201, 2001.
[8] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14. MIT Press, 2002.
[9] M. Rattray. A model-based distance for clustering. In Proc. IJCNN'00, 2000.
[10] S. J. Roberts, C. Holmes, and D. Denison. Minimum entropy data partitioning using reversible jump Markov chain Monte Carlo. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(8):909–915, 2001.
[11] V. Roth, J. Laub, J. M. Buhmann, and K.-R. Müller. Going metric: Denoising pairwise data. In NIPS'02, 2003. To appear.
[12] M. Seeger. Learning with labeled and unlabeled data. Technical report, Institute for Adaptive and Neural Computation, University of Edinburgh, 2001. http://www.dai.ed.ac.uk/homes/seeger/papers/review.ps.gz.
[13] N. Smith and M. Gales. Speech recognition using SVMs. In T. G. Dietterich, S. Becker, and Z.
Ghahramani, editors, Advances in Neural Information Processing Systems 14. MIT Press, 2002.
[14] S. Sonnenburg, G. Rätsch, A. Jagota, and K.-R. Müller. New methods for splice site recognition. In ICANN'02, pages 329–336, 2002.
[15] K. Tsuda, M. Kawanabe, G. Rätsch, S. Sonnenburg, and K.-R. Müller. A new discriminative kernel from probabilistic models. Neural Computation, 14(10):2397–2414, 2002.
[16] A. Vinokourov and M. Girolami. A probabilistic framework for the hierarchic organisation and classification of document collections. Journal of Intelligent Information Systems, 18(2/3):153–172, 2002.
[17] K. Watanabe, J. S. Nelson, S. Harayama, and H. Kasai. ICB database: the gyrB database for identification and classification of bacteria. Nucleic Acids Research, 29:344–345, 2001.
[18] K. Y. Yeung and W. L. Ruzzo. Principal component analysis for clustering gene expression data. Bioinformatics, 17(9):763–774, 2001.
Artefactual Structure from Least Squares Multidimensional Scaling Nicholas P. Hughes Department of Engineering Science University of Oxford Oxford, OX1 3PJ, UK nph@robots.ox.ac.uk David Lowe Neural Computing Research Group Aston University Birmingham, B4 7ET, UK d.lowe@aston.ac.uk

Abstract

We consider the problem of illusory or artefactual structure in the visualisation of high-dimensional structureless data. In particular we examine the role of the distance metric in the use of topographic mappings based on the statistical field of multidimensional scaling. We show that the use of a squared Euclidean metric (i.e. the SSTRESS measure) gives rise to an annular structure when the input data is drawn from a high-dimensional isotropic distribution, and we provide a theoretical justification for this observation.

1 Introduction

The discovery of meaningful patterns and relationships from large amounts of multivariate data is a significant and challenging problem with close ties to the fields of pattern recognition and machine learning, and important applications in the areas of data mining and knowledge discovery in databases (KDD). For many real-world high-dimensional data sets (such as collections of images, or multichannel recordings of biomedical signals) there will generally be strong correlations between neighbouring observations, and thus we expect that the data will lie on a lower dimensional (possibly nonlinear) manifold embedded in the original data space. One approach to the aforementioned problem then is to find a faithful¹ representation of the data in a lower dimensional space. Typically this space is chosen to be two- or three-dimensional, thus facilitating the visualisation and exploratory analysis of the intrinsic low-dimensional structure in the data (which would otherwise be masked by the dimensionality of the data space).
In this context then, an effective dimensionality reduction algorithm should seek to extract the underlying relationships in the data with minimum loss of information. Conversely, any interesting patterns which are present in the visualisation space should be representative of similar patterns in the original data space, and not artefacts of the dimensionality reduction process.

¹By "faithful" we mean that the underlying geometric structure in the data space, which characterises the informative relationships in the data, is preserved in the visualisation space.

Although much effort has been focused on the former problem of optimal structure elucidation (see [7, 10] for recent approaches to dimensionality reduction), comparatively little work has been undertaken on the latter (and equally important) problem of artefactual structure. This shortcoming was recently highlighted in a controversial example of the application of visualisation techniques to neuroanatomical connectivity data derived from the primate visual cortex [12, 9, 13, 3]. In this paper we attempt to redress the balance by considering the visualisation of high-dimensional structureless data through the use of topographic mappings based on the statistical field of multidimensional scaling (MDS). This is an important class of mappings which have recently been brought into the neural network domain [5], and which have significant connections to modern kernel-based algorithms such as kernel PCA [11]. The organisation of the remainder of this paper is as follows: In section 2 we introduce the technique of multidimensional scaling and relate it to the field of topographic mappings. In section 3 we show how, under certain conditions, such mappings can give rise to artefactual structure. A theoretical analysis of this effect is then presented in section 4.
2 Multidimensional Scaling and Topographic Mappings

The visualisation of experimental data which is characterised by pairwise proximity values is a common problem in areas such as psychology, molecular biology and linguistics. Multidimensional scaling (MDS) is a statistical technique which can be used to construct a spatial configuration of points in a (typically) two- or three-dimensional space given a matrix of pairwise proximity values between objects. The proximity matrix provides a measure of the similarity or dissimilarity between the objects, and the geometric layout of the resulting MDS configuration reflects the relationships between the objects as defined by this matrix. In this way the information contained within the proximity matrix can be captured by a more succinct spatial model which aids visualisation of the data and improves understanding of the processes that generated it.

In many situations, the raw dissimilarities will not be representative of actual inter-point distances between the objects, and thus will not be suitable for embedding in a low-dimensional space. In this case the dissimilarities $\delta_{ij}$ can be transformed into a set of values more suitable for embedding through the use of an appropriate transformation: $\hat d_{ij} = f(\delta_{ij})$, where $f$ represents the transformation function and the $\hat d_{ij}$ are the resulting transformed dissimilarities (which are termed "disparities"). The aim of metric MDS then is that the transformed dissimilarities $\hat d_{ij}$ should correspond as closely as possible to the inter-point distances $d_{ij}$ in the resulting configuration². Metric MDS can be formulated as a continuous optimisation problem through the definition of an appropriate error function. In particular, least squares scaling algorithms directly seek to minimise the sum-of-squares error between the disparities and the inter-point distances. This error, or STRESS³, measure is given by:

$$ \mathrm{STRESS} = \left[ \frac{\sum_{i<j} w_{ij} \big( \hat d_{ij} - d_{ij} \big)^2}{\sum_{i<j} \hat d_{ij}^2} \right]^{1/2} \qquad (1) $$
²This is in contrast to nonmetric MDS, which requires only that the ordering of the disparities corresponds to the ordering of the inter-point distances (and thus that the disparities are some arbitrary monotonically increasing function of the distances).

³STRESS is an acronym for STandard REsidual Sum of Squares.

where the term $\sum_{i<j} \hat d_{ij}^2$ is a normalising constant which reduces the sensitivity of the measure to the number of points and the scaling of the disparities, and the $w_{ij}$ are weighting factors. It is straightforward to differentiate this STRESS measure with respect to the configuration points $\mathbf{y}_i$ and minimise the error through the use of standard nonlinear optimisation techniques. An alternative and commonly used error function, which is referred to as SSTRESS, is given by:

$$ \mathrm{SSTRESS} = \left[ \frac{\sum_{i<j} w_{ij} \big( \hat d_{ij}^2 - d_{ij}^2 \big)^2}{\sum_{i<j} \hat d_{ij}^4} \right]^{1/2} \qquad (2) $$

which represents the sum-of-squares error between squared disparities and squared distances. The primary advantage of the SSTRESS measure is that it can be efficiently minimised through the use of an alternating least squares procedure⁴ [1].

Closely related to the field of metric MDS is Sammon's mapping [8], which takes as its input a set of high-dimensional vectors and seeks to produce a set of lower dimensional vectors such that the following error measure is minimised:

$$ E = \frac{1}{\sum_{i<j} d^*_{ij}} \sum_{i<j} \frac{\big( d^*_{ij} - d_{ij} \big)^2}{d^*_{ij}} \qquad (3) $$

where the $d^*_{ij}$ are the inter-point Euclidean distances in the data space, $d^*_{ij} = \| \mathbf{x}_i - \mathbf{x}_j \|$, and the $d_{ij}$ are the corresponding inter-point Euclidean distances in the feature or map space, $d_{ij} = \| \mathbf{y}_i - \mathbf{y}_j \|$. Ignoring the normalising constant, Sammon's mapping is thus equivalent to least squares metric MDS with the disparities taken to be the raw inter-point distances in the data space and the weighting factors given by $w_{ij} = 1 / d^*_{ij}$. Lowe (1993) termed a mapping based on the minimisation of an error measure of the form $\sum_{i<j} w_{ij} \big( d^*_{ij} - d_{ij} \big)^2$ a topographic mapping, since this constraint "optimally preserves the geometric structure in the data" [5].
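The two error measures of eqs. (1) and (2) can be sketched directly from their definitions (unit weights by default; the helper names below are our own):

```python
import numpy as np

def pairwise_dists(X):
    """Euclidean distance matrix between the rows of X."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def stress(disparities, distances, weights=None):
    """STRESS of eq. (1): normalised weighted residual between the
    disparities and the map distances, over the upper triangle."""
    iu = np.triu_indices_from(distances, k=1)
    dhat, d = disparities[iu], distances[iu]
    w = np.ones_like(d) if weights is None else weights[iu]
    return np.sqrt((w * (dhat - d) ** 2).sum() / (dhat ** 2).sum())

def sstress(disparities, distances, weights=None):
    """SSTRESS of eq. (2): the same residual on *squared* disparities
    and distances, i.e. the squared-Euclidean-metric variant."""
    iu = np.triu_indices_from(distances, k=1)
    dhat2, d2 = disparities[iu] ** 2, distances[iu] ** 2
    w = np.ones_like(d2) if weights is None else weights[iu]
    return np.sqrt((w * (dhat2 - d2) ** 2).sum() / (dhat2 ** 2).sum())
```

Both measures are zero when the map distances reproduce the disparities exactly, and the shared normalising denominator makes them insensitive to an overall rescaling of the disparities.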
Interestingly, the choice of the STRESS or SSTRESS measure in MDS has a more natural interpretation when viewed within the framework of Sammon's mapping. In particular, STRESS corresponds to the use of the standard Euclidean distance metric, whereas SSTRESS corresponds to the use of the squared Euclidean distance metric. In the next section we show that this choice of metric can lead to markedly different results when the input data is sampled from a high-dimensional isotropic distribution.

3 Emergence of Artefactual Structure

In order to investigate the problem of artefactual structure we consider the visualisation of high-dimensional structureless data (where we use the term "structureless" to indicate that the data density is equal in all directions from the mean and varies only gradually in any direction). Such data can be generated by sampling from an isotropic distribution (such as a spherical Gaussian), which is characterised by a covariance matrix that is proportional to the identity matrix, and a skewness of zero. We created four structureless data sets by randomly sampling 1000 i.i.d. points from unit hypercubes of dimensions $D$ = 5, 10, 30 and 100. For each data set, we generated a pair of 2-D configurations by minimising⁵ STRESS and SSTRESS error measures of the form $\sum_{i<j} \big( d^*_{ij} - d_{ij} \big)^2$ and $\sum_{i<j} \big( d^{*2}_{ij} - d^2_{ij} \big)^2$, respectively.

⁴The SSTRESS measure now forms the basis of the ALSCAL implementation of MDS, which is included as part of the SPSS software package for statistical data analysis.

Figure 1: Final map configurations produced by STRESS mappings of data uniformly randomly distributed in unit hypercubes of dimension $D$: (a) $D = 5$, (b) $D = 10$, (c) $D = 30$, (d) $D = 100$.
The process was repeated fifty times (for each individual error function and data set) using different initial configurations of the map points, and the configuration with the lowest final error was retained. As previously noted, the choice of the STRESS or SSTRESS error measure is best viewed as a choice of distance metric, where STRESS corresponds to the standard Euclidean metric and SSTRESS corresponds to the squared Euclidean metric. Figure 1 shows the resulting configurations from the STRESS mappings. It is clear that each configuration has captured the isotropic nature of the associated data set, and there are no spurious patterns or clusters evident in the final visualisation plots.

Figure 2: Final map configurations produced by SSTRESS mappings of data uniformly randomly distributed in unit hypercubes of dimension $D$: (a) $D = 5$, (b) $D = 10$, (c) $D = 30$, (d) $D = 100$.

Figure 2 shows the resulting configurations from the SSTRESS mappings. The configurations exhibit significant artefactual structure, which is characterised by a tendency for the map points to cluster in a circular fashion. Furthermore, the degree of clustering increases with increasing dimensionality of the data space $D$ (and is clearly evident for $D$ as low as 10). Although the tendency for SSTRESS configurations to cluster in a circular fashion has been noted in the MDS literature [2], the connection between artefactual structure and the choice of distance metric has not been made. Indeed, in the next section we show analytically that the use of the squared Euclidean metric leads to a globally optimal solution corresponding to an annular structure. To date, the most significant work on this problem is that of Klock and Buhmann [4], who proposed a novel transformation of the dissimilarities (i.e.
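The emergence of the ring can be reproduced in miniature. The sketch below (our own stand-in for the conjugate-gradient optimiser used in the experiments, at a smaller sample size) minimises the unnormalised SSTRESS sum $\sum_{i<j} (d^2_{ij} - d^{*2}_{ij})^2$ by plain gradient descent, using the derivative given as eq. (4) in the next section:

```python
import numpy as np

def sq_dists(Z):
    """Matrix of squared Euclidean distances between the rows of Z."""
    G = Z @ Z.T
    n = np.diag(G)
    return n[:, None] - 2 * G + n[None, :]

def sstress_value(Y, D2_target):
    """Raw SSTRESS sum over pairs i < j."""
    R = sq_dists(Y) - D2_target
    return (np.triu(R, 1) ** 2).sum()

def sstress_gradient_descent(X, q=2, steps=400, lr=3e-5, seed=0):
    """Minimise E = sum_{i<j} (d_ij^2 - d*_ij^2)^2 by gradient descent.

    Per-point gradient (eq. (4)):
        dE/dy_k = 4 * sum_j (d_kj^2 - d*_kj^2) (y_k - y_j).
    """
    rng = np.random.default_rng(seed)
    D2 = sq_dists(X)                       # squared data-space distances
    Y = rng.normal(scale=0.1, size=(X.shape[0], q))
    for _ in range(steps):
        R = sq_dists(Y) - D2               # residuals d_ij^2 - d*_ij^2
        # sum_j R_kj (y_k - y_j) for every k, fully vectorised
        grad = 4 * (R.sum(axis=1)[:, None] * Y - R @ Y)
        Y -= lr * grad
    return Y
```

With high-dimensional uniform input (e.g. `X` drawn from a centred unit hypercube with $D = 30$), the 2-D map points drift away from the small random start toward a ring-shaped configuration of much lower error.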
the squared inter-point distances in the data space) such that "the final disparities are more suitable for Euclidean embedding". However, this transformation assumes that the input data are drawn from a spherical Gaussian distribution⁶, which is inappropriate for most real-world data sets of interest.

⁵We used a conjugate gradients optimisation algorithm.

4 Theoretical Analysis of Artefactual Structure

In this section we present a theoretical analysis of the artefactual structure problem. A $q$-dimensional map configuration is considered to be the result of a SSTRESS mapping of a data set of $N$ i.i.d. points drawn from a $D$-dimensional isotropic distribution (where $q \ll D$). The set of data points is given by the $N \times D$ matrix $X = (\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_N)^\top$ and similarly the set of map points is given by the $N \times q$ matrix $Y = (\mathbf{y}_1, \mathbf{y}_2, \dots, \mathbf{y}_N)^\top$.

We begin by defining the derivative of the SSTRESS error measure $E = \sum_{i<j} \big( d^{*2}_{ij} - d^2_{ij} \big)^2$ with respect to a particular map vector $\mathbf{y}_k$:

$$ \frac{\partial E}{\partial \mathbf{y}_k} = 4 \sum_{j=1}^{N} \big( d^2_{kj} - d^{*2}_{kj} \big) \big( \mathbf{y}_k - \mathbf{y}_j \big). \qquad (4) $$

The squared inter-point distances $d^2_{kj}$ and $d^{*2}_{kj}$ are given by:

$$ d^2_{kj} = \mathbf{y}_k^\top \mathbf{y}_k - 2 \mathbf{y}_k^\top \mathbf{y}_j + \mathbf{y}_j^\top \mathbf{y}_j, \qquad d^{*2}_{kj} = \mathbf{x}_k^\top \mathbf{x}_k - 2 \mathbf{x}_k^\top \mathbf{x}_j + \mathbf{x}_j^\top \mathbf{x}_j. $$

⁶In this case the squared inter-point distances will follow a $\chi^2$ distribution.

Substituting these expansions into (4), at a stationary point of the error (i.e. $\partial E / \partial \mathbf{y}_k = \mathbf{0}$) we have:

$$ \frac{1}{N} \sum_{j=1}^{N} \Big[ \big( \mathbf{y}_k^\top \mathbf{y}_k - \mathbf{x}_k^\top \mathbf{x}_k \big) + \big( \mathbf{y}_j^\top \mathbf{y}_j - \mathbf{x}_j^\top \mathbf{x}_j \big) - 2 \big( \mathbf{y}_k^\top \mathbf{y}_j - \mathbf{x}_k^\top \mathbf{x}_j \big) \Big] \big( \mathbf{y}_k - \mathbf{y}_j \big) = \mathbf{0}. \qquad (5) $$

Since the error $E$ is a function of the inter-point distances only, we can centre both the data points and the map points on the origin without loss of generality.
For large $N$ we have:
$$ \frac{1}{N} \sum_{j=1}^{N} \mathbf{y}_j \to \mathbf{0}, \qquad \frac{1}{N} \sum_{j=1}^{N} \mathbf{x}_j \to \mathbf{0}, \qquad \frac{1}{N} \sum_{j=1}^{N} \mathbf{y}_j \mathbf{y}_j^\top \to \Sigma_y, \qquad \frac{1}{N} \sum_{j=1}^{N} \mathbf{y}_j \mathbf{x}_j^\top \to \Sigma_{yx}, $$
$$ \frac{1}{N} \sum_{j=1}^{N} \mathbf{y}_j^\top \mathbf{y}_j \to \mathrm{tr}(\Sigma_y), \qquad \frac{1}{N} \sum_{j=1}^{N} \mathbf{x}_j^\top \mathbf{x}_j \to \mathrm{tr}(\Sigma_x), $$
where $\mathbf{0}$ is the zero vector, $\Sigma_y$ is the covariance matrix of the map vectors, $\Sigma_{yx}$ is the cross-covariance matrix of the map vectors and the data vectors, and $\mathrm{tr}(\cdot)$ is the matrix trace operator. Thus equation (5) reduces to:

$$ \big( \mathbf{y}_k^\top \mathbf{y}_k - \mathbf{x}_k^\top \mathbf{x}_k \big)\, \mathbf{y}_k = 2 \Sigma_{yx} \mathbf{x}_k - 2 \Sigma_y \mathbf{y}_k - \big( \mathrm{tr}(\Sigma_y) - \mathrm{tr}(\Sigma_x) \big)\, \mathbf{y}_k. \qquad (6) $$

This represents a general expression for the value of the map vector $\mathbf{y}_k$ at a stationary point of the SSTRESS error, regardless of the nature of the input data distribution. However, we are interested in the case where the input data is drawn from a high-dimensional isotropic distribution. If the data space is isotropic then a stationary point of the error will correspond to a similarly isotropic map space⁷. Thus, at a stationary point, we have for large $N$:
$$ \Sigma_y = \sigma_y^2 I_q, \qquad \mathrm{tr}(\Sigma_y) = q \sigma_y^2, \qquad \mathrm{tr}(\Sigma_x) = D \sigma_x^2, $$
where $I_q$ is the $q \times q$ identity matrix, and $\sigma_y^2$ and $\sigma_x^2$ are the variances in the map space and the data space respectively.

Finally, consider the cross-covariance term. Its average contribution is governed by the expression
$$ \mathbb{E}\big[ (\mathbf{x}^\top \mathbf{x})\, \mathbf{y} \big] = \mathbb{E}\big[ (\mathbf{x}^\top \mathbf{x} - D \sigma_x^2)\, \mathbf{y} \big] + D \sigma_x^2\, \mathbb{E}[\mathbf{y}]. $$
The first term involves a third-order moment, which is zero for an isotropic distribution [6], and the second term is zero since the map points are centred. Moreover, for high-dimensional data (i.e. large $D$) the relative fluctuations of $\mathbf{x}^\top \mathbf{x}$ about $D \sigma_x^2$ vanish, so that

$$ \mathbb{E}\big[ (\mathbf{x}^\top \mathbf{x})\, \mathbf{y} \big] \to \mathbf{0}, \qquad (7) $$

and the contribution of the term $2 \Sigma_{yx} \mathbf{x}_k$ can therefore be neglected at a stationary point.

⁷This is true regardless of the initial distribution of the map points, although a highly non-uniform initial configuration would take significantly longer to reach a local minimum of the error function.

Thus the equation governing the stationary points of the SSTRESS error is given by:
$$ \big( \mathbf{y}_k^\top \mathbf{y}_k - \mathbf{x}_k^\top \mathbf{x}_k + (q+2) \sigma_y^2 - D \sigma_x^2 \big)\, \mathbf{y}_k = \mathbf{0}. $$
At the minimum error configuration (with $\mathbf{y}_k \ne \mathbf{0}$), we have:
$$ \mathbf{y}_k^\top \mathbf{y}_k - \mathbf{x}_k^\top \mathbf{x}_k + (q+2) \sigma_y^2 - D \sigma_x^2 = 0. $$
Summing over all points $k$, and using $\frac{1}{N} \sum_k \mathbf{y}_k^\top \mathbf{y}_k \to q \sigma_y^2$ and $\frac{1}{N} \sum_k \mathbf{x}_k^\top \mathbf{x}_k \to D \sigma_x^2$, gives:

$$ 2 (q+1)\, \sigma_y^2 = 2 D\, \sigma_x^2 \quad \Longrightarrow \quad \sigma_y^2 = \frac{D}{q+1}\, \sigma_x^2. \qquad (8) $$

Thus, for large $D$, the variance of the map points is related to the variance of the data points by a factor of $D/(q+1)$, i.e. $D/3$ for the two-dimensional maps considered here.
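The large-$D$ approximation behind equation (7) rests on the concentration of $\mathbf{x}^\top \mathbf{x}$ about $D \sigma_x^2$: for centred uniform data its relative spread decays like $1/\sqrt{D}$. A small Monte Carlo sketch of this (our own, with an assumed sample size):

```python
import numpy as np

def radius_spread(D, n=20000, seed=0):
    """Relative fluctuation of ||x||^2 about its mean D*sigma_x^2 for x
    uniform on the centred unit hypercube [-0.5, 0.5]^D. At the SSTRESS
    optimum each squared map radius differs from ||x_k||^2 only by a
    constant, so this spread controls how sharp the annulus is."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(n, D)) - 0.5        # centre the hypercube
    sq = (X ** 2).sum(axis=1)                 # ||x||^2 per sample
    return sq.std() / sq.mean()               # relative spread ~ 1/sqrt(D)

spreads = {D: radius_spread(D) for D in (5, 10, 30, 100)}
```

The spread shrinks monotonically with $D$, which is why the ring in Figure 2 sharpens as the data dimensionality grows.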
Table 1 shows the values of the observed and predicted map variances for 1000 data points sampled randomly from uniform distributions in the interval $[0, 1]$ (i.e. $\sigma_x^2 = 1/12$) of dimensions $D$ = 5, 10, 30, and 100. Clearly, as the dimension of the data space $D$ increases, so too does the accuracy of the approximation given by equation (7), and therefore the accuracy of equation (8).

Dimension D | Number of points | observed variance | predicted variance | Percentage error
     5      |       1000       |       0.166       |       0.139        |      16.4%
    10      |       1000       |       0.303       |       0.278        |       8.1%
    30      |       1000       |       0.864       |       0.835        |       3.4%
   100      |       1000       |       2.823       |       2.783        |       1.4%

Table 1: A comparison of the predicted and observed map variances.

We can show that this mismatch in variances between the two spaces results in the map points clustering in a circular fashion by considering the expected squared distance of the map points from the origin (i.e. the expected squared radius $r^2$ of the annulus):

$$ \mathbb{E}[r^2] = \mathbb{E}\big[ \mathbf{y}^\top \mathbf{y} \big] = q \sigma_y^2. \qquad (9) $$

In addition we can derive an analytic expression for $\mathbb{E}[r^4]$. For simplicity, consider a two-dimensional map space $\mathbf{y} = (y_1, y_2)^\top$. Then we have:

$$ \mathbb{E}[r^4] = \mathbb{E}\big[ (y_1^2 + y_2^2)^2 \big] = \mathbb{E}[y_1^4] + 2\, \mathbb{E}[y_1^2]\, \mathbb{E}[y_2^2] + \mathbb{E}[y_2^4], \qquad (10) $$

where the expectation over $y_1^2 y_2^2$ separates since $y_1$ and $y_2$ will be uncorrelated due to the isotropic nature of the map distribution. The variance of $r^2$ is then given by $\mathrm{var}(r^2) = \mathbb{E}[r^4] - \big( \mathbb{E}[r^2] \big)^2$. From the minimum error condition above, $r_k^2 = \mathbf{x}_k^\top \mathbf{x}_k + D \sigma_x^2 - (q+2) \sigma_y^2$, and for large $D$ the relative fluctuations of $\mathbf{x}_k^\top \mathbf{x}_k$ about $D \sigma_x^2$ vanish, so that $\mathrm{var}(r^2)$ becomes small relative to $\mathbb{E}[r^2]^2$. Hence for large $D$ the optimal configuration will be an annulus or ring shape, as observed in figure 2.

5 Conclusions

We have investigated the problem of artefactual or illusory structure arising from topographic mappings based upon least squares scaling algorithms from multidimensional scaling. In particular, we have shown that the use of a squared Euclidean distance metric (i.e. the SSTRESS measure) gives rise to an annular structure when the input data is drawn from a high-dimensional isotropic distribution. A theoretical analysis of this problem was presented and a simple relationship between the variance of the map and the data points was derived.
Finally we showed that this relationship results in an optimal configuration which is characterised by the map points clustering in a circular fashion.

Acknowledgments

We thank Miguel Carreira-Perpiñán for useful comments on this work.

References

[1] T. F. Cox and M. A. A. Cox. Multidimensional Scaling. Chapman and Hall, London, 1994.
[2] J. de Leeuw and B. Bettonvil. An upper bound for sstress. Psychometrika, 51:149–153, 1986.
[3] G. J. Goodhill, M. W. Simmen, and D. J. Willshaw. An evaluation of the use of multidimensional scaling for understanding brain connectivity. Philosophical Transactions of the Royal Society, Series B, 348:256–280, 1995.
[4] H. Klock and J. M. Buhmann. Multidimensional scaling by deterministic annealing. In M. Pelillo and E. R. Hancock, editors, Energy Minimization Methods in Computer Vision and Pattern Recognition, Proc. Int. Workshop EMMCVPR '97, Venice, Italy, pages 246–260. Springer Lecture Notes in Computer Science, 1997.
[5] D. Lowe and M. E. Tipping. Neuroscale: Novel topographic feature extraction with radial basis function networks. In M. C. Mozer, M. I. Jordan, and T. Petsche, editors, Advances in Neural Information Processing Systems 9. Cambridge, MA: MIT Press, 1997.
[6] K. V. Mardia, J. T. Kent, and J. M. Bibby. Multivariate Analysis. Academic Press, 1997.
[7] S. T. Roweis, L. K. Saul, and G. E. Hinton. Global coordination of local linear models. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14. Cambridge, MA: MIT Press, 2002.
[8] J. W. Sammon. A nonlinear mapping for data structure analysis. IEEE Transactions on Computers, C-18(5):401–409, 1969.
[9] M. W. Simmen, G. J. Goodhill, and D. J. Willshaw. Scaling and brain connectivity. Nature, 369:448–450, 1994.
[10] J. B. Tenenbaum. Mapping a manifold of perceptual observations. In M. I. Jordan, M. J. Kearns, and S. A. Solla, editors, Advances in Neural Information Processing Systems 10.
Cambridge, MA: MIT Press, 1998.
[11] C. K. Williams. On a connection between kernel PCA and metric multidimensional scaling. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13. Cambridge, MA: MIT Press, 2001.
[12] M. P. Young. Objective analysis of the topological organization of the primate cortical visual system. Nature, 358:152–155, 1992.
[13] M. P. Young, J. W. Scannell, M. A. O'Neill, C. C. Hilgetag, G. Burns, and C. Blakemore. Non-metric multidimensional scaling in the analysis of neuroanatomical connection data and the organization of the primate cortical visual system. Philosophical Transactions of the Royal Society, Series B, 348:281–308, 1995.
2002
Adaptation and Unsupervised Learning

Peter Dayan, Maneesh Sahani, Grégoire Deback
Gatsby Computational Neuroscience Unit, 17 Queen Square, London, England, WC1N 3AR.
{dayan, maneesh}@gatsby.ucl.ac.uk, gdeback@ens-lyon.fr

Abstract

Adaptation is a ubiquitous neural and psychological phenomenon, with a wealth of instantiations and implications. Although a basic form of plasticity, it has, bar some notable exceptions, attracted computational theory of only one main variety. In this paper, we study adaptation from the perspective of factor analysis, a paradigmatic technique of unsupervised learning. We use factor analysis to re-interpret a standard view of adaptation, and apply our new model to some recent data on adaptation in the domain of face discrimination.

1 Introduction

Adaptation is one of the first facts with which neophyte neuroscientists and psychologists are presented. Essentially all sensory and central systems show adaptation at a wide variety of temporal scales, and to a wide variety of aspects of their informational milieu. Adaptation is a product (or possibly by-product) of many neural mechanisms, from short-term synaptic facilitation and depression,1 and spike-rate adaptation,28 through synaptic remodeling27 and way beyond. Adaptation has been described as the psychophysicist's electrode, since it can be used as a sensitive method for revealing underlying processing mechanisms; thus it is both phenomenon and tool of the utmost importance. That adaptation is so pervasive makes it most unlikely that a single theoretical framework will be able to provide a compelling treatment. Nevertheless, adaptation should be just as much a tool for theorists interested in modeling neural statistical learning as for psychophysicists interested in neural processing. Put abstractly, adaptation involves short or long term changes to aspects of the statistics of the environment experienced by a system.
Thus, accounts of neural plasticity driven by such statistics, even if originally conceived as accounts of developmental (or perhaps representational) plasticity,19 are automatically candidate models for the course and function of adaptation. Conversely, thoughts about adaptation lay at the heart of the earliest suggestions that redundancy reduction and information maximization should play a central role in models of cortical unsupervised learning.4–6,8,23 Redundancy reduction theories of adaptation reached their apogee in the work of Linsker,26 Atick, Li & colleagues2,3,25 and van Hateren.40 Their mathematical framework (see section 2) is that of maximizing information transmission subject to various sources of noise and limitations on the strength of key signals. Noise plays the critical roles of rendering some signals essentially undetectable, and providing a confusing background against which other signals should be amplified. Adaptation, by affecting noise levels and informational content (notably probabilistic priors), leads to altered stimulus processing. Early work concentrated on the effects of sensory noise on visual receptive fields; more recent studies41 have used the same framework to study stimulus specific adaptation. Redundancy reduction is one major conceptual plank in the modern theory of unsupervised learning. However, there are various other important complementary ideas, notably generative models.19

Figure 1: A) Redundancy reduction model. x is the explicit input, combining signal s and noise n; y is the explicit output, to be corrupted by noise η to give z. We seek the filter W that minimizes redundancy subject to a power constraint. B) Factor analysis model. Now y, with a white, Gaussian, prior, captures latent structure underlying the covariance Σ of x. The empirical mean is x̄; the uniquenesses Ψ capture unmodeled variance and additional noise such as σ_n². Generative G and recognition W weights parameterize statistical inverses.

Here, we consider adaptation from the perspective of factor analysis,15 which is one of the most fundamental forms of generative model. After describing the factor analysis model and its relationship with redundancy reduction models of adaptation in section 3, section 4 studies loci of adaptation in one version of this model. As examples, we consider adaptation of early visual receptive fields to light levels,38 orientation detection to a persistent bias (the tilt aftereffect),9,16 and a recent report of adaptation of face discrimination to morphed anti-faces.24

2 Information Maximization

Figure 1A,3 shows a linear model of, for concreteness, retinal processing. Here, p-dimensional photoreceptor input x = s + n, which is the sum of a signal s and detector noise n, is filtered by a retinal matrix to produce an m-dimensional output y = Wx for communication down the optic nerve as z = y + η, against a background of additional noise η. We assume that the signal is Gaussian, with mean 0 and covariance Σ, and the noise terms are white and Gaussian, with mean 0 and covariances σ_n²I and σ_η²I, respectively; all are mutually independent. The input may be higher dimensional than the output, ie p > m, as is true of the retina. Here, the signal is translation invariant, ie Σ is a circulant matrix11 with Σ_ij = c(i − j). This means that the eigenvectors of Σ are (discrete) sines and cosines, with eigenvalues coming from the Fourier series for c, whose terms we will write as λ₁ ≥ λ₂ ≥ … > 0 (they are non-negative since Σ is a covariance matrix; we assume for simplicity that they are strictly positive). Given no input noise (σ_n² = 0), the mutual information between x = s and z is

I[s; z] = H[z] − H[z|s] = (log|WΣWᵀ + σ_η²I| − log|σ_η²I|)/2   (1)

where H is the entropy function (which, for a Gaussian distribution, is proportional to the log determinant of its covariance matrix).
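Equation 1 is straightforward to evaluate numerically. The sketch below (illustrative names, not the paper's code) computes the mutual information in nats for an arbitrary filter W, signal covariance Sigma, and output noise variance var_eta, using stable log-determinants:

```python
import numpy as np

def mutual_info(W, Sigma, var_eta):
    """I[s; z] = (log|W Sigma W^T + var_eta I| - log|var_eta I|) / 2,
    in nats, for z = W s + eta with white Gaussian output noise eta."""
    m = W.shape[0]
    cov_z = W @ Sigma @ W.T + var_eta * np.eye(m)  # output covariance
    sign, logdet = np.linalg.slogdet(cov_z)        # stable log-determinant
    return 0.5 * (logdet - m * np.log(var_eta))
```

Because cov_z dominates var_eta·I in the positive-definite ordering, the result is always non-negative, and scaling W up (more signal power against fixed output noise) increases it.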
We consider maximizing this with respect to W, a calculation which only makes sense in the face of a constraint, such as on the average power ⟨|y|²⟩ = tr[WΣWᵀ]. It is a conventional result in principal components analysis12,20 that the solution to this constrained maximization problem involves whitening, ie making

W = U D V_m with D = diag[1/√λ₁, 1/√λ₂, …, 1/√λ_m]   (2)

where U is an arbitrary m-dimensional rotation matrix with UUᵀ = I, D is the m×m diagonal matrix with the given form, and V_m is an m×p matrix whose rows are the first m (transposed) eigenvectors of Σ. This choice makes WΣWᵀ ∝ I, and effectively amplifies weak input channels (ie those with small λ_k) so as fully to utilize all the output channels.

Figure 2: Simple adaptation. A;B) Filter power as a function of spatial frequency for the redundancy reduction (A: RR) and factor analysis (B: FA) solutions for the case of translation invariance, for low (solid) and high (dashed) input noise σ_n², with λ_ω ∝ 1/ω². Even though the optimal FA solution does not have exactly identical uniquenesses, the difference is too small to figure. In (B), m factors were found for p inputs. C) Data9 (crosses) and RR solution41 (solid) for the tilt aftereffect. D) Data (crosses) and linear approximate FA solution (solid). For FA, angle estimation is based on the linear output of the single factor; linearity breaks down far from the adapted angle. Adaptation was based on reducing the uniquenesses (Ψ) for units activated by the adapting stimulus (fitting the width and strength of this adaptation to the data).

In the face of input noise, whitening is dangerous for those channels for which σ_n² ≫ λ_k, since noise rather than signal would be amplified by the 1/√λ_k.
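The whitening solution of equation 2 is easy to verify numerically: with U = I, the filter built from the top-m eigenvectors of Σ makes WΣWᵀ exactly the identity. A minimal sketch (illustrative, not from the paper):

```python
import numpy as np

def whitening_filter(Sigma, m):
    """Equation (2) with U = I: rows of V_m are the top-m eigenvectors of
    Sigma, D = diag(1/sqrt(lambda_k)). Assumes Sigma is symmetric with
    strictly positive eigenvalues, as in the text."""
    lam, vecs = np.linalg.eigh(Sigma)        # eigh returns ascending order
    lam, vecs = lam[::-1], vecs[:, ::-1]     # re-sort to descending
    V_m = vecs[:, :m].T                      # m x p eigenvector rows
    D = np.diag(1.0 / np.sqrt(lam[:m]))
    return D @ V_m
```

Since the eigenvectors are orthonormal, W Σ Wᵀ = D diag(λ₁, …, λ_m) D = I_m, which is the whitening property the text describes.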
One heuristic is to prefilter x using a p-dimensional matrix E such that Ex is the prediction of s that minimizes the average error ⟨|Ex − s|²⟩, and then apply the W of equation 2.14 Another conventional result12 is that E has a similar form to W, except that the arbitrary rotation is replaced by the (transposed) eigenvector matrix of Σ, and the diagonal entries of the equivalent of D are λ_k/(λ_k + σ_n²). This makes the full (approximate) filter

W = U D V_m with D = diag[√λ₁/(λ₁ + σ_n²), √λ₂/(λ₂ + σ_n²), …, √λ_m/(λ_m + σ_n²)]   (3)

Figure 2A shows the most interesting aspect of this filter in the case that λ_ω ∝ 1/ω², inspired by the statistics of natural scenes,36 for which ω might be either a temporal or spatial frequency. The solid curve shows the diagonal components of D for small input noise. This filter is a band-pass filter. Intermediate frequencies with input power well above the noise level σ_n² are comparatively amplified against the output noise η. On the other hand, the dashed line shows the same components for high input noise. This filter is a low-pass filter, as only those few components with sufficient input power are significantly transmitted. The filter in equation 3 is based on a heuristic argument. An exact argument2,3 leads to a slightly more complicated form for the optimal filter, in which, depending on the power constraint and the exact value of σ_n², there is a sharp cut-off in which some frequencies are not transmitted at all. However, the main pattern of dependence on σ_n² is the same as in figure 2A; the differences lie well outside the realm of experimental test. Figure 2A shows a powerful form of adaptation.3 High relative input noise arises in cases of low illumination; low noise in cases of high illumination. The whole filtering characteristics of the retina should change, from low-pass (smoothing in time or space) to band-pass (differentiation in space or time) filtering.
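The band-pass to low-pass transition can be seen directly from equation 3. With λ_ω ∝ 1/ω², the gain √λ_ω/(λ_ω + σ_n²) reduces to ω/(1 + σ_n²ω²), which peaks at ω = 1/σ_n, so the peak slides to lower frequencies as input noise grows. A quick numerical check of this (illustrative code, not the authors'):

```python
import numpy as np

omega = np.linspace(0.01, 50.0, 5000)  # frequency axis
lam = 1.0 / omega ** 2                 # natural-scene-like 1/omega^2 spectrum

def peak_frequency(var_n):
    """Frequency at which the filter gain of equation 3 is maximal."""
    gain = np.sqrt(lam) / (lam + var_n)
    return omega[np.argmax(gain)]

low_noise = peak_frequency(0.001)   # near-noiseless: band-pass, high peak
high_noise = peak_frequency(1.0)    # noisy: peak collapses toward omega = 1
```

The analytic peak at ω = 1/σ_n matches the numerical argmax, reproducing the solid-versus-dashed contrast described for figure 2A.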
There is evidence that this indeed happens, with dendritic remodeling happening over times of the order of minutes.42 Wainwright41 (see also10) suggested an account along exactly these lines for more stimulus-specific forms of adaptation such as the tilt aftereffect shown in figure 2C. Here (conceptually), subjects are presented with a vertical grating (θ = 90°) for an adapting period of a few seconds, and then are asked, by one of a number of means, to assess the orientation of test gratings. The crosses in figure 2C show the error in their estimates; the adapting orientation appears to repel nearby angles, so that true values of θ near 90° are reported as being further away. Wainwright modeled this in the light of a neural population code for representing orientation and a filter related to that of equation 3. He suggested that during adaptation, the signal associated with θ = 90° is temporarily increased. Thus, as in the solid line of figure 2A, the transmission through the adapted filter of this signal should be temporarily reduced. If the recipient structures that use the equivalent of y to calculate the orientation of a test grating are unaware of this adaptation, then, as in the solid line of figure 2C, an estimation error like that shown by the subjects will result.

3 Factor Analysis and Adaptation

We sought to understand the adaptation of equation 3 and figure 2A in a factor analysis model. Factor analysis15 is one of the simplest probabilistic generative schemes used to model the unsupervised learning of cortical representations, and underlies many more sophisticated approaches. The case of uniform input noise σ_n² is particularly interesting, because it is central to the relationship between factor analysis and principal components analysis.20,34,39 Figure 1B shows the elements of a factor analysis model (see Dayan & Abbott12 for a relevant tutorial introduction).
The (so-called) visible variable x is generated from the latent variable y according to the two-step

P[y] = N[0, I],   P[x|y] = N[Gy + x̄, Ψ] with Ψ = diag[Ψ₁, …, Ψ_p]   (4)

where N[μ, Φ] is a multi-variate Gaussian distribution with mean μ and covariance matrix Φ, G is a set of top-down generative weights, x̄ is the mean of x, and Ψ a diagonal matrix of uniquenesses, which are the variances of the residuals of x that are not represented in the covariances associated with y. Marginalizing out y, equation 4 specifies a Gaussian distribution for x, P[x] = N[x̄, GGᵀ + Ψ], and, indeed, the maximum likelihood values for the parameters given some input data x are to set x̄ to the empirical mean of the x that are presented, and to set G and Ψ by maximizing the likelihood of the empirical covariance matrix Σ of the x under a Wishart distribution with mean GGᵀ + Ψ. Note that G is only determined up to an m×m rotation matrix U, since (GU)(GU)ᵀ = GGᵀ. The generative or synthetic model of equation 4 shows how y determines x. In most instances of unsupervised learning, the focus is on the recognition or analysis model,30 which maps a presented input x into the values of the latent variable y which might have generated it, and thereby form its possible internal representations. The recognition model is the statistical inverse of the generative model and specifies the Gaussian distribution:

P[y|x] = N[W(x − x̄), V] with V = (I + GᵀΨ⁻¹G)⁻¹, W = VGᵀΨ⁻¹   (5)

The mean value of y can be derived from the differential equation31,32

ẏ = −y + RᵀΨ⁻¹(x − x̄ − Gy)   (6)

in which x − x̄ − Gy, which is the prediction error for x based on the current value of y, is downweighted according to the inverse uniquenesses Ψ⁻¹, mapped through bottom-up weights Rᵀ and left to compete against the contribution of the prior for y (which is responsible for the −y term in equation 6). For this scheme to give the right answer, the bottom-up weights should be the transpose of the top-down weights, ie R = G.
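The recognition weights of equation 5 can be cross-checked against the standard Gaussian conditioning form W = Gᵀ(GGᵀ + Ψ)⁻¹, to which they are equal by the matrix inversion (push-through) identity. A sketch with made-up dimensions, purely for verification:

```python
import numpy as np

rng = np.random.default_rng(0)
p, m = 8, 3
G = rng.normal(size=(p, m))               # top-down generative weights
Psi = np.diag(rng.uniform(0.5, 2.0, p))   # diagonal uniquenesses

Psi_inv = np.linalg.inv(Psi)
V = np.linalg.inv(np.eye(m) + G.T @ Psi_inv @ G)  # posterior covariance
W = V @ G.T @ Psi_inv                             # equation 5
W_direct = G.T @ np.linalg.inv(G @ G.T + Psi)     # direct Gaussian conditioning
```

The first form inverts only an m×m matrix, which is the computationally convenient route when m ≪ p; the second inverts the full p×p visible covariance.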
However, we later consider forms of adaptation that weaken this dependency. In general, factor analysis and principal components analysis lead to different results. Indeed, although the latter is performed by an eigendecomposition of the covariance matrix of the inputs, the former requires execution of one of a variety of iterative procedures on the same covariance matrix.21,22,35 However, if the uniquenesses are forced to be equal, ie Ψ = ψI, then these procedures are almost the same.34,39 In this case, assuming that x̄ = 0,

Gᵀ = U D V_m with D = diag[√(λ₁ − ψ), √(λ₂ − ψ), …, √(λ_m − ψ)]   (7)
ψ = (Σ_{k=m+1}^p λ_k) / (p − m)   (8)

with the same conventions as in equation 2, except that λ_k are the (ordered) eigenvalues of the covariance matrix Σ of the visible variables x rather than explicitly of the signal. Here ψ has the natural interpretation of being the average power of the unexplained components. Applying this in equation 5:

W = U D V_m with D = diag[√(λ₁ − ψ)/λ₁, √(λ₂ − ψ)/λ₂, …, √(λ_m − ψ)/λ_m]   (9)

If x really comes from a signal and noise model as in figure 1, then λ_k = λ_k^s + σ_n² and ψ = ψ₀ + σ_n², where λ_k^s are the eigenvalues of the signal covariance and ψ₀ is the residual uniqueness of equation 8 in the case that σ_n² = 0. This makes the recognition weights of equation 9

W = U D V_m with D = diag[√(λ₁^s − ψ₀)/(λ₁^s + σ_n²), …, √(λ_m^s − ψ₀)/(λ_m^s + σ_n²)]   (10)

The similarity between this and the approximate redundancy reduction expression of equation 3 is evident. Just like that filter, adaptation to high and low light levels (high and low signal/noise ratios) leads to a transition from bandpass to lowpass filtering in W. The filter of equation 3 was heuristic; this is exact. Also, there is no power constraint imposed; rather something similar derives from the generative model's prior over the latent variables y.
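Equations 7 and 8 coincide with the probabilistic principal components solution; the sketch below (illustrative code, not the authors') builds G and ψ from the eigendecomposition of a covariance matrix and checks that the model covariance GGᵀ + ψI reproduces the top-m eigenvalues of Σ exactly, while replacing each remaining eigenvalue by ψ:

```python
import numpy as np

rng = np.random.default_rng(1)
p, m = 6, 2
A = rng.normal(size=(p, p))
Sigma = A @ A.T                           # covariance of the visible variables

lam, vecs = np.linalg.eigh(Sigma)
lam, vecs = lam[::-1], vecs[:, ::-1]      # descending eigenvalues
psi = lam[m:].mean()                      # equation 8: mean unexplained power
G = vecs[:, :m] * np.sqrt(lam[:m] - psi)  # equation 7 (with U = I), p x m

model_cov = G @ G.T + psi * np.eye(p)     # marginal covariance of the FA model
model_lam = np.sort(np.linalg.eigvalsh(model_cov))[::-1]
```

In the retained eigendirections the model eigenvalue is (λ_k − ψ) + ψ = λ_k, and in the discarded directions it is exactly ψ, which is the sense in which ψ is the average power of the unexplained components.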
This analysis is particularly well suited to the standard redundancy reduction case of figure 2A, since adding independent noise of the same strength σ_n² to each of the input variables can automatically be captured by adding σ_n² to the common uniqueness ψ. However, even though the signal s is translation invariant in this case, it need not be that the maximum likelihood factor analysis solution has the property that Ψ is proportional to I. However, it is to a close approximation, and figure 2B shows that the strength of the principal components of Σ in the maximum likelihood W (evaluated as in the figure caption) shows the same structure of adaptation as in the probabilistic principal components solution, as a function of σ_n². Figure 2D shows a version of the tilt illusion coming from a factor analysis model given population coded input (with Gaussian tuning curves of fixed orientation bandwidth) and a single factor. It is impossible to perform the full non-linear computation of extracting an angle from the population activity x in a single linear operation W(x − x̄). However, in a regime in which a linear approximation holds, the one factor can represent the systematic covariation in the activity of the population coming from the single dimension of angular variation in the input. For instance, around θ = 90°, this regime comprises a broad range of angles on either side of vertical. A close match in this model to Wainwright's41 suggestion is that the uniquenesses Ψ_j for the input units (around θ = 90°) that are reliably activated by an adapting stimulus should be decreased, as if the single factor would predict a greater proportion of the variability in the activation of those units. This makes W of equation 5 more sensitive to small variations in x away from θ = 90°, and so leads to a tilt aftereffect as an estimation bias. Figure 2D shows the magnitude of this effect in the linear regime. This is a rough match for the data in figure 2C.
Our model also shows the same effect as Wainwright's41 in orientation discrimination, boosting sensitivity near the adapted θ and reducing it around half a tuning width away.33

4 Adaptation for Faces

Another, and even simpler, route to adaptation is changing x̄ towards the mean of the recently presented (ie the adapting) stimuli. We use this to model a recently reported effect of adaptation on face discrimination.24 Note that changing the mean x̄ according to the input has no effect on the factor.

Figure 3: Face discrimination. Here, Adam and Henry are used for concreteness; all results are averages over all faces and, for FA, over random draws. A) Experimental24 mean propensity to report Adam as a function of the strength of Adam in the input for no adaptation ('o'); adaptation to anti-Adam ('x'); and adaptation to anti-Henry ('□'). The curves are cumulative normal fits. B) Mean propensity in the factor analysis model for the same outcomes. The model, like some subjects, is more extreme than the mean of the subjects, particularly for test anti-faces. C;D) Experimental and model proportion of reports of Adam when adaptation was to anti-Adam; but various strengths of Henry are presented. The model captures the decrease in Adam given presentation of anti-Henry through a normalization pool (solid); although it does not decrease to quite the same extent as the data. Just reporting the face with the largest output (dashed) shows no decrease in reporting Adam given presentation of anti-Henry. Parameter values were chosen to fit the data (for the dashed line in D, to match the peak of the solid curve).

Leopold and his colleagues24 studied adaptation in the complex stimulus domain of faces.
Their experiment involved four target faces (associated with names 'Adam', 'Henry', 'Jim', 'John') which were previously unfamiliar to subjects, together with morphed versions of these faces lying on 'lines' going through the target faces and the average of all four faces. These interpolations were made visually sensible using a dense correspondence map between the faces. The task for the subjects was always to identify which of the four faces was presented; this is obviously impossible at the average face, but becomes progressively easier as the average face is morphed progressively further (by an amount called its strength) towards one of the target faces. The circles in figure 3A show the mean performance of the subjects in choosing the correct face as a function of its strength; performance is essentially perfect well before the full-strength target face is reached. A negative strength version of one of the target faces (eg anti-Adam) was then shown to the subjects for a few seconds before one of the positive strength faces was shown as a test. The other two lines in figure 3A show that the effect of adaptation is to boost the effective strength of the given face (Adam), since (crosses) the subjects were much readier to report Adam, even for the average face (which contains no identity information), and much less ready to report the other faces even if they were actually the test stimulus (shown by the squares). As for the tilt aftereffect, discrimination is biased away from the adapted stimulus. Figure 3C shows that adapting to anti-Adam offers the greatest boost to the event that Adam is reported to a test face (say Henry) that is not Adam, at the average face. Reporting Adam falls off if either increasing strengths of Henry or anti-Henry are presented. That presenting Henry should decrease the reporting of Adam is obvious, and is commented on in the paper.
However, that presenting anti-Henry should decrease the reporting of Adam is less obvious, since, by removing Henry as a competitor, one might have expected Adam to have received an additional boost. Figure 3B;D shows our factor analysis model of these results. Here, we consider a case with p visible units and four factors, one for each face, with generative weights G = [g_Adam, …] governing the input activity associated with full strength versions of each face, generated from independent N[0, I] distributions. In this representation, morphing is easy, consisting of presenting x = α g_Adam + n, where α is the strength and n is noise (variance σ²). The outputs y = Wx depend on α, the angles between the g's, and the noise. Next, we need to specify how discrimination is based on the information provided by y. For reasons discussed below, we considered a normalization pool17,37 for the outputs, treating (y_j − min_k y_k) / Σ_i (y_i − min_k y_k) as the probability that face j would be reported, where δ is a discrimination parameter. Adaptation to anti-Adam was represented by setting x̄ = −β g_Adam, where β is the strength of the adapting stimulus. Figure 3B shows the model of the basic adaptation effect seen in figure 3A. Adapting to −g_Adam clearly boosts the willingness of the model to report Adam, much as for the subjects. The model is a little more extreme than the average over the subjects. The results for two individual subjects presented in the paper24 are just as extreme; other subjects may have had softer decision biases. Figure 3D shows the model of figure 3C. The dashed line shows that without the normalization pool, presenting anti-Henry does indeed boost reporting of Adam, when anti-Adam was the adapting stimulus. However, under the above normalization, decreasing y_Henry boosts the relative strengths of Jim and John (through the minimization in the normalization pool), allowing them to compete, and so reduces the propensity to report Adam (solid line).
5 Discussion

We have studied how plasticity associated with adaptation fits with regular unsupervised learning models, in particular factor analysis. It was obvious that there should be a close relationship; this was, however, obscured by aspects of the redundancy reduction models such as the existence of multiple sources of added noise and non-informational constraints. Uniquenesses in factor analysis are exactly the correct noise model for the simple information maximization scheme. We illustrated the model for the case of a simple, linear, model of the tilt aftereffect, and of adaptation in face discrimination. The latter had the interesting wrinkle that the experimental data support something like a normalization pool.17,37 Under this current conceptual scheme for adaptation, assumed changes in the input statistics are fully compensated for by the factor analysis model (and the linear and Gaussian nature of the model implies that x̄ can be changed without any consequence for the generative or recognition models). The dynamical form of the factor analysis model in equation 6 suggests other possible targets for adaptation. Of particular interest is the possibility that the top-down weights G and/or the uniquenesses Ψ might change whilst the bottom-up weights R remain constant. The rationale for this comes from suggestive neurophysiological evidence that bottom-up pathways show delayed plasticity in certain circumstances;13 and indeed it is exactly what happens in unsupervised learning techniques such as the wake-sleep algorithm.18,29 Given satisfaction of an eigenvalue condition that the differential equation 6 be stable, it will be interesting to explore the consequences of such changes. Of course, factor analysis is insufficiently powerful to be an adequate model for cortical unsupervised learning or indeed all aspects of adaptation (as already evident in the limited range of applicability of the model of the tilt aftereffect).
However, the ideas about the extraction of higher order statistical structure in the inputs into latent variables, the roles of noise, and the way in equation 6 that predictive coding or explaining away controls cortical representations,32 survive into sophisticated complex unsupervised learning models,19 and offer routes for extending the present results. A paradoxical aspect of adaptation, which neither we nor others have addressed, is the way that the systems that are adapting interact with those to which they send their output. For instance, it would seem unfortunate if all cells in primary visual cortex have to know the light level governing adaptation in order to be able correctly to interpret the information coming bottom-up from the thalamus. In some cases, such as the approximate noise filter E, there are alternative semantics for the adapted neural activity under which this is unnecessary; understanding how this generalizes is a major task for future work.

Acknowledgements

Funding was from the Gatsby Charitable Foundation. We are most grateful to Odelia Schwartz for discussion and comments.

References

[1] Abbott, LF, Varela, JA, Sen, K, & Nelson, SB (1997) Synaptic depression and cortical gain control. Science 275, 220-224.
[2] Atick, JJ (1992) Could information theory provide an ecological theory of sensory processing? Network: Computation in Neural Systems 3, 213-251.
[3] Atick, JJ, & Redlich, AN (1990) Towards a theory of early visual processing. Neural Computation 2, 308-320.
[4] Attneave, F (1954) Some informational aspects of visual perception. Psychological Review 61, 183-193.
[5] Barlow, HB (1961) Possible principles underlying the transformation of sensory messages. In WA Rosenblith, ed., Sensory Communication. Cambridge, MA: MIT Press.
[6] Barlow, HB (1969) Pattern recognition and the responses of sensory neurones. Annals of the New York Academy of Sciences 156, 872-881.
[7] Barlow, HB (1989) Unsupervised learning. Neural Computation 1, 295-311.
[8] Barlow, H (2001) Redundancy reduction revisited. Network 12:241-253.
[9] Campbell, FW & Maffei, L (1971) The tilt after-effect: a fresh look. Vision Research 11, 833-840.
[10] Clifford, CWG, Wenderoth, P & Spehar, B (2000) A functional angle on some after-effects in cortical vision. Proceedings of the Royal Society of London, Series B 267, 1705-1710.
[11] Davis, PJ (1979) Circulant Matrices. New York, NY: Wiley.
[12] Dayan, P & Abbott, LF (2001) Theoretical Neuroscience. Cambridge, MA: MIT Press.
[13] Diamond, ME, Huang, W & Ebner, FF (1994) Laminar comparison of somatosensory cortical plasticity. Science 265, 1885-1888.
[14] Dong, DW, & Atick, JJ (1995) Temporal decorrelation: A theory of lagged and nonlagged responses in the lateral geniculate nucleus. Network: Computation in Neural Systems 6, 159-178.
[15] Everitt, BS (1984) An Introduction to Latent Variable Models. London: Chapman and Hall.
[16] Gibson, JJ & Radner, M (1937) Adaptation, after-effect and contrast in the perception of tilted lines. Journal of Experimental Psychology 20, 453-467.
[17] Heeger, DJ (1992) Normalization of responses in cat striate cortex. Visual Neuroscience 9, 181-198.
[18] Hinton, GE, Dayan, P, Frey, BJ, & Neal, RM (1995) The wake-sleep algorithm for unsupervised neural networks. Science 268, 1158-1160.
[19] Hinton, GE & Sejnowski, TJ (1999) Unsupervised Learning. Cambridge, MA: MIT Press.
[20] Jolliffe, IT (1986) Principal Component Analysis. New York: Springer.
[21] Jöreskog, KG (1967) Some contributions to maximum likelihood factor analysis. Psychometrika 32, 443-482.
[22] Jöreskog, KG (1969) A general approach to confirmatory maximum likelihood factor analysis. Psychometrika 34, 183-202.
[23] Kohonen, T & Oja, E (1976) Fast adaptive formation of orthogonalizing filters and associative memory in recurrent networks of neuron-like elements. Biological Cybernetics 21, 85-95.
[24] Leopold, DA, O'Toole, AJ, Vetter, T & Blanz, V (2001) Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience 4:89-94.
[25] Li, Z & Atick, JJ (1994a) Efficient stereo coding in the multiscale representation. Network: Computation in Neural Systems 5, 157-174.
[26] Linsker, R (1988) Self-organization in a perceptual network. Computer 21, 105-128.
[27] Maguire, G & Hamasaki, DI (1994) The retinal dopamine network alters the adaptational properties of retinal ganglion cells in the cat. Journal of Neurophysiology 72, 730-741.
[28] McCormick, DA (1990) Membrane properties and neurotransmitter actions. In GM Shepherd, ed., The Synaptic Organization of the Brain. New York: Oxford University Press.
[29] Neal, RM & Dayan, P (1997) Factor analysis using delta-rule wake-sleep learning. Neural Computation 9, 1781-1803.
[30] Neisser, U (1967) Cognitive Psychology. New York: Appleton-Century-Crofts.
[31] Olshausen, BA, & Field, DJ (1996) Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381, 607-609.
[32] Rao, RPN, & Ballard, DH (1997) Dynamic model of visual recognition predicts neural response properties in the visual cortex. Neural Computation 9, 721-763.
[33] Regan, D & Beverley, KI (1985) Postadaptation orientation discrimination. JOSA A 2, 147-155.
[34] Roweis, S & Ghahramani, Z (1999) A unifying review of linear Gaussian models. Neural Computation 11, 305-345.
[35] Rubin, DB & Thayer, DT (1982) EM algorithms for ML factor analysis. Psychometrika 47, 69-76.
[36] Ruderman, DL & Bialek, W (1994) Statistics of natural images: Scaling in the woods. Physical Review Letters 73, 814-817.
[37] Schwartz, O & Simoncelli, EP (2001) Natural signal statistics and sensory gain control. Nature Neuroscience 4, 819-825.
[38] Shapley, R & Enroth-Cugell, C (1984) Visual adaptation and retinal gain control. Progress in Retinal Research 3, 263-346.
[39] Tipping, ME & Bishop, CM (1999) Mixtures of probabilistic principal component analyzers. Neural Computation 11, 443-482. [40] van Hateren, JH (1992) A theory of maximizing sensory information. Biological Cybernetics 68, 23-29. [41] Wainwright, MJ (1999) Visual adaptation as optimal information transmission. Vision Research 39, 3960-3974. [42] Weiler R & Wagner HJ (1984) Light-dependent change of cone-horizontal cell interactions in carp retina. Brain Resesarch 298, 1-9.
2002
Learning in Zero-Sum Team Markov Games Using Factored Value Functions Michail G. Lagoudakis Department of Computer Science Duke University Durham, NC 27708 mgl@cs.duke.edu Ronald Parr Department of Computer Science Duke University Durham, NC 27708 parr@cs.duke.edu Abstract We present a new method for learning good strategies in zero-sum Markov games in which each side is composed of multiple agents collaborating against an opposing team of agents. Our method requires full observability and communication during learning, but the learned policies can be executed in a distributed manner. The value function is represented as a factored linear architecture and its structure determines the necessary computational resources and communication bandwidth. This approach permits a tradeoff between simple representations with little or no communication between agents and complex, computationally intensive representations with extensive coordination between agents. Thus, we provide a principled means of using approximation to combat the exponential blowup in the joint action space of the participants. The approach is demonstrated with an example that shows the efficiency gains over naive enumeration. 1 Introduction The Markov game framework has received increased attention as a rigorous model for defining and determining optimal behavior in multiagent systems. The zero-sum case, in which one side’s gains come at the expense of the other’s, is the simplest and best understood case1. Littman [7] demonstrated that reinforcement learning could be applied to Markov games, albeit at the expense of solving one linear program for each state visited during learning. This computational (and conceptual) burden is probably one factor behind the relative dearth of ambitious Markov game applications using reinforcement learning. 
In recent work [6], we demonstrated that many previous theoretical results justifying the use of value function approximation to tackle large MDPs could be generalized to Markov games. We applied the LSPI reinforcement learning algorithm [5] with function approximation to a two-player soccer game and a router/server flow control problem and derived very good results. While the theoretical results [6] are general and apply to any reinforcement learning algorithm, we preferred to use LSPI because LSPI's efficient use of data meant that we solved fewer linear programs during learning. (¹The term Markov game in this paper refers to the zero-sum case unless stated otherwise.) Since soccer, routing, and many other natural applications of the Markov game framework tend to involve multiple participants, it would be very useful to generalize recent advances in multiagent cooperative MDPs [2, 4] to Markov games. These methods use a factored value function architecture and determine the optimal action using a cost network [1] and a communication structure which is derived directly from the structure of the value function. LSPI has been successfully combined with such methods; in empirical experiments, the number of state visits required to achieve good performance scaled linearly with the number of agents despite the exponential growth in the joint action space [4]. In this paper, we integrate these ideas and present an algorithm for learning good strategies for a team of agents that plays against an opponent team. In such games, players within one team collaborate, whereas players in different teams compete. The key component of this work is a method for efficiently computing the best strategy for a team, given an approximate factored value function which is a linear combination of features defined over the state space and subsets of the joint action space for both sides. This method, integrated within LSPI, yields a computationally efficient learning algorithm.
2 Markov Games

A two-player zero-sum Markov game is defined as a 6-tuple (S, A, O, P, R, γ), where: S = {s1, s2, ..., sn} is a finite set of game states; A = {a1, a2, ..., am} and O = {o1, o2, ..., ol} are finite sets of actions, one for each player; P is a Markovian state transition model — P(s, a, o, s′) is the probability that s′ will be the next state of the game when the players take actions a and o respectively in state s; R is a reward (or cost) function — R(s, a, o) is the expected one-step reward for taking actions a and o in state s; and γ ∈ (0, 1] is the discount factor for future rewards. We will refer to the first player as the maximizer and the second player as the minimizer². Note that if either player is permitted only a single action, the Markov game becomes an MDP for the other player. A policy π for a player in a Markov game is a mapping, π : S → Ω(A), which yields probability distributions over the maximizer's actions for each state in S. Unlike MDPs, the optimal policy for a Markov game may be stochastic, i.e., it may define a mixed strategy for every state. By convention, for any policy π, π(s) denotes the probability distribution over actions in state s and π(s, a) denotes the probability of action a in state s. The maximizer is interested in maximizing its expected, discounted return in the minimax sense, that is, assuming the worst case of an optimal minimizer. Since the underlying rewards are zero-sum, it is sufficient to view the minimizer as acting to minimize the maximizer's return. For any policy π, we can define Qπ(s, a, o) as the expected total discounted reward of the maximizer when following policy π after the players take actions a and o for the first step. The corresponding fixed point equation for Qπ is:

Qπ(s, a, o) = R(s, a, o) + γ ∑_{s′∈S} P(s, a, o, s′) min_{o′∈O} ∑_{a′∈A} Qπ(s′, a′, o′) π(s′, a′) .
Given any Q function, the maximizer can choose actions so as to maximize its value:

V(s) = max_{π′(s)∈Ω(A)} min_{o∈O} ∑_{a∈A} Q(s, a, o) π′(s, a) .   (1)

We will refer to the policy π′ chosen by Eq. (1) as the minimax policy with respect to Q. (²Because of the duality, we adopt the maximizer's point of view for presentation.) This policy can be determined in any state s by solving the following linear program:

Maximize: V(s)
Subject to: ∀a ∈ A, π′(s, a) ≥ 0
            ∑_{a∈A} π′(s, a) = 1
            ∀o ∈ O, V(s) ≤ ∑_{a∈A} Q(s, a, o) π′(s, a) .

If Q = Qπ, the minimax policy is an improved policy compared to π. A policy iteration algorithm can be implemented for Markov games in a manner analogous to policy iteration for MDPs by fixing a policy πi, solving for Qπi, choosing πi+1 as the minimax policy with respect to Qπi, and iterating. This algorithm converges to the optimal minimax policy π*.

3 Least Squares Policy Iteration (LSPI) for Markov Games

In practice, the state/action space is too large for an explicit representation of the Q function. We consider the standard approach of approximating the Q function as the linear combination of k basis functions φj with weights wj, that is, Q̂(s, a, o) = φ(s, a, o)⊺w. With this representation, the minimax policy π for the maximizer is determined by

π(s) = argmax_{π(s)∈Ω(A)} min_{o∈O} ∑_{a∈A} π(s, a) φ(s, a, o)⊺w ,

and can be computed by solving the following linear program:

Maximize: V(s)
Subject to: ∀a ∈ A, π(s, a) ≥ 0
            ∑_{a∈A} π(s, a) = 1
            ∀o ∈ O, V(s) ≤ ∑_{a∈A} π(s, a) φ(s, a, o)⊺w .

We chose the LSPI algorithm to learn the weights w of the approximate value function. Least-Squares Policy Iteration (LSPI) [5] is an approximate policy iteration algorithm that learns policies using a corpus of stored samples. LSPI also applies, with minor modifications, to Markov games [6].
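The per-state minimax linear program above is small and can be handed to any LP solver. A minimal sketch in Python using scipy.optimize.linprog (the variable layout and function name are ours, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def minimax_policy(Q):
    """Solve the per-state minimax LP.

    Q : (|A|, |O|) array, Q[a, o] = maximizer's value for action pair (a, o).
    Returns (pi, V): the maximizer's mixed strategy and the game value.
    """
    m, l = Q.shape
    # Decision variables: pi(a_1)..pi(a_m), then V.  linprog minimizes, so use -V.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # One constraint per opponent action o: V - sum_a Q[a, o] * pi(a) <= 0.
    A_ub = np.hstack([-Q.T, np.ones((l, 1))])
    b_ub = np.zeros(l)
    # Probabilities sum to one; V is unconstrained.
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Matching pennies: the optimal mixed strategy is uniform with game value 0.
pi, V = minimax_policy(np.array([[1.0, -1.0], [-1.0, 1.0]]))
```

For matching pennies the solver returns the textbook minimax solution: the uniform mixed strategy with value 0.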
In particular, at each iteration, LSPI evaluates the current policy using the stored samples and keeps the learned weights to represent implicitly the improved minimax policy for the next iteration, obtained by solving the linear program above. The modified update equations account for the minimizer's action and the distribution over next maximizer actions, since the minimax policy is, in general, stochastic. More specifically, at each iteration LSPI maintains two matrices, Â and b̂, which are updated as follows:

Â ← Â + φ(s, a, o) ( φ(s, a, o) − γ ∑_{a′∈A} π(s′, a′) φ(s′, a′, o′) )⊺ ,
b̂ ← b̂ + φ(s, a, o) r ,

for any sample (s, a, o, r, s′). The policy π(s′) for state s′ is computed using the linear program above. The action o′ is the minimizing opponent action in computing π(s′) and can be identified by the tight constraint on V(s′). The weight vector w is computed at the end of each iteration as the solution to Âw = b̂. The key step in generalizing LSPI to team Markov games is finding efficient means to perform these operations despite the exponentially large joint action space.
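The per-sample update of Â and b̂ takes only a few lines. A sketch (the function signature and names are ours; the minimax policy at s′ is assumed to be given as a dictionary pi_next mapping a′ to its probability, and o_next is the minimizing opponent action):

```python
import numpy as np

def lspi_update(A_hat, b_hat, phi, gamma, sample, pi_next, o_next):
    """One LSTD-Q update for a Markov-game sample (s, a, o, r, s').

    phi(s, a, o) -> feature vector.  The expected next feature vector is
    taken under the (stochastic) minimax policy at s', against the
    minimizing opponent action o_next.
    """
    s, a, o, r, s_next = sample
    f = phi(s, a, o)
    # Expectation over the maximizer's next action under the minimax policy.
    f_next = sum(p * phi(s_next, a_next, o_next)
                 for a_next, p in pi_next.items())
    A_hat += np.outer(f, f - gamma * f_next)
    b_hat += r * f
    return A_hat, b_hat

# Toy illustration with a 2-dimensional feature map phi(s, a, o) = [1, s].
phi = lambda s, a, o: np.array([1.0, float(s)])
A_hat, b_hat = np.zeros((2, 2)), np.zeros(2)
A_hat, b_hat = lspi_update(A_hat, b_hat, phi, 0.9,
                           (1, 0, 0, 2.0, 0), {0: 1.0}, 0)
```

At the end of each iteration the weights follow from `w = np.linalg.solve(A_hat, b_hat)` (possibly with a pseudo-inverse when Â is singular).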
Since |Ā| is exponential in N and |Ō| is exponential in M, the linear program above has an exponential number of variables and constraints and would be intractable to solve, unless we make certain assumptions about Q̂. We assume a factored approximation [2] of the Q function, given as a linear combination of k localized basis functions. Each basis function can be thought of as an individual player's perception of the environment, so each φj need not depend upon every feature of the state or the actions taken by every player in the game. In particular, we assume that each φj depends only on the actions of a small subset of maximizers Aj and minimizers Oj, that is, φj = φj(s, āj, ōj), where āj ∈ Āj and ōj ∈ Ōj (Āj is the joint action space of the players in Aj and Ōj is the joint action space of the players in Oj). For example, if φ4 depends only on the actions of maximizers {4, 5, 8} and the actions of minimizers {3, 2, 7}, then ā4 ∈ A4 × A5 × A8 and ō4 ∈ O3 × O2 × O7. Under this locality assumption, the approximate (factored) value function is

Q̂(s, ā, ō) = ∑_{j=1}^{k} φj(s, āj, ōj) wj ,

where the assignments to the āj's and ōj's are consistent with ā and ō. Given this form of the value function, the linear program can be simplified significantly. We look at the constraints for the value of the state first:

V(s) ≤ ∑_{ā∈Ā} π(s, ā) ∑_{j=1}^{k} φj(s, āj, ōj) wj
V(s) ≤ ∑_{j=1}^{k} ∑_{ā∈Ā} π(s, ā) φj(s, āj, ōj) wj
V(s) ≤ ∑_{j=1}^{k} ∑_{āj∈Āj} ∑_{ā′∈Ā\Āj} π(s, ā) φj(s, āj, ōj) wj
V(s) ≤ ∑_{j=1}^{k} wj ∑_{āj∈Āj} φj(s, āj, ōj) ∑_{ā′∈Ā\Āj} π(s, ā)
V(s) ≤ ∑_{j=1}^{k} wj ∑_{āj∈Āj} φj(s, āj, ōj) πj(s, āj) ,

where each πj(s, āj) defines a probability distribution over the actions of the players that appear in φj. From the last expression, it is clear that we can use the πj(s, āj) as the variables of the linear program. The number of these variables will typically be much smaller than the number of variables π(s, ā), depending on the size of the Aj's.
However, we must add constraints to ensure that the local probability distributions πj(s) are consistent with a global distribution over the entire joint action space Ā. The first set of constraints are the standard ones for any probability distribution:

∀j = 1, ..., k : ∑_{āj∈Āj} πj(s, āj) = 1
∀j = 1, ..., k : ∀āj ∈ Āj, πj(s, āj) ≥ 0 .

For consistency, we must ensure that all marginals over common variables are identical:

∀1 ≤ j < h ≤ k : ∀ā′ ∈ Āj ∩ Āh, ∑_{ā′j∈Āj\Āh} πj(s, āj) = ∑_{ā′h∈Āh\Āj} πh(s, āh) .

These constraints are sufficient if the running intersection property is satisfied by the πj(s)'s [3]. If not, it is possible that the resulting πj(s)'s will not be consistent with any global distribution even though they are locally consistent. However, the running intersection property can be enforced by introducing certain additional local distributions in the set of πj(s)'s. This can be achieved using a variable elimination procedure. First, we establish an elimination order for the maximizers and we let H1 be the set of all πj(s)'s and L = ∅. At each step i, some agent i is eliminated and we let Ei be the set of all distributions in Hi that involve the actions of agent i or have empty domain. We then create a new distribution ωi over the actions of all agents that appear in Ei and we place ωi in L. We then create ω′i, defined as the distribution over the actions of all agents that appear in ωi except agent i. Next, we update Hi+1 = Hi ∪ {ω′i} − Ei and repeat until all agents have been eliminated. Note that HN will necessarily be empty and L will contain at most N new local probability distributions. We can manipulate the elimination order in an attempt to keep the distributions in L small (local); however, their size will be exponential in the induced tree width. As with Bayes nets, the existence and hardness of discovering efficient elimination orderings will depend upon the topology.
The set H1 ∪ L of local probability distributions satisfies the running intersection property, and so we can proceed with this set instead of the original set of πj(s)'s and apply the constraints listed above. Even though we are only interested in the πj(s)'s, the existence of the additional distributions in the linear program will ensure that the πj(s)'s will be globally consistent. The number of constraints needed for the local probability distributions is much smaller than the original number of constraints. In summary, the new linear program will be:

Maximize: V(s)
Subject to: ∀j = 1, ..., k : ∀āj ∈ Āj, πj(s, āj) ≥ 0
            ∀j = 1, ..., k : ∑_{āj∈Āj} πj(s, āj) = 1
            ∀1 ≤ j < h ≤ k : ∀ā′ ∈ Āj ∩ Āh, ∑_{ā′j∈Āj\Āh} πj(s, āj) = ∑_{ā′h∈Āh\Āj} πh(s, āh)
            ∀ō ∈ Ō, V(s) ≤ ∑_{j=1}^{k} wj ∑_{āj∈Āj} φj(s, āj, ōj) πj(s, āj) .

At this point we have eliminated the exponential dependency from the number of variables and partially from the number of constraints. The last set of (exponentially many) constraints can be replaced by a single non-linear constraint:

V(s) ≤ min_{ō∈Ō} ∑_{j=1}^{k} wj ∑_{āj∈Āj} φj(s, āj, ōj) πj(s, āj) .

We now show how this non-linear constraint can be turned into a number of linear constraints which is not, in general, exponential in M. The main idea is to embed a cost network inside the linear program [2]. In particular, we define an elimination order for the oi's in ō and, for each oi in turn, we push the min operator for just oi as far inside the summation as possible, keeping only terms that have some dependency on oi or no dependency on any of the opponent team actions. We replace this smaller min expression over oi with a new function fi (represented by a set of new variables in the linear program) that depends on the other opponent actions that appear in this min expression. Finally, we introduce a set of linear constraints for the value of fi that express the fact that fi is the minimum of the eliminated expression in all cases.
We repeat this elimination process until all oi's, and therefore all min operators, are eliminated. More formally, at step i of the elimination, let Bi be the set of basis functions that have not been eliminated up to that point and Fi be the set of the new functions that have not been eliminated yet. For simplicity, we assume that the elimination order is o1, o2, ..., oM (in practice the elimination order needs to be chosen carefully in advance, since a poor elimination ordering could have serious adverse effects on efficiency). At the very beginning of the elimination process, B1 = {φ1, φ2, ..., φk} and F1 is empty. When eliminating oi at step i, define Ei ⊆ Bi ∪ Fi to be those functions that contain oi in their domain or have no dependency on any opponent action. We generate a new function fi(ō̄i) that depends on all the opponent actions that appear in Ei, excluding oi:

fi(ō̄i) = min_{oi∈Oi} ( ∑_{φj∈Ei} wj ∑_{āj∈Āj} φj(s, āj, ōj) πj(s, āj) + ∑_{fk∈Ei} fk(ō̄k) ) .

We introduce a new variable in the linear program for each possible setting of the domain ō̄i of the new function fi(ō̄i). We also introduce a set of constraints for these variables:

∀oi ∈ Oi, ∀ō̄i : fi(ō̄i) ≤ ∑_{φj∈Ei} wj ∑_{āj∈Āj} φj(s, āj, ōj) πj(s, āj) + ∑_{fk∈Ei} fk(ō̄k) .

These constraints ensure that the new function is the minimum over the possible choices for oi. Now, we define Bi+1 = Bi − Ei and Fi+1 = Fi − Ei + {fi}, and we continue with the elimination of action oi+1. Notice that oi does not appear anywhere in Bi+1 or Fi+1. Notice also that fM will necessarily have an empty domain, and it is exactly the value of the state, fM = V(s).
Summarizing everything, the reduced linear program is:

Maximize: fM
Subject to: ∀j = 1, ..., k : ∀āj ∈ Āj, πj(s, āj) ≥ 0
            ∀j = 1, ..., k : ∑_{āj∈Āj} πj(s, āj) = 1
            ∀1 ≤ j < h ≤ k : ∀ā′ ∈ Āj ∩ Āh, ∑_{ā′j∈Āj\Āh} πj(s, āj) = ∑_{ā′h∈Āh\Āj} πh(s, āh)
            ∀i, ∀oi, ∀ō̄i : fi(ō̄i) ≤ ∑_{φj∈Ei} wj ∑_{āj∈Āj} φj(s, āj, ōj) πj(s, āj) + ∑_{fk∈Ei} fk(ō̄k) .

Notice that the exponential dependency on N and M has been eliminated. The total number of variables and/or constraints is now exponentially dependent only on the number of players that appear together as a group in any of the basis functions or the intermediate functions and distributions. It should be emphasized that this reduced linear program solves the same problem as the naive linear program and yields the same solution (albeit in a factored form). To complete the learning algorithm, the update equations of LSPI must also be modified. For any sample (s, ā, ō, r, s′), the naive form would be

Â ← Â + φ(s, ā, ō) ( φ(s, ā, ō) − γ ∑_{ā′∈Ā} π(s′, ā′) φ(s′, ā′, ō′) )⊺ ,
b̂ ← b̂ + φ(s, ā, ō) r .

The action ō′ is the minimizing opponent's action in computing π(s′). Unfortunately, the number of terms in the summation within the first update equation is exponential in N. However, the vector φ(s, ā, ō) − γ ∑_{ā′∈Ā} π(s′, ā′) φ(s′, ā′, ō′) can be computed on a component-by-component basis, avoiding this exponential blowup. In particular, the j-th component is:

φj(s, āj, ōj) − γ ∑_{ā′∈Ā} π(s′, ā′) φj(s′, ā′j, ō′j)
  = φj(s, āj, ōj) − γ ∑_{ā′j∈Āj} ∑_{ā′′j∈Ā\Āj} π(s′, ā′) φj(s′, ā′j, ō′j)
  = φj(s, āj, ōj) − γ ∑_{ā′j∈Āj} φj(s′, ā′j, ō′j) ∑_{ā′′j∈Ā\Āj} π(s′, ā′)
  = φj(s, āj, ōj) − γ ∑_{ā′j∈Āj} φj(s′, ā′j, ō′j) πj(s′, ā′j) ,

which can be easily computed without exponential enumeration. A related question is how to find ō′, the minimizing opponent's joint action in computing π(s′).
This can be done after the linear program is solved by going through the fi's in reverse order (compared to the elimination order) and finding the choice for oi that imposes a tight constraint on fi(ō̄i), conditioned on the minimizing choice for ō̄i that has been found so far. The only complication is that the linear program has no incentive to maximize fi(ō̄i) unless it contributes to maximizing the final value. Thus, a constraint that appears to be tight may not correspond to the actual minimizing choice. The solution to this is to do a forward pass first (according to the elimination order), marking the fi(ō̄i)'s that really come from tight constraints. Then, the backward pass described above will find the true minimizing choices by using only the marked fi(ō̄i)'s. The last question is how to sample an action ā from the global distribution defined by the smaller distributions. We begin with all actions uninstantiated and we go through all πj(s)'s. For each j, we marginalize out the instantiated actions (if any) from πj(s) to generate the conditional probability and then we sample jointly the actions that remain in the distribution. We repeat with the next j until all actions are instantiated. Notice that this operation can be performed in a distributed manner; that is, at execution time only agents whose actions appear in the same πj(s) need to communicate to sample actions jointly. This communication structure is directly derived from the structure of the basis functions.

5 An Example

The algorithm has been implemented and is currently being tested on a large flow control problem with multiple routers and servers. Since experimental results are still in progress, we demonstrate the efficiency gained over exponential enumeration with an example. Consider a problem with N = 5 maximizers and M = 4 minimizers. Assume also that each maximizer or minimizer has 5 actions to choose from.
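The size of the naive joint linear program for this example follows from direct counting, which can be sketched as:

```python
# Size of the naive minimax LP for the example:
# N = 5 maximizers, M = 4 minimizers, 5 actions per player.
num_actions = 5
N, M = 5, 4

joint_A = num_actions ** N   # 3125 joint maximizer actions a-bar
joint_O = num_actions ** M   # 625 joint minimizer actions o-bar

# Variables: one pi(s, a-bar) per joint action, plus V(s).
variables = joint_A + 1
# Constraints: nonnegativity of each pi, one simplex constraint,
# and one value constraint per joint opponent action.
constraints = joint_A + 1 + joint_O

print(variables, constraints)  # 3126 3751
```

The counts match the figures quoted in the text for the unfactored program.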
The naive solution would require solving a linear program with 3126 variables and 3751 constraints for any representation of the value function. Consider now the following factored value function:

Q̂(s, ā, ō) = φ1(s, a1, a2, o1, o2)w1 + φ2(s, a1, a3, o1, o3)w2 + φ3(s, a2, a4, o3)w3 + φ4(s, a3, a5, o4)w4 + φ5(s, a1, o3, o4)w5 .

These basis functions satisfy the running intersection property (there is no cycle of length longer than 3), so there is no need for additional probability distributions. Using the elimination order {o4, o3, o1, o2} for the cost network, the reduced linear program contains only 121 variables and 215 constraints (we present only the 80 constraints on the value of the state that demonstrate the variable elimination procedure, omitting the common constraints for validity and consistency of the local probability distributions):

Maximize: f2
Subject to:
∀o4 ∈ O4, ∀o3 ∈ O3 : f4(o3) ≤ ∑_{(a3,a5)∈A3×A5} w4 φ4(s, a3, a5, o4) π4(s, a3, a5) + ∑_{a1∈A1} w5 φ5(s, a1, o3, o4) π5(s, a1)
∀o3 ∈ O3, ∀o1 ∈ O1 : f3(o1) ≤ ∑_{(a1,a3)∈A1×A3} w2 φ2(s, a1, a3, o1, o3) π2(s, a1, a3) + ∑_{(a2,a4)∈A2×A4} w3 φ3(s, a2, a4, o3) π3(s, a2, a4) + f4(o3)
∀o1 ∈ O1, ∀o2 ∈ O2 : f1(o2) ≤ ∑_{(a1,a2)∈A1×A2} w1 φ1(s, a1, a2, o1, o2) π1(s, a1, a2) + f3(o1)
∀o2 ∈ O2 : f2 ≤ f1(o2)

6 Conclusion

We have presented a principled approach to the problem of solving large team Markov games that builds on recent advances in value function approximation for Markov games and multiagent coordination in reinforcement learning for MDPs. Our approach permits a tradeoff between simple architectures with limited representational capability and sparse communication, and complex architectures with rich representations and more complex coordination structure. It is our belief that the algorithm presented in this paper can be used successfully in real-world, large-scale domains where the available knowledge about the underlying structure can be exploited to derive powerful and sufficient factored representations.
Acknowledgments This work was supported by NSF grant 0209088. We would also like to thank Carlos Guestrin for helpful discussions. References [1] R. Dechter. Bucket elimination: A unifying framework for reasoning. Artificial Intelligence, 113(1–2):41–85, 1999. [2] Carlos Guestrin, Daphne Koller, and Ronald Parr. Multiagent planning with factored MDPs. In Proceedings of the 14th Neural Information Processing Systems (NIPS-14), pages 1523–1530, Vancouver, Canada, December 2001. [3] Carlos Guestrin, Daphne Koller, and Ronald Parr. Solving factored POMDPs with linear value functions. In IJCAI-01 Workshop on Planning under Uncertainty and Incomplete Information, 2001. [4] Carlos Guestrin, Michail G. Lagoudakis, and Ronald Parr. Coordinated reinforcement learning. In Proceedings of the 19th International Conference on Machine Learning (ICML-02), pages 227–234, Sydney, Australia, July 2002. [5] Michail Lagoudakis and Ronald Parr. Model-free least-squares policy iteration. In Proceedings of the 14th Neural Information Processing Systems (NIPS-14), pages 1547–1554, Vancouver, Canada, December 2001. [6] Michail Lagoudakis and Ronald Parr. Value function approximation in zero-sum Markov games. In Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence (UAI 2002), pages 283–292, Edmonton, Canada, 2002. [7] Michael L. Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the 11th International Conference on Machine Learning (ICML-94), pages 157–163, San Francisco, CA, 1994. Morgan Kaufmann.
2002
Convergence Properties of some Spike-Triggered Analysis Techniques Liam Paninski Center for Neural Science New York University New York, NY 10003 liam@cns.nyu.edu http://www.cns.nyu.edu/~liam Abstract We analyze the convergence properties of three spike-triggered data analysis techniques. All of our results are obtained in the setting of a (possibly multidimensional) linear-nonlinear (LN) cascade model for stimulus-driven neural activity. We start by giving exact rate of convergence results for the common spike-triggered average (STA) technique. Next, we analyze a spike-triggered covariance method, variants of which have been recently exploited successfully by Bialek, Simoncelli, and colleagues. These first two methods suffer from extraneous conditions on their convergence; therefore, we introduce an estimator for the LN model parameters which is designed to be consistent under general conditions. We provide an algorithm for the computation of this estimator and derive its rate of convergence. We close with a brief discussion of the efficiency of these estimators and an application to data recorded from the primary motor cortex of awake, behaving primates. 1 Introduction Systems-level neuroscientists have a few favorite problems, the most prominent of which is the "what" part of the neural coding problem: what makes a given neuron in a particular part of the brain fire? In more technical language, we want to know about the conditional probability distributions P(spike|X = x), the probability that our cell emits a spike, given that some observable signal X in the world takes value x. Because data is expensive, neuroscientists typically postulate a functional form for this collection of conditional distributions, and then fit experimental data to these functional models, in lieu of attempting to directly estimate P(spike|X = x) for each possible x.
In this paper, we analyze one such phenomenological model whose popularity seems to be on the rise:

p(spike|x) = f(<k1, x>, <k2, x>, ..., <km, x>).   (1)

Here f is some arbitrary nonconstant, ℝ^m-measurable, [0,1]-valued function, and {ki} are some linearly independent elements of the dual space, X′, of some topological vector space, X, the space of possible "input signals." Interpret f as a regular conditional distribution. Roughly, then, the neuron projects the signal x onto some m-dimensional subspace spanned by {ki}_{1≤i≤m} (call this subspace K), then looks up its probability of firing based only on this projection. This model is often called a "linear-nonlinear," or "LN," cascade model. It is also a probabilistic analog of a certain type of "Wiener cascade" model; this class of models has received extensive study in the systems identification literature. (Note that this model is not the same as a Volterra series model; these two classes of systems have very different uniform approximation properties.) The LN model has two important features. First, the spike trains of the cell are given by a conditionally (inhomogeneous) Poisson process given x; that is, there are no dynamics in this model beyond those induced by x and K. Second, equation (1) implies:

p(spike|x) = p(spike|x + y)  ∀ y ⊥ K.   (2)

In other words, the conditional probability of firing is constant along (hyper)planes in the input space. (The natural generalization of this is a model for which these surfaces of constant firing probability are manifolds of low codimension; however, we will stick to the linear case here.) This model is semiparametric in the sense that it separates the problem of learning p(spike|x) into two pieces: 1) learning the finite-dimensional parameter K, and 2) learning the infinite-dimensional parameter f. If K is given, the problem of learning f reduces to a density estimation problem, about which much is known.
The problem of estimating K seems to be less well understood, and we focus primarily on this problem here. We start with some notation. Let N, as usual, denote the number of available samples, drawn from the fixed stimulus distribution p(x) (in practice, of course, the samples from p(x) are not independent; for simplicity, we will stick to the i.i.d. case here, but most of our methods can be extended to the more general case). Then our basic results will take the following form:

E(Error(K̂)) ∼ αN^{−λ} + β,   (3)

as N becomes large. The estimator K̂ is a deterministic map taking N observations of stimulus and spike data (where spikes are binary random variables, conditionally independent given the stimulus) into an estimate of the true underlying K:

K̂ : (X × {0, 1})^N → G_m(X)   (4)
(X_N, S_N) → K̂(X_N, S_N),   (5)

where (X_N, S_N) denotes the N-sample data. G_m(X) is the m-Grassmann manifold of X, the space of all m-dimensional subspaces of X; the natural error metric, then, is the geodesic distance on G_m(X) (the "canonical angle") between the true subspace K and the estimated subspace K̂. For brevity, we will present most of our results in the m = 1 case only; here the metric takes the simple form

Error(K̂) = cos^{−1} ( <k̂1, k1> / (||k̂1|| ||k1||) ).   (6)

The scalar terms λ, α, and β in (3) each depend on f, K, and p(x); λ is a constant giving the order of magnitude of convergence (usually, but not always, equal to 1/2), α gives the precise convergence rate, and β gives the asymptotic error. We will be mostly concerned with giving exact values for α and λ, and simply indicating when β is zero or positive (i.e., when K̂ is consistent in probability or not, respectively). As usual, rate-of-convergence results clarify why a given estimator works well (in the sense that only a small number of samples is needed for reliable estimates) in certain cases and poorly (sometimes not at all) in others.
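In the m = 1 case, the canonical-angle error metric of Eq. (6) is straightforward to compute. A sketch (we take the absolute value of the inner product so the error is invariant to the sign of the estimated spanning vector, a standard convention, since a subspace is unchanged by negating its basis):

```python
import numpy as np

def subspace_error(k_hat, k_true):
    """Canonical angle between two one-dimensional subspaces:
    arccos(|<k_hat, k_true>| / (||k_hat|| ||k_true||))."""
    c = abs(np.dot(k_hat, k_true)) / (np.linalg.norm(k_hat) * np.linalg.norm(k_true))
    # Clip guards against floating-point values slightly outside [-1, 1].
    return np.arccos(np.clip(c, -1.0, 1.0))

# Parallel vectors span the same subspace (error 0);
# orthogonal vectors are maximally distant (error pi/2).
print(subspace_error(np.array([2.0, 0.0]), np.array([1.0, 0.0])))  # 0.0
```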
We will discuss three estimators here; the first two are well-known, while the third is novel, and is consistent under much more general conditions. The first part of the paper will indicate how to derive representation (3), including the constants α, β, and λ, for these three estimators. In the final two sections, we discuss lower bounds on the convergence rates of any possible K-estimator (these kinds of bounds provide a rigorous measure of the difficulty of this estimation problem), and then give a brief illustration of the new estimator applied to data recorded in the primary motor cortex of awake, behaving monkeys.

2 Convergence rates

All three of the estimators considered here can be naturally written as "M-estimators," that is, K̂(X_N, S_N) ≡ argmax_{V∈G_m(X)} M_{(X_N,S_N)}(V), for some data-dependent function M_N ≡ M_{(X_N,S_N)} on G_m(X). Most of the mathematical labor in this section comes down to an application of the standard "delta method" from the theory of M-estimators [5]: typically the data-dependent (i.e., random) functions M_N converge in some suitable sense, as N → ∞, to some limit function M. The asymptotics of the M-estimator are then reduced to a study of 1) the variability of M_N around the limit M and 2) the local differential structure of M in a neighborhood of the true value of the underlying parameter K. This program can be carried out trivially for the first two estimators but is more interesting for the third (the first two require only the multivariate CLT; the third requires an infinite-dimensional CLT).

2.1 Spike-triggered averaging

The first estimator, the spike-triggered average, is classical and very intuitive: K̂_STA is defined as the sample mean of the spike-conditional stimulus distribution p(x|spike); since the spike signal is binary, this is the same as the cross-correlation between the spike and the stimulus signal. (We assume throughout, without loss of generality, that p(x) is centered, that is, E(x) = 0.)
We will also consider the following "linear regression" modification: K̂_LR ≡ A K̂_STA, where A is an operator chosen to "divide out" correlations in the stimulus distribution p(x) (A is typically the (pseudo-)inverse of the stimulus correlation matrix, which we will denote as σ²(p(x))). The analysis for K̂_STA and K̂_LR depends only on a straightforward application of the multivariate central limit theorem (CLT). We begin with necessary and sufficient conditions for consistency. We assume throughout this paper that the stimulus distribution p(x) has finite second moments; this assumption seems entirely reasonable on physical grounds. Let q be a random variable with distribution given by

p(q) ≡ p(⟨x, k₁⟩ | spike) = f(⟨x, k₁⟩) p(⟨x, k₁⟩) / ∫_R f(⟨x, k₁⟩) p(⟨x, k₁⟩),

with f as defined in (1) and p(⟨x, k₁⟩) denoting the one-dimensional projection of p(x). The expectation of this random variable exists by the finite-variance assumption on p(x). Finally, as usual, we say p(x) is radially symmetric if p(B) ≡ p(UB) for all Borel sets B and all unitary transformations U.

Theorem 1 (β(K̂_STA)). If p(x) (resp. p(A^(1/2)x)) is radially symmetric and E(q) ≠ 0, then β(K̂_STA) ≡ 0 (resp. β(K̂_LR) ≡ 0). Conversely, if p(x) is radially symmetric and E(q) ≡ 0, then β > 0, and if p(x) is not radially symmetric, then there exists an f for which β > 0. (Note that f is not required to be smooth, or even continuous.)

The above sufficiency conditions seem to be somewhat well-known; for example, most of the sufficiency statement appeared (albeit in somewhat less precise form) in [1]. On the other hand, the converse is novel, to our knowledge, and is perhaps surprisingly stringent. The first part of the necessity statement will be obvious from the following discussion of α (and in fact appears implicitly in [1]), while the second part is a little harder, and seems to require (rather elementary) characteristic function techniques.
The proof proceeds by showing that a distribution is symmetric iff it has the property that the conditional mean of x is zero on all planar "slices" ⟨k, x⟩ ∈ B, for every k ∈ X and real Borel set B. Next we have the rate of convergence:

Theorem 2 (α(K̂_STA)). Assume p(x) is symmetric normal, with standard deviation σ(p). If β(K̂_STA) ≡ 0, then N^(1/2)(K̂_STA − K) is asymptotically normal with mean zero (considered as a distribution on the tangent plane of G_m(X) at the true underlying value K).

Thus the performance of the spike-triggered average scales directly with the dimension of the ambient space and inversely with E(q), a measure of the asymmetry of the spike-triggered distribution along k₁. Note that we stated the result under the much stronger condition that p(x) is Gaussian. In this case, the form of α becomes quite simple, depending on the nonlinearity f only through E(q). The general case is proven by identical methods but results in a slightly more complicated (f-dependent) term in place of σ(p). The proof follows by applying the multivariate central limit theorem to the sample mean of random vectors drawn i.i.d. from the spike-conditional stimulus distribution, p(x|spike). The proof also supplies the asymptotic distribution of Error(K̂_STA) (a noncentral F), which might be useful for hypothesis testing. The details are quite easy once the mean of this distribution is identified (as in [1], under the above sufficiency conditions), and we skip them to save room for more interesting results. One final note: in stating the above two results, we have been assuming implicitly that K is one-dimensional (since K̂_STA clearly returns a single vector, that is, a one-dimensional subspace of X). Nevertheless, the two theorems extend easily to the more general case, after Error(K̂_STA) is redefined to measure angles between m- and 1-dimensional subspaces.
(Of course, now E(K̂_STA) and lim_{N→∞} K̂_STA depend strongly on the input distribution p(x), even for radially symmetric p(x); see, e.g., [3] for an analysis of a special case of this effect.)

2.2 Covariance-based methods

The next estimator was introduced in an effort to extend spike-triggered analysis to the m > 1 case (see, e.g., [3], and references therein). Where K̂_STA was based on the first moment of the spike-conditional stimulus distribution p(x|spike), K̂_CORR is based on the second moment. We define

K̂_CORR ≡ (σ̂²)^(−1) eig(Δσ̂²),

where eig(A) denotes the significantly non-zero eigenspace of the operator A, and Δσ̂² is some estimate (typically the usual sample covariance estimate) of the "difference-covariance" matrix Δσ², the difference between the spike-conditional and marginal stimulus covariances. Again, we start with β:

Theorem 3 (β(K̂_CORR)). If p(x) is Gaussian and Var_{p(x|spike)}(⟨k, x⟩) ≠ Var_{p(x)}(⟨k, x⟩) ∀k ∈ E_K, for some orthogonal basis E_K of K, then β(K̂_CORR) ≡ 0. Conversely, if p(x) is Gaussian and the variance condition is not satisfied for f, then β > 0, and if p(x) is non-Gaussian, then there exists an f for which β > 0.

As before, the sufficiency is fairly well-known, while the necessity appears to be novel and relies on characteristic function arguments. It is perhaps surprising that the conditions on p for the consistency of this estimator are even stricter than for the spike-triggered average. The essential fact here turns out to be that a distribution is normal iff, after a suitable change of basis, the conditional variance on all planar "slices" of the distribution is constant. We have, with Odelia Schwartz, developed a striking inconsistency example which is worth mentioning here:

Example (Inconsistency of K̂_CORR). There is a nonempty open set of nonconstant f and radially symmetric p(x) such that K̂_CORR is asymptotically orthogonal to K almost surely as N → ∞. (In fact, the f and p in this set can be taken to be infinitely differentiable.)
The basic idea is that, for nonnormal p, the spike-triggered variance of ⟨v, x⟩ depends on f even for v ⊥ k₁; we leave the details to the reader. We can derive a similar rate of convergence for these covariance-based methods. To reduce the notational load, we state the result for m ≡ 1 only; in this case, we can define λ_{Δσ²} to be the (unique and nonzero by assumption) eigenvalue of Δσ².

Theorem 4 (α(K̂_CORR)). Assume p(x) is independent normal. If β(K̂_CORR) ≡ 0, then N^(1/2)(K̂_CORR − K) is asymptotically normal with mean zero.

(Again, while λ_{Δσ²} will not be exactly zero in practice, it can often be small enough that the asymptotic error remains prohibitively large for physiologically reasonable values of N.) The proof proceeds by applying the multivariate central limit theorem to the covariance matrix estimator, then examining the first-order Taylor expansion of the eigenspace map at Δσ²; see the longer draft of this paper at http://www.cns.nyu.edu/~liam for the more general statement and proof.

2.3 Empirical processes techniques

We have seen that the two most common K-estimators are not consistent in general; that is, the asymptotic error β is bounded away from zero for many (nonpathological) combinations of p(x), f, and K. We now introduce a new estimator for which β ≡ 0 under very general conditions (without, say, any symmetry or normality assumptions on p or any symmetry assumptions on f). The basic idea is that Kx is in a sense a sufficient statistic for the spike (that is, x → Kx → spike forms a Markov chain). The data processing inequality suggests that we could estimate K by maximizing

M_N(V) ≡ D_φ(q_N(Vx, spike); q_N(spike) q_N(Vx)),

where D_φ is a functional with suitable convexity properties, and q_N is some estimate of p. For example, we could let D_φ be an information divergence and q_N some kernel estimate, that is, a filtered version of the empirical measure (see [4] for an independent approach along these lines).
This doesn't quite work, however, because the kernel induces an arbitrary scale; if this scale is larger than the natural scale of f and p(⟨v, x⟩) for some v but not others, our estimate will be biased away from K. Therefore, D_φ and q_N have to be asymptotically scale-free in some sense. The simplest approach is to let the kernel width tend to zero as N becomes large; it is even possible to calculate the optimal rate of kernel shrinkage in N, depending on the smoothness of f. It also turns out to be helpful to use a bias-corrected version of M_N(V); a standard jackknife correction is sufficient to obtain an estimator which converges at the standard √N rate. We have:

Theorem 5 (β(K̂_φ)). If p has a nonzero density with respect to Lebesgue measure, f is not constant a.e., and the kernel width goes to zero more slowly than N^(r−1), for some r > 0, then β ≡ 0 for the kernel estimator K̂_φ.

In other words, this new estimator K̂_φ works for very general neurons f and stimulus distributions p; in particular, K̂_φ is suitable for application to natural signal data. Clearly, the condition on f is minimal; we ask only that the neuron be tuned. The condition on p is quite weak (and can be relaxed further); we are simply ensuring that we are sampling from all of X, and in particular, the part of X on which the cell is tuned. Next we have the rate of convergence; in the following, the "approximation error" measures the difference between the true information divergence M_φ(V) and its kernel-smoothed version, defined in the obvious way.

Theorem 6 (λ and α for K̂_φ). If the approximation error is of order a_N, r > 1, then the jackknifed kernel or histogram versions of K̂_φ, with bandwidth N^s, −1 < s < −1/r, converge at an N^(−1/2) rate. Moreover, N^(1/2)(K̂_φ − K) is asymptotically normal, with mean zero and easily calculable α(K̂_φ).
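A minimal, hypothetical version of this divergence-based estimator can be sketched with histograms in place of kernels and a brute-force search over candidate directions. This is our illustration only, not the paper's algorithm: there is no jackknife correction, the stimulus is restricted to two dimensions, and all function names are ours:

```python
import numpy as np

def divergence_objective(X, s, v, bins=8):
    """Histogram version of M_N(V): KL divergence between the binned
    spike-conditional and marginal distributions of the projection <v, x>."""
    z = np.asarray(X, float) @ np.asarray(v, float)
    edges = np.histogram_bin_edges(z, bins=bins)
    n_spk, _ = np.histogram(z[np.asarray(s) > 0], bins=edges)
    n_all, _ = np.histogram(z, bins=edges)
    p, q = n_spk / n_spk.sum(), n_all / n_all.sum()
    mask = p > 0  # spike bins are a subset of stimulus bins, so q > 0 here
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def estimate_k_phi(X, s, n_angles=180):
    """Grid-search argmax_V M_N(V) over 1-D directions in the plane."""
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    scores = [divergence_objective(X, s, v) for v in dirs]
    return dirs[int(np.argmax(scores))]
```

On simulated data whose spikes depend only on one stimulus coordinate, the recovered direction concentrates on that coordinate, consistent with the consistency claim of Theorem 5.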
The methods follow, e.g., example 3.2.12 of [5]: basically, a generalization of the classical theorem on the asymptotic distribution of the maximum likelihood estimator in regular parametric families. Again, see the longer draft at http://www.cns.nyu.edu/~liam for the precise definition of the approximation error and the full expression for α(K̂_φ). We have developed an algorithm for the computation of argmax_V M_N(V), and numerical results show that K̂_φ can be competitive with spike-triggered average or covariance techniques even in cases in which β(K̂_STA) and β(K̂_CORR) are zero. We present a brief application of K̂_φ in section 4.

3 Lower bounds

Lower bounds for convergence rates provide a rigorous measure of the difficulty of a given estimation problem, or of the efficiency of a given estimator. We give a few such results below. The first lower bound is local, in the sense that we assume that the true parameter is known a priori to be in some small neighborhood of parameter space. For simplicity, assume for the moment that p(x) is radially symmetric. Recall that the Hellinger metric between any two densities is defined as (half of) the L₂ distance between the square roots of the densities.

Theorem 7 (Local (Hellinger) lower bound). For simplicity, let p be standard normal. For any fixed differentiable f, uniformly bounded away from 0 and 1 and with a uniformly bounded derivative f′, and any Hellinger ball F around the true parameter (f, K),

liminf_{N→∞} N^(1/2) inf_{K̂} sup_F E(Error(K̂)) ≥ σ(p) (E_p(|f′|² / f(1 − f)))^(−1/2) √(dim X − 1).

The second infimum above is taken over all possible estimators K̂. The right-hand side plays the role of the inverse Fisher information in the Cramer-Rao bound and is derived using a similarly local analysis; see [2] for details. Global bounds are more subtle.
We want to prove something like:

liminf_{N→∞} a_N inf_{K̂} sup_{F(ε)} E(Error(K̂)) ≥ C(ε),

where F(ε) is some large parameter set containing, say, all K and all f for which some relevant measure of tuning is greater than ε, a_N is the corresponding convergence rate, and C(ε) plays the role of α(K̂) from the previous sections. So far, our most interesting results in this direction are negative:

Theorem 8 (Information divergences are poor indices of K-difficulty). Let F(ε) be the set of all (K, f) for which the φ-divergence ("information") between x and spike is greater than ε, that is, D_φ(p(Kx, spike); p(spike)p(Kx)) > ε. Then, for ε > 0 small enough, for any putative convergence rate a_N,

liminf_{N→∞} a_N inf_{K̂} sup_{F(ε)} E(Error(K̂)) = ∞.

In other words, strictly information-theoretic measures of tuning do not provide a useful index of the difficulty of the K-learning problem; the intuitive explanation of this result is that purely measure-theoretic distance functions, like φ-divergences, ignore the topological and vector space structure of the underlying probability measures, and it is exactly this structure that determines the convergence rates of any efficient K-estimator. To put it more simply, the learnability of K depends on the smoothness of f, just as we saw in the last section.

4 Application to primary motor cortex data

We have applied these new spike-triggered analysis techniques to data collected in the primary motor cortex (MI) of awake, behaving monkeys in an effort to elucidate the neural encoding of time-varying hand position signals in MI. This analysis has led to several interesting findings on the encoding properties of these neurons, with immediate applications to the design of neural prosthetic devices. Here, we have room to mention only one result: the relevant K for MI cells appears to be largely one-dimensional.
In other words, the conditional firing rate of these neurons, given a specific time-varying hand path, is well captured by the following model (Fig. 1):

p(spike|x⃗) ≡ f(⟨k₀, x⃗⟩),

where x⃗ represents the two-dimensional hand position signal in a temporal neighborhood of the current time, k₀ is a cell-specific affine functional, and f is a cell-independent scalar function.

Figure 1: Example f̂(Kx⃗) functions, computed from two different MI cells, with rank K ≡ 2; the x- and y-axes index ⟨k₁, x⃗⟩ and ⟨k₂, x⃗⟩, respectively, while the color axis indicates the value of f̂ (the conditional firing rate given Kx⃗), in Hz. The scale on the x- and y-axes is arbitrary and has been omitted. K̂ was computed using the φ-divergence estimator, and f̂ was estimated using an adaptive kernel within the circular region shown (where sufficient data was available for reliable estimates). Note that the contours of this function are approximately linear; that is, f̂(Kx⃗) ≈ f₀(⟨k₀, x⃗⟩), where k₀ is the vector orthogonal to the contour lines and f₀ is a suitably chosen scalar function on the line.

Acknowledgements

We thank the Simoncelli lab for interesting discussions, and N. Rust and T. Sharpee for preliminary discussions of [4]. The MI experiments were done with M. Fellows, N. Hatsopoulos, and J. Donoghue. LP is supported by a HHMI predoctoral fellowship.

References

[1] Chichilnisky, E. Network 12: 199-213 (2001).
[2] Gill, R. & Levit, B. Bernoulli, 1/2: 59-79 (1995).
[3] Schwartz, O., Chichilnisky, E. & Simoncelli, E. NIPS 14 (2002).
[4] Sharpee, T., Bialek, W. & Rust, N. This volume (2003).
[5] van der Vaart, A. & Wellner, J. Weak convergence and empirical processes. Springer-Verlag, New York (1996).
Learning to Take Concurrent Actions Khashayar Rohanimanesh Department of Computer Science University of Massachusetts Amherst, MA 01003 khash@cs.umass.edu Sridhar Mahadevan Department of Computer Science University of Massachusetts Amherst, MA 01003 mahadeva@cs.umass.edu Abstract We investigate a general semi-Markov Decision Process (SMDP) framework for modeling concurrent decision making, where agents learn optimal plans over concurrent temporally extended actions. We introduce three types of parallel termination schemes – all, any and continue – and theoretically and experimentally compare them. 1 Introduction We investigate a general framework for modeling concurrent actions. The notion of concurrent action is formalized in a general way, to capture both situations where a single agent can execute multiple parallel processes, as well as the multi-agent case where many agents act in parallel. Concurrency clearly allows agents to achieve goals more quickly: in making breakfast, we interleave making toast and coffee with other activities such as getting milk; in driving, we search for road signs while controlling the wheel, accelerator and brakes. Most previous work on concurrency has focused on parallelizing primitive (unit step) actions. Reiter developed axioms for concurrent planning using the situation calculus framework [4]. Knoblock [3] and Boutilier [1] modify the STRIPS representation of actions to allow for concurrent actions. These approaches assume deterministic effects. Prior work in decision-theoretic planning includes work on multi-dimensional vector action spaces [2], and models based on dynamic merging of multiple MDPs [6]. There is also a massive literature on concurrent processes, dynamic logic, and temporal logic. Parts of these lines of research deal with the specification and synthesis of concurrent actions, including probabilistic ones [8]. In contrast, we focus on parallelizing temporally extended actions. 
The concurrency framework described below significantly extends our previous work [5]. We provide a detailed analysis of three termination schemes for composing parallel action structures. The three schemes – any, all, and continue – are illustrated in Figure 1. We characterize the class of policies under each scheme. We also theoretically compare the optimality of the concurrent policies under each scheme with that of the typical sequential case. The theoretical results are complemented by an experimental study, which illustrates the trade-offs between optimality and convergence speed, and the advantages of concurrency over sequentiality.

2 Concurrent Action Model

Building on SMDPs, we introduce the Concurrent Action Model (CAM) (S, A, T, R), where S is a set of states, A is a set of primary actions, T is a transition probability distribution S × ℘(A) × S × N → [0, 1], where ℘(A) is the power-set of the primary actions and N is the set of natural numbers, and R is the reward function mapping S → ℝ. Here, a concurrent action is simply represented as a set of primary actions (hereafter called a multi-action), where each primary action is either a single-step action, or a temporally extended action (e.g., modeled as a closed-loop policy over single-step actions [7]). We denote the set of multi-actions that can be executed in a state s by A(s). In practice, this function can capture resource constraints that limit how many actions an agent can execute in parallel. Thus, the transition probability distribution in practice may be defined over a much smaller subset than the power-set of primary actions (e.g., in the grid world example in Figure 3, the power set has size > 100, but the set of applicable concurrent actions is only ≈ 10).
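A minimal structural sketch of the CAM tuple might look as follows. The field names are ours and purely illustrative; the paper does not prescribe an implementation. A multi-action is a frozenset of primary-action names, and `applicable` plays the role of A(s), encoding the resource constraints that keep the legal multi-actions far fewer than the full power set:

```python
from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, Set, Tuple

@dataclass
class CAM:
    """Sketch of the Concurrent Action Model (S, A, T, R)."""
    states: Set[str]
    primary_actions: Set[str]
    applicable: Callable[[str], Set[FrozenSet[str]]]  # A(s)
    reward: Callable[[str], float]                    # R : S -> R
    # T(s, multi_action) -> {(s', k): prob}, k = duration in steps
    transition: Callable[[str, FrozenSet[str]], Dict[Tuple[str, int], float]] = None
```
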
Figure 1: Left: the Tany termination scheme. Middle: the Tall termination scheme. Right: the Tcontinue termination scheme.

A principal goal of this paper is to understand how to define decision epochs for concurrent processes, since the primary actions in a multi-action may not terminate at the same time. The event of termination of a multi-action can be defined in many ways. Three termination schemes are illustrated in Figure 1. In the Tany termination scheme (Figure 1, left), the next decision epoch is when the first primary action within the multi-action currently being executed terminates, where the rest of the primary actions that did not terminate naturally are interrupted (the notion of interruption is similar to [7]). In the Tall termination scheme (Figure 1, middle), the next decision epoch is the earliest time at which all the primary actions within the multi-action currently being executed have terminated. We can design other termination schemes by combining Tany and Tall: for example, another termination scheme called continue is one that always terminates based on the Tany termination scheme, but lets those primary actions that did not terminate naturally continue running, while initiating new primary actions if they are going to be useful (Figure 1, right). A deterministic Markovian (memoryless) policy in CAMs is defined as the mapping π : S → ℘(A). Note that even though the mapping is defined independent of the termination scheme, the behavior of a multi-action policy depends on the termination scheme that is used in the model.
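The two basic decision-epoch rules, and the continue-set that the third scheme carries forward, can be sketched as simple predicates. This is an illustrative encoding of ours: `done` is a hypothetical map from each primary action in the running multi-action to whether it has terminated naturally by the current step:

```python
def t_any(done):
    """Tany: a new decision epoch as soon as any primary action terminates;
    the still-running actions are interrupted."""
    return any(done.values())

def t_all(done):
    """Tall: a new decision epoch only once every primary action in the
    multi-action has terminated naturally."""
    return all(done.values())

def continue_set(done):
    """Under the continue scheme we stop at the Tany epoch, but carry the
    still-running primary actions forward as the continue-set h_t."""
    return frozenset(a for a, d in done.items() if not d)
```
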
To illustrate this, let ⟨π, τ⟩ (called a policy-termination construct) denote the process of executing the multi-action policy π using the termination scheme τ ∈ {Tany, Tall}. To simplify notation, we only use this form whenever we want to explicitly point out what termination scheme is being used for executing the policy π. For a given Markovian policy, we can write the value of that policy in an arbitrary state given the termination mechanism used in the model. Let Θ(π, st, τ) denote the event of initiating the multi-action π(st) at time t and terminating it according to the τ ∈ {Tany, Tall} termination scheme. Also let π∗τ denote the optimal multi-action policy within the space of policies over multi-actions that terminate according to the τ ∈ {Tany, Tall} termination scheme. To simplify notation, we may alternatively use ∗τ to denote optimality with respect to the τ termination scheme. Then the optimal value function can be written as:

V∗τ(st) = E{ rt+1 + γ rt+2 + ... + γ^(k−1) rt+k + γ^k max_{a ∈ A(st+k)} Q∗τ(st+k, a) | Θ(π∗τ, st, τ) },

where Q∗τ(st+k, a) denotes the multi-action value of executing a in state st+k (terminated using τ) and following the optimal policy π∗τ thereafter. The policy associated with the continue termination scheme is a history-dependent policy, since for a given state st, the continue policy will select a multi-action such that it includes the set of all the primary actions of the multi-action executed in the previous decision epoch that did not terminate naturally in the current state st (we refer to this set as the continue-set, represented by ht). The continue policy is defined as the mapping πcont : S × H → ℘(A), in which H is the set of continue-sets ht. Note that the value function for the continue policy should be defined over both the state st and the continue-set ht (represented by ≺st, ht≻), i.e., V^πcont(≺st, ht≻).
Let the function A(st, ht) return the set of multi-actions that can be executed in state st that include the continuing primary actions in ht. Then the continue policy is formally defined as:

πcont(≺st, ht≻) = argmax_{a ∈ A(st, ht)} Q^πcont(≺st, ht≻, a).

To illustrate this, assume that the current state is st and the multi-action at = {a1, a2, a3, a4} is executed in state st. Also, assume that the primary action a1 is the first action that terminates, after k steps in state st+k. According to the definition of the continue termination scheme (which terminates based on Tany), the multi-action at is terminated at time t + k and we need to select a new multi-action to execute in state st+k (with the continue-set ht+k = {a2, a3, a4}). The continue policy will select the best multi-action at+k that includes the primary actions {a2, a3, a4}, since they did not terminate in state st+k (see Figure 1, right).

3 Theoretical Results

In this section we present some of our theoretical results comparing the optimality of various policies under the different termination schemes introduced in the previous section. In all of these theorems we use the partial ordering relation V^π1 ≤ V^π2 ↔ π1 ≤ π2 in order to compare different policies. For lack of space, we abbreviate the proofs. Note that in Theorems 1 and 3, which compare the continue policy with the π∗any and π∗all policies, the value function is written over the pair ≺st, ht≻ to be consistent with the definition of the continue policy. This does not influence the original definition of the value function for the optimal policies in the Tany and Tall termination schemes, since they are independent of the continue-set ht. First, we compare the optimal multi-action policies based on the Tany termination scheme and the continue policy.

Theorem 1: For every state st ∈ S, and all continue-sets ht ∈ H, V^πcont(≺st, ht≻) ≤ V∗any(≺st, ht≻).
Proof: By writing the value function definition for each case we have:

V^πcont(≺st, ht≻) = max_{a ∈ A(st, ht)} Q^πcont(≺st, ht≻, a)
≤ max_{a ∈ A(st)} Q^πcont(≺st, ht≻, a)
≤ max_{a ∈ A(st)} Q∗any(≺st, ht≻, a) = V∗any(≺st, ht≻).

The inequality holds since the maximization in πcont is over a smaller set (i.e., A(st, ht)), which is a subset of the larger set A(st) that is maximized over in the π∗any case. Next, we show that the optimal plans with multi-actions that terminate according to the Tany termination scheme are better compared to the optimal plans with multi-actions that terminate according to the Tall termination scheme:

Theorem 2: For every state s ∈ S, V∗all(s) ≤ V∗any(s).

Proof: The proof is based on the following lemma, which states that if we alter the execution of the optimal multi-action policy based on Tall (i.e., π∗all) in such a way that at every decision epoch the next multi-action is still selected from π∗all, but we terminate it based on Tany, then the new policy-termination construct, represented by ⟨∗all, any⟩, is better than the π∗all policy. Intuitively this makes sense, since if we interrupt π∗all(s) when the first primary action ai ∈ a = π∗all(s) terminates in some future state s′, then due to the optimality of π∗all, executing π∗all(s′) is always better than or equal to continuing some other policy such as the one in progress (i.e., π∗all(s)). Note that the proof is not as simple as in the first theorem, since the two different policies discussed in this theorem (i.e., π∗any and π∗all) are not being executed using the same termination method.

Lemma 1: For every state s ∈ S, V∗all(s) ≤ V^⟨∗all,any⟩(s).

Proof: Let V∗all_{n,any}(s) denote the value of following the optimal π∗all policy in state s, where for the first n decision epochs we use the Tany termination scheme and for the rest we use the Tall termination scheme. By induction on n, we can show that V∗all(s) ≤ V∗all_{n,any}(s), ∀s ∈ S and for all n.
This suggests that if we always terminate a multi-action π∗all(st) according to the Tany termination scheme, we achieve a better return; or, mathematically, V∗all(s) ≤ lim_{n→∞} V∗all_{n,any}(s) = V^⟨∗all,any⟩(s). Using Lemma 1, and the optimality of π∗any in the space of policies with termination scheme Tany, it follows that V∗all(s) ≤ V^⟨∗all,any⟩(s) ≤ V∗any(s). Next, we show that if we execute the continue policy, in which at any decision epoch we always execute the best set of primary actions along with those that were executed in the previous decision epoch and have not terminated yet, we achieve a better return compared to the case in which we execute the best set of primary actions but always wait until all of the primary actions terminate before making a new decision:

Theorem 3: For every state st ∈ S, and all continue-sets ht ∈ H, V∗all(≺st, ht≻) ≤ V^πcont(≺st, ht≻).

Proof: In π∗all policies, multi-actions are executed until all of the primary actions of that multi-action terminate. The continue policy, however, may also initiate new useful primary actions in addition to those already running, which may achieve a better return. Let V∗all_{n,cont}(≺st, ht≻) denote the value of the altered policy π∗all that works as follows: for a given state and continue-set ≺st, ht≻, the policy π∗all(≺st, ht≻) is executed, while for the first n decision epochs we use the continue termination scheme (which means terminating according to Tany, and selecting the next multi-action according to the continue policy) and for the rest we use the Tall termination scheme. By induction on n, it can be shown that V∗all(≺st, ht≻) ≤ V∗all_{n,cont}(≺st, ht≻) for all n. This suggests that as we increase n, the altered policy behaves more like the continue policy, and thus in the limit we have V∗all(≺st, ht≻) ≤ lim_{n→∞} V∗all_{n,cont}(≺st, ht≻) = V^πcont(≺st, ht≻), which proves the theorem.
Finally, we show that the optimal multi-action policies based on the Tall termination scheme are as good as the case where the agent always executes a single primary action at a time, as is the case in standard SMDPs. Note that this theorem does not state that concurrent plans are always better than sequential ones; it simply says that if, in a problem, the sequential execution of the primary actions is the best policy, CAM is able to represent and find that policy. Let π∗seq represent the optimal policy in the sequential case, where only one primary action can be executed at a time:

Theorem 4: For every state s ∈ S, V∗seq(s) ≤ V∗all(s), in which V∗seq(s) is the value of the optimal policy when the primary actions are executed one at a time sequentially.

Proof: It suffices to show that sequential policies are within the space of concurrent policies. This holds since a single primary action can be considered as a multi-action containing only one primary action, whose termination is consistent with either of the multi-action termination schemes (i.e., in the sequential case the Tany and Tall termination schemes coincide). Corollary 1 summarizes our theoretical results. It shows how different policies in a concurrent action model using different termination schemes compare to each other in terms of optimality.

Corollary 1: In a concurrent action model with the set of termination schemes {Tany, Tall, continue}, the following partial ordering holds among the optimal policy based on Tany, the optimal policy based on Tall, the continue policy, and the optimal sequential policy: π∗seq ≤ π∗all ≤ πcont ≤ π∗any.

Proof: This follows immediately from the above theorems. Figure 2 visually describes the summary of results that we presented in Corollary 1.
According to this figure, the optimal multi-action policies based on Tany and Tall, and also continue multi-action policies, dominate (with respect to the partial ordering relation defined over policies) the optimal policies over the sequential case. Furthermore, policies based on continue multi-actions dominate the optimal multi-action policies based on the Tall termination scheme, while themselves being dominated by the optimal multi-action policies based on the Tany termination scheme.

Figure 2: Comparison of policies over multi-actions and sequential primary actions using different termination schemes.

4 Experimental Results

In this section we present experimental results using a grid world task comparing the various termination schemes (see Figure 3). Each hallway connects two rooms, and has a door with two locks. An agent has to retrieve two keys and hold both keys at the same time in order to open both locks. The process of picking up keys is modeled as a temporally extended action that takes a different amount of time for each key. Moreover, keys cannot be held indefinitely, since the agent may drop a key occasionally. Therefore the agent needs to find an efficient solution for picking up the keys in parallel with navigation in order to act optimally. This is an episodic task, in which at the beginning of each episode the agent is placed in a fixed position (upper left corner) and the goal of the agent is to navigate to a fixed goal position (hallway H3).
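The factored state space used for this task (position × key1-state × key2-state, with 104, 11, and 7 values respectively, per the experimental setup) can be enumerated directly. A small illustrative sketch of ours, just to make the state-space size concrete:

```python
from itertools import product

# Enumerate the factored state space: position (104 cells) x
# key1-state (11 values) x key2-state (7 values).
positions = range(104)
key1_states = range(11)
key2_states = range(7)
states = list(product(positions, key1_states, key2_states))
# 104 * 11 * 7 = 8008 joint states
```
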
Figure 3: A navigation problem that requires concurrent plans. There are two locks on each door, which need to be opened simultaneously. Retrieving each key takes a different amount of time. The action set shown in the figure: 4 stochastic primitive navigation actions (Up, Down, Left and Right) that fail 10% of the time, moving randomly to one of the neighbors on failure; 3 stochastic primitive key actions (get-key, key-nop and putback-key), with each held key dropped 30% of the time; 2 multi-step key actions (pickup-key), one for each key; 8 multi-step navigation actions (to each room's 2 hallways); and one primitive no-op action.

The agent can execute two types of action concurrently: (1) navigation actions, and (2) key actions. Navigation actions include a set of one-step stochastic navigation actions (Up, Left, Down and Right) that move the agent in the corresponding direction with probability 0.9 and fail with probability 0.1. Upon failure the agent moves instead in one of the other three directions, each with probability 1/30. There is also a set of temporally extended actions defined over the one-step navigation actions that transport the agent from within the room to one of the two hallway cells leading out of the room (Figure 4 (left)). Key actions are defined to manipulate each key (get-key, putback-key, pickup-key, etc.). Among them, pickup-key is a temporally extended action (Figure 4 (right)). Note that each key has its own set of actions.
Figure 4: Left: the policy associated with one of the hallway temporally extended actions. Right: representation of the key pickup actions for each key process.

In this example, navigation actions can be executed concurrently with key actions. Actions that manipulate different keys can also be executed concurrently. However, the agent is not allowed to execute more than one navigation action, or more than one key action (from the same key action set), concurrently. In order to properly handle concurrent execution of actions, we used a factored state space defined by the state variables position (104 positions), key1-state (11 states) and key2-state (7 states). In our previous work we showed that concurrent actions form an SMDP over primitive actions [5], which turns out to hold for all the termination schemes described above. Thus, we can use SMDP Q-learning to compare concurrent policies over the different termination schemes, as well as purely sequential policy learning [7]. After each decision epoch in which the multi-action a is taken in some state s and terminates in state s′, the following update rule is used:

Q(s, a) ← Q(s, a) + α [ r + γ^k max_{a′ ∈ A(s′)} Q(s′, a′) − Q(s, a) ],

where k denotes the number of time steps between initiation of the multi-action a at state s and its termination at state s′, and r denotes the cumulative discounted reward over this period. The agent is punished by −1 for each primitive action. Figure 5 (left) compares the number of primitive actions taken until success, and Figure 5 (right) shows the median number of decision epochs per trial, where for trial n it is the median over all trials from 1 to n. These data are averaged over 10 episodes, each consisting of 500,000 trials. As shown in Figure 5 (left), concurrent actions under any termination scheme yield a faster plan than sequential execution. Moreover, the policies learned based on Tany (i.e., both π∗any and πcont) are also faster than those based on Tall.
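The SMDP Q-learning update above can be sketched in a few lines, assuming a tabular Q function stored in a dictionary; the interface below is our own invention, not the paper's implementation:

```python
def smdp_q_update(Q, s, a, r, k, s_next, actions_next, alpha=0.1, gamma=0.95):
    """Apply Q(s,a) <- Q(s,a) + alpha*(r + gamma^k * max_a' Q(s',a') - Q(s,a)).

    r is the cumulative discounted reward accrued during the k time steps
    the multi-action `a` ran; actions_next enumerates A(s_next).
    Unseen state-action pairs default to 0.
    """
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions_next)
    q = Q.get((s, a), 0.0)
    Q[(s, a)] = q + alpha * (r + gamma ** k * best_next - q)
    return Q[(s, a)]
```

The γ^k factor is what distinguishes the SMDP update from ordinary Q-learning: a multi-action that ran for k steps discounts the bootstrap target accordingly.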
Also, π∗any achieves a better policy than πcont, although the difference is small. We conjecture that sequential execution and Tall converge faster than Tany due to the frequency with which multi-actions are terminated. As shown in Figure 5 (right), Tall makes fewer decisions than Tany. This is intuitive, since Tall terminates only when all of the primary actions in a multi-action are completed, and hence it involves less interruption than learning based on Tany. Note that πcont converges faster than π∗any and is nearly as good as Tany.

Figure 5: Left: moving median of the number of steps to the goal. Right: moving median of the number of multi-action level decision epochs taken to the goal.

We can think of πcont as a blend of Tall and Tany: even though it uses the Tany termination scheme, it continues executing the primary actions that did not terminate naturally when the first primary action terminated, making it similar to Tall.

5 Future Work

Even though specifying the set A(s) of applicable multi-actions might significantly reduce the set of choices, we may still need additional mechanisms for efficiently searching the space of multi-actions that can run in parallel. We can also further exploit the hierarchical structure of multi-actions to compile them into an effective policy over primary actions. These are some of the practical issues that we will investigate in future work.

References

[1] Craig Boutilier and Ronen Brafman. Planning with concurrent interacting actions.
In Proceedings of the Fourteenth National Conference on Artificial Intelligence (AAAI '97), 1997.
[2] P. Cichosz. Learning multidimensional control actions from delayed reinforcements. In Eighth International Symposium on System-Modelling-Control (SMC-8), Zakopane, Poland, 1995.
[3] C. A. Knoblock. Generating parallel execution plans with a partial-order planner. In Proceedings of the Second International Conference on Artificial Intelligence Planning Systems, Chicago, IL, 1994.
[4] Ray Reiter. Natural actions, concurrency and continuous time in the situation calculus. In Principles of Knowledge Representation and Reasoning: Proceedings of the Fifth International Conference (KR'96), Cambridge, MA, 1996.
[5] Khashayar Rohanimanesh and Sridhar Mahadevan. Decision-theoretic planning with concurrent temporally extended actions. In Proceedings of the 17th Conference on Uncertainty in Artificial Intelligence, 2001.
[6] S. Singh and David Cohn. How to dynamically merge Markov decision processes. In Proceedings of NIPS 11, 1998.
[7] R. Sutton, D. Precup, and S. Singh. Between MDPs and Semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, pages 181–211, 1999.
[8] Glynn Winskel. Topics in Concurrency: Part II lecture notes. Computer Science course at the University of Cambridge, 2002.
2002
Extracting Relevant Structures with Side Information Gal Chechik and Naftali Tishby {ggal,tishby}@cs.huji.ac.il School of Computer Science and Engineering and The Interdisciplinary Center for Neural Computation, The Hebrew University of Jerusalem, 91904, Israel

Abstract

The problem of extracting the relevant aspects of data, in the face of multiple conflicting structures, is inherent to the modeling of complex data. Extracting structure in one random variable that is relevant for another variable has recently been principally addressed via the information bottleneck method [15]. However, such auxiliary variables often contain more information than is actually required, due to structures that are irrelevant for the task. In many other cases it is in fact easier to specify what is irrelevant for the task at hand than what is relevant. Identification of the relevant structures can thus be considerably improved by also minimizing the information about another, irrelevant, variable. In this paper we give a general formulation of this problem and derive its formal, as well as algorithmic, solution. Its operation is demonstrated on a synthetic example and on two real-world problems in the context of text categorization and face images. While the original information bottleneck problem is related to rate distortion theory, with the distortion measure replaced by the relevant information, extracting relevant features while removing irrelevant ones is related to rate distortion with side information.

1 Introduction

A fundamental goal of machine learning is to find regular structures in given empirical data, and to use them to construct predictive or comprehensible models. This general goal, unfortunately, is very ill defined, as many data sets contain alternative, often conflicting, underlying structures.
For example, documents may be classified either by subject or by writing style; spoken words can be labeled by their meaning or by the identity of the speaker; proteins can be classified by their structure or by their function; all are valid alternatives. Which of these alternative structures is "relevant" is often implicit in the problem formulation. The problem of identifying "the" relevant structures is commonly addressed in supervised learning tasks by providing a "relevant" label to the data and selecting features that are discriminative with respect to this label. An information theoretic generalization of this supervised approach has been proposed in [9, 15] through the information bottleneck method (IB). In this approach, relevance is introduced through another random variable (as is the label in supervised learning), and the goal is to compress one (the source) variable while maintaining as much information as possible about the auxiliary (relevance) variable. This framework has proven powerful for numerous applications, such as clustering the objects of sentences with respect to the verbs [9], documents with respect to their terms [1, 6, 14], genes with respect to tissues [8, 11], and stimuli with respect to spike patterns [10]. An important condition for this approach to work is that the auxiliary variable indeed corresponds to the task. In many situations, however, such a "pure" variable is not available; the variable may in fact contain alternative and even conflicting structures. In this paper we show that this general and common problem can be alleviated by providing "negative information", i.e., information about "unimportant", or irrelevant, aspects of the data that can interfere with the desired structure during learning. As an illustration, consider a simple nonlinear regression problem. Two variables x and y are related through a functional form y = f(x) + ξ, where f is in some known function class and ξ is noise with some distribution that depends on x.
When given a sample of (x, y) pairs with the goal of extracting the relevant dependence f(x), the noise ξ, which may contain information about x and thus interfere with extracting f, is an irrelevant variable. Knowing the joint distribution of (x, ξ) can of course improve the regression result. A more "real life" example can be found in the analysis of gene expression data. Such data, as generated by DNA-chip technology, can be considered as an empirical joint distribution of gene expression levels and different tissues, where the tissues are taken from different biological conditions and pathologies. The search for expressed genes that testify to the existence of a pathology may be obscured by genetic correlations that also exist in other conditions. Here again, a sample of irrelevant expression data, taken for instance from a healthy population, can enable clustering analysis to focus on the pathological features only, and to ignore spurious structures. These two examples, and numerous others, are all instantiations of a common problem: in order to better extract the relevant structures, information about the irrelevant components of the data should be incorporated. Naturally, various solutions have been suggested for this basic problem in many different contexts (e.g., spectral subtraction, weighted regression analysis). The current paper presents a general unified information theoretic framework for such problems, extending the original information bottleneck variational problem to deal with discriminative tasks of this nature, by observing its analogy with rate distortion theory with side information.

2 Information Theoretic Formulation

To formalize the problem of extracting relevant structures, consider first three categorical variables X, Y+ and Y− whose co-occurrence distributions are known. Our goal is to uncover structures in p(x, y+) that do not exist in p(x, y−). The distribution p(x, y+) may contain several conflicting underlying structures, some of which may also exist in p(x, y−).
These variables stand, for example, for a set of terms X, a set of documents Y+ whose structure we seek, and an additional set of documents Y−; or for a set of genes and two sets of tissues with different biological conditions. In all these examples Y+ and Y− are conditionally independent given X. We thus make the assumption that the joint distribution factorizes as p(x, y+, y−) = p(x) p(y+|x) p(y−|x). The relationship between the variables can be expressed by a Venn diagram (Figure 1A), where the area of each circle corresponds to the entropy of a variable (see e.g. [2] p. 20 and [3] p. 50 for discussion of this type of diagram) and the intersection of two circles corresponds to their mutual information. The mutual information of two random variables is the familiar symmetric functional of their joint distribution,

I(X; Y) = Σ_{x,y} p(x, y) log [ p(x, y) / ( p(x) p(y) ) ].

Figure 1: A. A Venn diagram illustrating the relations between the entropies and mutual informations of the variables X, Y+, Y−. The area of each circle corresponds to the entropy of a variable, while the intersection of two circles corresponds to their mutual information. As Y+ and Y− are independent given X, their mutual information vanishes when X is known, thus all their overlap is included in the circle of X. B. A graphical model representation of IB with side information. Given the three variables X, Y+, Y−, we seek a compact stochastic representation T of X which preserves information about Y+ but removes information about Y−. In this graph Y+ and Y− are indeed conditionally independent given X.

To identify the relevant structures in the joint distribution p(x, y+), we aim to extract a compact representation of the variable X with minimal loss of mutual information about the relevant variable Y+, and at the same time with maximal loss of information about the irrelevance variable Y−.
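The mutual information functional above can be computed directly from a joint probability table; the NumPy helper below is purely illustrative:

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) = sum_{x,y} p(x,y) log[ p(x,y) / (p(x) p(y)) ], in nats."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal p(x), column vector
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal p(y), row vector
    prod = p_x @ p_y                        # p(x) p(y), same shape as p_xy
    mask = p_xy > 0                         # use the 0 log 0 = 0 convention
    return float((p_xy[mask] * np.log(p_xy[mask] / prod[mask])).sum())
```

I(X;Y) vanishes exactly when the joint factorizes, and equals log 2 for a perfectly correlated pair of binary variables.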
The goal of information bottleneck with side information (IBSI) is therefore to find a stochastic map of X to a new variable T, p(t|x), that maximizes its mutual information with Y+ and minimizes its mutual information with Y−. In general one can achieve this goal perfectly only asymptotically, and the finite case leads to a suboptimal compression, an example of which is depicted in the blue region in Figure 1. These constraints can be cast into a single variational functional,

L = I(X; T) − β ( I(T; Y+) − γ I(T; Y−) ) ,   (1)

where the Lagrange parameter β determines the tradeoff between compression and information extraction, while the parameter γ determines the tradeoff between preservation of information about the relevant variable Y+ and loss of information about the irrelevant one Y−. In some applications, such as communication, the value of γ may be determined by the relative cost of transmitting the information about Y− by other means. The information bottleneck variational problem, introduced in [15], is a special case of the current variational problem with γ = 0, namely, no side or irrelevant information available. In that case only the distributions p(t|x), p(t) and p(y+|t) are determined.

3 Solution Characterization

The complete Lagrangian of this constrained optimization problem is given by

L̃ = I(X; T) − β ( I(T; Y+) − γ I(T; Y−) ) + Σ_x λ(x) Σ_t p(t|x) ,   (2)

where the λ(x) are the normalization Lagrange multipliers. Here, the minimization is performed with respect to the stochastic mapping p(t|x), taking into account its probabilistic relations to p(t), p(y+|t) and p(y−|t). Interestingly, performing the minimization over p(t|x), p(t), p(y+|t), p(y−|t) as independent variables leads to the same set of self-consistent equations.

Proposition 1 The extrema of L̃ obey the following self-consistent equations:

p(t|x) = [ p(t) / Z(x, β) ] exp( −β [ D_KL[p(y+|x) ‖ p(y+|t)] − γ D_KL[p(y−|x) ‖ p(y−|t)] ] ) ,   (3)
p(y+|t) = (1 / p(t)) Σ_x p(y+|x) p(t|x) p(x) ,
p(y−|t) = (1 / p(t)) Σ_x p(y−|x) p(t|x) p(x) ,

where Z(x, β) = Σ_t p(t) exp( −β [ D_KL[p(y+|x) ‖ p(y+|t)] − γ D_KL[p(y−|x) ‖ p(y−|t)] ] ) is a normalization factor and
D_KL[p ‖ q] = Σ_y p(y) log ( p(y) / q(y) ) is the Kullback-Leibler divergence [2].

Proof: Following the Markovian relation T ↔ X ↔ (Y+, Y−), we write p(y−|t) = (1/p(t)) Σ_x p(y−|x) p(t|x) p(x) with p(t) = Σ_x p(t|x) p(x), and obtain for the second term of Eq. 2

∂ I(T; Y−) / ∂ p(t|x) = p(x) Σ_{y−} p(y−|x) log [ p(y−|t) / p(y−) ]
= p(x) ( D_KL[p(y−|x) ‖ p(y−)] − D_KL[p(y−|x) ‖ p(y−|t)] ) .   (4)

Similar differentiation of the other terms yields

∂ L̃ / ∂ p(t|x) = p(x) ( log [ p(t|x) / p(t) ] + β D_KL[p(y+|x) ‖ p(y+|t)] − β γ D_KL[p(y−|x) ‖ p(y−|t)] + λ̃(x) ) ,   (5)

where λ̃(x) collects all terms independent of t. Equating the derivative to zero then yields the first equation of Proposition 1.

The formal solutions of the above variational problem have an exponential form which is a natural generalization of the solution of the original IB problem. As in the original IB, when β goes to infinity the Lagrangian is dominated by the term I(T; Y+) − γ I(T; Y−), and the exponents collapse to a hard clustering solution, where the p(t|x) become binary cluster membership probabilities. Further intuition about the operation of IBSI can be obtained by rewriting the second term in Eq. 2: for γ = 1 and a fixed level of I(X; T), IBSI operates to extract a compact representation that maximizes the mean log-likelihood ratio ⟨ log [ p(y+|t) / p(y−|t) ] ⟩, measuring the discriminability between the distributions p(y+|t) and p(y−|t).

The above setup can be extended to the case of multiple variables Y+_1, …, Y+_n on which multi-information should be preserved and variables Y−_1, …, Y−_m on which multi-information should be removed, as discussed in [8]. This yields

p(t|x) = [ p(t) / Z(x, β) ] exp( −β [ Σ_i D_KL[p(y+_i|x) ‖ p(y+_i|t)] − Σ_j γ_j D_KL[p(y−_j|x) ‖ p(y−_j|t)] ] ) ,   (6)

which can be solved together with the other self-consistent conditions, similarly to Eq. 3.

4 Relation to Rate Distortion Theory with Side Information

The problem formulated above is related to the theory of rate distortion with side information ([17], [2] p. 439). In rate distortion theory (RDT) a source variable X is stochastically encoded into a variable T, which is decoded at the other side of a channel with some distortion. The achievable code rate R at a given distortion level D is bounded by the optimal rate, also known as the rate distortion function, R(D). The optimal encoding is determined by the stochastic map p(t|x), where the representation quantization is found by minimizing the average distortion; for the optimal code, I(X; T) = R(D). This rate can be improved by utilizing side information in the form of another variable Y that is known at both ends of the channel. In this case an improved rate can be achieved by avoiding sending information about X that can be extracted from Y: the rate distortion function with side information has the lower bound R_Y(D) ≥ I(X; T) − I(T; Y), where T is the optimal quantization of X under the distortion constraint (see [17] for details). In the information bottleneck framework the average distortion is replaced by the mutual information about the relevant variable, while the rate-distortion function is turned into a convex curve that characterizes the complexity of the relation between the variables (see [15, 13]). Similarly, IBSI avoids differentiating instances of X that are informative about Y+ if they also contain information about Y−. The variable Y− is analogous to the side information variable Y, while Y+ plays the role of the "informative" variable of the original IB. While the formal analogy between these problems helps in their mathematical formulation, it is important to emphasize that these are very different problems in both motivation and scope.
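In practice, the self-consistent equations of Proposition 1 can be iterated numerically for discrete distributions. The sketch below is our own reading of that iteration; the array layouts, names, and the softmax-style normalization are our assumptions, not the paper's code:

```python
import numpy as np

def kl_xt(p_y_x, p_y_t, eps=1e-12):
    """D_KL[p(y|x) || p(y|t)] for every (x, t) pair; shapes (X,Y) and (T,Y)."""
    p, q = p_y_x[:, None, :], p_y_t[None, :, :]
    return (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=-1)

def ibsi_step(p_t_x, p_x, p_yp_x, p_ym_x, beta, gamma):
    """One pass of the self-consistent equations: from the current p(t|x),
    recompute p(t), p(y+|t), p(y-|t), then the exponential update of p(t|x)."""
    p_t = p_t_x.T @ p_x                                  # p(t)
    joint = (p_t_x * p_x[:, None]).T                     # p(t, x)
    p_yp_t = joint @ p_yp_x / p_t[:, None]               # p(y+|t)
    p_ym_t = joint @ p_ym_x / p_t[:, None]               # p(y-|t)
    logits = np.log(p_t)[None, :] - beta * (
        kl_xt(p_yp_x, p_yp_t) - gamma * kl_xt(p_ym_x, p_ym_t))
    new = np.exp(logits - logits.max(axis=1, keepdims=True))
    return new / new.sum(axis=1, keepdims=True)          # normalized p(t|x)
```

Repeating `ibsi_step` to a fixed point corresponds to the iterative solution discussed in the Algorithms section; subtracting the row maximum before exponentiating is only for numerical stability and plays the role of the normalization factor Z(x, β).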
Whereas RDT with side information is a specific communication problem with some given (often arbitrary) distortion function, our problem is a general statistical non-parametric analysis technique that depends solely on the choice of the variables X, Y+ and Y−. Many different pattern recognition and discriminative learning problems can be cast into this general information theoretic framework, far beyond the original setting of RDT with side information.

5 Algorithms

The set of self-consistent equations (Eq. 3) can be solved by iterating the equations from given initial distributions, similarly to the algorithm presented for IB [15, 8], and with similar convergence proofs. Unlike the original IB equations, convergence of the algorithm is no longer always guaranteed, simply because the problem is not guaranteed to have feasible solutions for all γ values. However, there exists a non-empty set of γ values for which this algorithm is guaranteed to converge. As in the case of IB, various heuristics can be applied, such as deterministic annealing, in which increasing the parameter β is used to obtain finer clusters; greedy agglomerative hard clustering [13]; or a sequential K-means-like algorithm [12]. The latter provides a good compromise between top-down annealing and agglomerative greedy approaches and achieves excellent performance. This is the algorithm we adopted in this paper, modifying the algorithm detailed in [12] to use the target function F = I(T; Y+) − γ I(T; Y−).

6 Applications

We describe two applications of our method: a simple synthetic example, and a "real world" problem of hierarchical text categorization. We also used IBSI to extract relevant features in face images, but these results will be published elsewhere due to space considerations.

6.1 A synthetic example

To demonstrate the ability of our approach to uncover weak but interesting hidden structures in data, we designed a co-occurrence matrix containing two competing sub-structures (see Figure 2A).
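A matrix of this kind can be built, for illustration only (the paper's actual matrix is not reproduced here; sizes and weights below are arbitrary), by superimposing two block structures that partition the rows differently:

```python
import numpy as np

n_x, n_y = 8, 6
# Stronger structure (left block of columns): top half of x vs. bottom half.
strong = np.zeros((n_x, n_y))
strong[: n_x // 2, : n_y // 2] = 2.0
strong[n_x // 2:, n_y // 2:] = 2.0
# Weaker, conflicting structure (right block): even rows vs. odd rows,
# with smaller co-occurrence weights.
weak = np.zeros((n_x, n_y))
weak[0::2, : n_y // 2] = 1.0
weak[1::2, n_y // 2:] = 1.0
p_xy = np.concatenate([strong, weak], axis=1)
p_xy = p_xy / p_xy.sum()        # normalize into a joint distribution p(x, y+)
```

Clustering the rows of such a matrix is then pulled toward the heavier left-hand partition unless the right-hand one is singled out as relevant.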
For demonstration purposes, the matrix was created such that the stronger structure can be observed on the left and the weaker structure on the right. Compressing X into two clusters while preserving information about Y+ using IB (γ = 0) yields the clustering of Figure 2B, in which the upper half of the x's are all clustered together. This clustering follows from the strong structure on the left of Figure 2A. We then created a second co-occurrence matrix, to be used for identifying the relevant structure, in which the two halves of X yield similar distributions p(y−|x). Applying sequential IBSI now successfully ignores the strong but irrelevant structure and retrieves the weak structure. Importantly, this is done in an unsupervised manner, without explicitly pointing to the strong but irrelevant structure. This example was designed for demonstration purposes, and thus the irrelevant structure is strongly manifested in p(x, y+). The next example shows that our approach is also useful for real data, in which structures are much more covert.

Figure 2: Demonstration of IBSI operation. A. A joint distribution p(x, y+) that contains two distinct and conflicting structures. B. Clustering X into two clusters using the information bottleneck method separates upper and lower values of x, according to the stronger structure. C. A joint distribution p(x, y−) that contains a single structure, similar in nature to the stronger structure of p(x, y+). D. Clustering X into two clusters using IBSI successfully extracts the weaker structure in p(x, y+).

Figure 3: A. An illustration of the hierarchical 20 Newsgroups data we used. B. Categorization accuracy vs. number of chosen word clusters; IB dashed line, IBSI solid line.

6.2 Hierarchical text categorization

Text categorization is a fundamental task in information retrieval. Typically, one has to group a large set of texts into groups of homogeneous subjects.
Recently, Slonim and colleagues showed that the IB method achieves categorization that predicts manually predefined categories with great accuracy, and largely outperforms competing methods [12]. Clearly, this unsupervised task becomes more difficult when the texts have similar subjects, because alternative categories may be extracted instead of the "correct" one. This problem can be alleviated by using side information in the form of additional documents from other categories. This is specifically useful in hierarchical document categorization, in which known categories are refined by grouping documents into sub-categories [4, 16]. IBSI can be applied to this problem by operating on the term-document co-occurrence matrix while using the other top-level groups for focusing on the relevant structures. To this end, IBSI is used to identify clusters of terms that are later used to cluster a group of documents into its subgroups. While IBSI is targeted at learning structures in an unsupervised manner, we have chosen to apply it to a labelled dataset of documents in order to measure how well its results agree with manual classification. Labels are not used by our algorithms during learning and serve only to quantify performance. We used the 20 Newsgroups database collected by [7], preprocessed as described in [12]. This database consists of 20 equal-sized groups of documents, hierarchically organized according to their content (Figure 3A). We aimed to cluster documents that belong to two newsgroups from the supergroup of computer documents and have very similar subjects: comp.sys.ibm.pc.hardware and comp.sys.mac.hardware. As side information we used all documents from the supergroup of science (sci.crypt, sci.electronics, sci.med, sci.space). To demonstrate the power of IBSI we used double clustering to separate the documents into two groups.
The goal of the first clustering phase is to use IBSI to identify clusters of terms that extract the relevant structures of the data. The goal of the second clustering phase is simply to provide a quantitative measure for the quality of the features extracted in the first phase. We therefore performed the following procedure. First, the 2000 most frequent words in these documents were clustered using IBSI. Then, the word clusters were sorted by a single-cluster score D_KL[p(y+|t) ‖ p(y+)] − γ D_KL[p(y−|t) ‖ p(y−)], and the clusters with the highest score were chosen. These word clusters were then used for clustering documents. The performance of this process is evaluated by measuring the overlap of the resulting clusters with the manually classified groups. Figure 3 plots document-clustering accuracy as a function of the number of chosen word clusters. IBSI (γ > 0) is compared with the IB method (i.e., γ = 0). Using IBSI successfully improves mean clustering accuracy from about 55 percent to about 63 percent.

7 Discussion and Further Research

We have presented an information theoretic approach for extracting relevant structures from data by utilizing additional data known to share irrelevant structures with the relevant data. Naturally, the choice of side data may considerably influence the solutions obtained with IBSI, simply because using different irrelevant variables is equivalent to asking different questions about the data analysed. In practice, side data can be naturally defined in numerous applications, in particular in exploratory analysis of scientific experiments, e.g. when searching for features that characterize a disease but not healthy subjects. While the current work is based on clustering to compress the source, the notion of extracting relevance through side information can be extended to other forms of dimensionality reduction, such as non-linear embedding on low dimensional manifolds.
In particular, side information can be naturally combined with information theoretic modeling approaches such as SDR [5]. Our preliminary results with this approach are very promising.

Acknowledgements

We thank Amir Globerson, Noam Slonim, Israel Nelken and Nir Friedman for helpful discussions. G.C. is supported by a grant from the Ministry of Science, Israel.

References

[1] L. D. Baker and A. K. McCallum. Distributional clustering of words for text classification. In Proc. of SIGIR, 1998.
[2] T. M. Cover and J. A. Thomas. The Elements of Information Theory. Plenum Press, NY, 1991.
[3] I. Csiszar and J. Korner. Information Theory: Coding Theorems for Discrete Memoryless Systems. Academic Press, New York, 2nd edition, 1997.
[4] S. Dumais and H. Chen. Hierarchical classification of web content. In Proc. of SIGIR, pages 256–263, 2000.
[5] A. Globerson and N. Tishby. Sufficient dimensionality reduction. J. Mach. Learn. Res., 2003.
[6] T. Hofmann. Probabilistic latent semantic indexing. In Proc. of SIGIR, pages 50–57, 1999.
[7] K. Lang. Learning to filter netnews. In Proc. of the 12th Int. Conf. on Machine Learning, 1995.
[8] N. Friedman, O. Mosenzon, N. Slonim, and N. Tishby. Multivariate information bottleneck. In Proc. of UAI, pages 152–161, 2001.
[9] F. C. Pereira, N. Tishby, and L. Lee. Distributional clustering of English words. In Meeting of the Association for Computational Linguistics, pages 183–190, 1993.
[10] E. Schneidman, N. Slonim, N. Tishby, R. de Ruyter van Steveninck, and W. Bialek. Analyzing neural codes using the information bottleneck method. Technical report, The Hebrew University, 2002.
[11] J. Sinkkonen and S. Kaski. Clustering based on conditional distribution in an auxiliary space. Neural Computation, 14:217–239, 2001.
[12] N. Slonim, N. Friedman, and N. Tishby. Unsupervised document classification using sequential information maximization. In Proc. of SIGIR, pages 129–136, 2002.
[13] N. Slonim and N. Tishby. Agglomerative information bottleneck.
In Advances in Neural Information Processing Systems (NIPS), 1999.
[14] N. Slonim and N. Tishby. Document clustering using word clusters via the information bottleneck method. In Proc. of SIGIR, pages 208–215, 2000.
[15] N. Tishby, F. C. Pereira, and W. Bialek. The information bottleneck method. In Proc. of the 37th Allerton Conference on Communication and Computation, 1999.
[16] A. Vinokourov and M. Girolami. A probabilistic framework for the hierarchic organization and classification of document collections. J. Intell. Info. Sys., 18(2/3):153–172, 2002.
[17] A. Wyner and J. Ziv. The rate distortion function for source coding with side information at the decoder. IEEE Trans. Information Theory, 22(1):1–10, 1976.
Handling Missing Data with Variational Bayesian Learning of ICA Kwokleung Chan, Te-Won Lee and Terrence Sejnowski The Salk Institute, Computational Neurobiology Laboratory, 10010 N. Torrey Pines Road, La Jolla, CA 92037, USA {kwchan,tewon,terry}@salk.edu

Abstract

Missing data is common in real-world datasets and is a problem for many estimation techniques. We have developed a variational Bayesian method to perform Independent Component Analysis (ICA) on high-dimensional data containing missing entries. Missing data are handled naturally in the Bayesian framework by integrating the generative density model. Modeling the distributions of the independent sources with mixtures of Gaussians allows sources to be estimated with different kurtosis and skewness. The variational Bayesian method automatically determines the dimensionality of the data and yields an accurate density model for the observed data without overfitting problems. This allows direct probability estimation of missing values in the high dimensional space and avoids dimension-reduction preprocessing, which is not feasible with missing data.

1 Introduction

Data density estimation is an important step in many machine learning problems. Often we are faced with data containing incomplete entries. The data may be missing due to measurement or recording failure. Another frequent cause is difficulty in collecting complete data. For example, it could be expensive and time consuming to perform some biomedical tests. Data scarcity is not uncommon, and it would be very undesirable to discard the data points with missing entries when we already have a small dataset. Traditionally, missing data are filled in by mean imputation or regression imputation during preprocessing. This could introduce biases into the data cloud density and adversely affect subsequent analysis. A more principled way would be to use probability density estimates of the missing entries instead of point estimates.
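For the special case of a single joint Gaussian, the density of the missing entries given the observed ones is available in closed form; the helper below sketches this standard conditional (our own code, with made-up example values, not part of the method developed in this paper):

```python
import numpy as np

def gaussian_conditional(mu, Sigma, obs_idx, mis_idx, x_obs):
    """Mean and covariance of x_m | x_o under a joint Gaussian N(mu, Sigma).

    Uses the standard identities:
      E[x_m | x_o]   = mu_m + S_mo S_oo^{-1} (x_o - mu_o)
      Cov[x_m | x_o] = S_mm - S_mo S_oo^{-1} S_om
    """
    mu_o, mu_m = mu[obs_idx], mu[mis_idx]
    S_oo = Sigma[np.ix_(obs_idx, obs_idx)]
    S_mo = Sigma[np.ix_(mis_idx, obs_idx)]
    S_mm = Sigma[np.ix_(mis_idx, mis_idx)]
    K = S_mo @ np.linalg.inv(S_oo)          # regression coefficients
    cond_mean = mu_m + K @ (x_obs - mu_o)
    cond_cov = S_mm - K @ S_mo.T
    return cond_mean, cond_cov
```

The conditional mean is exactly what regression imputation would fill in; keeping the conditional covariance as well is what turns the point estimate into a density estimate.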
A well known example of this approach is the use of the Expectation-Maximization (EM) algorithm in fitting incomplete data with a single Gaussian density [5]. Independent Component Analysis (ICA) [4] tries to locate independent axes within the data cloud and was developed for blind source separation. It has been applied to speech separation and analyzing fMRI and EEG data. ICA is also used to model data density, describing data as a linear mixture of independent features and finding projections that may uncover interesting structure in the data. Maximum likelihood learning of ICA with incomplete data has been studied by [6], in the limited case of a square mixing matrix and predefined source densities. Many real-world datasets have intrinsic dimensionality smaller than that of the observed data. With missing data, principal component analysis cannot be used to perform dimension reduction as preprocessing for ICA. Instead, the variational Bayesian method applied to ICA can handle small datasets with high observed dimension [1, 2]. The Bayesian method prevents overfitting and performs automatic dimension reduction. In this paper, we extend the variational Bayesian ICA method to problems with missing data. The probability density estimate of the missing entries can be used to fill in the missing values. This also allows the density model to be refined and made more accurate.

2 Model and Theory

2.1 ICA generative model with missing data

Consider a data set of T data points in an N-dimensional space: X = {x_t ∈ R^N}, t = {1, …, T}. Assume a noisy ICA generative model for the data:

P(x_t|θ) = ∫ N(x_t | A s_t + ν, Ψ) P(s_t|θ_s) ds_t    (1)

where A is the mixing matrix, ν is the observation mean and Ψ^{−1} is the diagonal noise variance. The hidden source s_t is assumed to have L dimensions. Each component of s_t is modeled by a mixture of K Gaussians to allow for source densities of various kurtosis and skewness:

P(s_t|θ_s) = ∏_{l=1}^{L} ( Σ_{k_l=1}^{K} π_{lk_l} N( s_t(l) | φ_{lk_l}, β_{lk_l} ) )
(2) Split each data point into a missing part and an observed part: $\mathbf{x}_t^\top = (\mathbf{x}_t^{o\top}, \mathbf{x}_t^{m\top})$. In this paper, we only consider the random missing case [3], i.e. the probability for the entries $\mathbf{x}_t^m$ to be missing is independent of the value of $\mathbf{x}_t^m$, but could depend on the value of $\mathbf{x}_t^o$. The likelihood of the dataset is then defined to be
$$\mathcal{L}(\theta; X) = \prod_t P(\mathbf{x}_t^o|\theta), \quad (3)$$
$$P(\mathbf{x}_t^o|\theta) = \int P(\mathbf{x}_t|\theta)\, d\mathbf{x}_t^m = \int \mathcal{N}(\mathbf{x}_t^o\,|\,[A\mathbf{s}_t + \boldsymbol{\nu}]_t^o, [\Psi]_t^o)\, P(\mathbf{s}_t|\theta_s)\, d\mathbf{s}_t \quad (4)$$
Here we have introduced the notation $[\cdot]_t^o$, which means taking only the observed dimensions (corresponding to the $t$th data point) of whatever is inside the square brackets. Since eqn. (4) is similar to eqn. (1), the variational Bayesian ICA [1, 2] can be extended naturally to handle missing data, provided care is taken to discount the missing entries in the learning rules.

2.2 Variational Bayesian method

In a full Bayesian treatment, the posterior distribution of the parameters $\theta$ is obtained by
$$P(\theta|X) = \frac{P(X|\theta)P(\theta)}{P(X)} = \frac{\prod_t P(\mathbf{x}_t^o|\theta)\,P(\theta)}{P(X)} \quad (5)$$
where $P(X)$ is the marginal likelihood of the data, given as
$$P(X) = \int \prod_t P(\mathbf{x}_t^o|\theta)\, P(\theta)\, d\theta \quad (6)$$
The ICA model for $P(X)$ is defined with the following priors $P(\theta)$ on the parameters:
$$P(A_{nl}) = \mathcal{N}(A_{nl}|0, \alpha_l) \qquad P(\alpha_l) = \mathcal{G}(\alpha_l|a_o(\alpha_l), b_o(\alpha_l)) \qquad P(\boldsymbol{\pi}_l) = \mathcal{D}(\boldsymbol{\pi}_l|d_o(\boldsymbol{\pi}_l))$$
$$P(\phi_{lk_l}) = \mathcal{N}(\phi_{lk_l}|\mu_o(\phi_{lk_l}), \Lambda_o(\phi_{lk_l})) \qquad P(\beta_{lk_l}) = \mathcal{G}(\beta_{lk_l}|a_o(\beta_{lk_l}), b_o(\beta_{lk_l})) \quad (7)$$
$$P(\nu_n) = \mathcal{N}(\nu_n|\mu_o(\nu_n), \Lambda_o(\nu_n)) \qquad P(\Psi_n) = \mathcal{G}(\Psi_n|a_o(\Psi_n), b_o(\Psi_n)) \quad (8)$$
where $\mathcal{N}(\cdot)$, $\mathcal{G}(\cdot)$ and $\mathcal{D}(\cdot)$ are the normal, gamma and Dirichlet distributions, and $a_o(\cdot)$, $b_o(\cdot)$, $d_o(\cdot)$, $\mu_o(\cdot)$, and $\Lambda_o(\cdot)$ are prechosen hyperparameters for the priors. Under the variational Bayesian treatment, instead of performing the integration in eqn.
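To make the generative model of eqns. (1)-(2) and the random-missing mechanism concrete, here is a minimal numpy sketch (hypothetical illustrative code, not the authors' implementation; all parameter values and variable names are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, L, K = 500, 7, 4, 2       # data points, observed dims, sources, Gaussians/source

# Mixture-of-Gaussians source parameters (eqn. 2): weights, means, precisions
pi   = rng.dirichlet(np.ones(K), size=L)     # (L, K) mixing proportions pi_{l k}
phi  = rng.normal(0.0, 2.0, size=(L, K))     # component means phi_{l k}
beta = rng.uniform(0.5, 2.0, size=(L, K))    # component precisions beta_{l k}

# Draw each source dimension from its own 1-D mixture
k_idx = np.stack([rng.choice(K, size=T, p=pi[l]) for l in range(L)], axis=1)
s = rng.normal(phi[np.arange(L), k_idx], 1.0 / np.sqrt(beta[np.arange(L), k_idx]))

# Noisy linear mixing x = A s + nu + noise (eqn. 1)
A  = rng.normal(size=(N, L))
nu = rng.normal(size=N)
noise_var = 0.01 * np.ones(N)                # the diagonal noise variance Psi^{-1}
x = s @ A.T + nu + rng.normal(0.0, np.sqrt(noise_var), size=(T, N))

# Random-missing mask o_nt (True = observed); entries go missing with prob. 0.3,
# independently of their values, so only x_obs is available to the learner
o = rng.random((T, N)) > 0.3
x_obs = np.where(o, x, np.nan)
```

The mask `o` plays the role of the indicator $o_{nt}$ used later in the learning rules.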
(6) to solve for $P(\theta|X)$ directly, we approximate it by $Q(\theta)$ and opt to minimize the Kullback-Leibler distance between them:
$$-\mathrm{KL}(Q(\theta)\,\|\,P(\theta|X)) = \int Q(\theta) \log\frac{P(\theta|X)}{Q(\theta)}\, d\theta = \int Q(\theta)\left[\sum_t \log P(\mathbf{x}_t^o|\theta) + \log\frac{P(\theta)}{Q(\theta)}\right] d\theta - \log P(X) \quad (9)$$
Since $-\mathrm{KL}(Q(\theta)\,\|\,P(\theta|X)) \le 0$, we get a lower bound for the log marginal likelihood of the data,
$$\log P(X) \ge \int Q(\theta) \sum_t \log P(\mathbf{x}_t^o|\theta)\, d\theta + \int Q(\theta) \log\frac{P(\theta)}{Q(\theta)}\, d\theta, \quad (10)$$
which can also be obtained by applying Jensen's inequality to eqn. (6). $Q(\theta)$ is then solved by functional maximization of the lower bound. A separable approximate posterior $Q(\theta)$ will be assumed:
$$Q(\theta) = Q(\boldsymbol{\nu})Q(\Psi) \times Q(A)Q(\boldsymbol{\alpha}) \times \prod_l \left[ Q(\boldsymbol{\pi}_l) \prod_{k_l} Q(\phi_{lk_l})Q(\beta_{lk_l}) \right]. \quad (11)$$
The second term in eqn. (10), the negative Kullback-Leibler divergence between the approximate posterior $Q(\theta)$ and the prior $P(\theta)$, can be expanded as
$$\int Q(\theta)\log\frac{P(\theta)}{Q(\theta)}\,d\theta = \sum_l \int Q(\boldsymbol{\pi}_l)\log\frac{P(\boldsymbol{\pi}_l)}{Q(\boldsymbol{\pi}_l)}\,d\boldsymbol{\pi}_l + \sum_{l,k_l} \int Q(\phi_{lk_l})\log\frac{P(\phi_{lk_l})}{Q(\phi_{lk_l})}\,d\phi_{lk_l} + \sum_{l,k_l} \int Q(\beta_{lk_l})\log\frac{P(\beta_{lk_l})}{Q(\beta_{lk_l})}\,d\beta_{lk_l} + \iint Q(A)Q(\boldsymbol{\alpha})\log\frac{P(A|\boldsymbol{\alpha})}{Q(A)}\,dA\,d\boldsymbol{\alpha} + \int Q(\boldsymbol{\alpha})\log\frac{P(\boldsymbol{\alpha})}{Q(\boldsymbol{\alpha})}\,d\boldsymbol{\alpha} + \int Q(\boldsymbol{\nu})\log\frac{P(\boldsymbol{\nu})}{Q(\boldsymbol{\nu})}\,d\boldsymbol{\nu} + \int Q(\Psi)\log\frac{P(\Psi)}{Q(\Psi)}\,d\Psi \quad (12)$$

2.3 Special treatment for missing data

Thus far the analysis follows almost exactly that of the variational Bayesian ICA on complete data, except that $P(\mathbf{x}_t|\theta)$ is replaced by $P(\mathbf{x}_t^o|\theta)$ in eqn. (6) and consequently the missing entries are discounted in the learning rules. However, it would be useful to obtain $Q(\mathbf{x}_t^m|\mathbf{x}_t^o)$, i.e., the approximate distribution on the missing entries, which is given by
$$Q(\mathbf{x}_t^m|\mathbf{x}_t^o) = \int Q(\theta) \int \mathcal{N}(\mathbf{x}_t^m\,|\,[A\mathbf{s}_t + \boldsymbol{\nu}]_t^m, [\Psi]_t^m)\, Q(\mathbf{s}_t)\, d\mathbf{s}_t\, d\theta. \quad (13)$$
As noted in [6], the elements of $\mathbf{s}_t$ given $\mathbf{x}_t^o$ are dependent. More importantly, under the ICA model, $Q(\mathbf{s}_t)$ is unlikely to be a single Gaussian. This is evident from figure 1, which shows the probability density functions of the data $\mathbf{x}$ and the hidden variable $\mathbf{s}$. The inserts show the sample data in the two spaces. Here the hidden sources assume the density $P(s_l) \propto \exp(-|s_l|^{0.7})$. They are mixed noiselessly to give $P(\mathbf{x})$ in the left graph.
The cut in the left graph represents $P(x_1|x_2 = -0.5)$, which transforms into a highly correlated and non-Gaussian $P(\mathbf{s}|x_2 = -0.5)$.

Figure 1: Pdfs for the data $\mathbf{x}$ (left) and hidden sources $\mathbf{s}$ (right). Inserts show the sample data in the two spaces. The "cuts" show $P(x_1|x_2 = -0.5)$ and $P(\mathbf{s}|x_2 = -0.5)$.

Unless we are interested only in the first and second order statistics of $Q(\mathbf{x}_t^m|\mathbf{x}_t^o)$, we should try to capture as much structure as possible of $P(\mathbf{s}_t|\mathbf{x}_t^o)$ in $Q(\mathbf{s}_t)$. In this paper, we take a slightly different route from [1, 2] when performing variational Bayesian learning. First, we break down $P(\mathbf{s}_t)$ (eqn. 2) into a mixture of $K^L$ Gaussians in the $L$-dimensional $\mathbf{s}$ space:
$$P(\mathbf{s}_t) = \sum_{k_1} \cdots \sum_{k_L} \left[ \pi_{1k_1} \times \cdots \times \pi_{Lk_L} \times \mathcal{N}(s_t(1)|\phi_{1k_1}, \beta_{1k_1}) \times \cdots \times \mathcal{N}(s_t(L)|\phi_{Lk_L}, \beta_{Lk_L}) \right] = \sum_{\mathbf{k}} \pi_{\mathbf{k}}\, \mathcal{N}(\mathbf{s}_t|\boldsymbol{\phi}_{\mathbf{k}}, \boldsymbol{\beta}_{\mathbf{k}}) \quad (14)$$
Here we have defined $\mathbf{k}$ to be a vector index. The "$\mathbf{k}$th" Gaussian is centered at $\boldsymbol{\phi}_{\mathbf{k}}$, with inverse covariance $\boldsymbol{\beta}_{\mathbf{k}}$, in the source $\mathbf{s}$ space:
$$\pi_{\mathbf{k}} = \pi_{1k_1} \times \cdots \times \pi_{Lk_L} \qquad \boldsymbol{\beta}_{\mathbf{k}} = \mathrm{diag}(\beta_{1k_1}, \cdots, \beta_{Lk_L})$$
$$\boldsymbol{\phi}_{\mathbf{k}} = (\phi_{1k_1}, \cdots, \phi_{lk_l}, \cdots, \phi_{Lk_L})^\top \qquad \mathbf{k} = (k_1, \cdots, k_l, \cdots, k_L)^\top, \quad k_l = 1, \cdots, K \quad (15)$$
The log likelihood for $\mathbf{x}_t^o$ is then expanded using Jensen's inequality:
$$\log P(\mathbf{x}_t^o|\theta) = \log \sum_{\mathbf{k}} \pi_{\mathbf{k}} \int P(\mathbf{x}_t^o|\mathbf{s}_t, \theta)\, \mathcal{N}(\mathbf{s}_t|\boldsymbol{\phi}_{\mathbf{k}}, \boldsymbol{\beta}_{\mathbf{k}})\, d\mathbf{s}_t \ge \sum_{\mathbf{k}} Q(\mathbf{k}_t) \log \int P(\mathbf{x}_t^o|\mathbf{s}_t, \theta)\, \mathcal{N}(\mathbf{s}_t|\boldsymbol{\phi}_{\mathbf{k}}, \boldsymbol{\beta}_{\mathbf{k}})\, d\mathbf{s}_t + \sum_{\mathbf{k}} Q(\mathbf{k}_t) \log\frac{\pi_{\mathbf{k}}}{Q(\mathbf{k}_t)} \quad (16)$$
Here $Q(\mathbf{k}_t)$ is a short form for $Q(\mathbf{k}_t = \mathbf{k})$; $\mathbf{k}_t$ is a discrete hidden variable and $Q(\mathbf{k}_t = \mathbf{k})$ is the probability that the $t$th data point belongs to the $\mathbf{k}$th Gaussian. Recognizing that $\mathbf{s}_t$ is just a dummy variable, we introduce $Q(\mathbf{s}_{\mathbf{k}t})$, apply Jensen's inequality again and get
$$\log P(\mathbf{x}_t^o|\theta) \ge \sum_{\mathbf{k}} Q(\mathbf{k}_t) \left[ \int Q(\mathbf{s}_{\mathbf{k}t}) \log P(\mathbf{x}_t^o|\mathbf{s}_{\mathbf{k}t}, \theta)\, d\mathbf{s}_{\mathbf{k}t} + \int Q(\mathbf{s}_{\mathbf{k}t}) \log\frac{\mathcal{N}(\mathbf{s}_{\mathbf{k}t}|\boldsymbol{\phi}_{\mathbf{k}}, \boldsymbol{\beta}_{\mathbf{k}})}{Q(\mathbf{s}_{\mathbf{k}t})}\, d\mathbf{s}_{\mathbf{k}t} \right] + \sum_{\mathbf{k}} Q(\mathbf{k}_t) \log\frac{\pi_{\mathbf{k}}}{Q(\mathbf{k}_t)} \quad (17)$$
Substituting $\log P(\mathbf{x}_t^o|\theta)$ back into eqn. (10), the variational Bayesian method can be continued as usual.
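The expansion of the factorized source prior into its $K^L$ joint Gaussians (eqns. 14-15) is purely combinatorial, and can be sketched as follows (hypothetical numpy code, not the authors' implementation):

```python
import itertools
import numpy as np

def expand_source_mixture(pi, phi, beta):
    """Expand L independent K-component 1-D source mixtures into the
    K**L joint Gaussians of eqns. (14)-(15): one (pi_k, phi_k, beta_k)
    triple per vector index k = (k_1, ..., k_L)."""
    L, K = pi.shape
    idx = np.arange(L)
    pis, phis, betas = [], [], []
    for k in itertools.product(range(K), repeat=L):
        kk = np.array(k)
        pis.append(np.prod(pi[idx, kk]))    # pi_k  = prod_l pi_{l, k_l}
        phis.append(phi[idx, kk])           # centre phi_k
        betas.append(beta[idx, kk])         # diagonal precisions beta_k
    return np.array(pis), np.array(phis), np.array(betas)

# L = 2 sources, K = 2 Gaussians each -> K**L = 4 expanded components
pi   = np.array([[0.3, 0.7], [0.6, 0.4]])
phi  = np.array([[-1.0, 1.0], [-2.0, 2.0]])
beta = np.ones((2, 2))
pi_k, phi_k, beta_k = expand_source_mixture(pi, phi, beta)
# pi_k = [0.18, 0.12, 0.42, 0.28], which sums to 1
```

The exponential cost in $L$ that the conclusion mentions is visible directly in the `K**L` loop.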
We have drawn in figure 2 a simplified graphical representation for the generative model of variational ICA. $\mathbf{x}_t$ is the observed variable, $\mathbf{k}_t$ and $\mathbf{s}_t$ are hidden variables and the rest are model parameters, where $\mathbf{k}_t$ indicates which of the $K^L$ expanded Gaussians generated $\mathbf{s}_t$.

Figure 2: A simplified directed graph for the generative model of variational ICA, with nodes $\mathbf{x}_t$, $\Psi$, $\boldsymbol{\nu}$, $A$, $\boldsymbol{\alpha}$, $\mathbf{s}_t$, $\boldsymbol{\beta}$, $\boldsymbol{\phi}$, $\mathbf{k}_t$ and $\boldsymbol{\pi}$. $\mathbf{x}_t$ is the observed variable, $\mathbf{k}_t$ and $\mathbf{s}_t$ are hidden variables and the rest are model parameters. $\mathbf{k}_t$ indicates which of the $K^L$ expanded Gaussians generated $\mathbf{s}_t$.

3 Learning Rules

Combining eqns. (10), (12) and (17), we perform functional maximization on the lower bound of the log marginal likelihood $\log P(X)$ w.r.t. $Q(\theta)$ (eqn. 11), $Q(\mathbf{k}_t)$ and $Q(\mathbf{s}_{\mathbf{k}t})$ (eqn. 17) and obtain the following learning rules for the sufficient statistics of $Q(\theta)$ and $Q(\mathbf{s}_{\mathbf{k}t})$:
$$\Lambda(\nu_n) = \Lambda_o(\nu_n) + \langle\Psi_n\rangle \sum_t o_{nt} \qquad \mu(\nu_n) = \frac{\Lambda_o(\nu_n)\mu_o(\nu_n) + \langle\Psi_n\rangle \sum_t o_{nt} \sum_{\mathbf{k}} Q(\mathbf{k}_t)\langle x_{nt} - A_{n\cdot}\mathbf{s}_{\mathbf{k}t}\rangle}{\Lambda(\nu_n)} \quad (18)$$
$$a(\Psi_n) = a_o(\Psi_n) + \frac{1}{2}\sum_t o_{nt} \qquad b(\Psi_n) = b_o(\Psi_n) + \frac{1}{2}\sum_t o_{nt} \sum_{\mathbf{k}} Q(\mathbf{k}_t)\langle(x_{nt} - A_{n\cdot}\mathbf{s}_{\mathbf{k}t} - \nu_n)^2\rangle \quad (19)$$
$$\Lambda(A_{n\cdot}) = \mathrm{diag}(\langle\alpha_1\rangle, \cdots, \langle\alpha_L\rangle) + \langle\Psi_n\rangle \sum_t o_{nt} \sum_{\mathbf{k}} Q(\mathbf{k}_t)\langle\mathbf{s}_{\mathbf{k}t}\mathbf{s}_{\mathbf{k}t}^\top\rangle \qquad \mu(A_{n\cdot}) = \left( \langle\Psi_n\rangle \sum_t o_{nt}(x_{nt} - \langle\nu_n\rangle) \sum_{\mathbf{k}} Q(\mathbf{k}_t)\langle\mathbf{s}_{\mathbf{k}t}^\top\rangle \right) \Lambda(A_{n\cdot})^{-1} \quad (20)$$
$$a(\alpha_l) = a_o(\alpha_l) + \frac{N}{2} \qquad b(\alpha_l) = b_o(\alpha_l) + \frac{1}{2}\sum_n \langle A_{nl}^2\rangle \quad (21)$$
$$d(\pi_{lk}) = d_o(\pi_{lk}) + \sum_t \sum_{k_l=k} Q(\mathbf{k}_t) \quad (22)$$
$$\Lambda(\phi_{lk_l}) = \Lambda_o(\phi_{lk_l}) + \langle\beta_{lk_l}\rangle \sum_t \sum_{k_l=k} Q(\mathbf{k}_t) \qquad \mu(\phi_{lk_l}) = \frac{\Lambda_o(\phi_{lk_l})\mu_o(\phi_{lk_l}) + \langle\beta_{lk_l}\rangle \sum_t \sum_{k_l=k} Q(\mathbf{k}_t)\langle s_{\mathbf{k}t}(l)\rangle}{\Lambda(\phi_{lk_l})} \quad (23)$$
$$a(\beta_{lk_l}) = a_o(\beta_{lk_l}) + \frac{1}{2}\sum_t \sum_{k_l=k} Q(\mathbf{k}_t) \qquad b(\beta_{lk_l}) = b_o(\beta_{lk_l}) + \frac{1}{2}\sum_t \sum_{k_l=k} Q(\mathbf{k}_t)\langle(s_{\mathbf{k}t}(l) - \phi_{lk_l})^2\rangle \quad (24)$$
$$Q(\mathbf{s}_{\mathbf{k}t}) = \mathcal{N}(\mathbf{s}_{\mathbf{k}t}|\mu(\mathbf{s}_{\mathbf{k}t}), \Lambda(\mathbf{s}_{\mathbf{k}t})) \qquad \Lambda(\mathbf{s}_{\mathbf{k}t}) = \mathrm{diag}(\langle\beta_{1k_1}\rangle, \cdots, \langle\beta_{Lk_L}\rangle) + \langle A^\top \mathrm{diag}(o_{1t}\Psi_1, \cdots, o_{Nt}\Psi_N)\, A\rangle \quad (25)$$
$$\Lambda(\mathbf{s}_{\mathbf{k}t})\mu(\mathbf{s}_{\mathbf{k}t}) = \langle\beta_{1k_1}\phi_{1k_1}, \cdots, \beta_{Lk_L}\phi_{Lk_L}\rangle^\top + \langle A^\top \mathrm{diag}(o_{1t}\Psi_1, \cdots, o_{Nt}\Psi_N)\,(\mathbf{x}_t - \boldsymbol{\nu})\rangle$$

Figure 3: The approximation of $Q(x_t^m|\mathbf{x}_t^o)$ from the full missing ICA (solid line) and the polynomial missing ICA (dashed line). The shaded area is the exact posterior $P(x_t^m|\mathbf{x}_t^o)$ corresponding to the noiseless mixture in fig. 1 with observed $x_2 = -2$.
Dotted lines are the contributions from the individual $Q(x_{\mathbf{k}t}^m|\mathbf{x}_t^o, \mathbf{k})$.

In the above equations, $\langle\cdot\rangle$ denotes the expectation over the posterior distributions $Q(\cdot)$, $A_{n\cdot}$ is the $n$th row of the mixing matrix $A$, $\sum_{k_l=k}$ means picking out those Gaussians such that the $l$th element of their indices $\mathbf{k}$ has the value $k$, and $o_{nt}$ is a binary indicator variable for whether or not $x_{nt}$ is observed. For a model of equal noise variance among all the observation dimensions, the summation in the learning rules for $Q(\Psi)$ would be over both $t$ and $n$. Note that there exists scale and translational degeneracy in the model, as given by eqns. (1) and (2). After each update of $Q(\boldsymbol{\pi}_l)$, $Q(\phi_{lk_l})$ and $Q(\beta_{lk_l})$, it is better to rescale $P(s_t(l))$ to have zero mean and unit variance; $Q(\mathbf{s}_{\mathbf{k}t})$, $Q(A)$, $Q(\boldsymbol{\alpha})$, $Q(\boldsymbol{\nu})$ and $Q(\Psi)$ have to be adjusted correspondingly. Finally, $Q(\mathbf{k}_t)$ is given by
$$\log Q(\mathbf{k}_t) = \langle \log P(\mathbf{x}_t^o|\mathbf{s}_{\mathbf{k}t}, \theta) + \log \mathcal{N}(\mathbf{s}_{\mathbf{k}t}|\boldsymbol{\phi}_{\mathbf{k}}, \boldsymbol{\beta}_{\mathbf{k}}) - \log Q(\mathbf{s}_{\mathbf{k}t}) + \log \pi_{\mathbf{k}} \rangle - \log z_t \quad (26)$$
where $z_t$ is a normalization constant. The lower bound $E(X, Q(\theta)|\mathcal{H})$ for the log marginal likelihood,
$$E(X, Q(\theta)|\mathcal{H}) = \sum_t \log z_t + \int Q(\theta) \log\frac{P(\theta)}{Q(\theta)}\, d\theta, \quad (27)$$
can be monitored during learning and used for comparison of different solutions or models.

4 Filling in missing entries

The approximate distribution $Q(\mathbf{x}_t^m|\mathbf{x}_t^o)$ can be obtained by a summation over $Q(\mathbf{x}_{\mathbf{k}t}^m|\mathbf{x}_t^o, \mathbf{k})$:
$$Q(\mathbf{x}_t^m|\mathbf{x}_t^o) = \sum_{\mathbf{k}} Q(\mathbf{k}_t) \int \delta(\mathbf{x}_t^m - \mathbf{x}_{\mathbf{k}t}^m)\, Q(\mathbf{x}_{\mathbf{k}t}^m|\mathbf{x}_t^o, \mathbf{k})\, d\mathbf{x}_{\mathbf{k}t}^m, \quad (28)$$
$$Q(\mathbf{x}_{\mathbf{k}t}^m|\mathbf{x}_t^o, \mathbf{k}) = \int Q(\theta) \int \mathcal{N}(\mathbf{x}_{\mathbf{k}t}^m\,|\,[A\mathbf{s}_{\mathbf{k}t} + \boldsymbol{\nu}]_t^m, [\Psi]_t^m)\, Q(\mathbf{s}_{\mathbf{k}t})\, d\mathbf{s}_{\mathbf{k}t}\, d\theta \quad (29)$$
Estimation of $Q(\mathbf{x}_t^m|\mathbf{x}_t^o)$ using the above equations is demonstrated in fig. 3. The shaded area is the exact posterior $P(\mathbf{x}_t^m|\mathbf{x}_t^o)$ for the noiseless mixing in fig. 1 with observed $x_2 = -2$, and the solid line is the approximation by eqns. (28)-(29). We have also modified the variational ICA of [1] by discounting missing entries in the learning rules; the dashed line is the approximation of $Q(\mathbf{x}_t^m|\mathbf{x}_t^o)$ from this modified method.
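The full computation of $Q(\mathbf{x}_t^m|\mathbf{x}_t^o)$ in eqns. (28)-(29) integrates over $Q(\theta)$ and sums over all $K^L$ components; as a minimal illustration of the one building block every such component contributes, here is the standard Gaussian conditional of the missing dimensions given the observed ones (a hypothetical numpy sketch, not the authors' code, and not the full variational computation):

```python
import numpy as np

def gaussian_condition(mu, Sigma, obs_idx, mis_idx, x_obs):
    """Mean and covariance of the missing dimensions of a joint Gaussian
    N(mu, Sigma) conditioned on observed values x_obs; the same kind of
    conditioning appears inside each component of eqn. (29)."""
    Soo = Sigma[np.ix_(obs_idx, obs_idx)]
    Smo = Sigma[np.ix_(mis_idx, obs_idx)]
    Smm = Sigma[np.ix_(mis_idx, mis_idx)]
    gain = Smo @ np.linalg.inv(Soo)                   # regression coefficients
    mu_c = mu[mis_idx] + gain @ (x_obs - mu[obs_idx]) # conditional mean
    Sigma_c = Smm - gain @ Smo.T                      # conditional covariance
    return mu_c, Sigma_c

# Example: 2-D Gaussian with correlation 0.8; observe x2 = -2, fill in x1
mu = np.zeros(2)
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
m, S = gaussian_condition(mu, Sigma, [1], [0], np.array([-2.0]))
# m = [-1.6], S = [[0.36]]
```

A single Gaussian cannot capture the multimodal, non-Gaussian shape in fig. 3, which is why the paper mixes $K^L$ such conditionals weighted by $Q(\mathbf{k}_t)$.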
The treatment of fully expanding the $K^L$ hidden source Gaussians discussed in section 2.3 is called "full missing ICA", and the modified method is "polynomial missing ICA". The full missing ICA gives a more accurate fit for $P(\mathbf{x}_t^m|\mathbf{x}_t^o)$ and a better estimate for $\langle\mathbf{x}_t^m|\mathbf{x}_t^o\rangle$.

Figure 4: a)-d) Source density modeling by variational missing ICA of the synthetic data. Histograms: recovered source distributions; dashed lines: original probability densities; solid lines: mixture-of-Gaussians modeled probability densities; dotted lines: individual Gaussian contributions. e) $E(X, Q(\theta)|\mathcal{H})$ as a function of the number of hidden source dimensions (1 to 7), for the full missing ICA and the polynomial missing ICA.

5 Experiment

5.1 Synthetic Data

In the first experiment, 200 data points were generated by mixing 4 sources randomly in a 7-dimensional space. The generalized Gaussian, gamma and beta distributions were used to represent source densities of various skewness and kurtosis (fig. 4 a)-d)). Noise at a -26 dB level was added to the data and missing entries were created with a probability of 0.3. In fig. 4 a)-d), we plot the histograms of the recovered sources and the probability density functions (pdfs) of the 4 sources. The dashed line is the exact pdf used to generate the data and the solid line is the pdf modeled by a mixture of two 1-D Gaussians (eqn. 2). Fig. 4 e) plots the lower bound of the log marginal likelihood (eqn. 27) for models assuming different numbers of intrinsic dimensions. As expected, the Bayesian treatment allows us to infer the intrinsic dimension of the data cloud. In the figure, we also plot $E(X, Q(\theta)|\mathcal{H})$ from the polynomial missing ICA. It is clear that the full missing ICA gave a better fit to the data density.
Furthermore, the polynomial missing ICA converges more slowly per epoch of learning, suffers from many more local minima, and its problems get worse with a higher missing rate.

5.2 Mixing Images

This experiment demonstrates the ability of the proposed method to fill in missing values while performing demixing. The 1st column in fig. 5 shows the 2 original 380-by-380 pixel images. They were linearly mixed into 3 images and -20 dB noise was added. 20% missing entries were introduced randomly. The denoised mixtures and recovered sources are in the 3rd and 4th columns of fig. 5. 0.8% of the pixels were missing from all 3 mixed images and could not be recovered. 38.4% of the pixels were missing from only 1 mixed image and could be filled in with low uncertainty. 9.6% of the pixels were missing from any two of the mixed images; estimation of their values incurred high uncertainty. From fig. 5, we can see that the source images were well separated and the mixed images were nicely denoised. The denoised mixed images in this example were only meant to visually illustrate the method. However, if $(x_1, x_2, x_3)$ represent cholesterol, blood sugar and uric acid levels, for example, it would be possible to fill in the third when only two are available.

6 Conclusion

In this paper, we derived the learning rules for variational Bayesian ICA with missing data. The complexity of the method is exponential in $L$. However, this exponential growth in complexity is manageable and worthwhile for small data sets containing missing entries in a high dimensional space.

Figure 5: A demonstration of recovering missing values. The original images are in the 1st column. 20% of the pixels in the mixed images (2nd column) are missing, while only 0.8% are missing from the denoised mixed (3rd column) and separated images (4th column).
The proposed method shows promise in analyzing and identifying projections of datasets that have a very limited number of expensive data points yet contain missing entries due to data scarcity. We have applied the variational missing ICA to a primate brain volumetric dataset containing 44 examples in 57 dimensions. Very encouraging results were obtained and will be reported in another paper.

References

[1] Kwokleung Chan, Te-Won Lee, and Terrence J. Sejnowski. Variational learning of clusters of undercomplete nonsymmetric independent components. Journal of Machine Learning Research, 3:99-114, 2002.

[2] Rizwan A. Choudrey and Stephen J. Roberts. Flexible Bayesian independent component analysis for blind source separation. In 3rd International Conference on Independent Component Analysis and Blind Signal Separation, pages 90-95, San Diego, Dec. 9-12, 2001.

[3] Z. Ghahramani and M. Jordan. Learning from incomplete data. Technical Report CBCL Paper No. 108, Center for Biological and Computational Learning, Massachusetts Institute of Technology, 1994.

[4] Aapo Hyvärinen, Juha Karhunen, and Erkki Oja. Independent Component Analysis. J. Wiley, New York, 2001.

[5] R. J. A. Little and D. B. Rubin. Statistical Analysis with Missing Data. Wiley, New York, 1987.

[6] Max Welling and Markus Weber. Independent component analysis of incomplete data. In 1999 6th Joint Symposium on Neural Computation Proceedings, volume 9, pages 162-168. UCSD, May 22, 1999.
2002
Ranking with Large Margin Principle: Two Approaches* Amnon Shashua School of CS&E Hebrew University of Jerusalem Jerusalem 91904, Israel email: shashua@cs.huji.ac.il Anat Levin School of CS&E Hebrew University of Jerusalem Jerusalem 91904, Israel email: alevin@cs.huji.ac.il Abstract We discuss the problem of ranking $k$ instances with the use of a "large margin" principle. We introduce two main approaches: the first is the "fixed margin" policy, in which the margin of the closest neighboring classes is maximized; this turns out to be a direct generalization of SVM to ranking learning. The second approach allows for $k-1$ different margins, where the sum of margins is maximized. This approach is shown to reduce to $\nu$-SVM when the number of classes $k = 2$. Both approaches are optimal in size $2l$, where $l$ is the total number of training examples. Experiments performed on visual classification and "collaborative filtering" show that both approaches outperform existing ordinal regression algorithms applied to ranking, and multi-class SVM applied to general multi-class classification. 1 Introduction In this paper we investigate the problem of inductive learning from the point of view of predicting variables of ordinal scale [3, 7, 5], a setting referred to as ranking learning or ordinal regression. We consider the problem of applying the large margin principle used in Support Vector methods [12, 1] to the ordinal regression problem while maintaining an (optimal) problem size linear in the number of training examples. Let $\mathbf{x}_i^j$ be the set of training examples, where $j = 1, \dots, k$ denotes the class number and $i = 1, \dots, i_j$ is the index within each class. Let $l = \sum_j i_j$ be the total number of training examples.
A straightforward generalization of the 2-class separating hyperplane problem, where a single hyperplane determines the classification rule, is to define $k-1$ separating hyperplanes which would separate the training data into $k$ ordered classes by modeling the ranks as intervals on the real line, an idea whose origins are with the classical cumulative model [9]; see also [7, 5]. The geometric interpretation of this approach is to look for $k-1$ parallel hyperplanes represented by a vector $\mathbf{w} \in \mathbb{R}^n$ (the dimension of the input vectors) and scalars $b_1 \le \cdots \le b_{k-1}$ defining the hyperplanes $(\mathbf{w}, b_1), \dots, (\mathbf{w}, b_{k-1})$, such that the data are separated by dividing the space into equally ranked regions by the decision rule
$$f(\mathbf{x}) = \min_{r \in \{1, \dots, k\}} \{\, r : \mathbf{w}\cdot\mathbf{x} - b_r < 0 \,\}. \quad (1)$$
In other words, all input vectors $\mathbf{x}$ satisfying $b_{r-1} < \mathbf{w}\cdot\mathbf{x} < b_r$ are assigned the rank $r$ (using the convention that $b_k = \infty$).

*This work was done while A.S. was spending his sabbatical at the computer science department of Stanford University.

Figure 1: Lefthand display: fixed-margin policy for ranking learning. The margin to be maximized is associated with the two closest neighboring classes. As in conventional SVM, the margin is prescaled to be equal to $2/|\mathbf{w}|$, thus maximizing the margin is achieved by minimizing $\mathbf{w}\cdot\mathbf{w}$. The support vectors lie on the boundaries between the two closest classes. Righthand display: sum-of-margins policy for ranking learning. The objective is to maximize the sum of $k-1$ margins. Each class is sandwiched between two hyperplanes, the norm of $\mathbf{w}$ is set to unity as a constraint in the optimization problem, and as a result the objective is to maximize $\sum_j (b_j - a_j)$. In this case, the support vectors lie on the boundaries among all neighboring classes (unlike the fixed-margin policy). When the number of classes $k = 2$, the dual functional is equivalent to $\nu$-SVM.
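In code, the decision rule of eqn. (1) is just a scan for the first threshold that exceeds the projection $\mathbf{w}\cdot\mathbf{x}$ (a hypothetical numpy sketch with illustrative values; ranks run 1..k and $b_k = \infty$ by convention):

```python
import numpy as np

def rank(x, w, b):
    """Decision rule of eqn. (1): f(x) = min{ r : w.x - b_r < 0 }.
    b holds b_1 <= ... <= b_{k-1}; appending b_k = +inf guarantees
    that some rank always fires."""
    b_ext = np.append(b, np.inf)
    return int(np.argmax(x @ w - b_ext < 0)) + 1  # first r with w.x < b_r

w = np.array([1.0, 0.0])
b = np.array([0.0, 2.0])                    # k = 3 ordered classes
r_low = rank(np.array([-1.0, 0.0]), w, b)   # score -1 < b_1        -> rank 1
r_mid = rank(np.array([1.0, 0.0]), w, b)    # b_1 <= score 1 < b_2  -> rank 2
r_hi  = rank(np.array([3.0, 0.0]), w, b)    # score 3 >= b_2        -> rank 3
```

Both large-margin strategies below only change how $\mathbf{w}$ and the thresholds are learned; this prediction rule stays the same.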
For instance, recently [5] proposed an "on-line" algorithm (with similar principles to the classic "perceptron" used for 2-class separation) for finding the set of parallel hyperplanes which would comply with the separation rule above. To continue the analogy to 2-class learning, in addition to the separability constraints on the variables $\alpha = \{\mathbf{w}, b_1 \le \cdots \le b_{k-1}\}$, one would like to control the tradeoff between lowering the "empirical risk" $R_{emp}(\alpha)$ (error measure on the training set) and lowering the "confidence interval" $\Phi(\alpha, h)$ controlled by the VC-dimension $h$ of the set of loss functions. The "structural risk minimization" (SRM) principle [12] minimizes a bound on the risk over a structure on the set of functions. The geometric interpretation for 2-class learning is to maximize the margin between the boundaries of the two sets [12, 1]. In our setting of ranking learning there are $k-1$ margins to consider, thus there are two possible approaches to take on the "large margin" principle for ranking learning:

"Fixed margin" strategy: the margin to be maximized is the one defined by the closest (neighboring) pair of classes. Formally, let $\mathbf{w}, b_q$ be the hyperplane separating the two pairs of classes which are the closest among all the neighboring pairs of classes. Let $\mathbf{w}, b_q$ be scaled such that the distance of the boundary points from the hyperplane is 1, i.e., the margin between the classes $q, q+1$ is $2/|\mathbf{w}|$ (see Fig. 1, lefthand display). Thus, the fixed margin policy for ranking learning is to find the direction $\mathbf{w}$ and the scalars $b_1, \dots, b_{k-1}$ such that $\mathbf{w}\cdot\mathbf{w}$ is minimized (i.e., the margin between classes $q, q+1$ is maximized) subject to the separability constraints (modulo margin errors in the non-separable case).

"Sum of margins" strategy: the sum of all $k-1$ margins is to be maximized. In this case, the margins are not necessarily equal (see Fig. 1, righthand display).
Formally, the ranking rule employs a vector $\mathbf{w}$, $|\mathbf{w}| = 1$, and a set of $2(k-1)$ thresholds $a_1 \le b_1 \le a_2 \le b_2 \le \cdots \le a_{k-1} \le b_{k-1}$ such that $\mathbf{w}\cdot\mathbf{x}_i^j \le a_j$ and $\mathbf{w}\cdot\mathbf{x}_i^{j+1} \ge b_j$ for $j = 1, \dots, k-1$. In other words, all the examples of class $1 \le j \le k$ are "sandwiched" between two parallel hyperplanes $(\mathbf{w}, a_j)$ and $(\mathbf{w}, b_{j-1})$, where $b_0 = -\infty$ and $a_k = \infty$. The $k-1$ margins are therefore $(b_j - a_j)$, and the large margin principle is to maximize $\sum_j (b_j - a_j)$ subject to the separability constraints above. It is also fairly straightforward to apply the SRM principle and derive bounds on the actual risk functional; see [11] for details.

In the remainder of this paper we will introduce the algorithmic implications of these two strategies for implementing the large margin principle for ranking learning. The fixed-margin principle will turn out to be a direct generalization of the Support Vector Machine (SVM) algorithm, in the sense that substituting $k = 2$ in our proposed algorithm would produce the dual functional underlying conventional SVM. It is interesting to note that the sum-of-margins principle reduces to $\nu$-SVM (introduced by [10] and later [2]) when $k = 2$.

2 Fixed Margin Strategy

Recall that in the fixed margin policy $(\mathbf{w}, b_q)$ is a "canonical" hyperplane normalized such that the margin between the closest classes $q, q+1$ is $2/\|\mathbf{w}\|$. The index $q$ is of course unknown. The unknown variables $\mathbf{w}, b_1 \le \cdots \le b_{k-1}$ (and the index $q$) can be solved for in a two-stage optimization problem: a Quadratic Linear Programming (QLP) formulation followed by a Linear Programming (LP) formulation. The (primal) QLP formulation of the ("soft margin") fixed-margin policy for ranking learning takes the form:
$$\min \ \frac{1}{2}\mathbf{w}\cdot\mathbf{w} + C \sum_j \sum_i \left( \epsilon_i^j + \epsilon_i^{*j+1} \right) \quad (2)$$
subject to
$$\mathbf{w}\cdot\mathbf{x}_i^j - b_j \le -1 + \epsilon_i^j \quad (3)$$
$$\mathbf{w}\cdot\mathbf{x}_i^{j+1} - b_j \ge 1 - \epsilon_i^{*j+1} \quad (4)$$
$$\epsilon_i^j \ge 0, \quad \epsilon_i^{*j+1} \ge 0 \quad (5)$$
where $j = 1, \dots, k-1$ and $i = 1, \dots, i_j$, and $C$ is some predefined constant.
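The QLP (2)-(5) would normally be handed to a quadratic programming solver. Purely to make the objective concrete, here is a hypothetical subgradient-descent sketch of the same soft-margin objective on a toy 1-D, 3-class problem (illustrative code of ours, not the dual-based algorithm the paper derives; the sorting of the thresholds at the end is a crude guard this sketch uses in place of explicit ordering constraints):

```python
import numpy as np

def fixed_margin_fit(X, y, k, C=1.0, lr=0.01, epochs=2000, seed=0):
    """Subgradient descent on 0.5*w.w + C * (hinge slacks) of eqns. (2)-(5):
    for each adjacent pair j, examples of class j want w.x - b_j <= -1 and
    examples of class j+1 want w.x - b_j >= 1."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, 0.01, X.shape[1])
    b = np.linspace(-1.0, 1.0, k - 1)
    for _ in range(epochs):
        gw = w.copy()                  # gradient of the 0.5*w.w term
        gb = np.zeros(k - 1)
        for x, r in zip(X, y):         # r is the true rank in 1..k
            if r <= k - 1 and x @ w - b[r - 1] > -1:   # slack of (3) active
                gw += C * x
                gb[r - 1] -= C
            if r >= 2 and x @ w - b[r - 2] < 1:        # slack of (4) active
                gw -= C * x
                gb[r - 2] += C
        w -= lr * gw
        b -= lr * gb
    return w, np.sort(b)

def fixed_margin_predict(X, w, b):
    # rule (1): rank = 1 + number of thresholds at or below the score
    return np.searchsorted(b, X @ w, side="right") + 1

X = np.array([[-3.0], [-3.2], [-2.8], [0.0], [0.2], [-0.2], [3.0], [3.2], [2.8]])
y = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3])
w, b = fixed_margin_fit(X, y, k=3)
pred = fixed_margin_predict(X, w, b)
```

On this separable toy problem the learned thresholds bracket the middle class and the margin is governed by the closest pair, exactly as the text describes.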
The scalars $\epsilon_i^j$ and $\epsilon_i^{*j+1}$ are positive for data points which are inside the margins or placed on the wrong side of the respective hyperplane. Since the margin is maximized while maintaining separability, it will be governed by the closest pair of classes, because otherwise the separability conditions would cease to hold (modulo the choice of the constant $C$, which trades off the margin size against possible margin errors, but that is discussed later). The solution to this optimization problem is given by the saddle point of the Lagrange functional (Lagrangian):
$$L(\cdot) = \frac{1}{2}\mathbf{w}\cdot\mathbf{w} + C\sum_{i,j}\left(\epsilon_i^j + \epsilon_i^{*j+1}\right) + \sum_{i,j}\lambda_i^j\left(\mathbf{w}\cdot\mathbf{x}_i^j - b_j + 1 - \epsilon_i^j\right) + \sum_{i,j}\delta_i^j\left(b_j + 1 - \epsilon_i^{*j+1} - \mathbf{w}\cdot\mathbf{x}_i^{j+1}\right) - \sum_{i,j}\zeta_i^j\epsilon_i^j - \sum_{i,j}\zeta_i^{*j+1}\epsilon_i^{*j+1}$$
where $j = 1, \dots, k-1$, $i = 1, \dots, i_j$, and $\zeta_i^j, \zeta_i^{*j+1}, \lambda_i^j, \delta_i^j$ are all non-negative Lagrange multipliers. Since the primal problem is convex, there exists a strong duality between the primal and dual optimization functions. By first minimizing the Lagrangian with respect to $\mathbf{w}, b_j, \epsilon_i^j, \epsilon_i^{*j+1}$ we obtain the dual optimization function, which then must be maximized with respect to the Lagrange multipliers. From the minimization of the Lagrangian with respect to $\mathbf{w}$ we obtain:
$$\mathbf{w} = -\sum_{i,j}\lambda_i^j\mathbf{x}_i^j + \sum_{i,j}\delta_i^j\mathbf{x}_i^{j+1} \quad (6)$$
That is, the direction $\mathbf{w}$ of the parallel hyperplanes is described by a linear combination of the support vectors $\mathbf{x}$ associated with the non-vanishing Lagrange multipliers. From the Kuhn-Tucker theorem, the support vectors are those vectors for which equality is achieved in the inequalities (3, 4). These vectors lie on the two boundaries between the adjacent classes $q, q+1$ (and other adjacent classes which have the same margin). From the minimization of the Lagrangian with respect to $b_j$ we obtain the constraint
$$\sum_i \lambda_i^j = \sum_i \delta_i^j \quad (7)$$
and the minimization with respect to $\epsilon_i^j$ and $\epsilon_i^{*j+1}$ yields the constraints
$$C - \lambda_i^j - \zeta_i^j = 0, \qquad C - \delta_i^j - \zeta_i^{*j+1} = 0$$
(8), which in turn give rise to the constraints $0 \le \lambda_i^j \le C$, where $\lambda_i^j = C$ if the corresponding data point is a margin error ($\zeta_i^j = 0$, thus from the Kuhn-Tucker theorem $\epsilon_i^j > 0$), and likewise for $\delta_i^j$. Note that a data point can count twice as a margin error: once with respect to the class on its "left" and once with respect to the class on its "right". For the sake of presenting the dual functional in a compact form, we will introduce some new notations. Let $X^j$ be the $n \times i_j$ matrix whose columns are the data points $\mathbf{x}_i^j$, $i = 1, \dots, i_j$. Let $\boldsymbol{\lambda}^j = (\lambda_1^j, \dots, \lambda_{i_j}^j)^\top$ be the vector whose components are the Lagrange multipliers $\lambda_i^j$ corresponding to class $j$. Likewise, let $\boldsymbol{\delta}^j = (\delta_1^j, \dots, \delta_{i_j}^j)^\top$ be the Lagrange multipliers $\delta_i^j$ corresponding to class $j+1$. Let $\boldsymbol{\mu} = (\boldsymbol{\lambda}^1, \dots, \boldsymbol{\lambda}^{k-1}, \boldsymbol{\delta}^1, \dots, \boldsymbol{\delta}^{k-1})^\top$ be the vector holding all the $\lambda_i^j$ and $\delta_i^j$ Lagrange multipliers, and let $\boldsymbol{\mu}_1 = (\boldsymbol{\mu}_1^1, \dots, \boldsymbol{\mu}_1^{k-1})^\top = (\boldsymbol{\lambda}^1, \dots, \boldsymbol{\lambda}^{k-1})^\top$ and $\boldsymbol{\mu}_2 = (\boldsymbol{\mu}_2^1, \dots, \boldsymbol{\mu}_2^{k-1})^\top = (\boldsymbol{\delta}^1, \dots, \boldsymbol{\delta}^{k-1})^\top$ be the first and second halves of $\boldsymbol{\mu}$. Note that $\boldsymbol{\mu}_1^j = \boldsymbol{\lambda}^j$ is a vector, and likewise so is $\boldsymbol{\mu}_2^j = \boldsymbol{\delta}^j$. Let $\mathbf{1}$ be the vector of 1's, and finally, let $Q$ be the matrix holding two copies of the training data (so that eqn. (6) becomes $\mathbf{w} = Q\boldsymbol{\mu}$):
$$Q = \left[ -X^1, \dots, -X^{k-1}, X^2, \dots, X^k \right], \quad (9)$$
where $N = 2l - i_1 - i_k$. By substituting the expression $\mathbf{w} = Q\boldsymbol{\mu}$ back into the Lagrangian and taking into account the constraints (7, 8), one obtains the dual functional, which should be maximized with respect to the Lagrange multipliers $\mu_i$:
$$\max_{\boldsymbol{\mu}} \ \sum_{i=1}^{N} \mu_i - \frac{1}{2}\boldsymbol{\mu}^\top Q^\top Q\, \boldsymbol{\mu} \quad (10)$$
subject to
$$0 \le \mu_i \le C, \quad i = 1, \dots, N \quad (11)$$
$$\mathbf{1}\cdot\boldsymbol{\mu}_1^j = \mathbf{1}\cdot\boldsymbol{\mu}_2^j, \quad j = 1, \dots, k-1 \quad (12)$$
Note that when $k = 2$, i.e., we have only two classes and thus the ranking learning problem is equivalent to the 2-class classification problem, the dual functional reduces and becomes equivalent to the dual form of conventional SVM. In that case $(Q^\top Q)_{ij} = y_i y_j \mathbf{x}_i \cdot \mathbf{x}_j$, where $y_i, y_j = \pm 1$ denote the class membership.
Also worth noting is that, since the dual functional is a function of the Lagrange multipliers $\lambda_i^j$ and $\delta_i^j$ alone, the problem size (the number of unknown variables) is equal to twice the number of training examples, precisely $N = 2l - i_1 - i_k$ where $l$ is the number of training examples. This compares favorably to the $O(l^2)$ required by the recent SVM approach to ordinal regression introduced in [7], or the $kl$ required by the general multi-class approach to SVM [4, 8]. Further note that since the entries of $Q^\top Q$ are the inner-products of the training examples, they can be represented by the kernel inner-product in the input space dimension rather than by inner-products in the feature space dimension. The decision rule in this case, given a new instance vector $\mathbf{x}$, would be the rank $r$ corresponding to the first (smallest) threshold $b_r$ for which the kernel expansion of $\mathbf{w}\cdot\phi(\mathbf{x})$ over the support vectors falls below $b_r$, where $K(\mathbf{x}, \mathbf{y}) = \phi(\mathbf{x})\cdot\phi(\mathbf{y})$ replaces the inner-products in the higher-dimensional "feature" space $\phi(\mathbf{x})$. Finally, from the dual form one can solve for the Lagrange multipliers $\mu_i$ and in turn obtain $\mathbf{w} = Q\boldsymbol{\mu}$, the direction of the parallel hyperplanes. The scalar $b_q$ (separating the adjacent classes $q, q+1$ which are the closest apart) can be obtained from the support vectors, but the remaining scalars $b_j$ cannot. Therefore an additional stage is required, which amounts to a Linear Programming problem on the original primal functional (2), but this time $\mathbf{w}$ is already known (thus making this a linear problem instead of a quadratic one).

3 Sum-of-Margins Strategy

In this section we propose an alternative large-margin policy which allows for $k-1$ margins, where the criterion function maximizes their sum. The challenge in formulating the appropriate optimization functional is that one cannot adopt the "pre-scaling" of $\mathbf{w}$ approach which is at the center of the conventional SVM formulation and of the fixed-margin policy for ranking learning described in the previous section.
The approach we take is to represent the primal functional using $2(k-1)$ parallel hyperplanes instead of $k-1$. Each class would be "sandwiched" between two hyperplanes (except the first and last classes). Formally, we seek a ranking rule which employs a vector $\mathbf{w}$ and a set of $2(k-1)$ thresholds $a_1 \le b_1 \le a_2 \le b_2 \le \cdots \le a_{k-1} \le b_{k-1}$ such that $\mathbf{w}\cdot\mathbf{x}_i^j \le a_j$ and $\mathbf{w}\cdot\mathbf{x}_i^{j+1} \ge b_j$ for $j = 1, \dots, k-1$. In other words, all the examples of class $1 \le j \le k$ are "sandwiched" between two parallel hyperplanes $(\mathbf{w}, a_j)$ and $(\mathbf{w}, b_{j-1})$, where $b_0 = -\infty$ and $a_k = \infty$. The margin between the two hyperplanes separating class $j$ and $j+1$ is $(b_j - a_j)/\sqrt{\mathbf{w}\cdot\mathbf{w}}$. Thus, by setting the magnitude of $\mathbf{w}$ to be of unit length (as a constraint in the optimization problem), the margin which we would like to maximize is $\sum_j (b_j - a_j)$ for $j = 1, \dots, k-1$, which we can formulate in the following primal QLP (see also Fig. 1, righthand display):
$$\min \ \sum_{j=1}^{k-1}(a_j - b_j) + C \sum_j \sum_i \left(\epsilon_i^j + \epsilon_i^{*j+1}\right) \quad (13)$$
subject to
$$a_j \le b_j, \quad b_j \le a_{j+1}, \quad j = 1, \dots, k-2 \quad (14)$$
$$\mathbf{w}\cdot\mathbf{x}_i^j \le a_j + \epsilon_i^j, \qquad b_j - \epsilon_i^{*j+1} \le \mathbf{w}\cdot\mathbf{x}_i^{j+1} \quad (15)$$
$$\mathbf{w}\cdot\mathbf{w} \le 1 \quad (16)$$
$$\epsilon_i^j \ge 0, \quad \epsilon_i^{*j+1} \ge 0 \quad (17)$$
where $j = 1, \dots, k-1$ (unless otherwise specified) and $i = 1, \dots, i_j$, and $C$ is some predefined constant (whose physical role will be explained later). Note that the (non-convex) constraint $\mathbf{w}\cdot\mathbf{w} = 1$ is replaced by the convex constraint $\mathbf{w}\cdot\mathbf{w} \le 1$, since it can be shown that the optimal solution $\mathbf{w}^*$ would have unit magnitude in order to optimize the objective function (see [11] for details). We will proceed to derive the dual functional below. The Lagrangian takes the following form:
$$L(\cdot) = \sum_{j=1}^{k-1}(a_j - b_j) + C\sum_{i,j}\left(\epsilon_i^j + \epsilon_i^{*j+1}\right) + \sum_j \xi_j(a_j - b_j) + \sum_{j=1}^{k-2}\eta_j(b_j - a_{j+1}) + \sum_{i,j}\lambda_i^j\left(\mathbf{w}\cdot\mathbf{x}_i^j - a_j - \epsilon_i^j\right) + \sum_{i,j}\delta_i^j\left(b_j - \epsilon_i^{*j+1} - \mathbf{w}\cdot\mathbf{x}_i^{j+1}\right) + \alpha(\mathbf{w}\cdot\mathbf{w} - 1) - \sum_{i,j}\zeta_i^j\epsilon_i^j - \sum_{i,j}\zeta_i^{*j+1}\epsilon_i^{*j+1}$$
where $j = 1, \dots, k-1$ (unless otherwise specified), $i = 1, \dots, i_j$, and $\xi_j, \eta_j, \alpha, \zeta_i^j, \zeta_i^{*j+1}, \lambda_i^j, \delta_i^j$ are all non-negative Lagrange multipliers.
Due to lack of space we will omit further derivations (these can be found in [11]) and move directly to the dual functional, which takes the following form:
$$\max_{\boldsymbol{\mu}} \ -\frac{1}{2}\boldsymbol{\mu}^\top Q^\top Q\, \boldsymbol{\mu} \quad (18)$$
subject to
$$0 \le \mu_i \le C, \quad i = 1, \dots, N \quad (19)$$
$$\mathbf{1}\cdot\boldsymbol{\mu}_1^1 \ge 1, \qquad \mathbf{1}\cdot\boldsymbol{\mu}_2^{k-1} \ge 1 \quad (20)$$
$$\mathbf{1}\cdot\boldsymbol{\mu}_1^j = \mathbf{1}\cdot\boldsymbol{\mu}_2^j \quad (21)$$
where $Q$ and $\boldsymbol{\mu}$ are defined in the previous section. The direction $\mathbf{w}$ is represented by the linear combination of the support vectors, $\mathbf{w} = Q\boldsymbol{\mu}/\|Q\boldsymbol{\mu}\|$, where, following the Kuhn-Tucker theorem, $\mu_i > 0$ for all vectors on the boundaries between the adjacent pairs of classes and for margin errors. In other words, the vectors $\mathbf{x}$ associated with non-vanishing $\mu_i$ are those which lie on the hyperplanes or are tagged as margin errors. Therefore, all the thresholds $a_j, b_j$ can be recovered from the support vectors, unlike in the fixed-margin scheme, which required another LP pass. The dual functional (18) is similar to the dual functional (10) but with some crucial differences: (i) the quadratic criterion functional is homogeneous, and (ii) constraints (20) lead to the constraint $\sum_i \mu_i \ge 2$. These two differences are also what distinguishes conventional SVM from $\nu$-SVM for 2-class learning, proposed recently by [10]. Indeed, if we set $k = 2$ in the dual functional (18) we can conclude that the two dual functionals are identical (by a suitable change of variables). Therefore, the role of the constant $C$ complies with the findings of [10] by controlling the tradeoff between the number of margin errors and support vectors and the size of the margins: $2/N \le C \le 2$, such that when $C = 2$ a single margin error is allowed (otherwise a duality gap would occur) and when $C = 2/N$ all vectors are allowed to become margin errors and support vectors (see [11] for a detailed discussion on this point). In the general case of $k > 2$ classes (in the context of ranking learning) the role of the constant $C$ carries the same meaning: $C \le 2(k-1)/\#\text{m.e.}$, where $\#\text{m.e.}$
stands for the total number of margin errors; thus $2(k-1)/N \le C \le 2(k-1)$. Since a data point can count twice as a margin error, the total number of margin errors in the worst case is $N = 2l - i_1 - i_k$, where $l$ is the total number of data points.

Figure 2: The results of the fixed-margin principle plotted against the results of PRank of [5], which does not use a large-margin principle. The average error of PRank is about 1.25, compared to 0.7 with the fixed-margin algorithm.

4 Experiments

Due to lack of space we describe only two sets of experiments we conducted, on a "collaborative filtering" problem and on visual data ranking. More details and further experiments are reported in [11]. In general, the goal in collaborative filtering is to predict a person's rating on new items, such as movies, given the person's past ratings on similar items and the ratings of other people on all the items (including the new item). The ratings are ordered, such as "highly recommended", "good", ..., "very bad", thus collaborative filtering falls naturally under the domain of ordinal regression (rather than general multi-class learning). The "EachMovie" dataset [6] contains 1628 movies rated by 72,916 people, arranged as a 2D array whose columns represent the movies and whose rows represent the users; about 5% of the entries of this array are filled in with ratings between 0, ..., 6, totaling 2,811,983 ratings. Given a new user, the ratings of the user on the 1628 movies (not all movies would be rated) form the $y_i$, and the $i$'th column of the array forms the $\mathbf{x}_i$, which together form the training data (for that particular user). Given a new movie represented by the vector $\mathbf{x}$ of ratings of all the other 72,916 users (not all the users rated the new movie), the learning task is to predict the rating $f(\mathbf{x})$ of the new user.
Since the array contains empty entries, the ratings were shifted by −3.5 so that the possible ratings are {−2.5, −1.5, −0.5, 0.5, 1.5, 2.5}, which allows the value of zero to be assigned to the empty entries of the array (movies which were not rated). For the training phase we chose users who rated about 450 movies and selected a subset {50, 100, ..., 300} of those movies for training, testing the prediction on the remaining movies. We compared our results (collected over 100 runs), measured as the average distance between the correct rating and the predicted rating, to the best "on-line" algorithm of [5], called "PRank" (which makes no use of the large-margin principle). In their work, PRank was compared to other known on-line approaches and was found to be superior, thus we limited our comparison to PRank alone. Attempts to compare our algorithms to other known ranking algorithms which use a large-margin principle ([7], for example) were not successful, since those approaches square the training set size, which made the experiment with the EachMovie dataset computationally intractable. The graph in Fig. 2 shows that the large-margin principle makes a significant difference in the results compared to PRank. The results we obtained with PRank are consistent with the reported results of [5] (best average error of about 1.25), whereas our fixed-margin algorithm provided an average error of about 0.7. We have also applied our algorithms to classification of vehicle type into one of three classes: "small" (passenger cars), "medium" (SUVs, minivans) and "large" (buses, trucks).

Figure 3: Classification of vehicle type: Small, Medium and Large (see text for details).

There is a natural order Small, Medium, Large, since making a mistake between Small and Large is worse than confusing Small and Medium, for example. We compared the classification error (counting the number of misclassifications) to general multi-class learning using pair-wise SVM.
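The two error measures used in these experiments, the 0/1 misclassification rate and the average absolute difference between true and predicted rank, can be made concrete. The helper names and toy labels below are ours:

```python
import numpy as np

def zero_one_error(y_true, y_pred):
    """Fraction of misclassifications, as used for the multi-class comparison."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true != y_pred))

def mean_rank_error(y_true, y_pred):
    """Average |true rank - predicted rank|: confusing Small (1) with Large (3)
    costs 2, while an adjacent mistake costs only 1."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs(y_true - y_pred)))

truth = [1, 2, 3, 3]
pred  = [1, 3, 1, 3]            # one adjacent mistake, one Small/Large mistake
print(zero_one_error(truth, pred))   # 0.5
print(mean_rank_error(truth, pred))  # 0.75
```

Under the rank-error measure the Small/Large confusion is penalized twice as much, which is exactly why ordinal regression is the natural formulation here.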
The error over a test set of about 14,000 pictures was 20%, compared to 25% when using general multi-class SVM. We also compared the error (averaging the difference between the true rank {1, 2, 3} and the predicted rank, using a 2nd-order kernel) to PRank. The average error was 0.216, compared to 1.408 with PRank. Fig. 3 shows a typical collection of correctly classified and incorrectly classified pictures from the test set.

References
[1] B.E. Boser, I.M. Guyon, and V.N. Vapnik. A training algorithm for optimal margin classifiers. In Proc. of the 5th ACM Workshop on Computational Learning Theory, pages 144-152. ACM Press, 1992.
[2] C.C. Chang and C.J. Lin. Training ν-Support Vector classifiers: Theory and algorithms. Neural Computation, 14(8), 2002.
[3] W.W. Cohen, R.E. Schapire, and Y. Singer. Learning to order things. Journal of Artificial Intelligence Research (JAIR), 10:243-270, 1999.
[4] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research, 2:265-292, 2001.
[5] K. Crammer and Y. Singer. Pranking with ranking. In Proceedings of the conference on Neural Information Processing Systems (NIPS), 2001.
[6] http://www.research.compaq.com/SRC/eachmovie/
[7] R. Herbrich, T. Graepel, and K. Obermayer. Large margin rank boundaries for ordinal regression. Advances in Large Margin Classifiers, pages 115-132, 2000.
[8] Y. Lee, Y. Lin, and G. Wahba. Multicategory support vector machines. Technical Report 1043, Univ. of Wisconsin, Dept. of Statistics, Sep. 2001.
[9] P. McCullagh and J.A. Nelder. Generalized Linear Models. Chapman and Hall, London, 2nd edition, 1989.
[10] B. Schölkopf, A. Smola, R.C. Williamson, and P.L. Bartlett. New support vector algorithms. Neural Computation, 12:1207-1245, 2000.
[11] A. Shashua and A. Levin. Taxonomy of large margin principle algorithms for ordinal regression problems.
Technical Report 2002-39, Leibniz Center for Research, School of Computer Science and Eng., the Hebrew University of Jerusalem.
[12] V.N. Vapnik. The Nature of Statistical Learning Theory. Springer, 2nd edition, 1998.
[13] J. Weston and C. Watkins. Support vector machines for multi-class pattern recognition. In Proc. of the 7th European Symposium on Artificial Neural Networks, April 1999.
A Bilinear Model for Sparse Coding David B. Grimes and Rajesh P. N. Rao Department of Computer Science and Engineering University of Washington Seattle, WA 98195-2350, U.S.A. {grimes,rao}@cs.washington.edu Abstract Recent algorithms for sparse coding and independent component analysis (ICA) have demonstrated how localized features can be learned from natural images. However, these approaches do not take image transformations into account. As a result, they produce image codes that are redundant because the same feature is learned at multiple locations. We describe an algorithm for sparse coding based on a bilinear generative model of images. By explicitly modeling the interaction between image features and their transformations, the bilinear approach helps reduce redundancy in the image code and provides a basis for transformation-invariant vision. We present results demonstrating bilinear sparse coding of natural images. We also explore an extension of the model that can capture spatial relationships between the independent features of an object, thereby providing a new framework for parts-based object recognition. 1 Introduction Algorithms for redundancy reduction and efficient coding have been the subject of considerable attention in recent years [6, 3, 4, 7, 9, 5, 11]. Although the basic ideas can be traced to the early work of Attneave [1] and Barlow [2], recent techniques such as independent component analysis (ICA) and sparse coding have helped formalize these ideas and have demonstrated the feasibility of efficient coding through redundancy reduction. These techniques produce an efficient code by attempting to minimize the dependencies between elements of the code by using appropriate constraints. One of the most successful applications of ICA and sparse coding has been in the area of image coding.
Olshausen and Field showed that sparse coding of natural images produces localized, oriented basis filters that resemble the receptive fields of simple cells in primary visual cortex [6, 7]. Bell and Sejnowski obtained similar results using their algorithm for ICA [3]. However, these approaches do not take image transformations into account. As a result, the same oriented feature is often learned at different locations, yielding a redundant code. Moreover, the presence of the same feature at multiple locations prevents more complex features from being learned and leads to a combinatorial explosion when one attempts to scale the approach to large image patches or hierarchical networks. In this paper, we propose an approach to sparse coding that explicitly models the interaction between image features and their transformations. A bilinear generative model is used to learn both the independent features in an image as well as their transformations. Our approach extends Tenenbaum and Freeman’s work on bilinear models for learning content and style [12] by casting the problem within a probabilistic sparse coding framework. Thus, whereas prior work on bilinear models used global decomposition methods such as SVD, the approach presented here emphasizes the extraction of local features by removing higher-order redundancies through sparseness constraints. We show that for natural images, this approach produces localized, oriented filters that can be translated by different amounts to account for image features at arbitrary locations. Our results demonstrate how an image can be factored into a set of basic local features and their transformations, providing a basis for transformation-invariant vision. We conclude by discussing how the approach can be extended to allow parts-based object recognition, wherein an object is modeled as a collection of local features (or “parts”) and their relative transformations.
2 Bilinear Generative Models

We begin by considering the standard linear generative model used in algorithms for ICA and sparse coding [3, 7, 9]:

z = Σ_{i=1}^m w_i x_i   (1)

where z is a k-dimensional input vector (e.g. an image), w_i is a k-dimensional basis vector, and x_i is its scalar coefficient. Given the linear generative model above, the goal of ICA is to learn the basis vectors w_i such that the x_i are as independent as possible, while the goal in sparse coding is to make the distribution of x_i highly kurtotic given Equation 1. The linear generative model in Equation 1 can be extended to the bilinear case by using two independent sets of coefficients x_i and y_j (or equivalently, two vectors x and y) [12]:

z = Σ_{i=1}^m Σ_{j=1}^n w_ij x_i y_j   (2)

The coefficients x_i and y_j jointly modulate a set of basis vectors w_ij to produce an input vector z. For the present study, the coefficient x_i can be regarded as encoding the presence of object feature i in the image, while the y_j values determine the transformation present in the image. In the terminology of Tenenbaum and Freeman [12], x describes the “content” of the image while y encodes its “style.” Equation 2 can also be expressed as a linear equation in x for a fixed y:

z = Σ_{i=1}^m (Σ_{j=1}^n w_ij y_j) x_i = Σ_{i=1}^m w_i^y x_i   (3)

Likewise, for a fixed x, one obtains a linear equation in y. Indeed this is the definition of bilinear: given one fixed factor, the model is linear with respect to the other factor. The power of bilinear models stems from the rich non-linear interactions that can be represented by varying both x and y simultaneously.

3 Learning Sparse Bilinear Models

3.1 Learning Bilinear Models

Our goal is to learn from image data an appropriate set of basis vectors w_ij that effectively describe the interactions between the feature vector x and the transformation vector y. A commonly used approach in unsupervised learning is to minimize the sum of squared pixel-wise errors over all images:

E₁ = Σ_images ‖z − Σ_{i=1}^m Σ_{j=1}^n w_ij x_i y_j‖²   (4)
   = Σ_images ‖z − Σ_{i=1}^m w_i^y x_i‖²   (5)

where ‖·‖ denotes the L₂ norm of a vector.
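A minimal numerical check of the bilinear model, with toy sizes and variable names of our own choosing: the reconstruction of Equation 2, and its collapse, for a fixed style vector y, to a linear model in x as in Equation 3.

```python
import numpy as np

rng = np.random.default_rng(0)
k, m, n = 16, 4, 3                   # image dim, #features, #style coefficients
W = rng.normal(size=(m, n, k))       # basis vectors w_ij, each k-dimensional
x = np.array([1.0, 0.0, 0.0, 2.0])   # sparse "content" vector
y = np.array([0.0, 1.0, 0.0])        # sparse "style" vector

# Equation 2: z = sum_i sum_j w_ij * x_i * y_j
z = np.einsum('ijk,i,j->k', W, x, y)

# Equation 3: for fixed y, the model is linear in x with basis w_i^y
W_y = np.einsum('ijk,j->ik', W, y)   # translated basis w_i^y, shape (m, k)
z_linear = x @ W_y
assert np.allclose(z, z_linear)
```

Both expressions produce the same image vector, which is the bilinearity property the text describes.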
A standard approach to minimizing such a function is to use gradient descent and alternate between minimization with respect to (x, y) and minimization with respect to w_ij. Unfortunately, the optimization problem as stated is underconstrained. The function E₁ has many local minima, and results from our simulations indicate that convergence is difficult in many cases. There are many different ways to represent an image, making it difficult for the method to converge to a basis set that can generalize effectively. A related approach is presented by Tenenbaum and Freeman [12]. Rather than using gradient descent, their method estimates the parameters directly by computing the singular value decomposition (SVD) of a matrix containing input data corresponding to each content class in every style. Their approach can be regarded as an extension of methods based on principal component analysis (PCA) applied to the bilinear case. The SVD approach avoids the difficulties of convergence that plague the gradient descent method and is much faster in practice. Unfortunately, the learned features tend to be global and non-localized, similar to those obtained from PCA-based methods based on second-order statistics. As a result, the method is unsuitable for the problem of learning local features of objects and their transformations. The underconstrained nature of the problem can be remedied by imposing constraints on x and y. In particular, we could cast the problem within a probabilistic framework and impose specific prior distributions on x and y, with higher probabilities for values that achieve certain desirable properties.
We focus here on the class of sparse prior distributions for several reasons: (a) by forcing most of the coefficients to be zero for any given input, sparse priors minimize redundancy and encourage statistical independence between the various x_i and between the various y_j [7]; (b) there is growing evidence for sparse representations in the brain – the distribution of neural responses in visual cortical areas is highly kurtotic, i.e. a cell exhibits little activity for most inputs but responds vigorously for a few inputs, causing a distribution with a high peak near zero and long tails; (c) previous approaches based on sparseness constraints have obtained encouraging results [7]; and (d) enforcing sparseness on the x_i encourages the parts and local features shared across objects to be learned, while imposing sparseness on the y_j allows object transformations to be explained in terms of a small set of basic transformations.

3.2 Bilinear Sparse Coding

We assume the following priors for x_i and y_j:

P(x) = (1/Z_x) Π_i exp(−α S(x_i))   (6)
P(y) = (1/Z_y) Π_j exp(−β S(y_j))   (7)

where Z_x and Z_y are normalization constants, α and β are parameters that control the degree of sparseness, and S is a “sparseness function.” For this study, we used S(a) = log(1 + a²). Within a probabilistic framework, the squared error function E₁ summed over all images can be interpreted as representing the negative log likelihood of the data given the parameters, −log P(z|w, x, y) (see, for example, [7]). The priors P(x) and P(y) can be used to marginalize this likelihood to obtain the new likelihood function L = P(z|w) = ∫ P(z|w, x, y) P(x) P(y) dx dy. The goal then is to find the w_ij that maximize L, or equivalently, minimize the negative log of L.
Under certain reasonable assumptions (discussed in [7]), this is equivalent to minimizing the following optimization function over all input images:

E = Σ_images ( ‖z − Σ_{i=1}^m Σ_{j=1}^n w_ij x_i y_j‖² + α Σ_i S(x_i) + β Σ_j S(y_j) )   (8)

Gradient descent can be used to derive update rules for the components x_i and y_j of the feature vector x and transformation vector y respectively for any image, assuming a fixed basis w_ij. Writing r = z − Σ_{i,j} w_ij x_i y_j for the residual error and absorbing constant factors into the learning rate η, the updates take the form

x_i ← x_i + η ( (w_i^y)⊤ r − α S′(x_i) )   (9)
y_j ← y_j + η ( (Σ_i w_ij x_i)⊤ r − β S′(y_j) )   (10)

Given a training set of inputs, the values for x and y for each image after convergence can be used to update the basis set w_ij in batch mode according to:

w_ij ← w_ij + η ⟨ r x_i y_j ⟩   (11)

where ⟨·⟩ denotes an average over the batch. As suggested by Olshausen and Field [7], in order to keep the basis vectors from growing without bound, we adapted the norm of each basis vector in such a way that the variances of the x_i and y_j were maintained at a fixed desired level.

4 Results

4.1 Training Paradigm

We tested the algorithms for bilinear sparse coding on natural image data. The natural images we used are distributed by Olshausen and Field [7], along with the code for their algorithm. The training set consisted of patches randomly extracted from ten larger source images. The images are pre-whitened to equalize large variances in frequency, and thus speed convergence. We chose to use a complete basis, where m = k, and we let n be at least as large as the number of transformations (including the no-transformation case). In order to assist convergence, all learning occurred in batch mode over sets of image patches, with fixed sparseness parameters α and β and a fixed step size for gradient descent using Equation 11. The transformations were chosen to be 2D translations in the range of ±3 pixels along both axes.
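A single coefficient-update step consistent with the gradient descent described above can be sketched as follows. All sizes, step sizes, and helper names are illustrative; S(a) = log(1 + a²) as in the text, and the constant factors come from differentiating the squared error directly:

```python
import numpy as np

def sparse_step(W, z, x, y, alpha=0.1, beta=0.1, eta=1e-4):
    """One gradient-descent step on (x, y) for a single image z, with the
    basis W held fixed.  W: (m, n, k); x: (m,); y: (n,).
    Sparseness S(a) = log(1 + a^2), so S'(a) = 2a / (1 + a^2)."""
    dS = lambda a: 2 * a / (1 + a ** 2)
    resid = z - np.einsum('ijk,i,j->k', W, x, y)   # reconstruction error
    W_y = np.einsum('ijk,j->ik', W, y)             # w_i^y, shape (m, k)
    W_x = np.einsum('ijk,i->jk', W, x)             # sum_i w_ij x_i, shape (n, k)
    x_new = x - eta * (-2 * W_y @ resid + alpha * dS(x))   # step along -grad_x E
    y_new = y - eta * (-2 * W_x @ resid + beta * dS(y))    # step along -grad_y E
    return x_new, y_new
```

For a sufficiently small step size, each such step decreases the regularized error of Equation 8, which is the behavior the alternating-minimization scheme relies on.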
The style/content separation was enforced by learning a single x vector to describe an image patch regardless of its translation, and likewise a single y vector to describe a particular style given any image patch content.

4.2 Bilinear Sparse Coding of Natural Images

Figure 1 shows the results of training on natural image data. A comparison between the learned features for the linear generative model (Equation 1) and the bilinear model is provided in Figure 1 (a). Although both show simple, localized, and oriented features, the bilinear method is able to model the same features under different transformations. In this case, horizontal translations in the range of ±3 pixels were used in the training of the bilinear model. Figure 1 (b) provides an example of how the bilinear sparse coding model encodes a natural image patch and the same patch after it has been translated. Note that both the x and y vectors are sparse.

Figure 1: Representing natural images and their transformations with a sparse bilinear model. (a) A comparison of learned features between a standard linear model and a bilinear model, both trained with the same sparseness priors. The two rows for the bilinear case depict the translated object features w_i^y (see Equation 3) for translations ranging from −3 to +3 pixels. (b) The representation of an example natural image patch, and of the same patch translated to the left. Note that the bar plot representing the x vector is indeed sparse, having only three significant coefficients. The code for the style vectors, for both the canonical patch and the translated one, is likewise sparse. The w_ij basis images are shown for those dimensions which have non-zero coefficients for x_i or y_j.
Figure 2 shows how the model can account for a given localized feature at different locations by varying the y vector. As shown in the last column of the figure, the translated local feature is generated by linearly combining a sparse set of basis vectors w_ij.

Figure 2: Translating a learned feature to multiple locations. The two rows of eight images represent the individual basis vectors w_ij for two values of i (Feature 1, x_57, and Feature 2, x_32). The y_j values for two selected transformations for each i are shown as bar plots. y(a, b) denotes a translation of (a, b) pixels in the Cartesian plane. The last column shows the resulting basis vectors after translation.

4.3 Towards Parts-Based Object Recognition

The bilinear generative model in Equation 2 uses the same set of transformation values y_j for all the features i = 1, …, m. Such a model is appropriate for global transformations that apply to an entire image region, such as a shift of the whole image patch or a global illumination change. Consider the problem of representing an object in terms of its constituent parts. In this case, we would like to be able to transform each part independently of other parts in order to account for the location, orientation, and size of each part in the object image. The standard bilinear model can be extended to address this need as follows:

z = Σ_{i=1}^m Σ_{j=1}^n w_ij x_i y_j^(i)   (12)

Note that each object feature i now has its own set of transformation values y_j^(i). The double summation is thus no longer symmetric. Also note that the standard model (Equation 2) is a special case of Equation 12 where y_j^(i) = y_j for all i. We have conducted preliminary experiments to test the feasibility of Equation 12 using a set of object features learned for the standard bilinear model. Fig. 3 shows the results.
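Schematically, the extension in Equation 12 replaces the shared style vector y with one style vector per feature. In the toy sketch below (our notation, toy sizes), row i of the matrix Y plays the role of y^(i):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 3, 4, 16
W = rng.normal(size=(m, n, k))       # basis vectors w_ij
x = np.array([1.0, 0.5, 0.0])        # content coefficients
Y = rng.normal(size=(m, n))          # row i = transformation vector y^(i)

# Equation 12: z = sum_i sum_j w_ij * x_i * y_j^(i)
z_parts = np.einsum('ijk,i,ij->k', W, x, Y)

# The standard model (Equation 2) is the special case of identical rows:
y_shared = Y[0]
z_standard = np.einsum('ijk,i,j->k', W, x, y_shared)
assert np.allclose(
    np.einsum('ijk,i,ij->k', W, x, np.tile(y_shared, (m, 1))), z_standard)
```

When the rows of Y differ, each feature is transformed independently, which is what allows the parts of an object to move relative to one another.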
These results suggest that allowing independent transformations for the different features provides a rich substrate for modeling images and objects in terms of a set of local features (or parts) and their individual transformations.

Figure 3: Modeling independently transformed features. (a) shows the standard bilinear method of generating a translated feature by combining basis vectors w_ij using the same set of y_j values for two different features (x_57 and x_81). (b) shows four examples of images generated by allowing different values of y_j^(i) for the two different features. Note the significant differences between the resulting images, which cannot be obtained using the standard bilinear model.

5 Summary and Conclusion

A fundamental problem in vision is to simultaneously recognize objects and their transformations [8, 10]. Bilinear generative models provide a tractable way of addressing this problem by factoring an image into object features and transformations using a bilinear equation. Previous approaches used unconstrained bilinear models and produced global basis vectors for image representation [12]. In contrast, recent research on image coding has stressed the importance of localized, independent features derived from metrics that emphasize the higher-order statistics of inputs [6, 3, 7, 5]. This paper introduces a new probabilistic framework for learning bilinear generative models based on the idea of sparse coding. Our results demonstrate that bilinear sparse coding of natural images produces localized oriented basis vectors that can simultaneously represent features in an image and their transformation. We showed how the learned generative model can be used to translate a
basis vector to different locations, thereby reducing the need to learn the same basis vector at multiple locations as in traditional sparse coding methods. We also proposed an extension of the bilinear model that allows each feature to be transformed independently of other features. Our preliminary results suggest that such an approach could provide a flexible platform for adaptive parts-based object recognition, wherein objects are described by a set of independent, shared parts and their transformations. The importance of parts-based methods has long been recognized in object recognition in view of their ability to handle a combinatorially large number of objects by combining parts and their transformations. Few methods, if any, exist for learning representations of object parts and their transformations directly from images. Our ongoing efforts are therefore focused on deriving efficient algorithms for parts-based object recognition based on the combination of bilinear models and sparse coding. Acknowledgments This research is supported by NSF grant no. 133592 and a Sloan Research Fellowship to RPNR. References [1] F. Attneave. Some informational aspects of visual perception. Psychological Review, 61(3):183–193, 1954. [2] H. B. Barlow. Possible principles underlying the transformation of sensory messages. In W. A. Rosenblith, editor, Sensory Communication, pages 217–234. Cambridge, MA: MIT Press, 1961. [3] A. J. Bell and T. J. Sejnowski. The ‘independent components’ of natural scenes are edge filters. Vision Research, 37(23):3327–3338, 1997. [4] G. E. Hinton and Z. Ghahramani. Generative models for discovering sparse distributed representations. Philosophical Transactions of the Royal Society B, 352:1177–1190, 1997. [5] M. S. Lewicki and T. J. Sejnowski. Learning overcomplete representations. Neural Computation, 12(2):337–365, 2000. [6] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images.
Nature, 381:607–609, 1996. [7] B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37:3311–3325, 1997. [8] R. P. N. Rao and D. H. Ballard. Development of localized oriented receptive fields by learning a translation-invariant code for natural images. Network: Computation in Neural Systems, 9(2):219–234, 1998. [9] R. P. N. Rao and D. H. Ballard. Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive field effects. Nature Neuroscience, 2(1):79–87, 1999. [10] R. P. N. Rao and D. L. Ruderman. Learning Lie groups for invariant visual perception. In Advances in Neural Information Processing Systems 11, pages 810–816. Cambridge, MA: MIT Press, 1999. [11] O. Schwartz and E. P. Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience, 4(8):819–825, August 2001. [12] J. B. Tenenbaum and W. T. Freeman. Separating style and content with bilinear models. Neural Computation, 12(6):1247–1283, 2000.
Data-Dependent Bounds for Bayesian Mixture Methods Ron Meir Department of Electrical Engineering Technion, Haifa 32000, Israel rmeir@ee.technion.ac.il Tong Zhang IBM T.J. Watson Research Center Yorktown Heights, NY 10598, USA tzhang@watson.ibm.com Abstract We consider Bayesian mixture approaches, where a predictor is constructed by forming a weighted average of hypotheses from some space of functions. While such procedures are known to lead to optimal predictors in several cases, where sufficiently accurate prior information is available, it has not been clear how they perform when some of the prior assumptions are violated. In this paper we establish data-dependent bounds for such procedures, extending previous randomized approaches such as the Gibbs algorithm to a fully Bayesian setting. The finite-sample guarantees established in this work enable the utilization of Bayesian mixture approaches in agnostic settings, where the usual assumptions of the Bayesian paradigm fail to hold. Moreover, the bounds derived can be directly applied to non-Bayesian mixture approaches such as Bagging and Boosting. 1 Introduction and Motivation The standard approach to Computational Learning Theory is usually formulated within the so-called frequentist approach to Statistics. Within this paradigm one is interested in constructing an estimator, based on a finite sample, which possesses a small loss (generalization error). While many algorithms have been constructed and analyzed within this context, it is not clear how these approaches relate to standard optimality criteria within the frequentist framework. Two classic optimality criteria within the latter approach are the minimax and admissibility criteria, which characterize optimality of estimators in a rigorous and precise fashion [9]. Except in some special cases [12], it is not known whether any of the approaches used within the Learning community lead to optimality in either of the above senses of the word. 
On the other hand, it is known that under certain regularity conditions, Bayesian estimators lead to either minimax or admissible estimators, and thus to well-defined optimality in the classical (frequentist) sense. In fact, it can be shown that Bayes estimators are essentially the only estimators which can achieve optimality in the above senses [9]. This optimality feature provides strong motivation for the study of Bayesian approaches in a frequentist setting. While Bayesian approaches have been widely studied, there have not been generally applicable bounds in the frequentist framework. Recently, several approaches have attempted to address this problem. In this paper we establish finite-sample data-dependent bounds for Bayesian mixture methods, which together with the above optimality properties suggest that these approaches should become more widely used. Consider the problem of supervised learning where we attempt to construct an estimator based on a finite sample of pairs of examples S = {(x1, y1), . . . , (xn, yn)}, each drawn independently according to an unknown distribution µ(x, y). Let A be a learning algorithm which, based on the sample S, constructs a hypothesis (estimator) h from some set of hypotheses H. Denoting by ℓ(y, h(x)) the instantaneous loss of the hypothesis h, we wish to assess the true loss L(h) = E_µ ℓ(y, h(x)), where the expectation is taken with respect to µ. In particular, the objective is to provide data-dependent bounds of the following form. For any h ∈ H and δ ∈ (0, 1), with probability at least 1 − δ,

L(h) ≤ Λ(h, S) + ∆(h, S, δ),   (1)

where Λ(h, S) is some empirical assessment of the true loss, and ∆(h, S, δ) is a complexity term. For example, in the classic Vapnik-Chervonenkis framework, Λ(h, S) is the empirical error (1/n) Σ_{i=1}^n ℓ(y_i, h(x_i)) and ∆(h, S, δ) depends on the VC-dimension of H but is independent of both the hypothesis h and the sample S.
By algorithm and data-dependent bounds we mean bounds where the complexity term depends on both the hypothesis (chosen by the algorithm A) and the sample S.

2 A Decision Theoretic Bayesian Framework

Consider a decision theoretic setting where we define the sample-dependent loss of an algorithm A by R(µ, A, S) = E_µ ℓ(y, A(x, S)). Let θ_µ be the optimal predictor for y, namely the function minimizing E_µ{ℓ(y, φ(x))} over φ. It is clear that the best algorithm A (Bayes algorithm) is the one that always returns θ_µ, assuming µ is known. We are interested in the expected loss of an algorithm averaged over samples S:

R(µ, A) = E_S R(µ, A, S) = ∫ R(µ, A, S) dµ(S),

where the expectation is taken with respect to the sample S drawn i.i.d. from the probability measure µ. If we consider a family of measures µ, which possesses some underlying prior distribution π(µ), then we can construct the averaged risk function with respect to the prior as

r(π, A) = E_π R(µ, A) = ∫ dµ(S) dπ(µ) ∫ R(µ, A, S) dπ(µ|S),

where dπ(µ|S) = dµ(S) dπ(µ) / ∫_µ dµ(S) dπ(µ) is the posterior distribution on the µ family, which induces a posterior distribution on the sample space as π_S = E_{π(µ|S)} µ. An algorithm minimizing the Bayes risk r(π, A) is referred to as a Bayes algorithm. In fact, for a given prior and a given sample S, the optimal algorithm should return the Bayes optimal predictor with respect to the posterior measure π_S. For many important practical problems, the optimal Bayes predictor is a linear functional of the underlying probability measure. For example, if the loss function is quadratic, namely ℓ(y, A(x)) = (y − A(x))², then the optimal Bayes predictor θ_µ(x) is the conditional mean of y, namely E_µ[y|x]. For binary classification problems, we can let the predictor be the conditional probability θ_µ(x) = µ(y = 1|x) (the optimal classification decision rule then corresponds to a test of whether θ_µ(x) > 0.5), which is also a linear functional of µ.
Clearly, if the Bayes predictor is a linear functional of the probability measure, then the optimal Bayes algorithm with respect to the prior π is given by

A_B(x, S) = ∫_µ θ_µ(x) dπ(µ|S) = ∫_µ θ_µ(x) dµ(S) dπ(µ) / ∫_µ dµ(S) dπ(µ).   (2)

In this case, an optimal Bayesian algorithm can be regarded as the predictor constructed by averaging over all predictors with respect to a data-dependent posterior π(µ|S). We refer to such methods as Bayesian mixture methods. While the Bayes estimator A_B(x, S) is optimal with respect to the Bayes risk r(π, A), it can be shown that under appropriate conditions (and an appropriate prior) it is also a minimax and admissible estimator [9]. In general, θ_µ is unknown. Rather, we may have some prior information about possible models for θ_µ. In view of (2) we consider a hypothesis space H, and an algorithm based on a mixture of hypotheses h ∈ H. This should be contrasted with classical approaches where an algorithm selects a single hypothesis h from H. For simplicity, we consider a countable hypothesis space H = {h1, h2, . . .}; the general case will be deferred to the full paper. Let q = {q_j}_{j=1}^∞ be a probability vector, namely q_j ≥ 0 and Σ_j q_j = 1, and construct the composite predictor by f_q(x) = Σ_j q_j h_j(x). Observe that in general f_q(x) may be a great deal more complex than any single hypothesis h_j. For example, if the h_j(x) are non-polynomial ridge functions, the composite predictor f_q corresponds to a two-layer neural network with universal approximation power. We denote by Q the probability distribution defined by q, namely Σ_j q_j h_j = E_{h∼Q} h. A main feature of this work is the establishment of data-dependent bounds on L(E_{h∼Q} h), the loss of the Bayes mixture algorithm. There has been a flurry of recent activity concerning data-dependent bounds (a non-exhaustive list includes [2, 3, 5, 11, 13]).
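The composite predictor f_q(x) = Σ_j q_j h_j(x) is simply a weighted average of base hypotheses with weights on the probability simplex; a toy sketch, with hypotheses and weights of our own choosing:

```python
import numpy as np

# A small countable hypothesis "space": simple ridge-like functions of x.
hypotheses = [np.tanh, np.sin, lambda x: x]

def mixture_predict(q, x):
    """Mixture prediction f_q(x) = sum_j q_j h_j(x) for q on the simplex."""
    q = np.asarray(q, dtype=float)
    assert np.all(q >= 0) and np.isclose(q.sum(), 1.0)  # q is a probability vector
    return sum(qj * h(x) for qj, h in zip(q, hypotheses))

q = [0.5, 0.3, 0.2]
print(mixture_predict(q, 0.0))   # -> 0.0, since every h_j vanishes at 0
```

Even this tiny mixture is a function none of the individual hypotheses can represent, which is the sense in which f_q can be far more complex than any single h_j.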
In a related vein, McAllester [7] provided a data-dependent bound for the so-called Gibbs algorithm, which selects a hypothesis at random from H based on the posterior distribution π(h|S). Essentially, this result provides a bound on the average error E_{h∼Q} L(h) rather than a bound on the error of the averaged hypothesis. Later, Langford et al. [6] extended this result to a mixture of classifiers using a margin-based loss function. A more general result can also be obtained using the covering number approach described in [14]. Finally, Herbrich and Graepel [4] showed that under certain conditions the bounds for the Gibbs classifier can be extended to a Bayesian mixture classifier. However, their bound contained an explicit dependence on the dimension (see Thm. 3 in [4]). Although the approach pioneered by McAllester came to be known as PAC-Bayes, this term is somewhat misleading, since an optimal Bayesian method (in the decision theoretic framework outlined above) does not average over loss functions but rather over hypotheses. In this regard, the learning behavior of a true Bayesian method is not addressed in the PAC-Bayes analysis. In this paper, we would like to narrow the discrepancy by analyzing Bayesian mixture methods, where we consider a predictor that is the average of a family of predictors with respect to a data-dependent posterior distribution. Bayesian mixtures can often be regarded as a good approximation to a true optimal Bayesian method. In fact, we have shown above that they are equivalent for many important practical problems. Therefore the main contribution of the present work is the extension of the above mentioned results in PAC-Bayes analysis to a rather unified setting for Bayesian mixture methods, where different regularization criteria may be incorporated and their effect on the performance easily assessed.
Furthermore, it is also essential that the bounds obtained are dimension-independent, since otherwise they yield useless results when applied to kernel-based methods, which often map the input space into a space of very high dimensionality. Similar results can also be obtained using the covering number analysis in [14]. However the approach presented in the current paper, which relies on the direct computation of the Rademacher complexity, is more direct and gives better bounds. The analysis is also easier to generalize than the corresponding covering number approach. Moreover, our analysis applies directly to other non-Bayesian mixture approaches such as Bagging and Boosting. Before moving to the derivation of our bounds, we formalize our approach. Consider a countable hypothesis space H = {h_j}_{j=1}^∞, and a probability distribution {q_j} over H. Introduce the vector notation Σ_{k=1}^∞ q_k h_k(x) = q⊤h(x). A learning algorithm within the Bayesian mixture framework uses the sample S to select a distribution Q over H and then constructs a mixture hypothesis f_q(x) = q⊤h(x). In order to constrain the class of mixtures used in constructing the mixture q⊤h we impose constraints on the mixture vector q. Let g(q) be a non-negative convex function of q and define for any positive A,

Ω_A = {q ∈ S : g(q) ≤ A} ; F_A = { f_q : f_q(x) = q⊤h(x), q ∈ Ω_A }, (3)

where S denotes the probability simplex. In subsequent sections we will consider different choices for g(q), which essentially acts as a regularization term. Finally, for any mixture q⊤h we define the loss by L(q⊤h) = E_µ ℓ(y, (q⊤h)(x)) and the empirical loss incurred on the sample by L̂(q⊤h) = (1/n) Σ_{i=1}^n ℓ(y_i, (q⊤h)(x_i)).

3 A Mixture Algorithm with an Entropic Constraint

In this section we consider an entropic constraint, which penalizes weights deviating significantly from some prior probability distribution ν = {ν_j}_{j=1}^∞, which may incorporate our prior information about the problem.
The weights q themselves are chosen by the algorithm based on the data. In particular, in this section we set g(q) to be the Kullback-Leibler divergence of q from ν,

g(q) = D(q∥ν) ; D(q∥ν) = Σ_j q_j log(q_j/ν_j).

Let F be a class of real-valued functions, and denote by σ_i independent Bernoulli random variables assuming the values ±1 with equal probability. We define the data-dependent Rademacher complexity of F as

R̂_n(F) = E_σ [ sup_{f∈F} (1/n) Σ_{i=1}^n σ_i f(x_i) | S ].

The expectation of R̂_n(F) with respect to S will be denoted by R_n(F). We note that R̂_n(F) is concentrated around its mean value R_n(F) (e.g., Thm. 8 in [1]). We quote a slightly adapted result from [5].

Theorem 1 (Adapted from Theorem 1 in [5]) Let {x_1, x_2, . . . , x_n} ∈ X be a sequence of points generated independently at random according to a probability distribution P, and let F be a class of measurable functions from X to R. Furthermore, let φ be a non-negative Lipschitz function with Lipschitz constant κ, such that φ∘f is uniformly bounded by a constant M. Then for all f ∈ F with probability at least 1 − δ

E φ(f(x)) − (1/n) Σ_{i=1}^n φ(f(x_i)) ≤ 4κ R_n(F) + M √( log(1/δ) / 2n ).

An immediate consequence of Theorem 1 is the following.

Lemma 3.1 Let the loss function ℓ be bounded by M, and assume that it is Lipschitz with constant κ. Then for all q ∈ Ω_A with probability at least 1 − δ

L(q⊤h) ≤ L̂(q⊤h) + 4κ R_n(F_A) + M √( log(1/δ) / 2n ).

Next, we bound the empirical Rademacher average of F_A using g(q) = D(q∥ν).

Lemma 3.2 The empirical Rademacher complexity of F_A is upper bounded as follows:

R̂_n(F_A) ≤ √( 2A / n ) · sup_j √( (1/n) Σ_{i=1}^n h_j(x_i)² ).

Proof: We first recall a few facts from the theory of convex duality [10]. Let p(u) be a convex function over a domain U, and set its dual s(z) = sup_{u∈U} ( u⊤z − p(u) ). It is known that s(z) is also convex. Setting u = q and p(q) = Σ_j q_j log(q_j/ν_j) we find that s(z) = log Σ_j ν_j e^{z_j}. From the definition of s(z) it follows that for any q ∈ S,

q⊤z ≤ Σ_j q_j log(q_j/ν_j) + log Σ_j ν_j e^{z_j}.
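Lemma 3.2 can be checked numerically. The sketch below estimates the empirical Rademacher complexity of F_A by Monte Carlo, computing the inner supremum over Ω_A via exponential tilting of ν (the maximizer of a linear functional under a KL constraint has the form q_j ∝ ν_j e^{λ z_j}), and compares it to the bound; the hypothesis values H, the sample size, and the budget A are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

n, J, A = 40, 8, 0.5                      # sample size, #hypotheses, KL budget
H = rng.uniform(-1, 1, size=(J, n))       # stand-in values h_j(x_i)
nu = np.full(J, 1.0 / J)                  # prior ν over hypotheses

def kl(q):
    return float(np.sum(q * np.log(q / nu)))

def sup_over_omega(z, A):
    """max_{q : D(q||ν) <= A} of q·z, via the tilted family q_j ∝ ν_j e^{λ z_j}."""
    lo, hi = 0.0, 200.0
    q = nu
    for _ in range(80):                   # binary search on the tilt λ
        lam = 0.5 * (lo + hi)
        q = nu * np.exp(lam * (z - z.max()))
        q /= q.sum()
        if kl(q) < A:
            lo = lam
        else:
            hi = lam
    return float(q @ z)

# Monte Carlo estimate of the empirical Rademacher complexity of F_A ...
est = np.mean([sup_over_omega((rng.choice([-1.0, 1.0], size=n) @ H.T) / n, A)
               for _ in range(400)])

# ... versus the bound of Lemma 3.2: sqrt(2A/n) * sup_j sqrt((1/n) Σ_i h_j(x_i)²).
bound = np.sqrt(2 * A / n) * np.sqrt((H ** 2).mean(axis=1)).max()
```

On such instances the Monte Carlo estimate sits below the bound, with the gap reflecting the slack introduced by the Jensen and Chernoff steps of the proof.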
Since z is arbitrary, we set z = (λ/n) Σ_i σ_i h(x_i) and conclude that for q ∈ Ω_A and any λ > 0

sup_{q∈Ω_A} { (1/n) Σ_{i=1}^n σ_i q⊤h(x_i) } ≤ (1/λ) [ A + log Σ_j ν_j exp( (λ/n) Σ_i σ_i h_j(x_i) ) ].

Taking the expectation with respect to σ, and using the Chernoff bound E_σ { exp( Σ_i σ_i a_i ) } ≤ exp( Σ_i a_i² / 2 ), we have that

R̂_n(F_A) ≤ (1/λ) { A + E_σ log Σ_j ν_j exp( (λ/n) Σ_i σ_i h_j(x_i) ) }
≤ (1/λ) { A + sup_j log E_σ exp( (λ/n) Σ_i σ_i h_j(x_i) ) } (Jensen)
≤ (1/λ) { A + sup_j log exp( (λ²/n²) Σ_i h_j(x_i)² / 2 ) } (Chernoff)
= A/λ + (λ/2n²) sup_j Σ_i h_j(x_i)².

Minimizing the r.h.s. with respect to λ, we obtain the desired result. □

Combining Lemmas 3.1 and 3.2 yields our basic bound, where κ and M are defined in Lemma 3.1.

Theorem 2 Let S = {(x_1, y_1), . . . , (x_n, y_n)} be a sample of i.i.d. points each drawn according to a distribution µ(x, y). Let H be a countable hypothesis class, and set F_A to be the class defined in (3) with g(q) = D(q∥ν). Set ∆_H = [ (1/n) E_µ sup_j Σ_{i=1}^n h_j(x_i)² ]^{1/2}. Then for any q ∈ Ω_A with probability at least 1 − δ

L(q⊤h) ≤ L̂(q⊤h) + 4κ∆_H √( 2A/n ) + M √( log(1/δ) / 2n ).

Note that if the h_j are uniformly bounded, h_j ≤ c, then ∆_H ≤ c. Theorem 2 holds for a fixed value of A. Using the so-called multiple testing Lemma (e.g. [11]) we obtain:

Corollary 3.1 Let the assumptions of Theorem 2 hold, and let {A_i, p_i} be a set of positive numbers such that Σ_i p_i = 1. Then for all A_i and q ∈ Ω_{A_i} with probability at least 1 − δ,

L(q⊤h) ≤ L̂(q⊤h) + 4κ∆_H √( 2A_i/n ) + M √( log(1/(p_i δ)) / 2n ).

Note that the only distinction with Theorem 2 is the extra factor of log p_i, which is the price paid for the uniformity of the bound. Finally, we present a data-dependent bound of the form (1).

Theorem 3 Let the assumptions of Theorem 2 hold. Then for all q ∈ S with probability at least 1 − δ,

L(q⊤h) ≤ L̂(q⊤h) + max(κ∆_H, M) × √( (130 D(q∥ν) + log(1/δ)) / n ). (4)

Proof sketch Pick A_i = 2^i and p_i = 1/(i(i + 1)), i = 1, 2, . . . (note that Σ_i p_i = 1).
For each q, let i(q) be the smallest index for which A_{i(q)} ≥ D(q∥ν), implying that log(1/p_{i(q)}) ≤ 2 log log_2(4D(q∥ν)). A few lines of algebra, to be presented in the full paper, yield the desired result. □

The results of Theorem 3 can be compared to those derived by McAllester [8] for the randomized Gibbs procedure. In the latter case, the first term on the r.h.s. is E_{h∼Q} L̂(h), namely the average empirical error of the base classifiers h. In our case the corresponding term is L̂(E_{h∼Q} h), namely the empirical error of the average hypothesis. Since E_{h∼Q} h is potentially much more complex than any single h ∈ H, we expect that the empirical term in (4) is much smaller than the corresponding term in [8]. Moreover, the complexity term we obtain is in fact tighter than the corresponding term in [8] by a logarithmic factor in n (although the logarithmic factor in [8] could probably be eliminated). We thus expect that the Bayesian mixture approach advocated here leads to better performance guarantees. Finally, we comment that Theorem 3 can be used to obtain so-called oracle inequalities. In particular, let q∗ be the optimal distribution minimizing L(q⊤h), which can only be computed if the underlying distribution µ(x, y) is known. Consider an algorithm which, based only on the data, selects a distribution q̂ by minimizing the r.h.s. of (4), with the implicit constants appropriately specified. Then, using standard approaches (e.g. [2]) we can obtain a bound on L(q̂⊤h) − L(q∗⊤h). For lack of space, we defer the derivation of the precise bound to the full paper.

4 General Data-Dependent Bounds for Bayesian Mixtures

The Kullback-Leibler divergence is but one way to incorporate prior information. In this section we extend the results to general convex regularization functions g(q). Some possible choices for g(q) besides the Kullback-Leibler divergence are the standard Lp norms ∥q∥_p.
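Returning to the entropic bound of Theorem 3, the oracle-style selection rule described there — choosing q̂ by minimizing the right-hand side of (4) over a family of candidate posteriors — can be sketched as follows. The base classifiers, the clipped hinge loss (bounded by M = 1 and 1-Lipschitz), the family of exponentially tilted candidates, and the verbatim use of the constant 130 from Theorem 3 are all illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

n, J = 200, 6
X = rng.normal(size=(n, 2))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=n))

# Hypothetical base classifiers: sign of projections onto random directions.
W = rng.normal(size=(J, 2))
W[0] = [1.0, 0.0]                       # include one well-aligned direction
preds = np.sign(X @ W.T)                # preds[i, j] = h_j(x_i)

nu = np.full(J, 1.0 / J)
M, delta = 1.0, 0.05                    # loss bound and confidence level

def rhs(q):
    """Right-hand side of the bound (4) for mixture weights q."""
    f = preds @ q                                   # mixture margin q·h(x)
    emp = np.mean(np.clip(1 - y * f, 0, 1))         # clipped hinge loss in [0, 1]
    d = np.sum(q * np.log(np.maximum(q, 1e-300) / nu))
    return emp + M * np.sqrt((130 * d + np.log(1 / delta)) / n)

# Candidate posteriors: exponential tilts of ν toward low training error.
errs = np.array([np.mean(np.clip(1 - y * preds[:, j], 0, 1)) for j in range(J)])
candidates = [np.full(J, 1.0 / J)] + [
    (nu * np.exp(-b * errs)) / np.sum(nu * np.exp(-b * errs))
    for b in (1.0, 4.0, 16.0, 64.0)]
best = min(candidates, key=rhs)
```

The KL term penalizes candidates that concentrate too sharply on a single empirically good classifier, so the minimizer of (4) trades empirical fit against deviation from the prior ν.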
In order to proceed along the lines of Section 3, we let s(z) be the convex function associated with g(q), namely s(z) = sup_{q∈Ω_A} { q⊤z − g(q) }. Repeating the arguments of Section 3 we have for any λ > 0 that (1/n) Σ_{i=1}^n σ_i q⊤h(x_i) ≤ (1/λ) { A + s( (λ/n) Σ_i σ_i h(x_i) ) }, which implies that

R̂_n(F_A) ≤ inf_{λ≥0} (1/λ) { A + E_σ s( (λ/n) Σ_i σ_i h(x_i) ) }. (5)

Assume that s(z) is second order differentiable, and that for any h = Σ_{i=1}^n σ_i h(x_i)

(1/2)( s(h + ∆h) + s(h − ∆h) ) − s(h) ≤ u(∆h).

Then, assuming that s(0) = 0, it is easy to show by induction that

E_σ s( (λ/n) Σ_{i=1}^n σ_i h(x_i) ) ≤ Σ_{i=1}^n u( (λ/n) h(x_i) ). (6)

In the remainder of the section we focus on the case of regularization based on the Lp norm. Consider p and q such that 1/q + 1/p = 1, p ∈ (1, ∞), and let p′ = max(p, 2) and q′ = min(q, 2). Note that if p ≤ 2 then q ≥ 2, q′ = p′ = 2, and if p > 2 then q < 2, q′ = q, p′ = p. Consider p-norm regularization g(q) = (1/p′) ∥q∥_p^{p′}, in which case s(z) = (1/q′) ∥z∥_q^{q′}. The Rademacher averaging result for p-norm regularization is known in the geometric theory of Banach spaces (type structure of the Banach space), and it also follows from Khintchine's inequality. We show that it can be easily obtained in our framework. In this case, it is easy to see that s(z) = (1/q′) ∥z∥_q^{q′} implies u(h(x)) ≤ ((q−1)/q′) ∥h(x)∥_q^{q′}. Substituting in (5) we have

R̂_n(F_A) ≤ inf_{λ≥0} (1/λ) { A + ((q−1)/q′) (λ/n)^{q′} Σ_{i=1}^n ∥h(x_i)∥_q^{q′} } = (C_q / n^{1/p′}) A^{1/p′} ( (1/n) Σ_{i=1}^n ∥h(x_i)∥_q^{q′} )^{1/q′}

where C_q = ((q−1)/q′)^{1/q′}. Combining this result with the methods described in Section 3, we establish a bound for regularization based on the Lp norm. Assume that ∥h(x_i)∥_q is finite for all i, and set ∆_{H,q} = ( E { (1/n) Σ_{i=1}^n ∥h(x_i)∥_q^{q′} } )^{1/q′}.

Theorem 4 Let the conditions of Theorem 3 hold and set g(q) = (1/p′) ∥q∥_p^{p′}, p ∈ (1, ∞). Then for all q ∈ S, with probability at least 1 − δ,

L(q⊤h) ≤ L̂(q⊤h) + max(κ∆_{H,q}, M) × O( ∥q∥_p / n^{1/p′} + √( (log log(∥q∥_p + 3) + log(1/δ)) / n ) )

where O(·) hides a universal constant that depends only on p.
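The Fenchel–Young inequality q⊤z ≤ g(q) + s(z) that drives both the entropic analysis of Section 3 (the KL / log-sum-exp pair) and the Lp pair above is easy to confirm numerically; the sketch below checks both conjugate pairs on random vectors, with the dimensions and number of trials chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(3)

def fenchel_gap_lp(q, z, p):
    """g(q) + s(z) - q·z with g = (1/p')||q||_p^{p'}, s = (1/q')||z||_q^{q'}."""
    qq = p / (p - 1.0)                    # conjugate exponent, 1/p + 1/qq = 1
    pprime, qprime = max(p, 2.0), min(qq, 2.0)
    g = np.sum(np.abs(q) ** p) ** (pprime / p) / pprime
    s = np.sum(np.abs(z) ** qq) ** (qprime / qq) / qprime
    return g + s - q @ z

def fenchel_gap_kl(q, z, nu):
    """D(q||ν) + log Σ_j ν_j e^{z_j} - q·z, the pair used in Section 3."""
    d = np.sum(q * np.log(np.maximum(q, 1e-300) / nu))
    return d + np.log(np.sum(nu * np.exp(z))) - q @ z

gaps_lp = [fenchel_gap_lp(rng.normal(size=6), rng.normal(size=6), p)
           for p in (1.5, 2.0, 3.0) for _ in range(200)]
nu0 = np.full(6, 1.0 / 6.0)
gaps_kl = [fenchel_gap_kl(rng.dirichlet(np.ones(6)), rng.normal(size=6), nu0)
           for _ in range(200)]
```

Both gaps are non-negative for every draw, which is exactly the inequality used to decouple the mixture weights q from the Rademacher variables in the proofs.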
5 Discussion

We have introduced and analyzed a class of regularized Bayesian mixture approaches, which construct complex composite estimators by combining hypotheses from some underlying hypothesis class using data-dependent weights. Such weighted averaging approaches have been used extensively within the Bayesian framework, as well as in more recent approaches such as Bagging and Boosting. While Bayesian methods are known, under favorable conditions, to lead to optimal estimators in a frequentist setting, their performance in agnostic settings, where no reliable assumptions can be made concerning the data generating mechanism, has not been well understood. Our data-dependent bounds allow the utilization of Bayesian mixture models in general settings, while at the same time taking advantage of the benefits of the Bayesian approach in terms of incorporation of prior knowledge. The bounds established, being independent of the cardinality of the underlying hypothesis space, can be directly applied to kernel-based methods.

Acknowledgments We thank Shimon Benjo for helpful discussions. The research of R.M. is partially supported by the fund for promotion of research at the Technion and by the Ollendorff foundation of the Electrical Engineering department at the Technion.

References
[1] P. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: risk bounds and structural results. In Proceedings of the Fourteenth Annual Conference on Computational Learning Theory, pages 224–240, 2001.
[2] P.L. Bartlett, S. Boucheron, and G. Lugosi. Model selection and error estimation. Machine Learning, 48:85–113, 2002.
[3] O. Bousquet and A. Elisseeff. Stability and generalization. J. Machine Learning Research, 2:499–526, 2002.
[4] R. Herbrich and T. Graepel. A PAC-Bayesian margin bound for linear classifiers: why SVMs work. In Advances in Neural Information Processing Systems 13, pages 224–230, Cambridge, MA, 2001. MIT Press.
[5] V. Koltchinskii and D. Panchenko.
Empirical margin distributions and bounding the generalization error of combined classifiers. Ann. Statist., 30(1), 2002.
[6] J. Langford, M. Seeger, and N. Megiddo. An improved predictive accuracy bound for averaging classifiers. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 290–297, 2001.
[7] D. A. McAllester. Some PAC-Bayesian theorems. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 230–234, New York, 1998. ACM Press.
[8] D. A. McAllester. PAC-Bayesian model averaging. In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, New York, 1999. ACM Press.
[9] C. P. Robert. The Bayesian Choice: A Decision Theoretic Motivation. Springer Verlag, New York, 1994.
[10] R.T. Rockafellar. Convex Analysis. Princeton University Press, Princeton, N.J., 1970.
[11] J. Shawe-Taylor, P. Bartlett, R.C. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Trans. Inf. Theory, 44:1926–1940, 1998.
[12] Y. Yang. Minimax nonparametric classification - Part I: rates of convergence. IEEE Trans. Inf. Theory, 45(7):2271–2284, 1999.
[13] T. Zhang. Generalization performance of some learning problems in Hilbert functional space. In Advances in Neural Information Processing Systems 15, Cambridge, MA, 2001. MIT Press.
[14] T. Zhang. Covering number bounds of certain regularized linear function classes. Journal of Machine Learning Research, 2:527–550, 2002.
Information Diffusion Kernels John Lafferty School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 USA lafferty@cs.cmu.edu Guy Lebanon School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 USA lebanon@cs.cmu.edu Abstract A new family of kernels for statistical learning is introduced that exploits the geometric structure of statistical models. Based on the heat equation on the Riemannian manifold defined by the Fisher information metric, information diffusion kernels generalize the Gaussian kernel of Euclidean space, and provide a natural way of combining generative statistical modeling with non-parametric discriminative learning. As a special case, the kernels give a new approach to applying kernel-based learning algorithms to discrete data. Bounds on covering numbers for the new kernels are proved using spectral theory in differential geometry, and experimental results are presented for text classification. 1 Introduction The use of kernels is of increasing importance in machine learning. When “kernelized,” simple learning algorithms can become sophisticated tools for tackling nonlinear data analysis problems. Research in this area continues to progress rapidly, with most of the activity focused on the underlying learning algorithms rather than on the kernels themselves. Kernel methods have largely been a tool for data represented as points in Euclidean space, with the collection of kernels employed limited to a few simple families such as polynomial or Gaussian RBF kernels. However, recent work by Kondor and Lafferty [7], motivated by the need for kernel methods that can be applied to discrete data such as graphs, has proposed the use of diffusion kernels based on the tools of spectral graph theory. One limitation of this approach is the difficulty of analyzing the associated learning algorithms in the discrete setting. 
For example, there is no obvious way to bound covering numbers and generalization error for this class of diffusion kernels, since the natural function spaces are over discrete sets. In this paper, we propose a related construction of kernels based on the heat equation. The key idea in our approach is to begin with a statistical model of the data being analyzed, and to consider the heat equation on the Riemannian manifold defined by the Fisher information metric of the model. The result is a family of kernels that naturally generalizes the familiar Gaussian kernel for Euclidean space, and that includes new kernels for discrete data by beginning with statistical families such as the multinomial. Since the kernels are intimately based on the geometry of the Fisher information metric and the heat or diffusion equation on the associated Riemannian manifold, we refer to them as information diffusion kernels. Unlike the diffusion kernels of [7], the kernels we investigate here are over continuous parameter spaces even in the case where the underlying data is discrete. As a consequence, some of the machinery that has been developed for analyzing the generalization performance of kernel machines can be applied in our setting. In particular, the spectral approach of Guo et al. [3] is applicable to information diffusion kernels, and in applying this approach it is possible to draw on the considerable body of research in differential geometry that studies the eigenvalues of the geometric Laplacian. In the following section we review the relevant concepts that are required from information geometry and classical differential geometry, define the family of information diffusion kernels, and present two concrete examples, where the underlying statistical models are the multinomial and spherical normal families. Section 3 derives bounds on the covering numbers for support vector machines using the new kernels, adopting the approach of [3]. 
Section 4 describes experiments on text classification, and Section 5 discusses the results of the paper.

2 Information Geometry and Diffusion Kernels

Let F = {p(·|θ)}_{θ∈Θ}, Θ ⊂ ℝⁿ, be an n-dimensional statistical model on a set X. For each x ∈ X assume the mapping θ ↦ p(x|θ) is C∞ at each point in the interior of Θ. Let ∂_i = ∂/∂θ_i and ℓ_θ(x) = log p(x|θ). The Fisher information matrix [g_ij(θ)] of F at θ ∈ Θ is given by

g_ij(θ) = E_θ[∂_i ℓ_θ ∂_j ℓ_θ] = ∫_X p(x|θ) ∂_i log p(x|θ) ∂_j log p(x|θ) dx (1)

or equivalently as

g_ij(θ) = 4 ∫_X ∂_i √(p(x|θ)) ∂_j √(p(x|θ)) dx. (2)

In coordinates θ_i, g_ij(θ) defines a Riemannian metric on Θ, giving F the structure of an n-dimensional Riemannian manifold. One of the motivating properties of the Fisher information metric is that, unlike the Euclidean distance, it is invariant under reparameterization. For detailed treatments of information geometry we refer to [1, 6]. For many statistical models there is a natural way to associate to each data point x a parameter vector θ(x) in the statistical model. For example, in the case of text, under the multinomial model a document is naturally associated with the relative frequencies of the word counts. This amounts to the mapping which sends a document x to its maximum likelihood model θ̂(x). Given such a mapping, we propose to apply a kernel on parameter space, K(x, x′) = K_t(θ̂(x), θ̂(x′)). More generally, we may associate a data point x with a posterior distribution p(θ|x) under a suitable prior. In the case of text, this is one way of “smoothing” the maximum likelihood model, using, for example, a Dirichlet prior. Given a kernel on parameter space, we then average over the posteriors to obtain a kernel on data:

K(x, x′) = ∫∫ K_t(θ, θ′) p(θ|x) p(θ′|x′) dθ dθ′. (3)

It remains to define the kernel on parameter space. There is a fundamental choice: the kernel associated with heat diffusion on the parameter manifold under the Fisher information metric.
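The equivalence of the two expressions (1) and (2) for the Fisher information can be checked numerically; the sketch below does so for a Bernoulli model, where the Fisher information is known in closed form to be 1/(θ(1−θ)). The finite-difference step h is an arbitrary choice.

```python
import numpy as np

def fisher_score_form(theta):
    """E_θ[(∂_θ log p(x|θ))²] for a Bernoulli model — the form (1)."""
    # Score: ∂_θ log p(x|θ) = 1/θ for x = 1 and -1/(1-θ) for x = 0.
    return theta * (1.0 / theta) ** 2 + (1 - theta) * (1.0 / (1 - theta)) ** 2

def fisher_sqrt_form(theta, h=1e-6):
    """4 Σ_x (∂_θ √p(x|θ))² — the form (2), via central differences."""
    total = 0.0
    for x in (0, 1):
        p = lambda t: t if x == 1 else 1 - t
        d = (np.sqrt(p(theta + h)) - np.sqrt(p(theta - h))) / (2 * h)
        total += d * d
    return 4 * total
```

Both forms agree with 1/(θ(1−θ)) to numerical precision; the square-root form (2) is the one that makes the spherical geometry of the multinomial visible below.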
For a manifold M with metric g_ij the Laplacian ∆ : C∞(M) → C∞(M) is given in local coordinates by

∆f = (1/√(det g)) Σ_i ∂_i ( √(det g) Σ_j g^{ij} ∂_j f ) (4)

where [g^{ij}] = [g_ij]^{−1}, generalizing the classical operator div ∇ = Σ_i ∂²/∂x_i². When M is compact the Laplacian has discrete eigenvalues 0 = λ_0 < λ_1 ≤ λ_2 ≤ ··· with corresponding eigenfunctions φ_i satisfying ∆φ_i = −λ_i φ_i. When the manifold has a boundary, appropriate boundary conditions must be imposed in order that ∆ is self-adjoint. Dirichlet boundary conditions set φ_i|_∂M = 0 and Neumann boundary conditions require ∂φ_i/∂ν|_∂M = 0 where ν is the outer normal direction. The following theorem summarizes the basic properties for the kernel of the heat equation (∆ − ∂/∂t) f = 0 on M.

Theorem 1. Let M be a geodesically complete Riemannian manifold. Then the heat kernel K_t(x, y) exists and satisfies (1) K_t(x, y) = K_t(y, x), (2) lim_{t→0} K_t(x, y) = δ_x(y), (3) (∆ − ∂/∂t) K_t = 0, (4) K_t(x, y) = ∫_M K_{t−s}(x, z) K_s(z, y) dz, and (5) K_t(x, y) = Σ_i e^{−λ_i t} φ_i(x) φ_i(y).

We refer to [9] for a proof. Properties 2 and 3 imply that K_t(x, y) solves the heat equation in x, starting from y. Integrating property 3 against a function f(y) shows that e^{t∆} f(x) = ∫_M K_t(x, y) f(y) dy. Therefore

∫_M ∫_M K_t(x, y) f(x) f(y) dx dy = ∫_M f(x) (e^{t∆} f)(x) dx = ⟨f, e^{t∆} f⟩ ≥ 0

since e^{t∆} is a positive operator; thus K_t(x, y) is positive definite. Together, these properties show that K_t defines a Mercer kernel. Note that when using such a kernel for classification, the discriminant function y_t(x) = Σ_i α_i y_i K_t(x, x_i) can be interpreted as the solution to the heat equation with initial temperature y(x_i) = α_i y_i on labeled data points x_i, and y(x) = 0 on unlabeled points.
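The properties collected in Theorem 1 have exact finite-dimensional analogues for the matrix heat kernel of a discrete Laplacian, which makes them easy to verify numerically; the second-difference matrix below is only a stand-in for the geometric Laplacian of equation (4).

```python
import numpy as np

m = 30
# Discrete Dirichlet Laplacian on a path graph (second-difference matrix),
# a finite stand-in for the geometric Laplacian (4).
L = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
lam, U = np.linalg.eigh(L)                # eigenvalues λ_i, eigenvectors φ_i

def heat_kernel(t):
    """K_t = Σ_i e^{-λ_i t} φ_i φ_iᵀ — the spectral expansion, property (5)."""
    return (U * np.exp(-lam * t)) @ U.T

K1, K2, K3 = heat_kernel(0.5), heat_kernel(0.7), heat_kernel(1.2)
```

Symmetry, the delta initial condition at t = 0, the semigroup identity K_{0.5} K_{0.7} = K_{1.2}, and positive definiteness all hold exactly in this discrete setting, mirroring properties (1)–(5) and the Mercer property.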
The following two basic examples illustrate the geometry of the Fisher information metric and its associated diffusion kernel: the multinomial corresponds to a Riemannian manifold of constant positive curvature, and the spherical normal family to a space of constant negative curvature.

2.1 The Multinomial

The multinomial is an important example of how information diffusion kernels can be applied naturally to discrete data. For the multinomial family, θ = (θ_1, . . . , θ_{n+1}) is an element of the n-simplex, Σ_{i=1}^{n+1} θ_i = 1. The transformation θ_i ↦ z_i = 2√θ_i maps the n-simplex to the n-sphere of radius 2. The representation of the Fisher information metric given in equation (2) suggests the geometry underlying the multinomial. In particular, the information metric is given by

g_ij(θ) = Σ_{k=1}^{n+1} (1/θ_k) ∂_i θ_k ∂_j θ_k = ⟨∂_i z, ∂_j z⟩

so that the Fisher information corresponds to the inner product of tangent vectors to the sphere, and information geometry for the multinomial is the geometry of the positive orthant of the sphere. The geodesic distance between two points θ, θ′ is given by

d(θ, θ′) = 2 arccos( Σ_{i=1}^{n+1} √(θ_i θ′_i) ). (5)

This metric places greater emphasis on points near the boundary, which is expected to be important for text problems, which have sparse statistics. In general for the heat kernel on a Riemannian manifold, there is an asymptotic expansion in terms of the parametrices; see for example [9]. This expands the kernel as

K_t(x, y) = (4πt)^{−n/2} exp( −d²(x, y)/4t ) Σ_i ψ_i(x, y) t^i + O(t^N). (6)

Using the first order approximation and the explicit form (5) of the geodesic distance gives a simple formula for the approximate information diffusion kernel for the multinomial as

K_t(θ, θ′) ≈ (4πt)^{−n/2} exp( −(1/t) arccos²( Σ_{i=1}^{n+1} √(θ_i θ′_i) ) ). (7)

Figure 1: Example decision boundaries using support vector machines with information diffusion kernels for trinomial geometry on the 2-simplex (top right) and spherical normal geometry, n = 2 (bottom right), compared with the standard Gaussian kernel (left).

In Figure 1 this kernel is compared with the standard Euclidean space Gaussian kernel for the case of the trinomial model, n = 2.

2.2 Spherical Normal

Now consider the statistical family given by p(·|θ) = N(µ, σ²I), where θ = (µ, σ), µ ∈ ℝⁿ is the mean and σ > 0 is the scale of the variance. A calculation shows that g_ij(θ) ∝ (1/σ²) δ_ij. Thus, the Fisher information metric gives Θ = ℝⁿ × ℝ_+ the structure of the upper half plane in hyperbolic space H^{n+1}. The heat kernel on hyperbolic space Hⁿ has a closed form [2]. For odd n = 2m + 1 it is given by

K_t(x, x′) = ((−1)^m / (2^m π^m)) (4πt)^{−1/2} ( (1/sinh d) ∂/∂d )^m exp( −m² t − d²/4t ) (8)

and for even n = 2m + 2 the kernel is given by

K_t(x, x′) = ((−1)^m / (2^m π^m)) (4πt)^{−3/2} √2 ( (1/sinh d) ∂/∂d )^m ∫_d^∞ ( s e^{−(2m+1)² t/4 − s²/4t} / √(cosh s − cosh d) ) ds (9)

where d = d(x, x′) is the geodesic distance between the two points in Hⁿ. For n = 1 the kernel is identical to the Gaussian kernel on ℝ. If only the mean µ is unspecified, then the associated kernel is the standard Gaussian RBF kernel. In Figure 1 the kernel for hyperbolic space is compared with the Euclidean space Gaussian kernel for the case of a 1-dimensional normal model with unknown mean and variance, corresponding to n = 2.
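The multinomial geodesic distance (5) and the first-order kernel approximation (7) can be sketched directly; the word-count vectors below are invented toy data, normalized to their maximum likelihood multinomials.

```python
import numpy as np

def geodesic(theta, theta2):
    """d(θ, θ') = 2 arccos(Σ_i sqrt(θ_i θ'_i)) — equation (5)."""
    c = np.clip(np.sum(np.sqrt(theta * theta2)), -1.0, 1.0)
    return 2.0 * np.arccos(c)

def diffusion_kernel(theta, theta2, t, n):
    """First-order approximation (7): (4πt)^{-n/2} exp(-d²/(4t))."""
    d = geodesic(theta, theta2)
    return (4 * np.pi * t) ** (-n / 2) * np.exp(-d * d / (4 * t))

# Hypothetical word-count vectors mapped to their MLE multinomials.
counts = np.array([[3, 1, 0, 2], [2, 2, 1, 1], [0, 0, 5, 1]], dtype=float)
thetas = counts / counts.sum(axis=1, keepdims=True)
n = counts.shape[1] - 1                    # dimension of the simplex
K = np.array([[diffusion_kernel(a, b, 0.5, n) for b in thetas] for a in thetas])
```

The resulting Gram matrix is symmetric with a constant diagonal, and documents with similar word-frequency profiles (the first two rows) receive a larger kernel value than dissimilar ones.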
Note that the curved decision boundary for the diffusion kernel makes intuitive sense, since as the variance decreases the mean is known with increasing certainty.

3 Spectral Bounds on Covering Numbers

In this section we prove bounds on the entropy and covering numbers for support vector machines that use information diffusion kernels; these bounds in turn yield bounds on the expected risk of the learning algorithms. We adopt the approach of Guo et al. [3], and make use of bounds on the spectrum of the Laplacian on a Riemannian manifold, rather than on VC dimension techniques. Our calculations give an indication of how the underlying geometry influences the entropy numbers, which are inverse to the covering numbers. We begin by recalling the main result of [3], modifying their notation slightly to conform with ours. Let M ⊂ ℝ^d be a compact subset of d-dimensional Euclidean space, and suppose that K : M × M → ℝ is a Mercer kernel. Denote by λ_1 ≥ λ_2 ≥ ··· ≥ 0 the eigenvalues of K, i.e., of the mapping f ↦ ∫_M K(·, y) f(y) dy, and let ψ_j(·) denote the corresponding eigenfunctions. We assume that C_K := sup_j ∥ψ_j∥_∞ < ∞. Given m points x_i ∈ M, the SVM hypothesis class for x = (x_1, . . . , x_m) with weight vector bounded by R is defined as the collection of functions

F_R(x) = { f : x_i ↦ ⟨w, Φ(x_i)⟩, i = 1, . . . , m, ∥w∥ ≤ R } (10)

where Φ(·) is the mapping from M to feature space defined by the Mercer kernel, and ⟨·,·⟩ and ∥·∥ denote the corresponding Hilbert space inner product and norm. It is of interest to obtain uniform bounds on the covering numbers N(ε, F_R(x)), defined as the size of the smallest ε-cover of F_R(x) in the metric induced by the norm ∥f∥ = max_{i=1,...,m} |f(x_i)|. The following is the main result of Guo et al. [3].

Theorem 2. Given an integer n ≥ 1, let j_n denote the smallest integer j for which

λ_{j+1} < ( λ_1 ··· λ_j / n² )^{1/j}

and define

ε_n = 6 C_K R √( j_n ( λ_1 ··· λ_{j_n} / n² )^{1/j_n} + Σ_{i=j_n+1}^∞ λ_i ).

Then sup_{x ∈ M^m} N(ε_n, F_R(x)) ≤ n.
To apply this result, we will obtain bounds on the indices j_n using spectral theory in Riemannian geometry. The following bounds on the eigenvalues of the Laplacian are due to Li and Yau [8].

Theorem 3. Let M be a compact Riemannian manifold of dimension n with non-negative Ricci curvature, and assume that the boundary of M is convex. Let 0 < λ_1 ≤ λ_2 ≤ ··· denote the eigenvalues of the Laplacian with Dirichlet boundary conditions. Then

c_1(n) ( j / V )^{2/n} ≤ λ_j ≤ c_2(n) ( (j + 1) / V )^{2/n} (11)

where V is the volume of M and c_1 and c_2 are constants depending only on the dimension.

Note that the manifold of the multinomial model satisfies the conditions of this theorem. Using these results we can establish the following bounds on covering numbers for information diffusion kernels. We assume Dirichlet boundary conditions; a similar result can be proven for Neumann boundary conditions. We include the constant V = vol(M) and diffusion coefficient t in order to indicate how the bounds depend on the geometry.

Theorem 4. Let M be a compact Riemannian manifold, with volume V, satisfying the conditions of Theorem 3. Then the covering numbers for the Dirichlet heat kernel K_t on M satisfy

log N(ε, F_R(x)) = O( (V / t^{n/2}) log^{(n+2)/2}(1/ε) ). (12)

Proof. By the lower bound in Theorem 3, the Dirichlet eigenvalues of the heat kernel K_t(x, y), which are given by µ_j = e^{−λ_j t}, satisfy log µ_j ≤ −t c_1(n) (j/V)^{2/n}. Thus

−(1/j) log( µ_1 ··· µ_j / n² ) ≥ (t c_1 / j) Σ_{i=1}^{j} (i/V)^{2/n} + (2/j) log n ≥ t c_1 (n/(n+2)) (j/V)^{2/n} + (2/j) log n,

where the second inequality uses Σ_{i=1}^{j} i^{2/n} ≥ ∫_0^j x^{2/n} dx = (n/(n+2)) j^{(n+2)/n}. Combining this with the upper bound of Theorem 3, −log µ_{j+1} = t λ_{j+1} ≤ t c_2 ((j+2)/V)^{2/n}, and comparing the two sides of the inequality defining j_n in Theorem 2, a short computation (for which we may assume c_2 ≥ c_1) shows that

j_n ≤ C ( (V^{2/n} log n) / t )^{n/(n+2)}

for a new constant C(n).
Plugging this bound on j_n into the expression for ε_n in Theorem 2 and bounding the tail sum Σ_{i=j_n+1}^∞ µ_i, we have after some algebra that log(1/ε_n) = Ω( ((t^{n/2}/V) log n)^{2/(n+2)} ). Inverting this relation in log n gives equation (12). □

We note that Theorem 4 of [3] can be used to show that this bound does not, in fact, depend on m and R. Thus, for fixed t the covering numbers scale as log N(ε) = O( log^{(n+2)/2}(1/ε) ), and for fixed ε they scale as log N(ε) = O( t^{−n/2} ) in the diffusion time t.

4 Experiments

We compared the information diffusion kernel to linear and Gaussian kernels in the context of text classification using the WebKB dataset. The WebKB collection contains some 4000 university web pages that belong to five categories: course, faculty, student, project and staff. A “bag of words” representation was used for all three kernels, using only the word frequencies. For simplicity, all hypertext information was ignored. The information diffusion kernel is based on the multinomial model, which is the correct model under the (incorrect) assumption that the word occurrences are independent. The maximum likelihood mapping x ↦ θ̂(x) was used to map a document to a multinomial model, simply normalizing the counts to sum to one.

Figure 2: Experimental results on the WebKB corpus, using SVMs for linear (dot-dashed) and Gaussian (dotted) kernels, compared with the information diffusion kernel for the multinomial (solid). Results for two classification tasks are shown, faculty vs. course (left) and faculty vs. student (right). The curves shown are the error rates averaged over 20-fold cross validation.
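The experimental pipeline — map documents to multinomials by maximum likelihood, then train a kernel machine with the approximate diffusion kernel — can be sketched as follows. A kernel perceptron stands in for the SVM used in the experiments, the topic profiles and sizes are invented, and the constant (4πt)^{−n/2} factor of (7) is dropped since it only rescales the Gram matrix.

```python
import numpy as np

rng = np.random.default_rng(4)

def diffusion_gram(thetas, t):
    """Gram matrix of the approximate multinomial diffusion kernel (7),
    with the constant (4πt)^{-n/2} factor dropped."""
    B = np.sqrt(thetas) @ np.sqrt(thetas).T          # Bhattacharyya affinities
    d = 2.0 * np.arccos(np.clip(B, -1.0, 1.0))       # geodesic distances (5)
    return np.exp(-d ** 2 / (4.0 * t))

# Toy "documents": two classes drawn from different word-frequency profiles.
docs = np.vstack([rng.multinomial(30, [.4, .3, .2, .1], size=40),
                  rng.multinomial(30, [.1, .2, .3, .4], size=40)]).astype(float)
y = np.array([1.0] * 40 + [-1.0] * 40)
thetas = docs / docs.sum(axis=1, keepdims=True)      # MLE mapping x -> θ̂(x)

# Kernel perceptron standing in for the SVM.
K = diffusion_gram(thetas, t=0.5)
alpha = np.zeros(len(y))
for _ in range(20):
    for i in range(len(y)):
        if y[i] * ((alpha * y) @ K[:, i]) <= 0:
            alpha[i] += 1.0
train_acc = np.mean(np.sign((alpha * y) @ K) == y)
```

Because the kernel depends only on the normalized counts, any other kernel machine accepting a precomputed Gram matrix could be substituted for the perceptron without changing the representation.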
Figure 2 shows test set error rates obtained using support vector machines for linear, Gaussian, and information diffusion kernels for two binary classification tasks: faculty vs. course and faculty vs. student. The curves shown are the mean error rates over 20-fold cross validation and the error bars represent twice the standard deviation. For the Gaussian and information diffusion kernels we tested several values of the kernels’ free parameter (σ or t) over a fixed range; the plots in Figure 2 use the best parameter value in that range. Our results are consistent with previous experiments on this dataset [5], which have observed that the linear and Gaussian kernels result in very similar performance. However the information diffusion kernel significantly outperforms both of them, almost always obtaining lower error rate than the average error rate of the other kernels. For the faculty vs. course task, the error rate is halved. This result is striking because the kernels use identical representations of the documents, vectors of word counts (in contrast to, for example, string kernels). We attribute this improvement to the fact that the information metric places more emphasis on points near the boundary of the simplex.

5 Discussion

Kernel-based methods generally are “model free,” and do not make distributional assumptions about the data that the learning algorithm is applied to. Yet statistical models offer many advantages, and thus it is attractive to explore methods that combine data models and purely discriminative methods for classification and regression. Our approach brings a new perspective to combining parametric statistical modeling with non-parametric discriminative learning. In this aspect it is related to the methods proposed by Jaakkola and Haussler [4]. However, the kernels we investigate here differ significantly from the Fisher kernel proposed in [4].
In particular, the latter is based on the Fisher score ∇_θ log p(x|θ) evaluated at a single point θ̂ in parameter space, and in the case of an exponential family model it is given by a covariance K(x, x′) = Σ_i (x_i − E_θ̂[X_i]) (x′_i − E_θ̂[X_i]). In contrast, information diffusion kernels are based on the full geometry of the statistical family, and yet are also invariant under reparameterization of the family. Bounds on the covering numbers for information diffusion kernels were derived for the case of positive curvature, which apply to the special case of the multinomial. We note that the resulting bounds are essentially the same as those that would be obtained for the Gaussian kernel on the flat n-dimensional torus, which is the standard way of “compactifying” Euclidean space to get a Laplacian having only discrete spectrum; the results of [3] are formulated for the case n = 1, corresponding to the circle S¹. Similar bounds for general manifolds with curvature bounded below by a negative constant should also be attainable. While information diffusion kernels are very general, they may be difficult to compute in particular cases; explicit formulas such as equations (8–9) for hyperbolic space are rare. To approximate an information diffusion kernel it may be attractive to use the parametrices and the geodesic distance d(θ, θ′) between points, as we have done for the multinomial. In cases where the distance itself is difficult to compute exactly, a compromise may be to approximate the distance between nearby points in terms of the Kullback-Leibler divergence, using the relation d²(θ, θ′) ≈ 2 D( p(·|θ) ∥ p(·|θ′) ). The primary “degree of freedom” in the use of information diffusion kernels lies in the specification of the mapping of data to model parameters, x ↦ θ(x). For the multinomial, we have used the maximum likelihood mapping x ↦ θ̂(x) = argmax_θ p(x|θ), which is simple and well motivated. As indicated in Section 2, there are other possibilities.
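The local relation d²(θ, θ′) ≈ 2 D(θ∥θ′) is easy to confirm numerically for nearby multinomials; the particular θ and the perturbation direction below are arbitrary choices.

```python
import numpy as np

def geodesic(a, b):
    """d(θ, θ') = 2 arccos(Σ_i sqrt(θ_i θ'_i))."""
    return 2.0 * np.arccos(np.clip(np.sum(np.sqrt(a * b)), -1.0, 1.0))

def kl_div(a, b):
    """Kullback-Leibler divergence D(a || b) between multinomials."""
    return float(np.sum(a * np.log(a / b)))

theta = np.array([0.3, 0.3, 0.4])
theta2 = theta + 1e-3 * np.array([1.0, -2.0, 1.0])   # small in-simplex shift
ratio = geodesic(theta, theta2) ** 2 / (2.0 * kl_div(theta, theta2))
```

Both quantities agree with the quadratic form of the Fisher metric to first order, so the ratio approaches 1 as the perturbation shrinks; for distant points the two can differ substantially, which is why the approximation is only proposed for nearby points.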
This remains an interesting area to explore, particularly for latent variable models.

Acknowledgements

This work was supported in part by NSF grant CCR-0122581.

References

[1] S. Amari and H. Nagaoka. Methods of Information Geometry, volume 191 of Translations of Mathematical Monographs. American Mathematical Society, 2000.
[2] A. Grigor’yan and M. Noguchi. The heat kernel on hyperbolic space. Bulletin of the London Mathematical Society, 30:643–650, 1998.
[3] Y. Guo, P. L. Bartlett, J. Shawe-Taylor, and R. C. Williamson. Covering numbers for support vector machines. IEEE Trans. Information Theory, 48(1), January 2002.
[4] T. S. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. In Advances in Neural Information Processing Systems, volume 11, 1998.
[5] T. Joachims, N. Cristianini, and J. Shawe-Taylor. Composite kernels for hypertext categorisation. In Proceedings of the International Conference on Machine Learning (ICML), 2001.
[6] R. E. Kass and P. W. Vos. Geometrical Foundations of Asymptotic Inference. Wiley Series in Probability and Statistics. John Wiley & Sons, 1997.
[7] R. I. Kondor and J. Lafferty. Diffusion kernels on graphs and other discrete input spaces. In Proceedings of the International Conference on Machine Learning (ICML), 2002.
[8] P. Li and S.-T. Yau. Estimates of eigenvalues of a compact Riemannian manifold. In Geometry of the Laplace Operator, volume 36 of Proceedings of Symposia in Pure Mathematics, pages 205–239, 1980.
[9] R. Schoen and S.-T. Yau. Lectures on Differential Geometry, volume 1 of Conference Proceedings and Lecture Notes in Geometry and Topology. International Press, 1994.
2002
84
2,292
Boosted Dyadic Kernel Discriminants

Baback Moghaddam, Mitsubishi Electric Research Laboratory, 201 Broadway, Cambridge MA 02139 USA. baback@merl.com
Gregory Shakhnarovich, MIT AI Laboratory, 200 Technology Square, Cambridge MA 02139 USA. gregory@ai.mit.edu

Abstract

We introduce a novel learning algorithm for binary classification with hyperplane discriminants based on pairs of training points from opposite classes (dyadic hypercuts). This algorithm is further extended to nonlinear discriminants using kernel functions satisfying Mercer’s conditions. An ensemble of simple dyadic hypercuts is learned incrementally by means of a confidence-rated version of AdaBoost, which provides a sound strategy for searching through the finite set of hypercut hypotheses. In experiments with real-world datasets from the UCI repository, the generalization performance of the hypercut classifiers was found to be comparable to that of SVMs and k-NN classifiers. Furthermore, the computational cost of classification (at run time) was found to be similar to, or better than, that of SVMs. Similarly to SVMs, boosted dyadic kernel discriminants tend to maximize the margin (via AdaBoost). In contrast to SVMs, however, we offer an on-line and incremental learning machine for building kernel discriminants whose complexity (number of kernel evaluations) can be directly controlled (traded off for accuracy).

1 Introduction

This paper introduces a novel algorithm for learning complex binary classifiers by superposition of simpler hyperplane-type discriminants. In this algorithm, each of the simple discriminants is based on the projection of a test point onto a vector joining a dyad, defined as a pair of training data points with opposite labels. The learning algorithm itself is based on a real-valued variant of AdaBoost [7], and the hyperplane classifiers use kernels of the type used, e.g., by support vector machines (SVMs) [9] for mapping linearly non-separable problems to high-dimensional feature spaces.
When the concept class consists of linear discriminants (hyperplanes), this amounts to using a hyperplane orthogonal to the vector connecting the points in a dyad. We shall refer to such a classifier as a hypercut. By applying the same notion of linear hypercuts to a nonlinearly transformed feature space obtained by Mercer-type kernels [3], we are able to implement nonlinear kernel discriminants similar in form to SVMs. In each iteration of AdaBoost, the space of all dyadic hypercuts is searched. It can be easily shown that this hypothesis space spans the subspace of the data and that it must include the optimal hyperplane discriminant. This notion is readily extended to non-linear classifiers obtained by kernel transformations, by noting that in the feature space, the optimal discriminant resides in the span of the transformed data. Therefore, for both linear and nonlinear classification, searching the space of dyadic hypercuts forms an efficient strategy for exploring the space of all hypotheses.

1.1 Related work

The most general framework to consider is the theory of potential functions for pattern classification [1], in which potential fields¹ of the form

$$H(x) = \sum_i \alpha_i y_i K(x, x_i) \qquad (1)$$

are thresholded to predict classification labels, $\hat{y} = \mathrm{sign}(H(x))$. In a probabilistic kernel regression framework recently proposed in [5], the coefficients $\alpha$ that minimize the classification error are obtained by maximizing

$$J(\alpha) = -\frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j K(x_i, x_j) + \sum_i F(\alpha_i), \qquad (2)$$

where the potential function $F$ is concave and continuous (corresponding to positive semi-definite kernels). This framework subsumes SVMs, which correspond to the simplest case $F(\alpha) = \alpha$. Generalized linear models [6] can also be shown to be members of this class by considering logistic regression, where $F(\alpha)$ becomes the binary entropy function and $K$ is related to the covariance function of a Gaussian process classifier for the GLM’s intermediate variables.
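The thresholded potential field of equation (1) can be sketched directly. A minimal numpy sketch follows; the Gaussian RBF kernel and the helper names are our own illustrative choices, not prescribed by the paper:

```python
import numpy as np

def rbf_kernel(x, z, sigma=1.0):
    # An illustrative Mercer kernel choice (Gaussian RBF).
    return np.exp(-np.sum((x - z)**2) / (2.0 * sigma**2))

def potential_field(x, X, y, alpha, kernel=rbf_kernel):
    # Eq. (1): H(x) = sum_i alpha_i * y_i * K(x, x_i).
    return sum(a * yi * kernel(x, xi) for a, yi, xi in zip(alpha, y, X))

def predict(x, X, y, alpha):
    # Threshold the field to obtain the predicted label, y_hat = sign(H(x)).
    return np.sign(potential_field(x, X, y, alpha))
```

A test point near a positively labeled training point is pulled toward a positive field value, and symmetrically for the negative class.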
In this paper we propose and design classifiers with dyadic discriminants, which have potential functions of the form

$$H(x) = \sum_t \alpha_t K(x, x^p_t) - \alpha_t K(x, x^n_t), \qquad (3)$$

where $x^p$ and $x^n$ are positively and negatively labeled data, respectively. The coefficients $\alpha_t$ are determined not by minimizing a convex quadratic function $J(\alpha)$ but rather by selecting an optimal classifier in the $t$-th iteration of AdaBoost. Thus the potential function is constrained to the form of a weighted sum of dyadic hypercuts, or differences of kernel functions. Another way to view this is to think of a pair of opposite-polarity “basis vectors” sharing the same coefficient $\alpha_t$. The most closely related potential function technique to ours is that of SVMs [9], where the classification margin (and thus the bound on generalization) is maximized by a simultaneous optimization with respect to all of the training points. However, there are important differences between SVMs and our iterative hypercut algorithm. In each step of the boosting process, we do not maximize the margin of the resulting strong classifier directly, which makes for a much simpler optimization task. Meanwhile, we are assured that with AdaBoost we tend to maximize (although in an asymptotic sense) the margin of the final classifier [7].

¹The physical analogy here is to the linear superposition of electrostatic charges of strength $\alpha_i$, polarity $y_i$ and location $x_i$, with distance defined by the kernel $K$.

The most important difference that distinguishes our method from SVMs (and, by extension, from the general kernel discriminant family described above) is that the points in our dyads are not typically located near the decision boundary, as is the case with support vectors. As a result, the final set of “basis vectors” used by the boosted strong classifier can be viewed as a representative subset of the data (i.e.
those points needed for classification), whereas with SVMs the support vectors are simply the minimal number of training points needed to build (support) the decision boundary, and are almost certainly not “typical” or high-likelihood members of either class.² The classification complexity of a kernel-based classifier — the cost of classifying a test point — depends on the number of kernel function evaluations on which the classifier is based. In the case of SVMs, there is (usually) no direct way of controlling this number (the quadratic programming solution will automatically determine all positive Lagrange multipliers). In our boosted hypercut algorithm, however, the number of dyadic “basis vectors”, and therefore of the required kernel evaluations, is determined by the number of iterations of the boosting algorithm and can therefore be controlled. Note that we are not referring here to the complexity of training classifiers, only to their run-time computational cost.

2 Methodology

Consider a binary classification task where we are given a training set of vectors $T = \{x_1, \ldots, x_M\}$, where $x \in \mathbb{R}^N$, with corresponding labels $\{y_1, \ldots, y_M\}$, where $y \in \{-1, +1\}$. Let there be $M_p$ samples with label $+1$ and $M_n$ samples with label $-1$, so that $M = M_p + M_n$. Consider a simple linear hyperplane classifier defined by a discriminant function of the form

$$f(x) = \langle w \cdot x \rangle + b \qquad (4)$$

where $\mathrm{sign}(f(x)) \in \{+1, -1\}$ gives the binary classification. Under certain assumptions, Gaussianity in particular, the optimal hyperplane, specified by the projection $w^*$ and bias $b^*$, is easily computed using standard statistical techniques based on class means and sample covariances for linear classifiers. However, in the absence of such assumptions, one must resort to searching for the optimal hyperplane.
When searching for $w^*$, an efficient strategy is to consider only hyperplanes whose surface normal is parallel to the line joining a dyad $(x_i, x_j)$:

$$w_{ij} = \frac{x_i - x_j}{c}, \quad y_i \neq y_j, \; i < j \qquad (5)$$

where $y_i \neq y_j$ by definition, $i < j$ for uniqueness, and $c$ is a scale factor. The vector $w_{ij}$ is parallel to the line segment connecting the points in a dyad. Setting $c = \|x_i - x_j\|$ makes $w_{ij}$ a unit-norm direction vector. The hypothesis space to be searched consists of $|\{w_{ij}\}| = M_p M_n$ hypercuts, each having a free bias parameter $b_{ij}$ which is typically determined by minimizing the weighted classification error (as we shall see in the next section). Each hypothesis is then given by the sign of the discriminant as in (4):

$$h_{ij}(x) = \mathrm{sign}(\langle w_{ij} \cdot x \rangle + b_{ij}) \qquad (6)$$

Let $\{h_{ij}\} = \{w_{ij}, b_{ij}\}$ denote the complete set of hypercuts for a given training set. Strictly speaking, this set is uncountable since $b_{ij}$ is continuous and arbitrary. However, since we always select one bias parameter for each hypercut $w_{ij}$, we do in fact end up with only $M_p M_n$ classifiers.

2.1 AdaBoost

The AdaBoost algorithm [4] provides a practical framework for combining a number of weak classifiers into a strong final classifier by means of linear combination and thresholding. AdaBoost works by maintaining over the training set an iteratively evolving distribution (weights) $D_t(i)$ based on the difficulty of classification (i.e. points which are harder to classify have greater weight). Consequently, a “weak” hypothesis $h(x): x \to \{+1, -1\}$ will have classification error $\epsilon_t$ weighted by $D_t$. In our case, in each iteration $t$, we select from the complete set of $M_p M_n$ hypercuts $\{h_{ij}\}$ one which minimizes $\epsilon_t$. The data are then re-weighted based on their (mis)classification to obtain an updated distribution $D_{t+1}$.

²Although unrelated to our technique, the Relevance Vector machine [8] is another kernel learning algorithm that tends to produce “prototypical” basis vectors in the interior as opposed to the boundary of the distributions.
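The exhaustive search over dyadic hypercuts described above can be sketched as follows. This is a minimal numpy sketch; the function name and the use of midpoints between consecutive projected points as the candidate bias grid are our own choices (the paper only says the bias is chosen to minimize the weighted error):

```python
import numpy as np

def best_linear_hypercut(X, y, D):
    """For each positive/negative pair (a dyad), form the unit direction of
    eq. (5), then pick the bias b_ij for the thresholded discriminant of
    eq. (6) with lowest weighted error under the AdaBoost distribution D."""
    pos, neg = np.where(y == 1)[0], np.where(y == -1)[0]
    best_err, best_h = np.inf, None
    for i in pos:
        for j in neg:
            w = X[i] - X[j]
            w = w / np.linalg.norm(w)   # c = ||x_i - x_j|| gives unit norm
            proj = np.sort(X @ w)
            # Candidate biases halfway between consecutive projected points.
            for b in -(proj[:-1] + proj[1:]) / 2.0:
                pred = np.sign(X @ w + b)
                err = D[pred != y].sum()
                if err < best_err:
                    best_err, best_h = err, (w, b)
    return best_err, best_h
```

On linearly separable data the search finds a hypercut with zero weighted error; inside AdaBoost it would be called once per boosting iteration with the current distribution $D_t$.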
The final classifier is a linear combination of the selected weak classifiers $h_t$ and has the form of a weighted “voting” scheme

$$H(x) = \mathrm{sign}\left( \sum_{t=1}^{T} \alpha_t h_t(x) \right) \qquad (7)$$

where $\alpha_t = \frac{1}{2} \ln\left(\frac{1 - \epsilon_t}{\epsilon_t}\right)$. In [7] a framework was developed where $h_t(x)$ can be real-valued (as opposed to binary) and is interpreted as a “confidence-rated prediction.” The sign of $h_t(x)$ is the predicted label while the magnitude $|h_t(x)|$ is the confidence. For such real-valued classifiers we have

$$\alpha_t = \frac{1}{2} \ln\left( \frac{1 + r_t}{1 - r_t} \right) \qquad (8)$$

where the “correlation” $r_t = \sum_i D_t(i)\, y_i\, h_t(x_i)$ is inversely related to the error by $\epsilon_t = (1 - r_t)/2$.

2.2 Nonlinear Hypercuts

The logical extension beyond the boosted linear dyadic discriminants described in the previous section is that of nonlinear discriminants using positive definite kernels as suggested in [3] for use with SVMs. In the resulting “reproducing kernel Hilbert spaces”, dot products between high-dimensional mappings $\Phi(x): \mathcal{X} \to \mathcal{F}$ are easily evaluated using Mercer kernels

$$k(x, x') = \langle \Phi(x) \cdot \Phi(x') \rangle. \qquad (9)$$

This has the desirable property that any algorithm based on dot products, e.g. our linear hypercut classifier (6), can first nonlinearly transform its inputs (using kernels) and implicitly perform dot products in the transformed space. The preimage of the linear hyperplane solution back in the input space is thus a nonlinear hypersurface. Applying the above kernel property to the hypercut concept (5), we can rewrite it in nonlinear form by considering the linear hypercut in the transformed space $\mathcal{F}$, where the projection operator is

$$w_{ij} = \Phi(x_i) - \Phi(x_j), \quad y_i \neq y_j, \; i < j \qquad (10)$$

(we have absorbed the scale constant $c$ in (5) into $w_{ij}$ for simplicity in this case).³ Due to the implicit nature of the nonlinear mapping, we cannot directly evaluate $w_{ij}$. However, we only need its dot product with the transformed input vectors $\Phi(x)$. Considering the linear discriminant (4) and substituting the above, we obtain

$$f_{ij}(x) = \langle (\Phi(x_i) - \Phi(x_j)) \cdot \Phi(x) \rangle + b_{ij}, \qquad (11)$$

which by applying the kernel property (9) is equivalent to

$$f_{ij}(x) = k(x, x_i) - k(x, x_j) + b_{ij}. \qquad (12)$$

Note that $f_{ij}$ now represents a single dyadic term in the potential function introduced in (3). The binary-valued hypercut classifier is given by a simple thresholding

$$h_{ij}(x) = \mathrm{sign}(f_{ij}(x)). \qquad (13)$$

A “confidence-rated” classifier with output in the range $[-1, +1]$ can be obtained by passing $f_{ij}$ through a bipolar sigmoidal nonlinearity such as a hyperbolic tangent

$$h_{ij}(x) = \tanh(\beta f_{ij}(x)) \qquad (14)$$

where $\beta$ determines the “slope” of the sigmoid. We note that in order to obtain a continuous-valued hypercut classifier that suitably occupies the range $[-1, +1]$ it may be necessary to experiment and adjust both constants $c$ and $\beta$. The final classifier constructed by AdaBoost, following (7), is given by

$$H(x) = \mathrm{sign}\left( \sum_{t=1}^{T} \alpha_t \tanh\left(\beta \left( k(x, x^t_i) - k(x, x^t_j) + b^t_{ij} \right)\right) \right), \qquad (15)$$

where we have superscripted the elements of $f_{ij}$ selected in iteration $t$ of boosting. Note that besides the monotonic sigmoid and offset transformation, this form is essentially a (nonlinear) equivalent of the dyadic potential function of (3).

Assume, without loss of generality, that an equal number $N/2$ of $d$-dimensional training points is available from each class, defining $O(N^2)$ hypercuts. The values of $f_{ij}(x)$ for each hypercut and each training point (12) can be computed only once, typically in $O(d)$, and used in every iteration of the algorithm, making the setup cost for the algorithm $O(dN^3)$. Each iteration requires examination of all $f_{ij}(x_k)$ and takes $O(N^3)$. To summarize, the cost of learning a classifier with $K$ dyads is $O((d + K)N^3)$. It is important to note that both the setup step and the search for an optimal hypercut in each iteration are naturally parallelizable, leading to a reduction in time linear in the number of processors.

³Since the optimal projection $w^*_{ij}$ must lie in the span of $\{\Phi(x_i)\}$, we should restrict the search for an optimal hyperplane accordingly, e.g. by considering pair-wise hypercuts.
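The confidence-rated kernel hypercut of equations (12) and (14), together with the coefficient rule of equation (8), can be sketched in a few lines. The Gaussian kernel and default parameter values are our own illustrative choices:

```python
import numpy as np

def k(x, z, sigma=1.0):
    # Illustrative Mercer kernel (Gaussian RBF).
    return np.exp(-np.sum((np.asarray(x) - np.asarray(z))**2) / (2.0 * sigma**2))

def hypercut(x, xi, xj, b, beta=1.0):
    # Eqs. (12) and (14): f_ij(x) = k(x, x_i) - k(x, x_j) + b_ij,
    # then the confidence-rated output h_ij(x) = tanh(beta * f_ij(x)).
    return np.tanh(beta * (k(x, xi) - k(x, xj) + b))

def alpha_t(r):
    # Eq. (8): confidence-rated AdaBoost coefficient from the correlation r_t.
    return 0.5 * np.log((1.0 + r) / (1.0 - r))
```

The output lies strictly in $(-1, +1)$, its sign is the predicted label, and $\alpha_t$ grows monotonically as the weighted correlation $r_t$ of the weak classifier improves.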
3 Experiments

Before applying our algorithm to standard benchmarks, we illustrate a simple 2D example of nonlinear boosted dyadic hypercuts on a “toy” problem. Consider a classification task on the dataset of 20 points (10 for each class) shown in Figure 1. The hypercuts algorithm (using Gaussian kernels) was able to separate the classes using two iterations (two cuts), as shown in Figure 1(a). Note how the dyads of training points (connected by dashed lines) define the discriminant boundary. For comparison, we used an SVM with Gaussian kernels on the same dataset, as shown in Figure 1(b). Although the SVM has a wider margin, the same would be expected from our algorithm with additional rounds of boosting. The computational cost of classifying a point can be directly compared in terms of the number of required kernel evaluations, which dominate the computation for high-dimensional data and kernels like Gaussians. For SVM, this is the number of support vectors. For hypercuts, this is the number of distinct training points in the selected dyads.

Figure 1: A toy problem: classification based on (a) hypercuts (2 dyads), (b) SVM (4 support vectors).

After n rounds of boosting this number is bounded by 2n, since a point can participate in multiple dyads. For instance, the SVM in Figure 1 requires 4 kernel evaluations, compared to 3 for the boosted hypercuts.

3.1 Experiments with real data sets

We evaluated the performance of the dyadic hypercuts algorithm on a number of real-world data sets from the UCI repository [2], and compared the performance to that of two established classification methods: SVM with Gaussian RBF kernel and k-Nearest Neighbor (k-NN). We chose sets large enough for reasonable training/validation/test partitioning, and that represent binary (or easily converted to binary) classification problems.

Dataset      N    d   k-NN         SVM          #SV       Hypercuts    #k.ev.
Heart        90   13  .196 ±.042   .202 ±.038   62 ±10    .202 ±.030   50 ±12
Ionosphere   120  34  .168 ±.024   .064 ±.018   73 ±7     .083 ±.022   63 ±7
WBC          200  9   .034 ±.011   .032 ±.008   50 ±26    .028 ±.007   30 ±12
WPBC         65   32  .250 ±.024   .243 ±.006   63 ±3     .253 ±.025   41 ±5
WDBC         190  30  .044 ±.015   .035 ±.013   67 ±15    .038 ±.014   47 ±12
Wine         60   13  .053 ±.030   .032 ±.022   40 ±9     .040 ±.026   23 ±4
Spam         150  57  .159 ±.025   .123 ±.016   101 ±8    .116 ±.019   73 ±15
Sonar        70   60  .227 ±.041   .226 ±.037   66 ±3     .202 ±.045   52 ±5
Pima         200  8   .267 ±.024   .244 ±.014   129 ±7    .260 ±.017   110 ±16

Table 1: The results of the experiments described in Section 3.1. N is the size of the training set, d the dimension, #SV the number of support vectors for the SVM, and #k.ev. the number of kernel evaluations required by a boosted hypercuts classifier. Means and standard deviations in 30 trials are reported for each data set. WBC, WPBC, WDBC are the Wisconsin Breast Cancer, Prognosis and Diagnosis data sets, respectively.

In each experiment, the data set was randomly partitioned into training, validation and test sets of similar sizes. The validation set was used to “tune” the parameters of each of the classifiers (k for k-NN, σ for RBF kernels of SVMs and hypercuts), by choosing from a suitable range the parameter value with lowest error on the validation set. Each of the three classifiers was then trained with the chosen parameter on the training set, and tested on the test set. For each data set the above experiment was repeated 30 times. The columns of Table 1, left to right, show the following, with means and standard deviations over the 30 trials for each dataset: size of the training set, dimension, the test error of k-NN, the test error of SVM, the number of support vectors, the test error of hypercuts, and the number of kernel evaluations in the final hypercuts classifier.

Figure 2: An example of the progress of training (dotted line) and test (solid line) error in a run of the hypercuts algorithm with RBF kernel on the Spam data. The number of kernel evaluations in the combined classifier is shown for indicated points in the run. The dashed line shows the test error of the SVM with RBF kernel.

Dataset   10%    25%    50%
Heart     .202   .200   .197
Ion.      .178   .113   .094
WBC       .028   .028   .028
WPBC      .302   .269   .266
WDBC      .365   .384   .383
Wine      .064   .051   .043
Spam      .142   .124   .117
Sonar     .248   .233   .214
Pima      .269   .268   .263

Table 2: Test error as a function of the number of kernel evaluations allowed by the user; the percentage values are relative to the number of SVs in each experiment. Averaged over 30 trials for each data set.

The size of the hypercuts classifier can be controlled via the number of AdaBoost iterations, thus affecting the accuracy of the classifier. In our experiments boosting was stopped after a prolonged plateau in the training error was observed; in some cases, further continuation of boosting could lead to better results.

3.2 Discussion

The most important conclusion from these empirical results is that for all data sets, the RBF boosted dyadic hypercuts achieve test performance statistically equivalent to that of SVMs⁴, and usually better than that of k-NN classifiers, while the complexity of the trained classifier is typically lower (in some cases, which appear in bold in Table 1, the difference in complexity is significant). In addition, our experiments demonstrate the trade-off between the complexity and accuracy of the hypercuts. Figure 2 shows an example run of the hypercuts algorithm on the Spam data set, with 150 training points. After 24 iterations, the test error of the final classifier becomes consistently lower than that of the SVM trained on the same training set, which found 96 support vectors.
At that point the classifier requires 27 kernel evaluations (about 28% of the number of SVs). The following 115 iterations achieve a further improvement of only 1.8% in test error, while increasing the required number of kernel evaluations to 78. Here the automatic criterion stopped AdaBoost after no significant improvement in training error was observed for 25 iterations. But the user can instead specify the desired bound on the complexity of the classifier. Table 2 shows the behavior of test error as a function of the number of kernel evaluations used by the classifier, averaged over all 30 trials. For some data sets, e.g. Heart and WBC, the hypercuts classifier with only 10% of the number of kernel evaluations in an SVM already achieves comparable test error.

⁴i.e. the difference of the means is within one standard deviation from both sides.

4 Conclusions

The contribution of this paper is two-fold. First, we proposed a family of simple discriminants (hypercuts), based on pairs of training points from opposite classes (dyads), and extended this family using a nonlinear mapping with Mercer-type kernels. Second, we have designed a greedy selection algorithm based on boosting with confidence-rated (real-valued) hypercut classifiers with continuous output in the interval [-1, 1]. This is a new kernel-based approach to classification. We have shown that this approach performs on par with SVMs, without having to solve large QP problems. In contrast, our algorithm allows the user to trade off the classifier’s computational complexity for its accuracy, and benefits from AdaBoost’s exponential error convergence and the assurance of asymptotic margin maximization. The generalization performance of our algorithm was evaluated on a number of data sets from the UCI repository, and demonstrated to be comparable to that of established state-of-the-art algorithms (SVMs, k-NN), often with reduced classification time and reduced classifier size.
We emphasize this performance advantage, since in practical applications it is often desirable to minimize complexity even at the cost of increased training time. We are currently looking into optimal strategies for sampling the hypothesis space ($M_p M_n$ possible hypercuts) based on the distribution $D_t(i)$, and forming hypercuts that are not necessarily based on training samples but rather, for example, on cluster centroids or other points derived from the input distribution. This has the potential to dramatically reduce the computational cost of learning in the boosted hypercuts algorithm, thus making it even more attractive for a practitioner.

References

[1] M. A. Aizerman, E. M. Braverman, and L. I. Rozonoer. Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control, 25:821–837, 1964.
[2] C. L. Blake and C. J. Merz. UCI repository of machine learning databases. [http://www.ics.uci.edu/∼mlearn/MLRepository.html], 1998.
[3] B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In D. Haussler, editor, Proc. 5th Annual ACM Workshop on Computational Learning Theory, pages 144–152. ACM Press, 1992.
[4] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1995.
[5] T. Jaakkola and D. Haussler. Probabilistic kernel regression models. In D. Heckerman and J. Whittaker, editors, Proc. of 7th International Workshop on AI and Statistics. Morgan Kaufman, 1999.
[6] P. McCullagh and J. Nelder. Generalized Linear Models. Chapman and Hall, London, 1983.
[7] R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. In Proc. of 11th Annual Conf. on Computational Learning Theory, pages 80–91, 1998.
[8] M. E. Tipping. The Relevance Vector Machine. In Advances in Neural Information Processing Systems 12, pages 652–658.
MIT Press, 2000.
[9] V. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
2002
85
2,293
Derivative observations in Gaussian Process Models of Dynamic Systems

E. Solak, Dept. Elec. & Electr. Eng., Strathclyde University, Glasgow G1 1QE, Scotland, UK. ercan.solak@strath.ac.uk
R. Murray-Smith, Dept. Computing Science, University of Glasgow, Glasgow G12 8QQ, Scotland, UK. rod@dcs.gla.ac.uk
W. E. Leithead, Hamilton Institute, National Univ. of Ireland, Maynooth, Co. Kildare, Ireland. bill@icu.strath.ac.uk
D. J. Leith, Hamilton Institute, National Univ. of Ireland, Maynooth, Co. Kildare, Ireland. doug.leith@may.ie
C. E. Rasmussen, Gatsby Computational Neuroscience Unit, University College London, UK. edward@gatsby.ucl.ac.uk

Abstract

Gaussian processes provide an approach to nonparametric modelling which allows a straightforward combination of function and derivative observations in an empirical model. This is of particular importance in identification of nonlinear dynamic systems from experimental data. 1) It allows us to combine derivative information, and the associated uncertainty, with normal function observations in the learning and inference process. This derivative information can be in the form of priors specified by an expert or identified from perturbation data close to equilibrium. 2) It allows a seamless fusion of multiple local linear models in a consistent manner, inferring consistent models and ensuring that integrability constraints are met. 3) It improves dramatically the computational efficiency of Gaussian process models for dynamic system identification, by summarising large quantities of near-equilibrium data by a handful of linearisations, reducing the training set size – traditionally a problem for Gaussian process models.

1 Introduction

In many applications which involve modelling an unknown system $y = f(x)$ from observed data, model accuracy could be improved by using not only observations of $y$, but also observations of derivatives, e.g. $\partial y / \partial x$.
These derivative observations might be directly available from sensors which, for example, measure velocity or acceleration rather than position, or they might be prior linearisation models from historical experiments. A further practical reason is related to the fact that the computational expense of Gaussian processes increases rapidly ($O(N^3)$) with training set size $N$. We may therefore wish to use linearisations, which are cheap to estimate, to describe the system in those areas in which they are sufficiently accurate, efficiently summarising a large subset of training data. We focus on application of such models in modelling nonlinear dynamic systems from experimental data.

2 Gaussian processes and derivative processes

2.1 Gaussian processes

Bayesian regression based on Gaussian processes is described by [1] and interest has grown since publication of [2, 3, 4]. Assume a set of input/output pairs $(x_i, y_i)$, $i = 1, \ldots, N$, are given, where $x_i \in \mathbb{R}^D$. In the GP framework, the output values $y_i$ are viewed as being drawn from a zero-mean multivariable Gaussian distribution whose covariance matrix is a function of the input vectors; namely, the output distribution is

$$p(y_1, \ldots, y_N) = \mathcal{N}(0, C).$$

A general model, which reflects the higher correlation between spatially close (in some appropriate metric) points – a smoothness assumption on the target system – uses a covariance matrix with the following structure:

$$C_{mn} = v \exp\left(-\tfrac{1}{2}\, \|x_m - x_n\|^2_W\right) + v_0\, \delta_{mn}, \qquad (1)$$

where the norm $\|\cdot\|_W$ is defined as $\|z\|^2_W = z^\top W z$, with $W = \mathrm{diag}(w_1, \ldots, w_D)$. The $D + 2$ variables $\{v, w_1, \ldots, w_D, v_0\}$ are the hyper-parameters of the GP model, which are constrained to be non-negative. In particular, $v_0$ is included to capture the noise component of the covariance. The GP model can be used to calculate the distribution of an unknown output $y^*$ corresponding to a known input $x^*$, which is Gaussian with mean and variance

$$\mu = k(x^*)^\top C^{-1} y, \qquad (2)$$

$$\sigma^2 = C(x^*, x^*) - k(x^*)^\top C^{-1} k(x^*), \qquad (3)$$

where $y = [y_1, \ldots, y_N]^\top$ and $k(x^*)$ is the vector of covariances between $x^*$ and the training inputs.
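The predictive equations (2)-(3) can be sketched directly. This is a minimal numpy sketch with a single shared length-scale hyper-parameter; the function names and toy hyper-parameter values are our own choices:

```python
import numpy as np

def sq_exp_cov(A, B, v=1.0, w=1.0):
    # Squared-exponential covariance, the smoothness prior of eq. (1),
    # with one length-scale hyper-parameter w for all input dimensions.
    d2 = (A[:, None] - B[None, :])**2
    return v * np.exp(-0.5 * w * d2)

def gp_predict(X, y, x_star, v=1.0, w=1.0, noise=1e-2):
    # Predictive mean and variance of eqs. (2)-(3):
    #   mu = k*^T C^{-1} y,   s2 = C(x*, x*) - k*^T C^{-1} k*.
    C = sq_exp_cov(X, X, v, w) + noise * np.eye(len(X))
    k_star = sq_exp_cov(X, np.atleast_1d(x_star), v, w)[:, 0]
    mu = k_star @ np.linalg.solve(C, y)
    s2 = v + noise - k_star @ np.linalg.solve(C, k_star)
    return mu, s2
```

The variance shrinks near training inputs and grows back toward the prior variance far away from them, which is the behaviour exploited in Section 2.4.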
The mean $\mu$ of this distribution can be chosen as the maximum-likelihood prediction for the output corresponding to the input $x^*$.

2.2 Gaussian process derivatives

Differentiation is a linear operation, so the derivative of a Gaussian process remains a Gaussian process. The use of derivative observations in Gaussian processes is described in [5, 6], and in engineering applications in [7, 8, 9]. Suppose we are given new sets of pairs $(x_i^d, \omega_i^d)$, $i = 1, \ldots, N_d$, for $d = 1, \ldots, D$, each corresponding to points at which the $d$-th partial derivative of the underlying function is observed. In the noise-free setting this corresponds to the relation

$$\omega_i^d = \left.\frac{\partial f(x)}{\partial x^d}\right|_{x = x_i^d}.$$

We now wish to find the joint probability of the vector of $y$’s and $\omega$’s, which involves calculation of the covariance between the function and the derivative observations as well as the covariance among the derivative observations. Covariance functions are typically differentiable, so the covariance between a derivative and a function observation, and the one between two derivative points, satisfy

$$\mathrm{cov}\left(\omega_m^d, y_n\right) = \frac{\partial}{\partial x_m^d}\, \mathrm{cov}(y_m, y_n), \qquad \mathrm{cov}\left(\omega_m^d, \omega_n^e\right) = \frac{\partial^2}{\partial x_m^d\, \partial x_n^e}\, \mathrm{cov}(y_m, y_n).$$

The following identities give those relations necessary to form the full covariance matrix, for the covariance function (1):

$$\mathrm{cov}(y_m, y_n) = v \exp\left(-\tfrac{1}{2} \sum_d w_d\, (x_m^d - x_n^d)^2\right), \qquad (4)$$

$$\mathrm{cov}\left(\omega_m^d, y_n\right) = -v\, w_d\, (x_m^d - x_n^d)\, \exp\left(-\tfrac{1}{2} \sum_e w_e\, (x_m^e - x_n^e)^2\right), \qquad (5)$$

$$\mathrm{cov}\left(\omega_m^d, \omega_n^e\right) = v\, w_e \left(\delta_{de} - w_d\, (x_m^d - x_n^d)(x_m^e - x_n^e)\right) \exp\left(-\tfrac{1}{2} \sum_c w_c\, (x_m^c - x_n^c)^2\right). \qquad (6)$$

Figure 1: The covariance functions between function and derivative points in one dimension, with hyper-parameters $w = 1$, $v = 1$.

The function $\mathrm{cov}(y, y)$ defines a covariance that decays monotonically as the distance between the corresponding input points increases. The covariance $\mathrm{cov}(\omega, y)$ between a derivative point and a function point is an odd function, and does not decrease as fast due to the presence of the multiplicative distance term.
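The one-dimensional versions of identities (4)-(6) are short enough to write out and check by hand; the function names below are our own:

```python
import numpy as np

def cov_ff(xi, xj, v=1.0, w=1.0):
    # Eq. (4) in one dimension: cov(y_i, y_j).
    return v * np.exp(-0.5 * w * (xi - xj)**2)

def cov_df(xi, xj, v=1.0, w=1.0):
    # Eq. (5): cov(dy_i/dx, y_j), obtained by differentiating (4) w.r.t. xi.
    return -v * w * (xi - xj) * np.exp(-0.5 * w * (xi - xj)**2)

def cov_dd(xi, xj, v=1.0, w=1.0):
    # Eq. (6): cov(dy_i/dx, dy_j/dx), the mixed second derivative of (4).
    return v * w * (1.0 - w * (xi - xj)**2) * np.exp(-0.5 * w * (xi - xj)**2)
```

Differentiating `cov_dd` with respect to the separation shows its minimum sits at distance $\sqrt{3/w}$, which is the "highest negative correlation" distance discussed below.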
$\mathrm{cov}(\omega, \omega)$ illustrates the implicit assumption in the choice of the basic covariance function, that gradients increase with $w$ and that the slopes of realisations will tend to have highest negative correlation at a distance of $\sqrt{3/w}$, giving an indication of the typical size of ‘wiggles’ in realisations of the corresponding Gaussian process.

2.3 Derivative observations from identified linearisations

Given perturbation data $(x_i, y_i)$ around an equilibrium point $(x_0, y_0)$, we can identify a linearisation $\hat{y} = [1 \;\; x^\top]\, \theta$, the parameters $\theta$ of which can be viewed as observations of the derivatives $\omega^1, \ldots, \omega^D$, while the bias term from the linearisation can be used as a function ‘observation’. We use standard linear regression solutions to estimate the derivatives, with a prior $\Lambda$ on the covariance matrix. With $\Phi$ the design matrix whose rows are $[1 \;\; x_i^\top]$,

$$\hat{\theta} = \left( \Phi^\top \Phi + \Lambda \right)^{-1} \Phi^\top y, \qquad (7)$$

$$\hat{\sigma}^2 = \frac{(y - \Phi \hat{\theta})^\top (y - \Phi \hat{\theta})}{n - D - 1}, \qquad (8)$$

$$\Sigma_\theta = \hat{\sigma}^2 \left( \Phi^\top \Phi + \Lambda \right)^{-1}. \qquad (9)$$

$\hat{\theta}$ can be viewed as ‘observations’ which have uncertainty specified by the $(D+1) \times (D+1)$ covariance matrix $\Sigma_\theta$ for the derivative observations and their associated linearisation point. With a suitable ordering of the observations (e.g. grouping the function and derivative ‘observations’ of each linearisation together), the associated noise covariance matrix, which is added to the covariance matrix calculated using (4)–(6), will be block diagonal, where the blocks are the $(D+1) \times (D+1)$ matrices $\Sigma_\theta$. Use of numerical estimates from linearisations makes it easy to use the full covariance matrix, including off-diagonal elements. This would be much more involved if the noise covariance were to be estimated simultaneously with the other covariance function hyper-parameters. In a one-dimensional case, given zero noise on observations, two function observations close together give exactly the same information, and constrain the model in the same way, as a derivative observation with zero uncertainty. Data is, however, rarely noise-free, and the fact that we can so easily include knowledge of derivative or function observation uncertainty is a major benefit of the Gaussian process prior approach.
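Combining function and derivative observations amounts to stacking both kinds of observation in one joint Gaussian whose covariance is assembled from the identities (4)-(6). A minimal 1-D numpy sketch follows; the function name, the single length-scale, and the shared noise term (in place of the full block-diagonal $\Sigma_\theta$) are our own simplifications:

```python
import numpy as np

def joint_gp_predict(xf, yf, xd, yd, x_star, v=1.0, w=1.0, noise=1e-4):
    """Predict f(x*) from 1-D function observations (xf, yf) and derivative
    observations (xd, yd) by stacking them in one joint Gaussian built from
    the covariance identities (4)-(6)."""
    def kff(a, b):
        d = a[:, None] - b[None, :]
        return v * np.exp(-0.5 * w * d**2)
    def kdf(a, b):                      # cov(derivative at a, function at b)
        d = a[:, None] - b[None, :]
        return -v * w * d * np.exp(-0.5 * w * d**2)
    def kdd(a, b):
        d = a[:, None] - b[None, :]
        return v * w * (1.0 - w * d**2) * np.exp(-0.5 * w * d**2)

    # Joint covariance over [function obs; derivative obs], plus noise.
    K = np.block([[kff(xf, xf), kdf(xd, xf).T],
                  [kdf(xd, xf), kdd(xd, xd)]])
    K += noise * np.eye(len(xf) + len(xd))
    xs = np.atleast_1d(x_star)
    k_star = np.concatenate([kff(xs, xf)[0], kdf(xd, xs)[:, 0]])
    obs = np.concatenate([yf, yd])
    mu = k_star @ np.linalg.solve(K, obs)
    s2 = v + noise - k_star @ np.linalg.solve(K, k_star)
    return mu, s2
```

With one function observation $y(0) = 0$ and one derivative observation $y'(0) = 1$, the predicted mean near the origin follows the identified slope, which is exactly how a linearisation summarises local perturbation data.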
The identified derivative and function observations, and their covariance matrix, can locally summarise the large number of perturbation training points, leading to a significant reduction in the data needed during Gaussian process inference. We can, however, choose to improve robustness by retaining any data in the training set from the equilibrium region which have a low likelihood given the GP model based only on the linearisations (e.g. responses three standard deviations away from the mean). In this paper we choose the hyper-parameters that maximise the likelihood of the data, using standard optimisation software. Given the data sets and the hyper-parameters, the Gaussian process can be used to infer the conditional distribution of the output, as well as of its partial derivatives, for a given input. The ability to predict not only the mean function response and derivatives, but also the input-dependent variance of the function response and derivatives, has great utility in many engineering applications, including optimisation and control, which depend on derivative information.

2.4 Derivative and prediction uncertainty

Figure 2(c) gives intuitive insight into the constraining effect of function observations, and of function + derivative observations, on realisations drawn from a Gaussian process prior. To further illustrate the effect of derivative information on prediction uncertainty, we consider a simple example with a single function observation and a single derivative observation at $x = 0$, with fixed hyper-parameters. Figure 2(a) plots the standard deviation of the models resulting from variations of the function and derivative observations. The four cases considered are: 1. a single function observation; 2. a single function observation + a noise-free derivative observation; 3. 150 noisy function observations; 4.
a single function observation + an uncertain derivative observation (identified from the 150 noisy function observations above).

Figure 2: Variance effects of derivative information. (a) The effect of adding a derivative observation on the prediction uncertainty (standard deviation of GP predictions); the model with one function observation + one noisy derivative observation is almost indistinguishable from the model with 150 function observations. (b) Effect of including a noise-free derivative or function observation on the prediction of mean and variance, given appropriate hyper-parameters, for observations of $y = \sin(\alpha x)$. (c) Examples of realisations drawn from a Gaussian process: left, no data; middle, the constraining effect of function observations (crosses); right, the effect of function & derivative observations (lines).

Note that the addition of a derivative point does not have an effect on the mean prediction in any of the cases, because the function derivative is zero. The striking effect of the derivative is on the uncertainty. In the case of prediction using function data alone, the uncertainty increases as we move away from the function observation. Addition of a noise-free derivative observation does not affect the uncertainty at $x = 0$, but it does mean that the uncertainty increases more slowly as we move away from 0; if the uncertainty on the derivative increases, there is less of an impact on the variance.
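This variance behaviour can be reproduced in closed form. Under our simplifying assumptions (SE covariance with hyper-parameters v = w = 1, noise-free observations at x = 0), the posterior variance is 1 - exp(-x^2) with the function observation alone, and 1 - (1 + x^2) exp(-x^2) once the derivative observation is added:

```python
import numpy as np

# Closed-form posterior variances for the comparison in Figure 2(a),
# under assumed hyper-parameters v = w = 1 and noise-free observations
# at x = 0 (our simplification of the paper's setting).

def var_f_only(x):
    # One function observation at 0: k(0,0) = 1 and cross-covariance
    # exp(-x^2/2), so var = 1 - exp(-x^2).
    return 1.0 - np.exp(-x ** 2)

def var_f_and_deriv(x):
    # Adding a noise-free derivative observation at 0: the training
    # covariance is the identity (cov(y(0), y'(0)) = 0, cov(y', y') = 1),
    # with cross-covariances exp(-x^2/2) and x * exp(-x^2/2),
    # so var = 1 - (1 + x^2) * exp(-x^2).
    return 1.0 - (1.0 + x ** 2) * np.exp(-x ** 2)

xs = np.linspace(-2.0, 2.0, 9)
# The derivative never increases the variance, and at x = 0 it changes nothing.
print(np.all(var_f_and_deriv(xs) <= var_f_only(xs) + 1e-12))  # True
print(var_f_only(0.0) == 0.0 and var_f_and_deriv(0.0) == 0.0)  # True
```

The extra term x^2 exp(-x^2) is exactly the variance removed by the derivative observation: zero at the observation point itself, largest at moderate distances, consistent with the qualitative description of Figure 2(a).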
The model based on the single derivative observation identified from the 150 noisy function observations is almost indistinguishable from the model with all 150 function observations. To further illustrate the effect of adding derivative information, consider pairs of noise-free observations of $y = \sin(\alpha x)$. The hyper-parameters of the model are obtained through training involving large amounts of data, but we then perform inference using only a few points. For illustration, the function point at one location is replaced with a derivative point at the same location, and the results are shown in Figure 2(b).

3 Nonlinear dynamics example

As an example of a situation where we wish to integrate derivative and function observations, we look at a discrete-time nonlinear dynamic system

$x_{k+1} = f(x_k, u_k) + e_k, \quad (10)$

$y_k = x_k, \quad (11)$

where $x_k$ is the system state at time $k$, $y_k$ is the observed output, $u_k$ is the control input, and $e_k \sim N(0, \sigma_n^2)$ is a noise term. A standard starting point for identification is to find linear dynamic models at various points on the manifold of equilibria. In the first part of the experiment, we wish to acquire training data by stimulating the system input $u$ to take the system through a wide range of conditions along the manifold of equilibria, shown in Figure 3(a). The linearisations are each identified from 200 function observations obtained by starting a simulation at an equilibrium point $(\bar{x}_i, \bar{u}_i)$ and perturbing the control signal about $\bar{u}_i$. We infer the system response, and the derivative response, at various points along the manifold of equilibria, and plot these in Figure 4. The quadratic derivative $\partial f/\partial x$ from the cubic true function is clearly visible in Figure 4(c), and is smooth, despite the presence of several derivative observations with significant errors, because of the appropriate estimates of derivative uncertainty. The derivative $\partial f/\partial u$ is close to constant in Figure 4(c).
Note that the function 'observations' derived from the linearisations have much lower uncertainty than the individual function observations. In the second part of the experiment, shown in Figure 3(b), we add some off-equilibrium function observations to the training set by applying large control perturbations to the system, taking it through transient regions. We perform a new hyper-parameter optimisation using the combination of the transient, off-equilibrium observations and the derivative observations already available. The model incorporates both groups of data and has reduced variance in the off-equilibrium areas. A comparison of simulation runs from the two models against the true data, shown in Figure 5(a), demonstrates the improvement in performance brought by the combination of equilibrium derivatives and off-equilibrium observations over equilibrium information alone. The combined model's response is almost identical to the true system response.

4 Conclusions

Engineers are used to interpreting linearisations, and find them a natural way of expressing prior knowledge, or constraints that a data-driven model should conform to. Derivative observations in the form of system linearisations are frequently used in control engineering, and many nonlinear identification campaigns will have linearisations of different operating regions as prior information. Acquiring perturbation data close to equilibrium is relatively easy, and the large amounts of data mean that equilibrium linearisations can be made very accurate. While in many cases we will be able to obtain accurate derivative observations, they will rarely be noise-free, and the fact that we can so easily include knowledge of derivative or function observation uncertainty is a major benefit of the Gaussian process prior approach. In this paper we used numerical estimates of the full covariance matrix for each linearisation, which were different for every linearisation.
The analytic inference of derivative information from a model, and importantly of its uncertainty, is potentially of great importance to control engineers designing or validating robust control laws, e.g. [8]. Other applications of models which base decisions on model derivatives will have similar potential benefits. Local linearisation models around equilibrium conditions are, however, not sufficient for specifying global dynamics. We need observations away from equilibrium, in transient regions, which tend to be much sparser as they are more difficult to obtain experimentally, and the system behaviour tends to be more complex away from equilibrium. Gaussian processes, with robust inference and input-dependent uncertainty predictions, are especially interesting in sparsely populated off-equilibrium regions. Summarising the large quantities of near-equilibrium data by derivative 'observations' should significantly reduce the computational problems associated with Gaussian processes in modelling dynamic systems. We have demonstrated with a simulation of an example nonlinear system that Gaussian process priors can combine derivative and function observations in a principled manner which is highly applicable in nonlinear dynamic systems modelling tasks. Any smoothing procedure involving linearisations needs to satisfy an integrability constraint, which has not been solved in a satisfactory fashion in other widely-used approaches (e.g. multiple-model [10] or Takagi-Sugeno fuzzy methods [11]), but which is inherently solved within the Gaussian process formulation. The method scales well to higher input dimensions $D$, adding only an extra $D$ derivative observations + one function observation for each linearisation. In fact the real benefits may become more obvious in higher dimensions, with increased quantities of training data which can be efficiently summarised by linearisations, and more severe problems in blending local linearisations together consistently.
References [1] A. O’Hagan. On curve fitting and optimal design for regression (with discussion). Journal of the Royal Statistical Society B, 40:1–42, 1978. [2] C. K. I. Williams and C. E. Rasmussen. Gaussian processes for regression. In Neural Information Processing Systems - 8, pages 514–520, Cambridge, MA, 1996. MIT press. [3] C. K. I. Williams. Prediction with Gaussian processes: From linear regression to linear prediction and beyond. In M. I. Jordan, editor, Learning and Inference in Graphical Models, pages 599–621. Kluwer, 1998. [4] D. J. C. MacKay. Introduction to Gaussian Processes. NIPS’97 Tutorial notes., 1999. [5] A. O’Hagan. Some Bayesian numerical analysis. In J. M. Bernardo, J. O. Berger, A. P. Dawid, and A. F. M. Smith, editors, Bayesian Statistics 4, pages 345–363. Oxford University Press, 1992. [6] C. E. Rasmussen. Gaussian processes to speed up Hybrid Monte Carlo for expensive Bayesian integrals. Draft: available at http://www.gatsby.ucl.ac.uk/ edward/pub/gphmc.ps.gz, 2003. [7] R. Murray-Smith, T. A. Johansen, and R. Shorten. On transient dynamics, off-equilibrium behaviour and identification in blended multiple model structures. In European Control Conference, Karlsruhe, 1999, pages BA–14, 1999. [8] R. Murray-Smith and D. Sbarbaro. Nonlinear adaptive control using non-parametric Gaussian process prior models. In 15th IFAC World Congress on Automatic Control, Barcelona, 2002. [9] D. J. Leith, W. E. Leithead, E. Solak, and R. Murray-Smith. Divide & conquer identification: Using Gaussian process priors to combine derivative and non-derivative observations in a consistent manner. In Conference on Decision and Control, 2002. [10] R. Murray-Smith and T. A. Johansen. Multiple Model Approaches to Modelling and Control. Taylor and Francis, London, 1997. [11] T. Takagi and M. Sugeno. Fuzzy identification of systems and its applications for modeling and control. IEEE Trans. on Systems, Man and Cybernetics, 15(1):116–132, 1985. 
Acknowledgements

The authors gratefully acknowledge the support of the Multi-Agent Control Research Training Network by EC TMR grant HPRN-CT-1999-00107, support from EPSRC grant Modern statistical approaches to off-equilibrium modelling for nonlinear system control GR/M76379/01, support from EPSRC grant GR/R15863/01, and Science Foundation Ireland grant 00/PI.1/C067. Thanks to J.Q. Shi and A. Girard for useful comments.

Figure 3: The manifold of equilibria on the true function. Circles indicate points at which a derivative observation is made; crosses indicate a function observation. (a) Derivative observations from linearisations identified from the perturbation data, 200 observations per linearisation point, with noisy responses. (b) Derivative observations on equilibrium, and off-equilibrium function observations from a transient trajectory.

Figure 4: Inferred values of the function and its derivatives, with uncertainty contours, as $x$ and $u$ are varied along the manifold of equilibria (cf. Fig. 3). Circles indicate the locations of the derivative observation points; lines indicate the uncertainty of the observations. (a) Function observations. (b), (c) Derivative observations.

(a) Simulation of dynamics: the GP trained with both on- and off-equilibrium data is close to the true system, unlike the model based only on equilibrium data. (b) Inferred mean and variance surfaces using linearisations and off-equilibrium data; the trajectory of the simulation shown in (a) is plotted for comparison.
Figure 5: Modelling results
Global Versus Local Methods in Nonlinear Dimensionality Reduction

Vin de Silva, Department of Mathematics, Stanford University, Stanford, CA 94305. silva@math.stanford.edu

Joshua B. Tenenbaum, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139. jbt@ai.mit.edu

Abstract

Recently proposed algorithms for nonlinear dimensionality reduction fall broadly into two categories which have different advantages and disadvantages: global (Isomap [1]) and local (Locally Linear Embedding [2], Laplacian Eigenmaps [3]). We present two variants of Isomap which combine the advantages of the global approach with what have previously been exclusive advantages of local methods: computational sparsity and the ability to invert conformal maps.

1 Introduction

In this paper we discuss the problem of nonlinear dimensionality reduction (NLDR): the task of recovering meaningful low-dimensional structures hidden in high-dimensional data. An example might be a set of pixel images of an individual's face observed under different pose and lighting conditions; the task is to identify the underlying variables (pose angles, direction of light, etc.) given only the high-dimensional pixel image data. In many cases of interest, the observed data are found to lie on an embedded submanifold of the high-dimensional space. The degrees of freedom along this submanifold correspond to the underlying variables. In this form, the NLDR problem is known as "manifold learning". Classical techniques for manifold learning, such as principal components analysis (PCA) or multidimensional scaling (MDS), are designed to operate when the submanifold is embedded linearly, or almost linearly, in the observation space. More generally there is a wider class of techniques, involving iterative optimization procedures, by which unsatisfactory linear representations obtained by PCA or MDS may be "improved" towards more successful nonlinear representations of the data.
These techniques include GTM [4], self organising maps [5] and others [6,7]. However, such algorithms often fail when nonlinear structure cannot simply be regarded as a perturbation from a linear approximation; as in the Swiss roll of Figure 3. In such cases, iterative approaches tend to get stuck at locally optimal solutions that may grossly misrepresent the true geometry of the situation. Recently, several entirely new approaches have been devised to address this problem. These methods combine the advantages of PCA and MDS—computational efficiency; few free parameters; non-iterative global optimisation of a natural cost function—with the ability to recover the intrinsic geometric structure of a broad class of nonlinear data manifolds. These algorithms come in two flavors: local and global. Local approaches (LLE [2], Laplacian Eigenmaps [3]) attempt to preserve the local geometry of the data; essentially, they seek to map nearby points on the manifold to nearby points in the low-dimensional representation. Global approaches (Isomap [1]) attempt to preserve geometry at all scales, mapping nearby points on the manifold to nearby points in low-dimensional space, and faraway points to faraway points. The principal advantages of the global approach are that it tends to give a more faithful representation of the data’s global structure, and that its metric-preserving properties are better understood theoretically. The local approaches have two principal advantages: (1) computational efficiency: they involve only sparse matrix computations which may yield a polynomial speedup; (2) representational capacity: they may give useful results on a broader range of manifolds, whose local geometry is close to Euclidean, but whose global geometry may not be. In this paper we show how the global geometric approach, as implemented in Isomap, can be extended in both of these directions. 
The results are computational efficiency and representational capacity equal to or in excess of existing local approaches (LLE, Laplacian Eigenmaps), but with the greater stability and theoretical tractability of the global approach. Conformal Isomap (or C-Isomap) is an extension of Isomap which is capable of learning the structure of certain curved manifolds. This extension comes at the cost of making a uniform sampling assumption about the data. Landmark Isomap (or L-Isomap) is a technique for approximating a large global computation in Isomap by a much smaller set of calculations. Most of the work focuses on a small subset of the data, called the landmark points. The remainder of the paper is in two sections. In Section 2, we describe a perspective on manifold learning in which C-Isomap appears as the natural generalisation of Isomap. In Section 3 we derive L-Isomap from a landmark version of classical MDS.

2 Isomap for conformal embeddings

2.1 Manifold learning and geometric invariants

We can view the problem of manifold learning as an attempt to invert a generative model for a set of observations. Let $Y$ be a $d$-dimensional domain contained in the Euclidean space $\mathbb{R}^d$, and let $f : Y \to \mathbb{R}^N$ be a smooth embedding, for some $N > d$. The object of manifold learning is to recover $Y$ and $f$ based on a given set $\{x_i\}$ of observed data in $\mathbb{R}^N$. The observed data arise as follows. Hidden data $\{y_i\}$ are generated randomly in $Y$, and are then mapped by $f$ to become the observed data, $x_i = f(y_i)$. The problem as stated is ill-posed: some restriction is needed on $f$ if we are to relate the observed geometry of the data to the structure of the hidden variables $\{y_i\}$ and $Y$ itself. We will discuss two possibilities. The first is that $f$ is an isometric embedding in the sense of Riemannian geometry, so $f$ preserves infinitesimal lengths and angles. The second possibility is that $f$ is a conformal embedding: it preserves angles but not lengths.
Equivalently, at every point $y \in Y$ there is a scalar $s(y) > 0$ such that infinitesimal vectors at $y$ get magnified in length by a factor $s(y)$. The class of conformal embeddings includes all isometric embeddings, as well as many other families of maps, including stereographic projections such as the Mercator projection. One approach to solving a manifold learning problem is to identify which aspects of the geometry of $Y$ are invariant under the mapping $f$. For example, if $f$ is an isometric embedding then by definition infinitesimal distances are preserved. But more is true. The length of a path in $Y$ is defined by integrating the infinitesimal distance metric along the path. The same is true in $f(Y)$, so $f$ preserves path lengths. If $y, z$ are two points in $Y$, then the shortest path between $y$ and $z$ lying inside $Y$ is the same length as the shortest path between $f(y)$ and $f(z)$ along $f(Y)$. Thus geodesic distances are preserved. The conclusion is that $Y$ is isometric with $f(Y)$, regarded as metric spaces under geodesic distance. Isomap exploits this idea by constructing the geodesic metric for $f(Y)$ approximately as a matrix, using the observed data alone. To solve the conformal embedding problem, we need to identify an observable geometric invariant of conformal maps. Since conformal maps are locally isometric up to a scale factor $s(y)$, it is natural to try to estimate $s(y)$ at each point $f(y)$ in the observed data. By rescaling, we can then restore the original metric structure of the data and proceed as in Isomap. We can do this by noting that a conformal map $f$ rescales local volumes in $Y$ by a factor $s(y)^d$. Hence if the hidden data are sampled uniformly in $Y$, the local density of the observed data will be $1/s(y)^d$. It follows that the conformal factor $s(y)$ can be estimated in terms of the observed local data density, provided that the original sampling is uniform. C-Isomap implements a version of this idea which is independent of the dimension $d$.
This uniform sampling assumption may appear to be a severe restriction, but we believe it reflects a necessary tradeoff in dealing with a larger class of maps. Moreover, as we illustrate below, our algorithm appears in practice to be robust to moderate violations of this assumption.

2.2 The Isomap and C-Isomap algorithms

There are three stages to Isomap [1]:

1. Determine a neighbourhood graph $G$ of the observed data $\{x_i\}$ in a suitable way. For example, $G$ might contain the edge $x_i x_j$ iff $x_i$ is one of the $k$ nearest neighbours of $x_j$ (and vice versa). Alternatively, $G$ might contain the edge $x_i x_j$ iff $|x_i - x_j| < \epsilon$, for some $\epsilon$.

2. Compute shortest paths in the graph for all pairs of data points. Each edge $x_i x_j$ in the graph is weighted by its Euclidean length $|x_i - x_j|$, or by some other useful metric.

3. Apply MDS to the resulting shortest-path distance matrix to find a new embedding of the data in Euclidean space, approximating $Y$.

The premise is that local metric information (in this case, the lengths of edges in the neighbourhood graph) is regarded as a trustworthy guide to the local metric structure in the original (latent) space. The shortest-paths computation then gives an estimate of the global metric structure, which can be fed into MDS to produce the required embedding. It is known that Step 2 converges on the true geodesic structure of the manifold given sufficient data, and thus Isomap yields a faithful low-dimensional Euclidean embedding whenever the function $f$ is an isometry. More precisely, we have (see [8]):

Theorem. Let $Y$ be sampled from a bounded convex region in $\mathbb{R}^d$, with respect to a density function $\alpha = \alpha(y)$. Let $f$ be a $C^2$-smooth isometric embedding of that region in $\mathbb{R}^N$. Given $\lambda, \mu > 0$, for a suitable choice of neighbourhood size parameter $\epsilon$ or $k$, we have

$1 - \lambda \le \frac{\text{recovered distance}}{\text{original distance}} \le 1 + \lambda$

with probability at least $1 - \mu$, provided that the sample size is sufficiently large. [The formula is taken to hold for all pairs of points simultaneously.]

C-Isomap is a simple variation on Isomap.
Specifically, we use the $k$-neighbours method in Step 1, and replace Step 2 with the following:

2a. Compute shortest paths in the graph for all pairs of data points. Each edge $x_i x_j$ in the graph is weighted by $|x_i - x_j| / \sqrt{M(i) M(j)}$. Here $M(i)$ is the mean distance of $x_i$ to its $k$ nearest neighbours.

Using similar arguments to those in [8], one can prove a convergence theorem for C-Isomap. The exact formula for the weights is not critical in the asymptotic analysis. The point is that the rescaling factor $1/\sqrt{M(i) M(j)}$ is an asymptotically accurate approximation to the conformal scaling factor in the neighbourhood of $x_i$ and $x_j$.

Theorem. Let $Y$ be sampled uniformly from a bounded convex region in $\mathbb{R}^d$. Let $f$ be a $C^2$-smooth conformal embedding of that region in $\mathbb{R}^N$. Given $\lambda, \mu > 0$, for a suitable choice of neighbourhood size parameter $k$, we have

$1 - \lambda \le \frac{\text{recovered distance}}{\text{original distance}} \le 1 + \lambda$

with probability at least $1 - \mu$, provided that the sample size is sufficiently large.

It is possible but unpleasant to find explicit lower bounds for the sample size. Qualitatively, we expect to require a larger sample size for C-Isomap, since it depends on two approximations (local data density and geodesic distance) rather than one. In the special case where the conformal embedding is actually an isometry, it is therefore preferable to use Isomap rather than C-Isomap. This is borne out in practice.

2.3 Examples

We ran C-Isomap, Isomap, MDS and LLE on three "fishbowl" examples with different data distributions, as well as a more realistic simulated data set. We refer to Figure 1. Fishbowls: These three datasets differ only in the probability density used to generate the points. For the conformal fishbowl (column 1), 2000 points were generated randomly uniformly in a circular disk and then projected stereographically (hence conformally mapped) onto a sphere. Note the high concentration of points near the rim.
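The three stages, with the C-Isomap reweighting as an option, can be sketched compactly. This is an illustration only (pure NumPy, a Floyd-Warshall loop in place of Dijkstra, and our own parameter names), not the authors' implementation:

```python
import numpy as np

# Minimal sketch of the three Isomap stages, with the C-Isomap edge
# reweighting as an option.  Illustrative only: Floyd-Warshall stands in
# for Dijkstra, and classical MDS is done by eigendecomposition.

def isomap(X, k=5, dim=2, conformal=False):
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Stage 1: symmetrised k-nearest-neighbour graph.
    G = np.full((n, n), np.inf)
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]
        G[i, nbrs] = D[i, nbrs]
        G[nbrs, i] = D[nbrs, i]
    if conformal:
        # C-Isomap: weight edge (i, j) by |xi - xj| / sqrt(M(i) M(j)),
        # where M(i) is the mean distance of xi to its k nearest neighbours.
        M = np.sort(D, axis=1)[:, 1:k + 1].mean(axis=1)
        G = G / np.sqrt(M[:, None] * M[None, :])
    np.fill_diagonal(G, 0.0)
    # Stage 2: all-pairs shortest paths (Floyd-Warshall).
    for m in range(n):
        G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])
    # Stage 3: classical MDS on the geodesic distance matrix.
    H = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * H @ (G ** 2) @ H
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Toy check: evenly spaced points on a line are recovered exactly (up to
# sign), since the graph geodesics coincide with the line distances.
X = np.linspace(0.0, 1.0, 30)[:, None]
Y = isomap(X, k=3, dim=1)
```

The same function with `conformal=True` applies the 2a reweighting; on the line example the rescaling factor is essentially constant, so the two variants agree up to scale, mirroring the observation above that C-Isomap reduces to Isomap when the embedding is an isometry.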
There is no metrically faithful way of embedding a curved fishbowl inside a Euclidean plane, so classical MDS and Isomap cannot succeed. As predicted, C-Isomap does recover the original disk structure of $Y$ (as does LLE). Contrast this with the uniform fishbowl (column 2), with data points sampled using a uniform measure on the fishbowl itself. In this situation C-Isomap behaves like Isomap, since the rescaling factor is approximately constant; hence it is unable to find a topologically faithful 2-dimensional representation. The offset fishbowl (column 3) is a perturbed version of the conformal fishbowl: points are sampled in $Y$ using a shallow Gaussian offset from the center, then stereographically projected onto a sphere. Although the theoretical conditions for perfect recovery are not met, C-Isomap is robust enough to find a topologically correct embedding. LLE, in contrast, produces topological errors and metric distortion in both cases where the data are not uniformly sampled in $Y$ (columns 2 and 3). Face images: Artificial images of a face were rendered as 128 x 128 pixel images and rasterized into 16384-dimensional vectors. The images varied randomly and independently in two parameters: left-right pose angle $\theta$ and distance from camera $\rho$. There is a natural family of conformal transformations for this data manifold, if we ignore perspective distortions in the closest images: namely $\rho \mapsto c\rho$, for $c > 0$, which has the effect of shrinking or magnifying the apparent size of images by a constant factor. Sampling uniformly in these two parameters gives a data set approximately satisfying the required conditions for C-Isomap. We generated 2000 face images in this way, spanning the range indicated by Figure 2. All four algorithms returned a two-dimensional embedding of the data. As expected, C-Isomap returns the cleanest embedding, separating the two degrees of freedom reliably along the horizontal and vertical axes. Isomap returns an embedding which narrows predictably as the face gets further away.
The LLE embedding is highly distorted.

Figure 1: Four dimensionality reduction algorithms (MDS, Isomap, C-Isomap, and LLE, all with k = 15) are applied to three versions of a toy "fishbowl" dataset (conformal, uniform and offset), and to a more complex data manifold of face images.

Figure 2: A set of 2000 face images were randomly generated, varying independently in two parameters: distance and left-right pose. The four extreme cases are shown.

3 Isomap with landmark points

The Isomap algorithm has two computational bottlenecks. The first is calculating the shortest-paths distance matrix $D_N$. Using Floyd's algorithm this is $O(N^3)$; this can be improved to $O(k N^2 \log N)$ by implementing Dijkstra's algorithm with Fibonacci heaps ($k$ is the neighbourhood size). The second bottleneck is the MDS eigenvalue calculation, which involves a full $N \times N$ matrix and has complexity $O(N^3)$. In contrast, the eigenvalue computations in LLE and Laplacian Eigenmaps are sparse (hence considerably cheaper). L-Isomap addresses both of these inefficiencies at once. We designate $n$ of the data points to be landmark points, where $n \ll N$. Instead of computing $D_N$, we compute the $n \times N$ matrix $D_{n,N}$ of distances from each data point to the landmark points only. Using a new procedure LMDS (Landmark MDS), we find a Euclidean embedding of the data using $D_{n,N}$ instead of $D_N$. This leads to an enormous savings when $n$ is much less than $N$, since $D_{n,N}$ can be computed using Dijkstra in $O(k n N \log N)$ time, and LMDS runs in $O(n^2 N)$. LMDS is feasible precisely because we expect the data to have a low-dimensional embedding. The first step is to apply classical MDS to the landmark points only, embedding them faithfully in $\mathbb{R}^d$. Each remaining point $x$ can now be located in $\mathbb{R}^d$ by using its known distances from the landmark points as constraints.
This is analogous to the Global Positioning System technique of using a finite number of distance readings to identify a geographic location. If $n \ge d + 1$ and the landmarks are in general position, then there are enough constraints to locate $x$ uniquely. The landmark points may be chosen randomly, with $n$ taken to be sufficiently larger than the minimum $d + 1$ to ensure stability.

3.1 The Landmark MDS algorithm

LMDS begins by applying classical MDS [9, 10] to the landmarks-only distance matrix $D_n$. We recall the procedure. The first step is to construct an "inner-product" matrix $B_n = -\tfrac{1}{2} H_n \Delta_n H_n$; here $\Delta_n$ is the matrix of squared distances and $H_n$ is the "centering" matrix defined by $H_n = I_n - \tfrac{1}{n} \mathbf{1} \mathbf{1}^\top$. Next find the eigenvalues and eigenvectors of $B_n$. Write $\lambda_i$ for the positive eigenvalues (labelled so that $\lambda_1 \ge \lambda_2 \ge \cdots$), and $\vec{v}_i$ for the corresponding eigenvectors (written as column vectors); non-positive eigenvalues are ignored. Then for $d$ at most the number of positive eigenvalues, the required optimal $d$-dimensional embedding vectors are given as the columns of the matrix

$L = \begin{bmatrix} \sqrt{\lambda_1}\, \vec{v}_1^{\,\top} \\ \sqrt{\lambda_2}\, \vec{v}_2^{\,\top} \\ \vdots \\ \sqrt{\lambda_d}\, \vec{v}_d^{\,\top} \end{bmatrix}.$

The embedded data are automatically mean-centered, with principal components aligned with the axes, most significant first. If $B_n$ has no negative eigenvalues, then the $d$-dimensional embedding is perfect; otherwise there is no exact Euclidean embedding. The second stage of LMDS is to embed the remaining points in $\mathbb{R}^d$. Let $\vec{\delta}_x$ denote the column vector of squared distances between a data point $x$ and the landmark points. The embedding vector $\vec{y}_x$ is related linearly to $\vec{\delta}_x$ by the formula

$\vec{y}_x = -\tfrac{1}{2} L^{\#} (\vec{\delta}_x - \bar{\delta}),$

where $\bar{\delta}$ is the column mean of $\Delta_n$ and $L^{\#}$ is the pseudoinverse transpose of $L$, whose rows are $\vec{v}_i^{\,\top} / \sqrt{\lambda_i}$, $i = 1, \ldots, d$.
Figure 3: L-Isomap is stable over a wide range of values for the sparseness parameter $n$ (the number of landmarks); with $k = 8$, embeddings using 20, 10, 4 and even 3 landmarks closely match the true Swiss roll embedding. Results from LLE (with neighbourhood sizes $k = 18, 14, 10, 6$) are shown for comparison.

The final (optional) stage is to use PCA to realign the data with the coordinate axes. A full discussion of LMDS will appear in [11]. We note two results here:

1. If $x$ is a landmark point, then the embedding given by LMDS is consistent with the original MDS embedding.

2. If the distance matrix $D_{n,N}$ can be represented exactly by a Euclidean configuration in $\mathbb{R}^d$, and if the landmarks are chosen so that their affine span in that configuration is $d$-dimensional (i.e. in general position), then LMDS will recover the configuration exactly, up to rotation and translation.

A good way to satisfy the affine span condition is to pick $d + 1$ landmarks randomly, plus a few extra for stability. This is important for Isomap, where the distances are inherently slightly noisy. The robustness of LMDS to noise depends on the matrix norm $\|L^{\#}\| = 1/\sqrt{\lambda_d}$. If $\lambda_d$ is very small, then all the landmarks lie close to a hyperplane and LMDS performs poorly with noisy data. In practice, choosing a few extra landmark points gives satisfactory results.

3.2 Example

Figure 3 shows the results of testing L-Isomap on a Swiss roll data set. 2000 points were generated uniformly in a rectangle (top left) and mapped into a Swiss roll configuration in $\mathbb{R}^3$. Ordinary Isomap recovers the rectangular structure correctly provided that the neighbourhood parameter is not too large (in this case $k = 8$ works). The tests show that this performance is not significantly degraded when L-Isomap is used. For each $n$, we chose $n$ landmark points at random; even down to 4 landmarks the embedding closely approximates the (non-landmark) Isomap embedding.
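The two stages of LMDS can be sketched compactly. This is an illustrative reimplementation (our own variable names, NumPy eigendecomposition in place of a tuned MDS routine), not the authors' code:

```python
import numpy as np

# Sketch of Landmark MDS: classical MDS on the landmarks, then a
# distance-based embedding of every point.  Illustrative only.

def landmark_mds(Dl, Da, dim):
    """Dl: m x m distances among the landmarks; Da: m x n distances from
    the m landmarks to all n points.  Returns an n x dim embedding."""
    m = Dl.shape[0]
    H = np.eye(m) - np.ones((m, m)) / m
    B = -0.5 * H @ (Dl ** 2) @ H                # "inner-product" matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]        # largest eigenvalues first
    lam, V = vals[order], vecs[:, order]
    Lpinv = (V / np.sqrt(lam)).T                # pseudoinverse transpose of
                                                # the landmark embedding
    mu = (Dl ** 2).mean(axis=1)                 # column mean of squared dists
    return (-0.5 * Lpinv @ (Da ** 2 - mu[:, None])).T

# Toy check: with exact Euclidean distances and landmarks in general
# position, LMDS recovers the configuration up to a rigid motion.
rng = np.random.default_rng(0)
P = rng.standard_normal((40, 2))
D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
lm = np.arange(5)                               # d + 1 landmarks plus extras
Y = landmark_mds(D[np.ix_(lm, lm)], D[lm, :], dim=2)
DY = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
```

In this exact-distance setting the pairwise distances of the output match the input distances, in line with result 2 above; with noisy geodesic distances the extra landmarks beyond d + 1 supply the stability discussed in the text.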
The configuration of three landmarks was chosen especially to illustrate the affine distortion that may arise if the landmarks lie close to a subspace (in this case, a line). For three landmarks chosen at random, results are generally much better. In contrast, LLE is unstable under changes in its sparseness parameter k (the neighbourhood size). To be fair, k is principally a topological parameter and only incidentally a sparseness parameter for LLE. In L-Isomap, these two roles are separately fulfilled by the neighbourhood size k and the number of landmarks n. 4 Conclusion Local approaches to nonlinear dimensionality reduction such as LLE or Laplacian Eigenmaps have two principal advantages over a global approach such as Isomap: they tolerate a certain amount of curvature and they lead naturally to a sparse eigenvalue problem. However, neither curvature tolerance nor computational sparsity is explicitly part of the formulation of the local approaches; these features emerge as byproducts of the goal of trying to preserve only the data's local geometric structure. Because they are not explicit goals but only convenient byproducts, they are not in fact reliable features of the local approach. The conformal invariance of LLE can fail in sometimes surprising ways, and the computational sparsity is not tunable independently of the topological sparsity of the manifold. In contrast, we have presented two extensions to Isomap that are explicitly designed to remove a well-characterized form of curvature and to exploit the computational sparsity intrinsic to low-dimensional manifolds. Both extensions are amenable to algorithmic analysis, with provable conditions under which they return accurate results; and they have been tested successfully on challenging data sets. Acknowledgments This work was supported in part by NSF grant DMS-0101364, and grants from Schlumberger, MERL and the DARPA Human ID program. 
The authors wish to thank Thomas Vetter for providing the range and texture maps for the synthetic face; and Lauren Schmidt for her help in rendering the actual images using Curious Labs' "Poser" software. References [1] Tenenbaum, J.B., de Silva, V. & Langford, J.C. (2000) A global geometric framework for nonlinear dimensionality reduction. Science 290: 2319–2323. [2] Roweis, S. & Saul, L. (2000) Nonlinear dimensionality reduction by locally linear embedding. Science 290: 2323–2326. [3] Belkin, M. & Niyogi, P. (2002) Laplacian eigenmaps and spectral techniques for embedding and clustering. In T.G. Dietterich, S. Becker and Z. Ghahramani (eds.), Advances in Neural Information Processing Systems 14. MIT Press. [4] Bishop, C., Svensen, M. & Williams, C. (1998) GTM: The generative topographic mapping. Neural Computation 10(1). [5] Kohonen, T. (1984) Self-Organisation and Associative Memory. Springer-Verlag, Berlin. [6] Bregler, C. & Omohundro, S.M. (1995) Nonlinear image interpolation using manifold learning. In G. Tesauro, D.S. Touretzky & T.K. Leen (eds.), Advances in Neural Information Processing Systems 7: 973–980. MIT Press. [7] DeMers, D. & Cottrell, G. (1993) Non-linear dimensionality reduction. In S. Hanson, J. Cowan & L. Giles (eds.), Advances in Neural Information Processing Systems 5: 580–590. Morgan-Kaufmann. [8] Bernstein, M., de Silva, V., Langford, J.C. & Tenenbaum, J.B. (December 2000) Graph approximations to geodesics on embedded manifolds. Preprint may be downloaded at the URL: http://isomap.stanford.edu/BdSLT.pdf [9] Torgerson, W.S. (1958) Theory and Methods of Scaling. Wiley, New York. [10] Cox, T.F. & Cox M.A.A. (1994) Multidimensional Scaling. Chapman & Hall, London. [11] de Silva, V. & Tenenbaum, J.B. (in preparation) Sparse multidimensional scaling using landmark points.
2002
87
2,295
Learning Graphical Models with Mercer Kernels Francis R. Bach Division of Computer Science University of California Berkeley, CA 94720 fbach@cs.berkeley.edu Michael I. Jordan Computer Science and Statistics University of California Berkeley, CA 94720 jordan@cs.berkeley.edu Abstract We present a class of algorithms for learning the structure of graphical models from data. The algorithms are based on a measure known as the kernel generalized variance (KGV), which essentially allows us to treat all variables on an equal footing as Gaussians in a feature space obtained from Mercer kernels. Thus we are able to learn hybrid graphs involving discrete and continuous variables of arbitrary type. We explore the computational properties of our approach, showing how to use the kernel trick to compute the relevant statistics in linear time. We illustrate our framework with experiments involving discrete and continuous data. 1 Introduction Graphical models are a compact and efficient way of representing a joint probability distribution of a set of variables. In recent years, there has been a growing interest in learning the structure of graphical models directly from data, either in the directed case [1, 2, 3, 4] or the undirected case [5]. Current algorithms deal reasonably well with models involving discrete variables or Gaussian variables having only limited interaction with discrete neighbors. However, applications to general hybrid graphs and to domains with general continuous variables are few, and are generally based on discretization. In this paper, we present a general framework that can be applied to any type of variable. We make use of a relationship between kernel-based measures of "generalized variance" in a feature space, and quantities such as mutual information and pairwise independence in the input space. In particular, suppose that each variable x_i in our domain is mapped into a high-dimensional space F_i via a map Φ_i. 
Let φ_i = Φ_i(x_i) and consider the set of random variables (φ_1, ..., φ_m) in feature space. Suppose that we compute the mean and covariance matrix of these variables and consider a set of Gaussian variables, (z_1, ..., z_m), that have the same mean and covariance. We showed in [6] that a canonical correlation analysis of (z_1, ..., z_m) yields a measure, known as "kernel generalized variance," that characterizes pairwise independence among the original variables (x_1, ..., x_m), and is closely related to the mutual information among the original variables. This link led to a new set of algorithms for independent component analysis. In the current paper we pursue this idea in a different direction, considering the use of the kernel generalized variance as a surrogate for the mutual information in model selection problems. Effectively, we map data into a feature space via a set of Mercer kernels, with different kernels for different data types, and treat all data on an equal footing as Gaussian in feature space. We briefly review the structure-learning problem in Section 2, and in Section 4 and Section 5 we show how classical approaches to the problem, based on MDL/BIC and conditional independence tests, can be extended to our kernel-based approach. In Section 3 we show that by making use of the "kernel trick" we are able to compute the sample covariance matrix in feature space in linear time in the number of samples. Section 6 presents experimental results. 2 Learning graphical models Structure learning algorithms generally use one of two equivalent interpretations of graphical models [7]: the compact factorization of the joint probability distribution function leads to local search algorithms, while conditional independence relationships suggest methods based on conditional independence tests. Local search. In this approach, structure learning is explicitly cast as a model selection problem. 
For directed graphical models, in the MDL/BIC setting of [2], the likelihood is penalized by a model selection term that is equal to (log N)/2 times the number of parameters necessary to encode the local distributions. The likelihood term can be decomposed and expressed as follows: ℓ(G) = Σ_i ℓ_i(G), with ℓ_i(G) = N Î(x_i, x_{π_i}) (up to terms that do not depend on the graph), where π_i is the set of parents of node i in the graph to be scored and Î(x_i, x_{π_i}) is the empirical mutual information between the variable x_i and the vector x_{π_i}. These mutual information terms and the number of parameters for each local conditional distribution are easily computable in discrete models, as well as in Gaussian models. Alternatively, in a full Bayesian framework, under assumptions about parameter independence, parameter modularity, and prior distributions (Dirichlet for discrete networks, inverse Wishart for Gaussian networks), the log-posterior probability of a graph given the data can be decomposed in a similar way [1, 3]. Given that our approach is based on the assumption of Gaussianity in feature space, we could base our development on either the MDL/BIC approach or the full Bayesian approach. In this paper, we extend the MDL/BIC approach, as detailed in Section 4. Conditional independence tests. In this approach, conditional independence tests are performed to constrain the structure of possible graphs. For undirected models, going from the graph to the set of conditional independences is relatively easy: there is an edge between x_i and x_j if and only if x_i and x_j are not independent given all the other variables [7]. In Section 5, we show how our approach could be used to perform independence tests and learn an undirected graphical model. We also show how this approach can be used to prune the search space for the local search of a directed model. 3 Gaussians in feature space In this section, we introduce our Gaussianity assumption and show how to approximate the mutual information, as required for the structure learning algorithms. 
3.1 Mercer Kernels A Mercer kernel on a space X is a function k(x, y) from X × X to R such that for any set of points {x^1, ..., x^N} in X, the N × N matrix K, defined by K_ab = k(x^a, x^b), is positive semidefinite. The matrix K is usually referred to as the Gram matrix of the points {x^a}. Given a Mercer kernel k(x, y), it is possible to find a space F and a map Φ from X to F, such that k(x, y) is the dot product in F between Φ(x) and Φ(y) (see, e.g., [8]). The space F is usually referred to as the feature space and the map Φ as the feature map. We will use the notation ⟨f, g⟩ to denote the dot product of f and g in the feature space F, and f^T to denote the representative of f in the dual space of F. For a discrete variable x which takes values in {1, ..., r}, we use the trivial kernel k(x, y) = δ(x, y), which corresponds to a feature space of dimension r. The feature map is Φ(x) = (δ(x, 1), ..., δ(x, r))^T. Note that this mapping corresponds to the usual embedding of a multinomial variable of order r in the vector space R^r. For continuous variables, we use the Gaussian kernel k(x, y) = exp(−(x − y)²/2σ²). The feature space has infinite dimension, but as we will show, the data only occupy a small linear manifold and this linear subspace can be determined adaptively in linear time. Note that an alternative is to use the kernel k(x, y) = xy, which corresponds to simply modeling the data as Gaussian in input space. 3.2 Notation Let x_1, ..., x_m be m random variables with values in spaces X_1, ..., X_m. Let us assign a Mercer kernel k_i to each of the input spaces X_i, with feature space F_i and feature map Φ_i. The random vector of feature images (φ_1, ..., φ_m) = (Φ_1(x_1), ..., Φ_m(x_m)) has a covariance matrix C defined by blocks, with block C_ij being the covariance matrix between φ_i = Φ_i(x_i) and φ_j = Φ_j(x_j). Let z = (z_1, ..., z_m) denote a jointly Gaussian vector with the same mean and covariance as (φ_1, ..., φ_m). The vector z will be used as the random vector on which the learning of graphical model structure is based. 
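Both kernels, and the Gram-matrix centering used in the next subsection, are one-liners in NumPy; a minimal sketch (our own code, not the authors'):

```python
import numpy as np

def gram_discrete(x):
    """Trivial kernel k(x, y) = delta(x, y) for a discrete variable."""
    x = np.asarray(x)
    return (x[:, None] == x[None, :]).astype(float)

def gram_gaussian(x, sigma=1.0):
    """Gaussian kernel k(x, y) = exp(-(x - y)^2 / (2 sigma^2))."""
    x = np.asarray(x, dtype=float)
    return np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * sigma ** 2))

def center_gram(L):
    """K = (I - J/N) L (I - J/N): Gram matrix of the mean-centered features."""
    N = L.shape[0]
    H = np.eye(N) - np.ones((N, N)) / N
    return H @ L @ H

rng = np.random.default_rng(0)
Kd = gram_discrete(rng.integers(0, 3, 40))   # multinomial variable, r = 3
Kc = gram_gaussian(rng.standard_normal(40))  # continuous variable
```

A quick sanity check: for the linear kernel k(x, y) = xy, centering the Gram matrix is exactly the Gram matrix of the mean-subtracted points.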
Note that the sufficient statistics for this vector are the pairwise covariances cov(Φ_i(x_i), Φ_j(x_j)), and are inherently pairwise. No dependency involving strictly more than two variables is modeled explicitly, which makes our scoring metric easy to compute. In Section 6, we present empirical evidence that good models can be learned using only pairwise information. 3.3 Computing sample covariances using the kernel trick We are given a random sample (x^1, ..., x^N) of elements of X_1 × ··· × X_m. By mapping into the feature spaces, we define mN elements φ_i^a = Φ_i(x_i^a). We assume that for each i the data in feature space Φ_i(x_i^1), ..., Φ_i(x_i^N) have been centered, i.e., Σ_a φ_i^a = 0. The sample covariance matrix Ĉ_ij is then equal to Ĉ_ij = (1/N) Σ_a φ_i^a (φ_j^a)^T. Note that a Gaussian with covariance matrix Ĉ has zero variance along directions that are orthogonal to the images of the data. Consequently, in order to compute the mutual information, we only need to compute the covariance matrix of the projection of z onto the linear span of the data, that is, for all a, b in {1, ..., N}:

(φ_i^a)^T Ĉ_ij φ_j^b = (1/N) Σ_c ⟨φ_i^a, φ_i^c⟩ ⟨φ_j^c, φ_j^b⟩ = (1/N) e_a^T K_i K_j e_b   (1)

where e_a denotes the N × 1 vector with zeros everywhere except at position a, and K_i is the so-called centered Gram matrix of the i-th component, defined from the Gram matrix L_i of the original (non-centered) points as K_i = (I − J/N) L_i (I − J/N), where J is an N × N matrix composed of ones [8]. From Eq. (1), we see that the sample covariance matrix of z in the "data basis" has blocks (1/N) K_i K_j. 3.4 Regularization When the feature space has infinite dimension (as in the case of a Gaussian kernel on R), the covariance we are implicitly fitting with a kernel method has an infinite number of parameters. In order to avoid overfitting and control the capacity of our models, we regularize by smoothing the Gaussian z by another Gaussian with small variance (for an alternative interpretation and further details, see [6]). Let κ be a small constant. 
We add to z_i an isotropic Gaussian with covariance κI in an orthonormal basis. In the data basis, the covariance of this Gaussian is exactly the block diagonal matrix with blocks κK_i. Consequently, our regularized Gaussian covariance C_κ has blocks (C_κ)_ij = K_i K_j if i ≠ j and (C_κ)_ii = K_i² + κK_i. Since κ is a small constant, we can use (C_κ)_ii ≈ (K_i + (κ/2)I)², which leads to a more compact correlation matrix R_κ, with blocks (R_κ)_ij = r_i r_j for i ≠ j, and (R_κ)_ii = I, where r_i = K_i (K_i + (κ/2)I)^{-1}. These cross-correlation matrices have exact dimension N, but since the eigenvalues of K_i are softly thresholded to zero or one by the regularization, the effective dimension is d_i = tr K_i (K_i + (κ/2)I)^{-1}. This dimensionality d_i will be used as the dimension of our Gaussian variables for the MDL/BIC criterion, in Section 4. 3.5 Efficient implementation Direct manipulation of N × N matrices would lead to algorithms that scale as O(N³). Gram matrices, however, are known to be well approximated by matrices of low rank M. The approximation is exact when the feature space has finite dimension r (e.g., with discrete kernels), and M can be chosen less than r. In the case of continuous data with the Gaussian kernel, we have shown that M can be chosen to be upper bounded by a constant independent of N [6]. Finding a low-rank decomposition can thus be done through incomplete Cholesky decomposition in linear time in N (for a detailed treatment of this issue, see [6]). Using the incomplete Cholesky decomposition, for each matrix K_i we obtain the factorization K_i = G_i G_i^T, where G_i is an N × M_i matrix with rank M_i, where M_i ≪ N. We perform a singular value decomposition of G_i to obtain an N × M_i matrix U_i with orthogonal columns (i.e., such that U_i^T U_i = I), and an M_i × M_i diagonal matrix Λ_i such that K_i = G_i G_i^T = U_i Λ_i U_i^T. We have r_i = K_i (K_i + (κ/2)I)^{-1} = U_i V_i U_i^T, where V_i is the diagonal matrix obtained from the diagonal matrix Λ_i by applying the function λ ↦ λ/(λ + κ/2) to its elements. 
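The soft thresholding and the effective dimension can be illustrated with a small dense computation (our own sketch; the paper works through incomplete Cholesky rather than forming N × N inverses):

```python
import numpy as np

def reg_corr(K, kappa):
    """r_kappa(K) = K (K + (kappa/2) I)^{-1}: eigenvalues of K are softly
    thresholded towards 0 or 1."""
    N = K.shape[0]
    return K @ np.linalg.inv(K + 0.5 * kappa * np.eye(N))

def effective_dim(K, kappa):
    """d = tr K (K + (kappa/2) I)^{-1}, the soft rank used by the BIC penalty."""
    return np.trace(reg_corr(K, kappa))

# a rank-2 centered Gram matrix: effective dimension should be close to 2
rng = np.random.default_rng(0)
G = rng.standard_normal((50, 2))
K = G @ G.T
d = effective_dim(K, kappa=1e-3)
```

For small κ the two nonzero eigenvalues of K map to values just below 1 and the 48 zero eigenvalues stay at 0, so d sits just below the true rank.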
Thus z has a correlation matrix with blocks R_ij = V_i U_i^T U_j V_j in the new basis defined by the columns of the matrices U_i, and these blocks will be used to compute the various mutual information terms. 3.6 KGV-mutual information We now show how to compute the mutual information between z_1, ..., z_m, and we make a link with the mutual information of the original variables x_1, ..., x_m. Let y_1, ..., y_m be m jointly Gaussian random vectors with covariance matrix Σ, defined in terms of blocks Σ_ij = cov(y_i, y_j). The mutual information between the variables y_1, ..., y_m is equal to (see, e.g., [9]):

I(y_1, ..., y_m) = −(1/2) log ( |Σ| / (|Σ_11| |Σ_22| ··· |Σ_mm|) )   (2)

where |M| denotes the determinant of the matrix M. The ratio of determinants in this expression is usually referred to as the generalized variance, and is independent of the basis which is chosen to compute Σ. Following Eq. (2), the mutual information between z_1, ..., z_m, which depends solely on the distribution of (x_1, ..., x_m), is equal to

I_KGV(x_1, ..., x_m) = −(1/2) log ( |R_κ| / (|(R_κ)_11| |(R_κ)_22| ··· |(R_κ)_mm|) )   (3)

We refer to this quantity as the KGV-mutual information (KGV stands for kernel generalized variance). It is always nonnegative and can also be defined for partitions of the variables into subsets, by simply partitioning the correlation matrix accordingly. The KGV has an interesting relationship to the mutual information among the original variables, x_1, ..., x_m. In particular, as shown in [6], in the case of two discrete variables, the KGV is equal to the mutual information up to second order, when expanding around the manifold of distributions that factorize in the trivial graphical model (i.e. with independent components). Moreover, in the case of continuous variables, when the width σ of the Gaussian kernel tends to zero, the KGV necessarily tends to a limit, and also provides a second-order expansion of the mutual information around independence. 
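Eq. (2) is straightforward to implement from a joint covariance matrix; a small sketch (ours, not from the paper), checked against the two-scalar closed form I = −(1/2) log(1 − ρ²):

```python
import numpy as np

def gaussian_mi(Sigma, blocks):
    """Mutual information (nats) of jointly Gaussian blocks, Eq. (2):
    I = -1/2 log( |Sigma| / prod_i |Sigma_ii| ).
    `blocks` lists the index sets of each variable."""
    _, logdet = np.linalg.slogdet(Sigma)
    total = logdet
    for idx in blocks:
        _, ld = np.linalg.slogdet(Sigma[np.ix_(idx, idx)])
        total -= ld
    return -0.5 * total

# two unit-variance scalars with correlation rho
rho = 0.8
Sigma = np.array([[1.0, rho], [rho, 1.0]])
mi = gaussian_mi(Sigma, [[0], [1]])
```

Using `slogdet` rather than `det` keeps the computation stable when the blocks are large or nearly singular, which is the relevant regime for Gram-matrix correlations.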
This suggests that the KGV-mutual information might also provide a useful, computationally-tractable surrogate for the mutual information more generally, and in particular substitute for mutual information terms in objective functions for model selection, where even a rough approximation might suffice to rank models. In the remainder of the paper, we investigate this possibility empirically. 4 Structure learning using local search In this approach, an objective function J(G) measures the goodness of fit of the directed graphical model G, and is minimized. The MDL/BIC objective function for our Gaussian variables is easily derived. Let π_i be the set of parents of node i in G. We have J(G) = Σ_i J_i(G), with

J_i(G) = (N/2) log ( |R_{{i} ∪ π_i}| / (|R_i| |R_{π_i}|) ) + (log N / 2) d_i d_{π_i}   (4)

where d_{π_i} = Σ_{j ∈ π_i} d_j. Given the scoring metric J, we are faced with an NP-hard optimization problem on the space of directed acyclic graphs [10]. Because the score decomposes as a sum of local scores, local greedy search heuristics are usually exploited. We adopt such heuristics in our simulations, using hillclimbing. It is also possible to use Markov-chain Monte Carlo (MCMC) techniques to sample from the posterior distribution defined by J(G) within our framework; this would in principle allow us to output several high-scoring networks. 5 Conditional independence tests using KGV In this section, we indicate how conditional independence tests can be performed using the KGV, and show how these tests can be used to estimate Markov blankets of nodes. Likelihood ratio criterion. In the case of marginal independence, the likelihood ratio criterion is exactly equal to a power of the mutual information (see, e.g., [11] in the case of Gaussian variables). 
This generalizes easily to conditional independence, where the likelihood ratio criterion to test the conditional independence of u and v given w is equal to N(Î(u, v, w) − Î(u, w) − Î(v, w)), where N is the number of samples and the mutual information terms are computed using empirical distributions. Applied to our Gaussian variables, we obtain a test statistic based on a linear combination of KGV-mutual information terms: N(I_KGV(z_i, z_j, z_k) − I_KGV(z_i, z_k) − I_KGV(z_j, z_k)). Theoretical threshold values exist for conditional independence tests with Gaussian variables [7], but instead, we prefer to use the value given by the MDL/BIC criterion, i.e., (log N / 2) d_i d_j (where d_i and d_j are the dimensions of the Gaussians), so that the same decision regarding conditional independence is made in the two approaches (scoring metric or independence tests) [12]. Markov blankets. For Gaussian variables, it is well-known that some conditional independencies can be read out from the inverse of the joint covariance matrix [7]. More precisely, if y_1, ..., y_m are m jointly Gaussian random vectors with dimensions d_i, and with covariance matrix Σ defined in terms of blocks Σ_ij = cov(y_i, y_j), then y_i and y_j are independent given all the other variables if and only if the block (i, j) of Σ^{-1} is equal to zero. Thus in the sample case, we can read out the edges of the undirected model directly from P = R_κ^{-1}, using the test statistic t_ij = −(N/2) log |I − P_ii^{-1} P_ij P_jj^{-1} P_ji| with the threshold value (log N / 2) d_i d_j. Applied to the variables z_1, ..., z_m and for all pairs of nodes, we can find an undirected graphical model in polynomial time, and thus a set of Markov blankets [4]. We may also be interested in constructing a directed model from the Markov blankets; however, this transformation is not always possible [7]. Consequently, most approaches use heuristics to define a directed model from a set of conditional independencies [4, 13]. 
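The precision-matrix criterion is easy to verify numerically; in this sketch (our own toy example, with scalar variables rather than feature-space blocks), a Gaussian chain x1 → x2 → x3 gives a large (1,3) covariance but a near-zero (1,3) precision entry:

```python
import numpy as np

# Gaussian chain x1 -> x2 -> x3: x1 and x3 are marginally dependent, but
# conditionally independent given x2, so entry (1,3) of the precision
# matrix is (near) zero even though the (1,3) covariance is not.
rng = np.random.default_rng(0)
n = 200_000
x1 = rng.standard_normal(n)
x2 = 0.9 * x1 + 0.5 * rng.standard_normal(n)
x3 = 0.8 * x2 + 0.5 * rng.standard_normal(n)

Sigma = np.cov(np.stack([x1, x2, x3]))  # sample covariance
P = np.linalg.inv(Sigma)                # sample precision matrix
```

Reading edges off the precision matrix this way recovers the chain's undirected skeleton (edges 1–2 and 2–3 only), which is exactly the Markov-blanket structure the paper exploits.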
Alternatively, as a pruning step in learning a directed graphical model, the Markov blanket can be safely used by only considering directed models whose moral graph is covered by the undirected graph. 6 Experiments We compare the performance of three hillclimbing algorithms for directed graphical models, one using the KGV metric, one using the MDL/BIC metric of [2] and one using the BDe metric of [1]. When the domain includes continuous variables, we used two discretization strategies; the first one is to use K-means with a given number of clusters, the second one uses the adaptive discretization scheme for the MDL/BIC scoring metric of [14]. Also, to parameterize the local conditional probabilities we used mixture models (mixtures of Gaussians, mixtures of softmax regressions, mixtures of linear regressions), which provide enough flexibility at reasonable cost. These models were fitted using penalized maximum likelihood, invoking the EM algorithm whenever necessary. The number of mixture components was less than four and determined using the minimum description length (MDL) principle. When the true generating network is known, we measure the performance of algorithms by the KL divergence to the true distribution; otherwise, we report log-likelihood on held-out test data. We use as a baseline the log-likelihood for the maximum likelihood solution to a model with independent components and multinomial or Gaussian densities as appropriate (i.e., for discrete and continuous variables respectively). Toy examples. We tested all three algorithms on a very simple generative model on m binary nodes, where nodes 1 through m−1 point to node m. For each assignment of the m−1 parents, we set the conditional probability of node m by sampling uniformly at random in [0, 1]. We also studied a linear Gaussian generative model with the identical topology, with regression weights chosen uniformly at random. We generated a training sample from each model. 
We report average results (over 20 replications) in Figure 1 (left), for m ranging from 2 to 10. We see that on the discrete networks, the performance of all three algorithms is similar, degrading slightly as m increases. On the linear networks, on the other hand, the discretization methods degrade significantly as m increases. The KGV approach is the only approach of the three capable of discovering these simple dependencies in both kinds of networks. Discrete networks. We used three networks commonly used as benchmarks1, the ALARM network (37 variables), the INSURANCE network (27 variables) and the HAILFINDER network (56 variables). We tested various numbers of samples N. We performed 40 replications and report average results in Figure 1 (right). We see that the performance of our metric lies between the (approximate Bayesian) BIC metric and the (full Bayesian) BDe

1Available at http://www.cs.huji.ac.il/labs/compbio/Repository/.

Network      N (×1000)   BIC    BDe    KGV
ALARM        0.5         0.85   0.47   0.66
             1           0.42   0.25   0.39
             4           0.17   0.07   0.15
             16          0.04   0.02   0.06
INSURANCE    0.5         1.84   0.92   1.53
             1           0.93   0.52   0.83
             4           0.27   0.15   0.40
             16          0.05   0.04   0.19
HAILFINDER   0.5         2.98   2.29   2.99
             1           1.70   1.32   1.77
             4           0.63   0.48   0.63
             16          0.25   0.17   0.32

Figure 1: (Top left) KL divergence vs. size of discrete network m: KGV (plain), BDe (dashed), MDL/BIC (dotted). (Bottom left) KL divergence vs. size of linear Gaussian network: KGV (plain), BDe with discretized data (dashed), MDL/BIC with discretized data (dotted x), MDL/BIC with adaptive discretization (dotted +). (Right) KL divergence for discrete network benchmarks. 
Table 1: Performance for hybrid networks. N is the number of samples, and D and C are the number of discrete and continuous variables, respectively. The best performance in each row is indicated in bold font.

Network      N      D   C    d-5     d-10    KGV
ABALONE      4175   1   8    10.68   10.53   11.16
VEHICLE      846    1   18   21.92   21.12   22.71
PIMA         768    1   8    3.18    3.14    3.30
AUSTRALIAN   690    9   6    5.26    5.11    5.40
BREAST       683    1   10   15.00   15.03   15.04
BALANCE      625    1   4    1.97    2.03    1.88
HOUSING      506    1   13   14.71   14.25   14.16
CARS1        392    1   7    6.93    6.58    6.85
CLEVE        296    8   6    2.66    2.57    2.68
HEART        270    9   5    1.34    1.36    1.32

metric. Thus the performance of the new metric appears to be competitive with standard metrics for discrete data, providing some assurance that even in this case pairwise sufficient statistics in feature space seem to provide a reasonable characterization of Bayesian network structure. Hybrid networks. It is the case of hybrid discrete/continuous networks that is our principal interest—in this case the KGV metric can be applied directly, without discretization of the continuous variables. We investigated performance on several hybrid datasets from the UCI machine learning repository, dividing them into two subsets, 4/5 for training and 1/5 for testing. We also log-transformed all continuous variables that represent rates or counts. We report average results (over 10 replications) in Table 1 for the KGV metric and for the BDe metric—continuous variables are discretized using K-means with 5 clusters (d-5) or 10 clusters (d-10). We see that although the BDe methods perform well in some problems, their performance overall is not as consistent as that of the KGV metric. 7 Conclusion We have presented a general method for learning the structure of graphical models, based on treating variables as Gaussians in a high-dimensional feature space. 
The method seamlessly integrates discrete and continuous variables in a unified framework, and can provide improvements in performance when compared to approaches based on discretization of continuous variables. The method also has appealing computational properties; in particular, the Gaussianity assumption enables us to make only a single pass over the data in order to compute the pairwise sufficient statistics. The Gaussianity assumption also provides a direct way to approximate Markov blankets for undirected graphical models, based on the classical link between conditional independence and zeros in the precision matrix. While the use of the KGV as a scoring metric is inspired by the relationship between the KGV and the mutual information, it must be emphasized that this relationship is a local one, based on an expansion of the mutual information around independence. While our empirical results suggest that the KGV is also an effective surrogate for the mutual information more generally, further theoretical work is needed to provide a deeper understanding of the KGV in models that are far from independence. Finally, our algorithms have free parameters, in particular the regularization parameter and the width of the Gaussian kernel for continuous variables. Although the performance is empirically robust to the setting of these parameters, learning those parameters from data would not only provide better and more consistent performance, but it would also provide a principled way to learn graphical models with local structure [15]. Acknowledgments The simulations were performed using Kevin Murphy's Bayes Net Toolbox for MATLAB. We would like to acknowledge support from NSF grant IIS-9988642, ONR MURI N00014-00-1-0637 and a grant from Intel Corporation. References [1] D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20(3):197–243, 1995. [2] W. Lam and F. Bacchus. 
Learning Bayesian belief networks: An approach based on the MDL principle. Computational Intelligence, 10(4):269–293, 1994. [3] D. Geiger and D. Heckerman. Learning Gaussian networks. In Proc. UAI, 1994. [4] J. Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, 2000. [5] S. Della Pietra, V. J. Della Pietra, and J. D. Lafferty. Inducing features of random fields. IEEE Trans. PAMI, 19(4):380–393, 1997. [6] F. R. Bach and M. I. Jordan. Kernel independent component analysis. Journal of Machine Learning Research, 3:1–48, 2002. [7] S. L. Lauritzen. Graphical Models. Clarendon Press, 1996. [8] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, 2001. [9] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley & Sons, 1991. [10] D. M. Chickering. Learning Bayesian networks is NP-complete. In Learning from Data: Artificial Intelligence and Statistics 5. Springer-Verlag, 1996. [11] T. W. Anderson. An Introduction to Multivariate Statistical Analysis. Wiley & Sons, 1984. [12] R. G. Cowell. Conditions under which conditional independence and scoring methods lead to identical selection of Bayesian network models. In Proc. UAI, 2001. [13] D. Margaritis and S. Thrun. Bayesian network induction via local neighborhoods. In Adv. NIPS 12, 2000. [14] N. Friedman and M. Goldszmidt. Discretizing continuous attributes while learning Bayesian networks. In Proc. ICML, 1996. [15] N. Friedman and M. Goldszmidt. Learning Bayesian networks with local structure. In Learning in Graphical Models. MIT Press, 1998.
2002
88
2,296
Stochastic Neighbor Embedding Geoffrey Hinton and Sam Roweis Department of Computer Science, University of Toronto 10 King's College Road, Toronto, M5S 3G5 Canada {hinton,roweis}@cs.toronto.edu Abstract We describe a probabilistic approach to the task of placing objects, described by high-dimensional vectors or by pairwise dissimilarities, in a low-dimensional space in a way that preserves neighbor identities. A Gaussian is centered on each object in the high-dimensional space and the densities under this Gaussian (or the given dissimilarities) are used to define a probability distribution over all the potential neighbors of the object. The aim of the embedding is to approximate this distribution as well as possible when the same operation is performed on the low-dimensional "images" of the objects. A natural cost function is a sum of Kullback-Leibler divergences, one per object, which leads to a simple gradient for adjusting the positions of the low-dimensional images. Unlike other dimensionality reduction methods, this probabilistic framework makes it easy to represent each object by a mixture of widely separated low-dimensional images. This allows ambiguous objects, like the document count vector for the word "bank", to have versions close to the images of both "river" and "finance" without forcing the images of outdoor concepts to be located close to those of corporate concepts. 1 Introduction Automatic dimensionality reduction is an important "toolkit" operation in machine learning, both as a preprocessing step for other algorithms (e.g. to reduce classifier input size) and as a goal in itself for visualization, interpolation, compression, etc. There are many ways to "embed" objects, described by high-dimensional vectors or by pairwise dissimilarities, into a lower-dimensional space. 
Multidimensional scaling methods [1] preserve dissimilarities between items, as measured either by Euclidean distance, some nonlinear squashing of distances, or shortest graph paths as with Isomap [2, 3]. Principal components analysis (PCA) finds a linear projection of the original data which captures as much variance as possible. Other methods attempt to preserve local geometry (e.g. LLE [4]) or associate high-dimensional points with a fixed grid of points in the low-dimensional space (e.g. self-organizing maps [5] or their probabilistic extension GTM [6]). All of these methods, however, require each high-dimensional object to be associated with only a single location in the low-dimensional space. This makes it difficult to unfold "many-to-one" mappings in which a single ambiguous object really belongs in several disparate locations in the low-dimensional space. In this paper we define a new notion of embedding based on probable neighbors. Our algorithm, Stochastic Neighbor Embedding (SNE), tries to place the objects in a low-dimensional space so as to optimally preserve neighborhood identity, and can be naturally extended to allow multiple different low-d images of each object. 2 The basic SNE algorithm For each object, i, and each potential neighbor, j, we start by computing the asymmetric probability, p_ij, that i would pick j as its neighbor:

p_ij = exp(-d_ij^2) / Σ_{k≠i} exp(-d_ik^2)   (1)

The dissimilarities, d_ij^2, may be given as part of the problem definition (and need not be symmetric), or they may be computed using the scaled squared Euclidean distance ("affinity") between two high-dimensional points, x_i, x_j:

d_ij^2 = ||x_i - x_j||^2 / (2σ_i^2)   (2)

where σ_i is either set by hand or (as in some of our experiments) found by a binary search for the value of σ_i that makes the entropy of the distribution over neighbors equal to log k. Here, k is the effective number of local neighbors or "perplexity" and is chosen by hand. 
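The binary search for σ_i can be sketched as follows (our own minimal implementation, searching over β_i = 1/2σ_i² rather than σ_i, and not taken from the paper):

```python
import numpy as np

def neighbor_probs(X, perplexity=15.0, tol=1e-5):
    """For each point i, binary-search beta_i = 1/(2 sigma_i^2) so that the
    entropy of p_{i.} equals log(perplexity); return the p matrix of Eq. (1)."""
    n = X.shape[0]
    D2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    P = np.zeros((n, n))
    target = np.log(perplexity)
    for i in range(n):
        lo, hi = 1e-10, 1e10              # bracket on beta_i
        for _ in range(100):
            beta = np.sqrt(lo * hi)       # geometric bisection
            p = np.exp(-beta * D2[i])
            p[i] = 0.0
            s = p.sum()
            if s == 0.0:                  # beta too large: all mass underflowed
                hi = beta
                continue
            p /= s
            ent = -(p[p > 0] * np.log(p[p > 0])).sum()
            if abs(ent - target) < tol:
                break
            if ent > target:              # distribution too flat: sharpen
                lo = beta
            else:
                hi = beta
        P[i] = p
    return P

rng = np.random.default_rng(0)
P = neighbor_probs(rng.standard_normal((60, 5)), perplexity=10.0)
```

Entropy decreases monotonically in β, so the bisection is guaranteed to converge whenever log(perplexity) lies below the entropy of the uniform distribution over the n−1 neighbors.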
In the low-dimensional space we also use Gaussian neighborhoods, but with a fixed variance (which we set without loss of generality to be 1/2), so the induced probability $q_{ij}$ that point $i$ picks point $j$ as its neighbor is a function of the low-dimensional images $y_i$ of all the objects and is given by the expression:

$$q_{ij} = \frac{\exp(-\|y_i - y_j\|^2)}{\sum_{k \neq i} \exp(-\|y_i - y_k\|^2)} \qquad (3)$$

The aim of the embedding is to match these two distributions as well as possible. This is achieved by minimizing a cost function which is a sum of Kullback-Leibler divergences between the original ($p_{ij}$) and induced ($q_{ij}$) distributions over neighbors for each object:

$$C = \sum_i \sum_j p_{ij} \log \frac{p_{ij}}{q_{ij}} = \sum_i \mathrm{KL}(P_i \,\|\, Q_i) \qquad (4)$$

The dimensionality of the $y$ space is chosen by hand (much less than the number of objects). Notice that making $q_{ij}$ large when $p_{ij}$ is small wastes some of the probability mass in the $q$ distribution, so there is a cost for modeling a big distance in the high-dimensional space with a small distance in the low-dimensional space, though it is less than the cost of modeling a small distance with a big one. In this respect, SNE is an improvement over methods like LLE [4] or SOM [5] in which widely separated data-points can be "collapsed" as near neighbors in the low-dimensional space. The intuition is that while SNE emphasizes local distances, its cost function cleanly enforces both keeping the images of nearby objects nearby and keeping the images of widely separated objects relatively far apart. Differentiating $C$ is tedious because $y_k$ affects $q_{ij}$ via the normalization term in Eq. 3, but the result is simple:

$$\frac{\partial C}{\partial y_i} = 2 \sum_j (y_i - y_j)\,(p_{ij} - q_{ij} + p_{ji} - q_{ji}) \qquad (5)$$

which has the nice interpretation of a sum of forces pulling $y_i$ toward $y_j$ or pushing it away depending on whether $j$ is observed to be a neighbor more or less often than desired. Given the gradient, there are many possible ways to minimize $C$ and we have only just begun the search for the best method.
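The cost (Eq. 4) and gradient (Eq. 5) are direct to compute. A minimal Python sketch, with our own naming and plain lists standing in for vectors:

```python
import math

def q_matrix(Y):
    """q_{ij} (Eq. 3): neighbor probabilities in the low-d space,
    using the fixed unit-scale Gaussian neighborhoods."""
    n = len(Y)
    Q = []
    for i in range(n):
        w = [0.0 if j == i else
             math.exp(-sum((a - b) ** 2 for a, b in zip(Y[i], Y[j])))
             for j in range(n)]
        z = sum(w)
        Q.append([wj / z for wj in w])
    return Q

def sne_cost_and_grad(P, Y):
    """Sum of KL divergences (Eq. 4) and its gradient (Eq. 5):
    dC/dy_i = 2 * sum_j (y_i - y_j) (p_ij - q_ij + p_ji - q_ji)."""
    n, d = len(Y), len(Y[0])
    Q = q_matrix(Y)
    cost = sum(P[i][j] * math.log(P[i][j] / Q[i][j])
               for i in range(n) for j in range(n) if j != i and P[i][j] > 0)
    grad = []
    for i in range(n):
        gi = [0.0] * d
        for j in range(n):
            if j == i:
                continue
            c = 2.0 * (P[i][j] - Q[i][j] + P[j][i] - Q[j][i])
            for k in range(d):
                gi[k] += c * (Y[i][k] - Y[j][k])
        grad.append(gi)
    return cost, grad
```

When the induced distribution already matches the given one, both the cost and every force in Eq. 5 vanish, which is a convenient sanity check.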
Steepest descent in which all of the points are adjusted in parallel is inefficient and can get stuck in poor local optima. Adding random jitter that decreases with time finds much better local optima and is the method we used for the examples in this paper, even though it is still quite slow. We initialize the embedding by putting all the low-dimensional images in random locations very close to the origin. Several other minimization methods, including annealing the perplexity, are discussed in Sections 5 and 6.

3 Application of SNE to image and document collections

As a graphic illustration of the ability of SNE to model high-dimensional, near-neighbor relationships using only two dimensions, we ran the algorithm on a collection of bitmaps of handwritten digits and on a set of word-author counts taken from the scanned proceedings of NIPS conference papers. Both of these datasets are likely to have intrinsic structure in many fewer dimensions than their raw dimensionalities: 256 for the handwritten digits and 13679 for the author-word counts. To begin, we used a set of digit bitmaps from the UPS database[7], with an equal number of examples from each of the five classes 0,1,2,3,4. The variance of the Gaussian around each point in the 256-dimensional raw pixel image space was set to achieve a perplexity of 15 in the distribution over high-dimensional neighbors. SNE was initialized by putting all the $y_i$ in random locations very close to the origin and then was trained using gradient descent with annealed noise. Although SNE was given no information about class labels, it quite cleanly separates the digit groups, as shown in Figure 1. Furthermore, within each region of the low-dimensional space, SNE has arranged the data so that properties like orientation, skew and stroke-thickness tend to vary smoothly. For the embedding shown, the SNE cost function in Eq. 4 attains a value far below the cost obtained with a uniform distribution across low-dimensional neighbors.
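The optimization recipe above (steepest descent on Eq. 4 with annealed jitter, starting from points near the origin) can be sketched end-to-end. The schedule constants below are illustrative stand-ins, not the values used in the paper:

```python
import math, random

def sne_embed(P, n, dim=2, iters=300, lr=0.2, jitter=0.05, seed=0):
    """Minimal SNE optimizer: gradient descent on the KL cost with
    Gaussian jitter that decays linearly to zero, from a near-origin init."""
    rng = random.Random(seed)
    Y = [[rng.gauss(0.0, 1e-4) for _ in range(dim)] for _ in range(n)]
    for t in range(iters):
        # q_{ij} in the low-d space (Eq. 3)
        Q = []
        for i in range(n):
            w = [0.0 if j == i else
                 math.exp(-sum((a - b) ** 2 for a, b in zip(Y[i], Y[j])))
                 for j in range(n)]
            z = sum(w)
            Q.append([wj / z for wj in w])
        noise = jitter * (1.0 - t / iters)     # annealed jitter
        for i in range(n):
            for k in range(dim):
                g = sum(2.0 * (Y[i][k] - Y[j][k]) *
                        (P[i][j] - Q[i][j] + P[j][i] - Q[j][i])
                        for j in range(n) if j != i)
                Y[i][k] += -lr * g + rng.gauss(0.0, noise)
    return Y
```

On a toy problem with two "clusters" encoded in P, the embedding should place each object nearer its preferred neighbor than the other pair.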
We also applied principal component analysis (PCA)[8] to the same data; the projection onto the first two principal components does not separate classes nearly as cleanly as SNE, because PCA is much more interested in getting the large separations right, which causes it to jumble up some of the boundaries between similar classes. In this experiment, we used digit classes that do not have very similar pairs like 3 and 5 or 7 and 9. When there are more classes and only two available dimensions, SNE does not as cleanly separate very similar pairs. We have also applied SNE to word-document and word-author matrices calculated from the OCRed text of NIPS volumes 0-12 [9]. Figure 2 shows a map locating NIPS authors in two dimensions. Each of the 676 authors who published more than one paper in NIPS vols. 0-12 is shown by a dot at the position $y_i$ found by SNE; larger red dots and corresponding last names are authors who published six or more papers in that period. Distances $d_{ij}$ were computed as the norm of the difference between log aggregate author word counts, summed across all NIPS papers. Co-authored papers gave fractional counts evenly to all authors. All words occurring in six or more documents were included, except for stopwords, giving a vocabulary size of 13649. (The bow toolkit[10] was used for part of the pre-processing of the data.) The $\sigma_i$ were set to achieve a fixed local perplexity. SNE seems to have grouped authors by broad NIPS field: generative models, support vector machines, neuroscience, reinforcement learning and VLSI all have distinguishable localized regions.

4 A full mixture version of SNE

The clean probabilistic formulation of SNE makes it easy to modify the cost function so that instead of a single image, each high-dimensional object can have several different versions of its low-dimensional image. These alternative versions have mixing proportions that sum to 1. Image-version $a$ of object $i$ has location $y_{ia}$ and mixing proportion $\pi_{ia}$.
The low-dimensional neighborhood distribution for $i$ is a mixture of the distributions induced by each of its image-versions across all image-versions of a potential neighbor $j$:

$$q_{ij} = \sum_a \pi_{ia} \, \frac{\sum_b \pi_{jb} \exp(-\|y_{ia} - y_{jb}\|^2)}{\sum_{k \neq i} \sum_c \pi_{kc} \exp(-\|y_{ia} - y_{kc}\|^2)} \qquad (6)$$

In this multiple-image model, the derivatives with respect to the image locations $y_{ia}$ are straightforward; the derivatives w.r.t. the mixing proportions are most easily expressed

Figure 1: The result of running the SNE algorithm on 256-dimensional grayscale images of handwritten digits. Pictures of the original data vectors $x_i$ (scans of handwritten digits) are shown at the location corresponding to their low-dimensional images $y_i$ as found by SNE. The classes are quite well separated even though SNE had no information about class labels. Furthermore, within each class, properties like orientation, skew and stroke-thickness tend to vary smoothly across the space. Not all points are shown: to produce this display, digits are chosen in random order and are only displayed if a small region of the display centered on the 2-D location of the digit in the embedding does not overlap any of the corresponding regions for digits that have already been displayed. (SNE was initialized by putting all the $y_i$ in random locations very close to the origin and then was trained using batch gradient descent (see Eq. 5) with annealed noise. The learning rate was 0.2. For the first 3500 iterations, each 2-D point was jittered by adding Gaussian noise after each position update; the jitter was then reduced for the remaining iterations.)
Figure 2: Embedding of NIPS authors into two dimensions. Each of the 676 authors who published more than one paper in NIPS vols. 0-12 is shown by a dot at the location $y_i$ found by the SNE algorithm. Larger red dots and corresponding last names are authors who published six or more papers in that period. The inset in the upper left shows a blowup of the crowded boxed central portion of the space. Dissimilarities between authors were computed based on squared Euclidean distance between vectors of log aggregate author word counts. Co-authored papers gave fractional counts evenly to all authors. All words occurring in six or more documents were included, except for stopwords, giving a vocabulary size of 13649. The NIPS text data is available at http://www.cs.toronto.edu/~roweis/data.html.
in terms of $q_{ia,jb}$, the probability that version $a$ of $i$ picks version $b$ of $j$:

$$q_{ia,jb} = \frac{\pi_{jb} \exp(-\|y_{ia} - y_{jb}\|^2)}{\sum_{k \neq i} \sum_c \pi_{kc} \exp(-\|y_{ia} - y_{kc}\|^2)} \qquad (7)$$

The effect on $q_{ij}$ of changing the mixing proportion for version $b$ of object $k$ is given by

$$\frac{\partial q_{ij}}{\partial \pi_{kb}} = \frac{1}{\pi_{kb}} \sum_a \pi_{ia} \, q_{ia,kb} \left( \delta_{jk} - \sum_c q_{ia,jc} \right) \qquad (8)$$

where $\delta_{jk} = 1$ if $j = k$ and $0$ otherwise. The effect of changing $\pi_{kb}$ on the cost, $C$, is

$$\frac{\partial C}{\partial \pi_{kb}} = -\sum_i \sum_j \frac{p_{ij}}{q_{ij}} \frac{\partial q_{ij}}{\partial \pi_{kb}} \qquad (9)$$

Rather than optimizing the mixing proportions directly, it is easier to perform unconstrained optimization on "softmax weights" defined by $\pi_{ia} = \exp(w_{ia}) / \sum_b \exp(w_{ib})$. As a "proof-of-concept", we recently implemented a simplified mixture version in which every object is represented in the low-dimensional space by exactly two components that are constrained to have mixing proportions of 0.5. The two components are pulled together by a force which increases linearly up to a threshold separation; beyond this threshold the force remains constant.¹ We ran two experiments with this simplified mixture version of SNE. We took a dataset containing pictures of each of the digits 2, 3 and 4 and added hybrid digit-pictures that were each constructed by picking new examples of two of the classes and taking each pixel at random from one of these two "parents". After minimization, most of the hybrids and only a small fraction of the non-hybrids had significantly different locations for their two mixture components. Moreover, the mixture components of each hybrid always lay in the regions of the space devoted to the classes of its two parents and never in the region devoted to the third class. For this example we used a fixed perplexity in defining the local neighborhoods, a small step size for each position update, and a constant jitter. Our very simple mixture version of SNE also makes it possible to map a circle onto a line without losing any near-neighbor relationships or introducing any new ones.
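The mixture neighbor distribution of Eq. 6, together with the softmax parameterization of the mixing proportions, can be sketched in Python. This is a hypothetical helper of our own, not the authors' implementation; `W[i]` holds the unconstrained softmax weights for object `i`:

```python
import math

def softmax(ws):
    m = max(ws)
    e = [math.exp(w - m) for w in ws]
    s = sum(e)
    return [x / s for x in e]

def mixture_q(Y, W):
    """q_{ij} for the mixture model (Eq. 6). Y[i][a] is the location of
    image-version a of object i; softmax(W[i]) gives the pi_{ia}."""
    n = len(Y)
    Pi = [softmax(w) for w in W]
    def aff(u, v):
        return math.exp(-sum((a - b) ** 2 for a, b in zip(u, v)))
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for a in range(len(Y[i])):
            # normalizer for version a of i: all versions of all other objects
            z = sum(Pi[k][c] * aff(Y[i][a], Y[k][c])
                    for k in range(n) if k != i for c in range(len(Y[k])))
            for j in range(n):
                if j == i:
                    continue
                num = sum(Pi[j][b] * aff(Y[i][a], Y[j][b])
                          for b in range(len(Y[j])))
                Q[i][j] += Pi[i][a] * num / z
    return Q
```

With a single version per object and uniform softmax weights, this reduces to the basic SNE distribution of Eq. 3, which gives a simple consistency check.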
Points near one "cut point" on the circle can be mapped to a mixture of two points, one near one end of the line and one near the other end. Obviously, the location of the cut on the two-dimensional circle gets decided by which pairs of mixture components split first during the stochastic optimization. For certain optimization parameters that control the ease with which two mixture components can be pulled apart, only a single cut in the circle is made. For other parameter settings, however, the circle may fragment into two or more smaller line-segments, each of which is topologically correct but which may not be linked to each other. The example with hybrid digits demonstrates that even the most primitive mixture version of SNE can deal with ambiguous high-dimensional objects that need to be mapped to two widely separated regions of the low-dimensional space. More work needs to be done before SNE is efficient enough to cope with large matrices of document-word counts, but it is the only dimensionality reduction method we know of that promises to treat homonyms sensibly without going back to the original documents to disambiguate each occurrence of the homonym.

¹At the threshold separation the force between the two components reaches its constant value. The low-d space has a natural scale because the variance of the Gaussian used to determine $q_{ij}$ is fixed at 0.5.

5 Practical optimization strategies

Our current method of reducing the SNE cost is to use steepest descent with added jitter that is slowly reduced. This produces quite good embeddings, which demonstrates that the SNE cost function is worth minimizing, but it takes several hours to find a good embedding for just a few thousand datapoints, so we clearly need a better search algorithm. The time per iteration could be reduced considerably by ignoring pairs of points for which all four of $p_{ij}, p_{ji}, q_{ij}, q_{ji}$ are small.
Since the matrix of $p_{ij}$ is fixed during the learning, it is natural to sparsify it by replacing all entries below a certain threshold with zero and renormalizing. Then pairs $(i, j)$ for which both $p_{ij}$ and $p_{ji}$ are zero can be ignored in the gradient calculations if both $q_{ij}$ and $q_{ji}$ are small. This can in turn be determined in logarithmic time in the size of the training set by using sophisticated geometric data structures such as K-D trees, ball-trees and AD-trees, since the $q_{ij}$ depend only on $\|y_i - y_j\|$. Computational physics has attacked exactly this same complexity when performing multibody gravitational or electrostatic simulations using, for example, the fast multipole method. In the mixture version of SNE there appears to be an interesting way of avoiding local optima that does not involve annealing the jitter. Consider two components in the mixture for an object that are far apart in the low-dimensional space. By raising the mixing proportion of one and lowering the mixing proportion of the other, we can move probability mass from one part of the space to another without it ever appearing at intermediate locations. This type of "probability wormhole" seems like a good way to avoid local optima that arise because a cluster of low-dimensional points must move through a bad region of the space in order to reach a better one. Yet another search method, which we have used with some success on toy problems, is to provide extra dimensions in the low-dimensional space but to penalize non-zero values on these dimensions. During the search, SNE will use the extra dimensions to go around lower-dimensional barriers, but as the penalty on using these dimensions is increased, they will cease to be used, effectively constraining the embedding to the original dimensionality.

6 Discussion and Conclusions

Preliminary experiments show that we can find good optima by first annealing the perplexities (using high jitter) and only reducing the jitter after the final perplexity has been reached.
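The sparsification step described above (zero every entry of the fixed $p$ matrix below a threshold, then renormalize each row so it is again a distribution) is simple to express. A sketch with an illustrative threshold value:

```python
def sparsify(P, thresh):
    """Sparsify the fixed p_{ij} matrix: zero all entries below `thresh`,
    then renormalize each row back to a probability distribution."""
    S = []
    for row in P:
        kept = [p if p >= thresh else 0.0 for p in row]
        z = sum(kept)
        # guard against a row losing all its mass at a too-aggressive threshold
        S.append([p / z for p in kept] if z > 0 else kept)
    return S
```

After sparsification, any pair with both $p_{ij}$ and $p_{ji}$ zeroed becomes a candidate for skipping in the gradient sums whenever the corresponding $q$ values are also small.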
This raises the question of what SNE is doing when the variance, $\sigma_i^2$, of the Gaussian centered on each high-dimensional point is very big, so that the distribution across neighbors is almost uniform. It is clear that in the high-variance limit, the contribution of $-p_{ij} \log q_{ij}$ to the SNE cost function is just as important for distant neighbors as for close ones. When $\sigma_i^2$ is very large, it can be shown that SNE is equivalent to minimizing the mismatch between squared distances in the two spaces, provided all the squared distances from an object are first normalized by subtracting off their "antigeometric" mean:

$$C \approx \sum_i \sum_j (a_{ij} - b_{ij})^2 \qquad (10)$$

$$a_{ij} = \frac{\|x_i - x_j\|^2}{2\sigma^2} + \log \sum_{k \neq i} \exp\!\left(-\frac{\|x_i - x_k\|^2}{2\sigma^2}\right) \qquad (11)$$

$$b_{ij} = \|y_i - y_j\|^2 + \log \sum_{k \neq i} \exp\!\left(-\|y_i - y_k\|^2\right) \qquad (12)$$

where the sums run over the $N$ objects. This mismatch is very similar to "stress" functions used in nonmetric versions of MDS, and enables us to understand the large-variance limit of SNE as a particular variant of such procedures. We are still investigating the relationship to metric MDS and to PCA. SNE can also be seen as an interesting special case of Linear Relational Embedding (LRE) [11]. In LRE the data consists of triples (e.g. Colin has-mother Victoria) and the task is to predict the third term from the other two. LRE learns an N-dimensional vector for each object and an NxN-dimensional matrix for each relation. To predict the third term in a triple, LRE multiplies the vector representing the first term by the matrix representing the relationship and uses the resulting vector as the mean of a Gaussian. Its predictive distribution for the third term is then determined by the relative densities of all known objects under this Gaussian. SNE is just a degenerate version of LRE in which the only relationship is "near" and the matrix representing this relationship is the identity.
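A sketch of why the large-variance limit turns the KL cost into a squared-distance mismatch; the expansion keeps only leading terms, so the constants should be read as indicative rather than exact:

```latex
% For large \sigma^2 the neighbor distribution is nearly uniform, so write
% p_{ij} and q_{ij} as small perturbations of 1/(N-1):
p_{ij} \approx \frac{1}{N-1}\bigl(1 - a_{ij}\bigr), \qquad
q_{ij} \approx \frac{1}{N-1}\bigl(1 - b_{ij}\bigr),
% where a_{ij}, b_{ij} are the (scaled) squared distances with their means
% removed, so that \sum_j a_{ij} = \sum_j b_{ij} = 0.  Expanding each KL
% divergence to second order in (p - q) then gives
\sum_j p_{ij}\log\frac{p_{ij}}{q_{ij}}
  \;\approx\; \frac{1}{2}\sum_j \frac{(p_{ij}-q_{ij})^2}{q_{ij}}
  \;\approx\; \frac{1}{2(N-1)}\sum_j \bigl(a_{ij}-b_{ij}\bigr)^2 .
```

Summing over objects gives a total cost proportional to the squared mismatch between the normalized distances in the two spaces, which is the "stress"-like form mentioned in the text.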
In summary, we have presented a new criterion, Stochastic Neighbor Embedding, for mapping high-dimensional points into a low-dimensional space based on stochastic selection of similar neighbors. Unlike self-organizing maps, in which the low-dimensional coordinates are fixed to a grid and the high-dimensional ends are free to move, in SNE the high-dimensional coordinates are fixed to the data and the low-dimensional points move. Our method can also be applied to arbitrary pairwise dissimilarities between objects if these are available instead of (or in addition to) high-dimensional observations. The gradient of the SNE cost function has an appealing "push-pull" property: the forces acting on $y_i$ bring it closer to points it is under-selecting and push it further from points it is over-selecting as its neighbor. We have shown results of applying this algorithm to image and document collections, for which it sensibly placed similar objects nearby in a low-dimensional space while keeping dissimilar objects well separated. Most importantly, because of its probabilistic formulation, SNE has the ability to be extended to mixtures in which ambiguous high-dimensional objects (such as the word "bank") can have several widely-separated images in the low-dimensional space.

Acknowledgments

We thank the anonymous referees and several visitors to our poster for helpful suggestions. Yann LeCun provided digit and NIPS text data. This research was funded by NSERC.

References

[1] T. Cox and M. Cox. Multidimensional Scaling. Chapman & Hall, London, 1994.
[2] J. Tenenbaum. Mapping a manifold of perceptual observations. In Advances in Neural Information Processing Systems, volume 10, pages 682-688. MIT Press, 1998.
[3] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319-2323, 2000.
[4] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding.
Science, 290:2323-2326, 2000.
[5] T. Kohonen. Self-Organization and Associative Memory. Springer-Verlag, Berlin, 1988.
[6] C. Bishop, M. Svensen, and C. Williams. GTM: The generative topographic mapping. Neural Computation, 10:215, 1998.
[7] J. J. Hull. A database for handwritten text recognition research. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(5):550-554, May 1994.
[8] I. T. Jolliffe. Principal Component Analysis. Springer-Verlag, New York, 1986.
[9] Yann LeCun. NIPS Online web site. http://nips.djvuzone.org, 2001.
[10] Andrew Kachites McCallum. Bow: A toolkit for statistical language modeling, text retrieval, classification and clustering. http://www.cs.cmu.edu/~mccallum/bow, 1996.
[11] A. Paccanaro and G. E. Hinton. Learning distributed representations of concepts from relational data using linear relational embedding. IEEE Transactions on Knowledge and Data Engineering, 13:232-245, 2000.
Classifying Patterns of Visual Motion - a Neuromorphic Approach Jakob Heinzle and Alan Stocker Institute of Neuroinformatics, University and ETH Zürich, Winterthurerstr. 190, 8057 Zürich, Switzerland {jakob,alan}@ini.phys.ethz.ch Abstract We report a system that classifies and can learn to classify patterns of visual motion on-line. The complete system is described by the dynamics of its physical network architectures. The combination of the following properties makes the system novel: firstly, the front-end of the system consists of an aVLSI optical flow chip that collectively computes 2-D global visual motion in real-time [1]. Secondly, the complexity of the classification task is significantly reduced by mapping the continuous motion trajectories to sequences of 'motion events'. And thirdly, all the network structures are simple and, with the exception of the optical flow chip, based on a Winner-Take-All (WTA) architecture. We demonstrate the application of the proposed generic system as a contactless man-machine interface that allows the user to write letters by visual motion. Given the low complexity of the system, its robustness and the already existing front-end, a complete aVLSI system-on-chip implementation is realistic, allowing various applications in mobile electronic devices. 1 Introduction The classification of continuous temporal patterns is possible using Hopfield networks with asymmetric weights [2], but classification is restricted to periodic trajectories with a well-known start and end point. Purely feed-forward network architectures have also been proposed [3]. However, such networks become unfeasibly large for practical applications. We simplify the task by first mapping the continuous visual motion patterns to sequences of motion events. A motion event is characterized by the occurrence of visual motion in one out of a pre-defined set of directions.
Known approaches for sequence classification can be divided into two major categories. The first group typically applies standard Hopfield networks with time-dependent weight matrices [4, 5]. These networks are relatively inefficient in storage capacity, using many units per stored pattern. The second approach relies on time-delay elements and some form of coincidence detectors that respond dominantly to the correctly time-shifted events of a known sequence [6, 7]. These approaches allow a compact network architecture. Furthermore, they require neither the knowledge of the start and end point of a sequence nor a reset of internal states. The sequence classification network of our proposed system is based on the work of Tank and Hopfield [6], but extended to be time-continuous and to show increased robustness. Finally, we modify the network architecture to allow the system to learn arbitrary sequences of a particular length.

*Corresponding author; www.ini.unizh.ch/~alan

2 System architecture

Figure 1: The complete classification system. The input to the system is a real-world moving visual stimulus and the output is the activity of units representing particular trajectory classes.

The system contains three major stages of processing, as shown in Figure 1: the optical flow chip estimates global visual motion, the direction selective network (DSN) maps the estimate to motion events, and the sequence classification network (SCN) finally classifies the sequences of these events. The architecture reflects the separation of the task into classification in motion space (DSN) and, consecutively, classification in time (SCN). Classification in both cases relies on identical WTA networks differing only in their inputs.
The outputs of the DSN and the SCN are 'quasi-discrete': both signals are continuous-time, but due to the non-linear amplification of the WTA they represent discrete information.

2.1 The optical flow chip

The front-end of the classification system consists of the optical flow chip [1, 8], which estimates 2-D visual motion. Due to adaptive circuitry, the estimate of visual motion is fairly independent of illumination conditions. The estimation of visual motion requires the integration of visual information within the image space in order to resolve inherent visual ambiguities. For the classification system presented here, the integration of visual information is set to take place over the complete image space; thus, the resulting estimate represents the global visual motion perceived. The output signals of the chip are two analog voltages $m_x$ and $m_y$ that represent at any instant the two components of the actual global motion vector. The output signals are linear in the perceived motion within a limited voltage range. The resolvable speed range is 1-3500 pix/sec, and thus spans more than three orders of magnitude. The continuous-time voltage trajectory $\vec m(t) = (m_x(t), m_y(t))$ is the input to the direction selective network.

2.2 The direction selective network (DSN)

The second stage transforms the trajectory $\vec m(t)$ into a sequence of motion events, where an event means that the motion vector points into a particular region of motion space. Motion space is divided into a set of regions, each represented by a unit of the DSN (see Figure 2a). Each direction selective unit (DSU) receives highest input when $\vec m$ is within the corresponding region. In the following we choose four motion directions, referred to as north (N), east (E), south (S) and west (W), and a central region for zero motion. The WTA behavior of the DSN can be described by minimizing the cost function [9]

$$E = -\frac{\alpha}{2} \sum_i V_i^2 + \frac{\beta}{2} \sum_i \sum_{j \neq i} V_i V_j + \frac{1}{R} \sum_i \int_0^{V_i} g^{-1}(v)\,dv - \sum_i I_i V_i \qquad (1)$$

where $V_i$ is the output of the $i$-th DSU and $\alpha$ and $\beta$ are the excitatory and inhibitory weights between the DSU [8].
The units have a sigmoidal activation function $V_i = g(u_i)$. Following gradient descent, the dynamics of the units are described by

$$C \frac{du_i}{dt} = -\frac{u_i}{R} + \alpha V_i - \beta \sum_{j \neq i} V_j + I_i \qquad (2)$$

where $C$ and $R$ are the capacitance and resistance of the units. The preferred direction of the $i$-th DSU is given by the angle $\theta_i$. The input to the DSU is

$$I_i = \begin{cases} |\vec m| \cos(\varphi - \theta_i) & \text{if } |\varphi - \theta_i| \le \Theta \\ 0 & \text{if } |\varphi - \theta_i| > \Theta \end{cases} \qquad (3)$$

where $(|\vec m|, \varphi)$ is the motion estimate in polar coordinates and $\Theta$ is the angular half-width of each region. The input to the zero motion unit is $I_0 = I_{\text{thresh}} - |\vec m|$.

Figure 2: The direction selective network. a) The WTA architecture of the DSN. Filled connections are excitatory, empty ones are inhibitory. Dotted lines show the regions in motion space where the different units win. b) The response of the N-DSU to constant input is shown as a surface plot, while the responses of the same unit to dynamic motion trajectories (circles and straight lines) are plotted as lines. Differences between constant and dynamic inputs are marginal. c) The output of the zero motion unit to constant input.

In Figure 2b we compare the outputs of a DSU to constant and varying input $\vec m$. The dynamic response is close to the steady state as long as the time constant of the DSN is smaller than the typical time scale of $\vec m(t)$.

2.3 The sequence classification network (SCN)

The classification of the temporal structure of the DSN output is the task of the SCN. The network uses time-delays to "concentrate information in time" [6] (see Figure 3b). In equivalence with the regions in motion space, these time-delays form 'regions' in time. The number of units (SCU) of the SCN is equal to the number of trajectory classes the system is able to classify. We use $k$ time-delays, where $k$ is the number of events of the longest sequence to be classified.
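The WTA dynamics of Eq. 2 can be integrated with simple forward Euler, as the authors do for their simulations. A Python sketch in which the parameter values (α, β, C, R, the sigmoid, and the step size) are our own illustrative choices, not the chip's:

```python
import math

def wta_step(u, inputs, dt=0.01, C=1.0, R=1.0, alpha=2.0, beta=1.0):
    """One forward-Euler step of Eq. 2:
    C du_i/dt = -u_i/R + alpha*V_i - beta*sum_{j!=i} V_j + I_i,
    with sigmoidal activation V_i = g(u_i)."""
    V = [1.0 / (1.0 + math.exp(-x)) for x in u]    # g(u)
    s = sum(V)
    return [ui + (dt / C) * (-ui / R + alpha * Vi - beta * (s - Vi) + Ii)
            for ui, Vi, Ii in zip(u, V, inputs)]

def run_wta(inputs, steps=2000):
    """Integrate the network from rest until it settles on a winner."""
    u = [0.0] * len(inputs)
    for _ in range(steps):
        u = wta_step(u, inputs)
    return u
```

Driving the network with one input clearly larger than the rest makes the corresponding unit win: self-excitation amplifies it while the shared inhibition suppresses the other units, which is the quasi-discrete behavior the text describes.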
The time interval $T_{\text{delay}}$ between two maxima of the time-delay functions is the characteristic time-scale of the sequence classification. Again, the SCN is a WTA network with a cost function equivalent to (1), except that an additional term $T$ is introduced to provide constant input. The SCU have an activation function $V_i = g(u_i)$ and follow the dynamics

$$C \frac{du_i}{dt} = -\frac{u_i}{R} + \alpha V_i - \beta \sum_{j \neq i} V_j + I_i - T \qquad (4)$$

where the input term $I_i$ is equivalent to the input term in (2) and sums the delayed DSU outputs: the weights of the connections between the DSN and the SCN multiply $V_d(t - n T_{\text{delay}})$, the output of the $d$-th DSU delayed by $n$ characteristic intervals. The time-delay functions are the same as in [6].¹ Note that $T$ is the only additional term compared to the dynamics in (2); it allows a detection threshold to be set for the sequence classification. Figure 3a shows an outline of the SCN and its connectivity. For example, if the sequence N-W-E has to be classified, the inputs from the E-DSU delayed by $T_{\text{delay}}$, from the W-DSU by $2T_{\text{delay}}$ and from the N-DSU by $3T_{\text{delay}}$ are excitatory, while all the others are inhibitory. All excitatory as well as all inhibitory weights are equal, with excitation being twice as strong as inhibition. The additional time-delay is always inhibitory; it prevents the first motion event from overruling the rest of the sequence and is crucial for the exact classification of short sequences.

Figure 3: The sequence classification network. a) Outline of its WTA structure (shown within the dashed line) and its input stage (k=3). The time-delays between the DSU and the SCU are numbered in units of $T_{\text{delay}}$. Filled dots are excitatory connections while empty ones are inhibitory. The additional inhibitory delay is not shown. The marked unit recognizes the sequence N-W-E. b) A sequence is classified by delaying consecutive motion events such that they provide a simultaneous excitatory input.
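The "concentrate information in time" idea can be illustrated with a toy calculation of the input one SCU receives through its delay lines. Everything here is a simplification of our own: events are idealized time stamps, and the box-shaped delay kernel stands in for the smooth kernels of Tank and Hopfield [6]:

```python
def scu_input(events, weights, t, T_delay, width=0.2):
    """Input to one sequence-classification unit at time t.
    `events` maps a direction name to a list of event times; `weights` is a
    list of (direction, n, w) triples meaning: read that direction's output
    through a delay of n*T_delay with synaptic weight w (positive = excitatory)."""
    total = 0.0
    for direction, n, w in weights:
        for te in events.get(direction, []):
            # the delayed copy of an event at te is "active" around te + n*T_delay
            if abs((t - n * T_delay) - te) < width:
                total += w
    return total
```

For the sequence N-W-E with events one $T_{\text{delay}}$ apart, the correctly wired unit (N through the longest delay, E through the shortest) sees all three delayed events simultaneously, while a mis-ordered unit sees at most a partial coincidence.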
¹The delay kernels are smooth functions of time, each peaked at a multiple $n T_{\text{delay}}$ of the characteristic delay, as in Tank and Hopfield [6].

3 Performance of the system

We measure the performance of the system in two different ways. Firstly, we analyze the robustness to time warping. Knowing the response properties of the optical flow chip [8], we simulate its output to analyze systematically the two other stages of the system. Secondly, we test the complete system including the optical flow chip under real conditions. Here, only a qualitative assessment can be given.

3.1 Robustness to time warping

We simulate the visual motion trajectories as a sum of Gaussians in time, thus $\vec m(t) = \sum_n \vec m_n \exp(-(t - t_n)^2 / (2\sigma_t^2))$. The important parameters are the width of the Gaussians, $\sigma_t$, and the time difference $\Delta t = t_{n+1} - t_n$ between the centers of two neighboring Gaussians. Three schemes are tested: changes of $\sigma_t$ only, changes of $\Delta t$ only, and a linear stretch in time, i.e. a change in both parameters. Time is always measured in units of the characteristic time-delay $T_{\text{delay}}$. For fixed $\Delta t = T_{\text{delay}}$, $\sigma_t$ can be decreased down to a fraction of $T_{\text{delay}}$ for sequences of length two, and somewhat less for longer sequences. Fixing $\sigma_t$, classification is still guaranteed for varying $\Delta t$ according to Figure 4a; the tolerable increase in $\Delta t$ depends on the input strength and the sequence length (three and four events: gray and white bars in Figure 4). Linear time stretches change the total input to the system, which causes the asymmetry seen in Figure 4b. Short sequences are relatively more robust to any change in $\Delta t$ than longer sequences.²

Figure 4: Time warping. The histograms show the maximal acceptable time warping.
The results are shown for three different trajectory lengths (black: two motion events, gray: three events, white: four events) and three different input strengths (maximal output voltages of the optical flow chip). a) $\sigma_t$ is held fixed while $\Delta t$ is changed. b) Time is stretched linearly and therefore the duration of the events is proportional to $\Delta t$. No classification is possible for sequences of length four at very low input levels. The system cannot distinguish between sequences such as N-W-E-W and N-W-W-W: in this case, the sum of the weighted integrals of the delay functions of both sequences leads to an equivalent input to the SCN. However, if two adjacent events are not allowed to be the same, this problem does not occur.

²For a large positive time warp of a sequence with five or more events, the time shift becomes larger than $T_{\text{delay}}$ for some of the events, which leads to inhibition instead of excitation.

3.2 Real world application - writing letters with patterns of hand movements

The complete system was applied to classify visual motion patterns elicited by hand movements in front of the optical flow chip. Using sequences of three events we are able to classify 36 valid sequences and can therefore encode the alphabet. Figure 5 shows a typical visual motion pattern (assigned to the letter 'H') and the corresponding signals at all stages of processing.

Figure 5: Tracking a signal through all stages. a) The output of the optical flow chip to a moving hand in a N-S vs. E-W motion plot. The marks on the trajectory show different time stamps. b) The same trajectory, including the time stamps, in a motion vs. time plot (N-S motion: solid line, E-W motion: dashed line). Time is given in units of $T_{\text{delay}}$.
c) The output of the DSN showing classification in motion space (N: solid line, E: dashed, W: dotted). d) The output of the SCN. Here, the unit that recognizes the trajectory class 'H' is shown by the solid line. The detection threshold is set at 0.8 of the maximal activity. The system runs on a 166 MHz Pentium PC using MatLab (The MathWorks Inc.). The signal of the optical flow chip is read into the computer using an AD-card. All simulations are done with simple forward integration of the differential equations. 4 Learning motion trajectories We expanded the system to be able to learn visual motion patterns. We model each set of four synapses connecting the four DSU to a single SCU with the same time-delay by a competitive network of four synapse units (see Figure 6) with very slow time constants. We impose on the output of the four units that their sum is normalized to a fixed value. Figure 6: Learning trajectory classes. a) Schematics of the competitive network of a set of synapses. The dashed line shows one synapse: the synaptic weight, the input to the synapse unit and its output. Multiplication by the output signal of the SCU is indicated by the "x" in the small square, the linear mapping by the bold line from the synapse output to the weight. b) Output of the SCU during the repetitive presentation of a particular trajectory. Initial weights were random. c) Learning the synaptic weights associated with one particular time-delay. The cost function is given in (5), where the synapse units have a sigmoidal activation function and the remaining quantities are defined as in (2) and (4). The synaptic dynamics are given in (6). Since the activity of the synapse units is always between 0 and 1, a linear mapping to the actual synaptic weights is performed, spanning the range from the strongest inhibitory to the strongest excitatory weight.
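The competitive normalization of the four synapse units can be sketched numerically. The update below is our own simplification (a normalized reinforcement of the co-active synapse unit), not the paper's exact equations (5)-(6); the sum-to-one constraint and all parameter values are our illustrative choices:

```python
# Minimal sketch (our simplification, not the paper's exact equations):
# four synapse units compete for one SCU input; their activities are kept
# normalized so they sum to 1, and the unit whose delayed input co-occurs
# with high SCU activity gains at the expense of the others.

def competitive_update(y, delayed_input, scu_out, w_in=1.0, lr=0.1):
    """One update of the four competing synapse-unit activities y."""
    drive = [lr * w_in * x * scu_out for x in delayed_input]
    y = [yi + di for yi, di in zip(y, drive)]
    total = sum(y)
    return [yi / total for yi in y]   # re-normalize: sum(y) == 1

y = [0.25, 0.25, 0.25, 0.25]          # equal activities before learning
for _ in range(20):                    # repeated pairing: direction 'N' active
    y = competitive_update(y, delayed_input=[1.0, 0.0, 0.0, 0.0], scu_out=1.0)

print(y)  # the 'N' synapse unit dominates; the activities still sum to 1
```

Repeated pairing of one direction with high SCU output drives that unit's activity towards 1 while the normalization suppresses the other three, mirroring the competitive weight reduction described in the text.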
To allow activation of the SCU with unlearned synapses, the initial synapse activities are chosen relative to the strongest possible inhibitory weight such that all weights are slightly positive before learning; the contrast between learned and unlearned weights then increases with increasing learning progress. The input term in (6) is the product of the input weight, the delayed input to the synapse, and the output of the SCU (see Figure 6a). An additional gating term is included to enable learning only if the sequence is completed. The weight of a particular synapse is increased if both the input to the synapse and the activity of the target SCU are high. The reduction of the other weights is due to the competitive network behavior. The learning mechanism is tested using simulated and real-world inputs. Under the restriction that trajectories must differ by more than one event, the system is able to learn sequences of length three. Sequences that differ by only one event are learnt by the same SCU, so that subsequent sequences overwrite previously learned ones. In Figure 6b,c the learning process of one particular trajectory class of three events is shown. This trajectory is part of a set of six trajectories that were learned during one simulation cycle, where each input trajectory was consecutively presented five times. 5 Conclusions and outlook We have shown a strikingly simple³ network system that reliably classifies distinct visual motion patterns. Clearly, the application of the optical flow chip substantially reduces the remaining computational load and allows real-time processing. A remarkable feature of our system is that - with the exception of the visual motion front-end, but including the learning rule - all networks have competitive dynamics and are based on the classical Winner-Take-All architecture. WTA networks can be compactly implemented in aVLSI [10].
Thus, given also the small network size, a complete aVLSI system-on-chip integration seems very feasible, the learning mechanism aside. Such a single-chip system would represent a very efficient computational device, requiring minimal space, weight and power. The 'quasi-discretization' in visual motion space that emerges from the non-linear amplification in the direction-selective network could be refined to include not only more directions but also different speed levels. That way, richer sets of trajectories can be classified. Many applications in mobile electronic devices that require (or would benefit from) a touchless interface are imaginable. Commercial applications in people monitoring and surveillance also seem feasible and are already being considered. Acknowledgments This work is supported by the Human Frontiers Science Project grant no. RG00133/2000-B and ETHZ Forschungskredit no. 0-23819-01. References [1] A. Stocker and R. J. Douglas. Computation of smooth optical flow in a feedback connected analog network. Advances in Neural Information Processing Systems, 11:706–712, 1999. [2] L. G. Sotelino, M. Saerens, and H. Bersini. Classification of temporal trajectories by continuous-time recurrent nets. Neural Networks, 7(5):767–776, 1994. [3] D. T. Lin, J. E. Dayhoff, and P. A. Ligomenides. Trajectory recognition with a time-delay neural network. International Joint Conference on Neural Networks, Baltimore, III:197–202, 1992. [4] H. Gutfreund and M. Mezard. Processing of temporal sequences in neural networks. Phys. Rev. Letters, 61(2):235–238, July 1988. [5] D.-L. Lee. Pattern sequence recognition using a time-varying Hopfield network. IEEE Trans. on Neural Networks, 13(2):330–342, March 2002. [6] D. W. Tank and J. J. Hopfield. Neural computation by concentrating information in time. Proc. Natl. Acad. Sci. USA, 84:1896–1900, April 1987. [7] J. J. Hopfield and C. D. Brody. What is a moment?
Transient synchrony as a collective mechanism for spatiotemporal integration. Proc. Natl. Acad. Sci. USA, 98:1282–1287, January 2001. [8] A. Stocker. Constraint optimization networks for visual motion perception - analysis and synthesis. PhD thesis, ETH Zürich, No. 14360, 2001. [9] J. J. Hopfield. Neurons with graded response have collective computational properties like those of two-state neurons. Proc. Natl. Acad. Sci. USA, 81:3088–3092, May 1984. [10] R. Hahnloser, R. Sarpeshkar, M. Mahowald, R. Douglas, and S. Seung. Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature, 405:947–951, June 2000. ³E.g., the presented man-machine interface consists of only 31 units and 4×4 time-delays, not counting the network elements in the optical flow chip.
2002
Learning in Spiking Neural Assemblies David Barber Institute for Adaptive and Neural Computation Edinburgh University 5 Forrest Hill, Edinburgh, EH1 2QL, U.K. dbarber@anc.ed.ac.uk Abstract We consider a statistical framework for learning in a class of networks of spiking neurons. Our aim is to show how optimal local learning rules can be readily derived once the neural dynamics and desired functionality of the neural assembly have been specified, in contrast to other models which assume (sub-optimal) learning rules. Within this framework we derive local rules for learning temporal sequences in a model of spiking neurons and demonstrate its superior performance to correlation (Hebbian) based approaches. We further show how to include mechanisms such as synaptic depression and outline how the framework is readily extensible to learning in networks of highly complex spiking neurons. A stochastic quantal vesicle release mechanism is considered and implications for the complexity of learning discussed. 1 Introduction Models of individual neurons range from simple rate-based approaches to spiking models and further detailed descriptions of protein dynamics within the cell[9, 10, 13, 6, 12]. As the experimental search for the neural correlates of memory increasingly considers multi-cell observations, theoretical models of distributed memory become more relevant[12]. Despite the increasing complexity of neural description, many theoretical models of learning are based on correlational Hebbian assumptions - that is, changes in synaptic efficacy are related to correlations of pre- and post-synaptic firing[9, 10, 14]. Whilst such learning rules have some theoretical justification in toy neural models, they are not necessarily optimal in more complex cases in which the dynamics of the cell contains historical information, such as modelled by synaptic facilitation and depression, for example[1].
It is our belief that appropriate synaptic learning rules should appear as a natural consequence of the neurodynamical system and some desired functionality - such as storage of temporal sequences. It seems clear that, as the brain operates dynamically through time, relevant cognitive processes are plausibly represented in vivo as temporal sequences of spikes in restricted neural assemblies. This paradigm has heralded a new research front in dynamic systems of spiking neurons[10]. However, to date, many learning algorithms assume Hebbian learning, and assess its performance in a given model[8, 6, 14]. Figure 1: (a) A first order Dynamic Bayesian Network with deterministic hidden states (represented by diamonds). (b) The basic simplification for neural firing: the internal dynamics within a neuron are deterministic and possibly highly complex, whilst firing between neurons is stochastic. Recent work[13] has taken into account some of the complexities in the synaptic dynamics, including facilitation and depression, and derived appropriate learning rules. However, these are rate-based models, and do not capture the detailed stochastic firing effects of individual neurons. Other recent work [4] has used experimental observations to modify Hebbian learning rules to make heuristic rules consistent with empirical observations[11]. However, as more and more details of cellular processes are experimentally discovered, it would be satisfying to see learning mechanisms as naturally derivable consequences of the underlying cellular constraints. This paper is a modest step in this direction, in which we outline a framework for learning in spiking systems which can handle highly complex cellular processes. The major simplifying assumption is that internal cellular processes are deterministic, whilst communication between cells can be stochastic.
The central aim of this paper is to show that optimal learning algorithms are derivable consequences of statistical learning criteria. Quantitative agreement with empirical data would require further realistic constraints on the model parameters but is not a principled hindrance to our framework. 2 A Framework for Learning A neural assembly of V neurons is represented by a vector v(t) whose components v_i(t), i = 1, . . . , V represent the state of neuron i at time t. Throughout we assume that v_i(t) ∈ {0, 1}, for which v_i(t) = 1 means that neuron i spikes at time t, and v_i(t) = 0 denotes no spike. The shape of an action potential is assumed therefore not to carry any information. This constraint of a binary state firing representation could be readily relaxed, without great inconvenience, to multiple or even continuous states. Our stated goal is to derive optimal learning rules for an assumed desired functionality and a given neural dynamics. To make this more concrete, we assume that the task is sequence learning (although generalisations to other forms of learning, including input-output type dynamics, are readily achievable[2]). We make the important assumption that the neural assembly has a sequence of states V = {v(1), v(2), . . . , v(t = T)} that it wishes to store (although how such internal representations are known is in itself a fundamental issue that needs to be ultimately addressed). In addition to the neural firing states, V, we assume that there are hidden/latent variables which influence the dynamics of the assembly, but which cannot be directly observed. These might include protein levels within a cell, for example. These variables may also represent environmental conditions external to the cell and common to groups of cells. We represent a sequence of hidden variables by H = {h(1), h(2), . . . , h(T)}. The general form of our model is depicted in fig(1)[a] and comprises two components. 1.
Neural Conditional Independence: p(v(t+1)|v(t), h(t)) = ∏_{i=1}^{V} p(v_i(t+1)|v(t), h(t), θ_v)   (1) This distribution specifies that all the information determining the probability that neuron i fires at time t+1 is contained in the immediate past firing of the neural assembly v(t) and the hidden states h(t). The distribution is parameterised by θ_v, which can be learned from a training sequence (see below). Here time simply discretises the dynamics. In principle, a unit of time in our model may represent a fraction of a millisecond. 2. Deterministic Hidden Variable Updating: h(t+1) = f(v(t+1), v(t), h(t), θ_h)   (2) This equation specifies that the next hidden state of the assembly h(t+1) depends on a vector function f of the states v(t+1), v(t), h(t). The function f is parameterised by θ_h, which is to be learned. This model is a special case of Dynamic Bayesian Networks, in which the hidden variables are deterministic functions of their parental states, and is treated in more generality in [2]. The model assumptions are depicted in fig(1)[b], in which potentially complex deterministic interactions within a neuron can be considered, with lossy transmission of this information between neurons in the form of stochastic firing. Whilst the restriction to deterministic hidden dynamics appears severe, it has the critical advantage that learning in such models can be achieved by deterministic forward propagation through time. This is not the case in more general Dynamic Bayesian Networks, where an integral part of the learning procedure involves, in principle, both forward and backward temporal passes (non-causal learning), and which also impose severe restrictions on the complexity of the hidden unit dynamics due to computational difficulties[7, 2]. A central ingredient of our approach is that it deals with individual spike events, and not just spiking rates as used in other studies[13].
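A toy instance of the two model components (1) and (2) can be simulated in a few lines: spikes are sampled independently per neuron given the previous state, while the hidden state follows a deterministic update. The particular choice of f (a slow decaying spike trace), the weight matrix and all parameter values below are ours, purely for illustration:

```python
import math, random

random.seed(0)

# Component (1): Bernoulli spikes sampled given (v(t), h(t)).
# Component (2): deterministic hidden-state update h(t+1) = f(v(t+1), v(t), h(t)).
# Weights and the form of f are illustrative assumptions, not from the paper.

V = 4
W = [[0.5 if i == j else -0.5 for j in range(V)] for i in range(V)]

def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_v(v, h):
    """Component (1): independent Bernoulli spikes given the previous state."""
    a = [sum(W[i][j] * v[j] for j in range(V)) + h[i] for i in range(V)]
    return [1 if random.random() < sigma(ai) else 0 for ai in a]

def update_h(v_new, v, h, decay=0.8):
    """Component (2): a deterministic hidden update (here, a slow spike trace)."""
    return [decay * hi + 0.1 * vn for hi, vn in zip(h, v_new)]

v, h = [1, 0, 0, 0], [0.0] * V
for t in range(10):
    v_new = sample_v(v, h)
    h = update_h(v_new, v, h)
    v = v_new
print(v, h)
```

Only the sampling step in `sample_v` is stochastic; given the spike sequence, the hidden trajectory is fully determined, which is what makes forward-propagation learning possible below.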
The key mechanism for learning in statistical models is maximising the log-likelihood L(θ_v, θ_h|V) of a sequence V,

L(θ_v, θ_h|V) = log p(v(1)|θ_v) + Σ_{t=1}^{T−1} log p(v(t+1)|v(t), h(t), θ_v)   (3)

where the hidden unit values are calculated recursively using (2). Training multiple sequences V^µ, µ = 1, . . . , P is straightforward using the log-likelihood Σ_µ L(θ_v, θ_h|V^µ). To maximise the log-likelihood, it is useful to evaluate the derivatives with respect to the model parameters. These can be calculated as follows:

dL/dθ_v = ∂ log p(v(1)|θ_v)/∂θ_v + Σ_{t=1}^{T−1} ∂/∂θ_v log p(v(t+1)|v(t), h(t), θ_v)   (4)

dL/dθ_h = Σ_{t=1}^{T−1} [∂/∂h(t) log p(v(t+1)|v(t), h(t), θ_v)] · dh(t)/dθ_h   (5)

dh(t)/dθ_h = ∂f(t)/∂θ_h + [∂f(t)/∂h(t−1)] · dh(t−1)/dθ_h   (6)

where f(t) ≡ f(v(t), v(t−1), h(t−1), θ_h). Hence: 1. Learning can be carried out by forward propagation through time. In a biological system it is natural to use gradient ascent training θ ← θ + η dL/dθ, where the learning rate η is chosen small enough to ensure convergence to a local optimum of the likelihood. This batch training procedure is readily convertible to an online form if needed. 2. Highly complex functions f and tables p(v(t+1)|v(t), h(t)) may be used. In the remaining sections, we apply this framework to some simple models and show how optimal learning rules can be derived for old and new theoretical models. 2.1 Stochastically Spiking Neurons We assume that neuron i fires depending on the membrane potential a_i(t) through p(v_i(t+1) = 1|v(t), h(t)) = p(v_i(t+1) = 1|a_i(t)). (More complex dependencies on environmental variables are also clearly possible.) To be specific, we take throughout p(v_i(t+1) = 1|a_i(t)) = σ(a_i(t)), where σ(x) = 1/(1 + e^{−x}). The probability of the quiescent state is one minus this probability, and we can conveniently write

p(v_i(t+1)|a_i(t)) = σ((2v_i(t+1) − 1) a_i(t))   (7)

which follows from 1 − σ(x) = σ(−x). The choice of the sigmoid function σ(x) is not fundamental and is simply analytically convenient.
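As a quick sanity check, Eq. (7) indeed folds both firing outcomes into one expression; the snippet below (with an arbitrary membrane potential of our choosing) verifies it numerically:

```python
import math

def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))

def spike_prob(v_next, a):
    """p(v_i(t+1) = v_next | a_i(t)) = sigma((2*v_next - 1) * a), eq. (7)."""
    return sigma((2 * v_next - 1) * a)

a = 0.7                                  # an arbitrary membrane potential
p_fire = spike_prob(1, a)                # equals sigma(a)
p_quiet = spike_prob(0, a)               # equals 1 - sigma(a), via sigma(-a)
assert abs(p_fire - sigma(a)) < 1e-12
assert abs(p_quiet - (1 - sigma(a))) < 1e-12
assert abs(p_fire + p_quiet - 1.0) < 1e-12
print(p_fire, p_quiet)
```

Setting v_next = 1 gives σ(a) and v_next = 0 gives σ(−a) = 1 − σ(a), so the two cases always sum to one.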
The log-likelihood of a sequence of visible states V is

L = Σ_{t=1}^{T−1} Σ_{i=1}^{V} log σ((2v_i(t+1) − 1) a_i(t))   (8)

and the (online) gradient of the log-likelihood is then

dL(t+1)/dw_ij = (v_i(t+1) − σ(a_i(t))) da_i(t)/dw_ij   (9)

where we used the fact that v_i ∈ {0, 1}. The batch gradient is simply given by summing the above online gradient over time. Here w_ij are parameters of the membrane potential (see below). We take (9) as common to the remainder, in which we model the membrane potential a_i(t) with increasing complexity. 2.2 A simple model of the membrane potential Perhaps the simplest membrane potential model is the Hopfield potential

a_i(t) ≡ Σ_{j=1}^{V} w_ij v_j(t) − b_i   (10)

where w_ij characterizes the synaptic efficacy from neuron j (pre-synaptic) to neuron i (post-synaptic), and b_i is a threshold. The model is depicted in fig(2)[a]. Figure 2: (a) The graph for a simple Hopfield membrane potential, shown only for a single membrane potential. The potential is a deterministic function of the network state, and (the collection of) membrane potentials influences the next state of the network. (b) Dynamic synapses correspond to hidden variables which influence the membrane potential and update themselves, depending on the firing of the network. Only one membrane potential and one synaptic factor is shown. Applying our framework to this model to learn a temporal sequence V by adjustment of the parameters w_ij (the b_i are fixed for simplicity), we obtain the (batch) learning rule

w_ij^new = w_ij + η dL/dw_ij,   dL/dw_ij = Σ_{t=1}^{T−1} (v_i(t+1) − σ(a_i(t))) v_j(t)   (11)

where the learning rate η is chosen empirically to be sufficiently small to ensure convergence.
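Rule (11) can be exercised end-to-end in a few lines. The sketch below trains the Hopfield-potential model on a cyclic sequence of one-hot patterns (toy data of our own choosing; thresholds b_i are fixed at zero as in the text) and then replays the sequence by thresholding the firing probability:

```python
import math

def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))

V, eta = 5, 0.5
# a cyclic sequence of one-hot firing patterns: e0 -> e1 -> ... -> e4 -> e0
seq = [[1 if i == t % V else 0 for i in range(V)] for t in range(V + 1)]
w = [[0.0] * V for _ in range(V)]        # synaptic weights; thresholds b_i = 0

def potential(i, v):                      # Hopfield potential, eq. (10)
    return sum(w[i][j] * v[j] for j in range(V))

for _ in range(100):                      # batch gradient ascent on rule (11)
    grad = [[0.0] * V for _ in range(V)]
    for t in range(len(seq) - 1):
        for i in range(V):
            err = seq[t + 1][i] - sigma(potential(i, seq[t]))
            for j in range(V):
                grad[i][j] += err * seq[t][j]
    for i in range(V):
        for j in range(V):
            w[i][j] += eta * grad[i][j]

# cued recall: start in the first state and iterate, thresholding sigma at 0.5
recalled = [seq[0]]
for t in range(len(seq) - 1):
    recalled.append([int(sigma(potential(i, recalled[-1])) > 0.5) for i in range(V)])
print(recalled == seq)   # True: the learned weights replay the whole sequence
```

Because the one-hot inputs decouple the weights, each w_ij follows a scalar gradient ascent towards the correct sign, so deterministic recall succeeds; with random linearly independent patterns the same rule applies, as discussed in the text.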
Note that in the above rule v_i(t+1) refers to the desired known training pattern, and σ(a_i(t)) can be interpreted as the average instantaneous firing rate of neuron i at time t+1 when its inputs are clamped to the known desired values of the network at time t. This is a form of Delta Rule (or Rescorla-Wagner) learning[12]. The above learning rule can be seen as a modification of the standard Hebb learning rule w_ij = Σ_{t=1}^{T−1} v_i(t+1) v_j(t). However, the rule (11) can store a sequence of V linearly independent patterns, much greater than the 0.26V capacity of the Hebb rule[5]. Biologically, the rule (11) could be implemented by measuring the difference between the desired training state v_i(t+1) of neuron i, and the instantaneous firing rate of neuron i when all other neurons, j ≠ i, are clamped in training states v_j(t). Simulations with this model and comparison with other training approaches are given in [3]. 3 Dynamic Synapses In more realistic synaptic models, neurotransmitter generation depends on a finite rate of cell subcomponent production, and the quantity of vesicles released is affected by the history of firing[1]. The depression mechanism affects the impact of spiking on the membrane potential response by moderating terms in the membrane potential a_i(t) of the form Σ_j w_ij v_j(t) to Σ_j w_ij x_j(t) v_j(t), for depression factors x_j(t) ∈ [0, 1]. A simple dynamics for these depression factors is[15, 14]

x_j(t+1) = x_j(t) + δt ((1 − x_j(t))/τ − U x_j(t) v_j(t))   (12)

where δt, τ, and U represent time scales, recovery times and spiking effect parameters respectively. Figure 3: Learning with depression: U = 0.5, τ = 5, δt = 1, η = 0.25. The panels show (left to right) the original training sequence over 50 neurons, its reconstruction by the trained network, the corresponding x values, and the reconstruction obtained with Hebbian weights.
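The depression dynamics (12) are easy to simulate directly; the sketch below uses the parameter values from the Figure 3 caption (U = 0.5, τ = 5, δt = 1), with a spike train of our own choosing:

```python
# Depression factor dynamics of eq. (12), using the parameter values from
# the Figure 3 caption (U = 0.5, tau = 5, dt = 1); the spike train is ours.

def step_x(x, v, dt=1.0, tau=5.0, U=0.5):
    """x_j(t+1) = x_j(t) + dt*((1 - x_j(t))/tau - U*x_j(t)*v_j(t))."""
    return x + dt * ((1.0 - x) / tau - U * x * v)

x, trace = 1.0, []
spikes = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # a burst followed by silence
for v in spikes:
    x = step_x(x, v)
    trace.append(x)

print(trace[3], trace[-1])  # depressed during the burst, recovering afterwards
```

Each spike pulls x down by the factor U·x, while the (1 − x)/τ term recovers it towards 1 during silence, which is exactly the history dependence that the Hebb rule in Figure 3 fails to account for.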
Note that these depression factor dynamics are exactly of the form of hidden variables that are not observed, consistent with our framework in section (2); see fig(2)[b]. Whilst some previous models have considered learning rules for dynamic synapses using spiking-rate models [13, 15], we consider learning in a stochastic spiking model. Also, in contrast to a previous study which assumes that the synaptic dynamics modulates baseline Hebbian weights[14], we show below that it is straightforward to include dynamic synapses in a principled way using our learning framework. Since the depression dynamics in this model do not explicitly depend on w_ij, the gradients are simple to calculate. Note that synaptic facilitation is also straightforward to include in principle[15]. For the Hopfield potential, the learning dynamics is simply given by equations (9,12), with da_i(t)/dw_ij = x_j(t) v_j(t). In fig(3) we demonstrate learning a random temporal sequence of 20 time steps for an assembly of 50 neurons. After learning w_ij with our rule, we initialised the trained network in the first state of the training sequence. The remaining states of the sequence were then correctly recalled by iteration of the learned model. The corresponding generated factors x_i(t) are also plotted. For comparison, we plot the results of using the dynamics having set the w_ij using a temporal Hebb rule. The poor performance of the correlation-based Hebb rule demonstrates the necessity, in general, of coupling a dynamical system with an appropriate learning mechanism which, in this case at least, is readily available. 4 Leaky Integrate and Fire models Leaky integrate and fire models move a step towards biological realism: the membrane potential increments if it receives an excitatory stimulus (w_ij > 0), and decrements if it receives an inhibitory stimulus (w_ij < 0).
A model that incorporates such effects is

a_i(t) = [α a_i(t−1) + Σ_j w_ij v_j(t) + θ_rest (1 − α)] (1 − v_i(t−1)) + v_i(t−1) θ_fired   (13)

Since v_i ∈ {0, 1}, if neuron i fires at time t−1 the potential is reset to θ_fired at time t. Similarly, with no synaptic input, the potential equilibrates to θ_rest with time constant −1/log α. Here α ∈ [0, 1] represents the membrane leakage characteristic of this class of models. Figure 4: Stochastic vesicle release (synaptic dynamic factors not indicated). Despite the apparent increase in complexity of the membrane potential over the simple Hopfield case, deriving appropriate learning dynamics for this new system is straightforward since, as before, the hidden variables (here the membrane potentials) update in a deterministic fashion. The membrane derivatives are

da_i(t)/dw_ij = (1 − v_i(t−1)) (α da_i(t−1)/dw_ij + v_j(t))   (14)

By initialising the derivative da_i(t=1)/dw_ij = 0, equations (9,13,14) define a first order recursion for the gradient which can be used to adapt w_ij in the usual manner w_ij ← w_ij + η dL/dw_ij. We could also apply synaptic dynamics to this case by replacing the term v_j(t) in (14) by x_j(t) v_j(t). A direct consequence of the above learning rule (explored in detail elsewhere) is a spike-time dependent learning window in qualitative agreement with experimental results[11], a pleasing corollary of our approach, and is consistent with our belief that such observed plasticity has at its core a simple learning rule. 5 A Stochastic Vesicle Release Model Neurotransmitter release can be highly stochastic and it would be desirable to include this mechanism in our models.
A simple model of quantal release of transmitter from pre-synaptic neuron j to post-synaptic neuron i is to release a vesicle with probability

p(r_ij(t) = 1|x_ij(t), v_j(t)) = x_ij(t) v_j(t) R_ij   (15)

where, in analogy with (12),

x_ij(t+1) = x_ij(t) + δt ((1 − x_ij(t))/τ − U x_ij(t) r_ij(t))   (16)

and R_ij ∈ [0, 1] is a plastic release parameter. The membrane potential is then governed in integrate and fire models by

a_i(t) = [α a_i(t−1) + Σ_j w_ij r_ij(t) + θ_rest (1 − α)] (1 − v_i(t−1)) + v_i(t−1) θ_fired   (17)

This model is schematically depicted in fig(4). Since the unobserved stochastic release variables r_ij(t) are hidden, this model does not have fully deterministic hidden dynamics. In general, learning in such models is more complex and would require both forward and backward temporal propagations including, undoubtedly, graphical model approximation techniques[7]. 6 Discussion Leaving aside the issue of stochastic vesicle release, a further step in the evolution of membrane complexity is to use Hodgkin-Huxley type dynamics[9]. Whilst this might appear complex, in principle this is straightforward since the membrane dynamics can be represented by deterministic hidden dynamics. Explicitly summing out the hidden variables would then give a representation of Hodgkin-Huxley dynamics analogous to that of the Spike Response Model (see Gerstner in [10]). Deriving optimal learning in assemblies of stochastic spiking neurons can be achieved using maximum likelihood. This is straightforward in cases for which the latent dynamics is deterministic. It is worth emphasising, therefore, that almost arbitrarily complex spatio-temporal patterns may potentially be learned - and generated under cued retrieval - for very complex neural dynamics. Whilst this framework cannot deal with arbitrarily complex stochastic interactions, it can deal with learning in a class of interesting neural models, and concepts from graphical models can be useful in this area.
A more general stochastic framework would need to examine approximate causal learning rules which, despite not being fully optimal, may perform well. Finally, our assumption that the brain operates optimally (albeit within severe constraints) enables us to drop other assumptions about unobserved processes, and leads to models with potentially more predictive power. References [1] L.F. Abbott, J.A. Varela, K. Sen, and S.B. Nelson, Synaptic depression and cortical gain control, Science 275 (1997), 220–223. [2] D. Barber, Dynamic Bayesian Networks with Deterministic Latent Tables, Neural Information Processing Systems (2003). [3] D. Barber and F. Agakov, Correlated sequence learning in a network of spiking neurons using maximum likelihood, Tech. Report EDI-INF-RR-0149, School of Informatics, 5 Forrest Hill, Edinburgh, UK, 2002. [4] C. Christodoulou, G. Bugmann, and T.G. Clarkson, A Spiking Neuron Model: Applications and Learning, Neural Networks 15 (2002), 891–908. [5] A. Düring, A.C.C. Coolen, and D. Sherrington, Phase diagram and storage capacity of sequence processing neural networks, Journal of Physics A 31 (1998), 8607–8621. [6] W. Gerstner, R. Ritz, and J.L. van Hemmen, Why Spikes? Hebbian Learning and retrieval of time-resolved excitation patterns, Biological Cybernetics 69 (1993), 503–515. [7] M.I. Jordan, Learning in Graphical Models, MIT Press, 1998. [8] R. Kempter, W. Gerstner, and J.L. van Hemmen, Hebbian learning and spiking neurons, Physical Review E 59 (1999), 4498–4514. [9] C. Koch, Biophysics of Computation, Oxford University Press, 1998. [10] W. Maass and C. Bishop, Pulsed Neural Networks, MIT Press, 2001. [11] H. Markram, J. Lubke, M. Frotscher, and B. Sakmann, Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs, Science 275 (1997), 213–215. [12] S.J. Martin, P.D. Grimwood, and R.G.M. Morris, Synaptic Plasticity and Memory: An Evaluation of the Hypothesis, Annual Reviews Neuroscience 23 (2000), 649–711. [13] T.
Natschläger, W. Maass, and A. Zador, Efficient Temporal Processing with Biologically Realistic Dynamic Synapses, Tech Report (2002). [14] L. Pantic, J.T. Joaquin, H.J. Kappen, and S.C.A.M. Gielen, Associative Memory with Dynamic Synapses, Neural Computation 14 (2002), 2903–2923. [15] M. Tsodyks, K. Pawelzik, and H. Markram, Neural Networks with Dynamic Synapses, Neural Computation 10 (1998), 821–835.
2002
A Model for Real-Time Computation in Generic Neural Microcircuits Wolfgang Maass, Thomas Natschläger Institute for Theoretical Computer Science Technische Universitaet Graz, Austria {maass, tnatschl}@igi.tu-graz.ac.at Henry Markram Brain Mind Institute EPFL, Lausanne, Switzerland henry.markram@epfl.ch Abstract A key challenge for neural modeling is to explain how a continuous stream of multi-modal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real-time. We propose a new computational model that is based on principles of high dimensional dynamical systems in combination with statistical learning theory. It can be implemented on generic evolved or found recurrent circuitry. 1 Introduction Diverse real-time information processing tasks are carried out by neural microcircuits in the cerebral cortex whose anatomical and physiological structure is quite similar in many brain areas and species. However, a model that could explain the potentially universal computational capabilities of such recurrent circuits of neurons has been missing. Common models for the organization of computations, such as for example Turing machines or attractor neural networks, are not suitable since cortical microcircuits carry out computations on continuous streams of inputs. Often there is no time to wait until a computation has converged; the results are needed instantly ("anytime computing") or within a short time window ("real-time computing"). Furthermore, biological data prove that cortical microcircuits can support several real-time computational tasks in parallel, a fact that is inconsistent with most modeling approaches. In addition, the components of biological neural microcircuits, neurons and synapses, are highly diverse [1] and exhibit complex dynamical responses on several temporal scales.
This makes them completely unsuitable as building blocks of computational models that require simple uniform components, such as virtually all models inspired by computer science or artificial neural nets. Finally, computations in common computational models are partitioned into discrete steps, each of which requires convergence to some stable internal state, whereas the dynamics of cortical microcircuits appears to be continuously changing. In this article we present a new conceptual framework for the organization of computations in cortical microcircuits that is not only compatible with all these constraints, but actually requires these biologically realistic features of neural computation. Furthermore, like Turing machines, this conceptual approach is supported by theoretical results that prove the universality of the computational model, but for the biologically more relevant case of real-time computing on continuous input streams. *The work was partially supported by the Austrian Science Fund FWF, project #P15386. Figure 1: A Structure of a Liquid State Machine (LSM), here shown with just a single readout. B Separation property of a generic neural microcircuit: plotted is the state distance ∥x_u(t) − x_v(t)∥ over time, where ∥·∥ denotes the Euclidean norm and x_u(t), x_v(t) denote the liquid states at time t for Poisson spike trains u and v as inputs, averaged over many u and v with the same distance d(u, v) (curves for d(u, v) = 0, 0.1, 0.2, 0.4). d(u, v) is defined as the distance (L²-norm) between low-pass filtered versions of u and v. 2 A New Conceptual Framework for Real-Time Neural Computation Our approach is based on the following observations.
If one excites a sufficiently complex recurrent circuit (or other medium) with a continuous input stream u(s), and looks at a later time t at the current internal state x(t) of the circuit, then x(t) is likely to hold a substantial amount of information about recent inputs u(s), s ≤ t (for the case of neural circuit models this was first demonstrated by [2]). We as human observers may not be able to understand the "code" by which this information is encoded in the current circuit state x(t), but that is obviously not essential. Essential is whether a readout neuron that has to extract such information at time t for a specific task can accomplish this. But this amounts to a classical pattern recognition problem, since the temporal dynamics of the input stream u(·) has been transformed by the recurrent circuit into a high dimensional spatial pattern x(t). A related approach for artificial neural nets was independently explored in [3]. In order to analyze the potential capabilities of this approach, we introduce the abstract model of a Liquid State Machine (LSM), see Fig. 1A. As the name indicates, this model has some weak resemblance to a finite state machine. But whereas the finite state set and the transition function of a finite state machine have to be custom designed for each particular computational task, a liquid state machine might be viewed as a universal finite state machine whose "liquid" high dimensional analog state x(t) changes continuously over time. Furthermore, if this analog state x(t) is sufficiently high dimensional and its dynamics is sufficiently complex, then it has embedded in it the states and transition functions of many concrete finite state machines. Formally, an LSM M consists of a filter L (i.e.
a function that maps input streams u(·) onto streams x(·), where x(t) may depend not just on u(t), but in a quite arbitrary nonlinear fashion also on previous inputs u(s); in mathematical terminology this is written x(t) = (Lu)(t)), and a (potentially memoryless) readout function f that maps, at any time t, the filter output x(t) (i.e., the "liquid state") into some target output y(t). Hence the LSM itself computes a filter that maps u(·) onto y(·). In our application to neural microcircuits, the recurrently connected microcircuit could be viewed in a first approximation as an implementation of a general-purpose filter (for example some unbiased analog memory), from which different readout neurons extract and recombine diverse components of the information contained in the input u(·). The liquid state x(t) is that part of the internal circuit state at time t that is accessible to readout neurons. An example where u(·) consists of 4 spike trains is shown in Fig. 2. The generic microcircuit model (270 neurons) was drawn from the distribution discussed in section 3. (Figure 2 shows the 4 input spike trains together with the targets and actual outputs of 7 readouts:
f1(t): sum of rates of inputs 1&2 in the interval [t-30 ms, t]
f2(t): sum of rates of inputs 3&4 in the interval [t-30 ms, t]
f3(t): sum of rates of inputs 1-4 in the interval [t-60 ms, t-30 ms]
f4(t): sum of rates of inputs 1-4 in the interval [t-150 ms, t]
f5(t): spike coincidences of inputs 1&3 in the interval [t-20 ms, t]
f6(t): a simple nonlinear combination of the preceding values
f7(t): a randomly chosen complex nonlinear combination of the preceding values)
Figure 2: Multi-tasking in real-time. Input spike trains were randomly generated in such a way that at any time t the input contained no information about preceding input more than 30 ms ago. Firing rates r(t) were randomly drawn from the uniform distribution over [0 Hz, 80 Hz] every 30 ms, and input spike trains 1 and 2 were generated for the present 30 ms time segment as independent Poisson spike trains with this firing rate r(t).
This process was repeated (with independent drawings of r(t) and Poisson spike trains) for each 30 ms time segment. Spike trains 3 and 4 were generated in the same way, but with independent drawings of another firing rate r'(t) every 30 ms. The results shown in this figure are for test data that were never before shown to the circuit. Below the 4 input spike trains the target (dashed curves) and actual outputs (solid curves) of 7 linear readout neurons are shown in real-time (on the same time axis). Targets were to output every 30 ms the actual firing rate (rates are normalized to a maximum rate of 80 Hz) of spike trains 1&2 during the preceding 30 ms (f1), the firing rate of spike trains 3&4 (f2), the sum of f1 and f2 in an earlier time interval [t-60 ms, t-30 ms] (f3) and during the interval [t-150 ms, t] (f4), spike coincidences between inputs 1&3 (f5 is defined as the number of spikes which are accompanied by a spike in the other spike train within 5 ms during the interval [t-20 ms, t]), a simple nonlinear combination f6, and a randomly chosen complex nonlinear combination f7 of the earlier described values. Since all readouts were linear units, these nonlinear combinations are computed implicitly within the generic microcircuit model. Average correlation coefficients between targets and outputs for 200 test inputs of length 1 s for f1 to f7 were 0.91, 0.92, 0.79, 0.75, 0.68, 0.87, and 0.65. In this case the 7 readout neurons f1 to f7 (modeled for simplicity just as linear units with a membrane time constant of 30 ms, applied to the spike trains from the neurons in the circuit) were trained to extract completely different types of information from the input stream u(·), which require different integration times stretching from 30 to 150 ms.
Since the readout neurons had a biologically realistic short time constant of just 30 ms, additional temporally integrated information had to be contained at any instance t in the current firing state x(t) of the recurrent circuit (its "liquid state"). In addition a large number of nonlinear combinations of this temporally integrated information are also "automatically" precomputed in the circuit, so that they can be pulled out by linear readouts. Whereas the information extracted by some of the readouts can be described in terms of commonly discussed schemes for "neural codes", this example demonstrates that it is hopeless to capture the dynamics or the information content of the primary engine of the neural computation, the liquid state of the neural circuit, in terms of simple coding schemes. 3 The Generic Neural Microcircuit Model We used a randomly connected circuit consisting of leaky integrate-and-fire (I&F) neurons, 20% of which were randomly chosen to be inhibitory, as generic neural microcircuit model.1 Parameters were chosen to fit data from microcircuits in rat somatosensory cortex (based on [1], [4] and unpublished data from the Markram Lab). 2 It turned out to be essential to keep the connectivity sparse, as in biological neural systems, in order to avoid chaotic effects. In the case of a synaptic connection from neuron a to neuron b we modeled the synaptic dynamics according to the model proposed in [4], with the synaptic parameters U (use), D (time constant for depression), F (time constant for facilitation) randomly chosen from Gaussian distributions that were based on empirically found data for such connections. 3 We have shown in [5] that without such synaptic dynamics the computational power of these microcircuit models decays significantly. For each simulation, the initial conditions of each I&F neuron, i.e. the membrane voltage at time t = 0, were drawn randomly (uniform distribution) from the interval [13.5 mV, 15.0 mV].
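To make the neuron model concrete, the following is a minimal sketch of a single leaky integrate-and-fire neuron with the parameters listed in the footnote of section 3 (membrane time constant 30 ms, threshold 15 mV, reset voltage 13.5 mV, resting potential 0, input resistance 1 MOhm). The constant driving current, the Euler step size, and the function name are illustrative choices, not taken from the paper; the refractory period is omitted for brevity.

```python
# Minimal Euler-integration sketch of a leaky integrate-and-fire (I&F) neuron.
# Parameters follow section 3 (tau_m = 30 ms, threshold 15 mV, reset 13.5 mV,
# resting potential 0 mV, input resistance 1 MOhm); the input current value
# is a hypothetical choice for illustration. Refractory period omitted.

def simulate_lif(i_in_na=20.0, t_max_ms=200.0, dt_ms=0.1,
                 tau_m_ms=30.0, v_thresh_mv=15.0, v_reset_mv=13.5,
                 r_in_mohm=1.0, v0_mv=14.0):
    """Return spike times (ms) of one LIF neuron driven by a constant current."""
    v = v0_mv
    spikes = []
    t = 0.0
    while t < t_max_ms:
        # dV/dt = (-V + R*I) / tau_m   (1 MOhm * 1 nA = 1 mV)
        v += dt_ms * (-v + r_in_mohm * i_in_na) / tau_m_ms
        if v >= v_thresh_mv:
            spikes.append(t)
            v = v_reset_mv
        t += dt_ms
    return spikes

spikes = simulate_lif()
```

With a 20 nA input the steady-state voltage (20 mV) lies above threshold, so the neuron fires regularly, starting within the first few milliseconds.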
The "liquid state" x(t) of the recurrent circuit consisting of n neurons was modeled by an n-dimensional vector computed by applying a low-pass filter with a time constant of 30 ms to the spike trains generated by the n neurons in the recurrent microcircuit. 1The software used to simulate the model is available via www.lsm.tugraz.at . 2Neuron parameters: membrane time constant 30 ms, absolute refractory period 3 ms (excitatory neurons), 2 ms (inhibitory neurons), threshold 15 mV (for a resting membrane potential assumed to be 0), reset voltage 13.5 mV, constant nonspecific background current, input resistance 1 MOhm. Connectivity structure: the probability of a synaptic connection from neuron a to neuron b (as well as that of a synaptic connection from neuron b to neuron a) was defined as C · exp(-D^2(a,b)/lambda^2), where lambda is a parameter which controls both the average number of connections and the average distance between neurons that are synaptically connected (we set lambda = 2, see [5] for details). We assumed that the neurons were located on the integer points of a 3-dimensional grid in space, where D(a,b) is the Euclidean distance between neurons a and b. Depending on whether a and b were excitatory (E) or inhibitory (I), the value of C was 0.3 (EE), 0.2 (EI), 0.4 (IE), 0.1 (II). 3Depending on whether a and b were excitatory (E) or inhibitory (I), the mean values of these three parameters (with D, F expressed in seconds, s) were chosen to be .5, 1.1, .05 (EE), .05, .125, 1.2 (EI), .25, .7, .02 (IE), .32, .144, .06 (II). The SD of each parameter was chosen to be 50% of its mean. The mean of the scaling parameter A (in nA) was chosen to be 30 (EE), 60 (EI), -19 (IE), -19 (II). In the case of input synapses the parameter A had a value of 18 nA if projecting onto an excitatory neuron and 9 nA if projecting onto an inhibitory neuron. The SD of the A parameter was chosen to be 100% of its mean and was drawn from a gamma distribution.
The postsynaptic current was modeled as an exponential decay exp(-t/tau_s) with tau_s = 3 ms (tau_s = 6 ms) for excitatory (inhibitory) synapses. The transmission delays between liquid neurons were chosen uniformly to be 1.5 ms (EE), and 0.8 ms for the other connections. 4 Towards a non-Turing Theory for Real-Time Neural Computation Whereas the famous results of Turing have shown that one can construct Turing machines that are universal for digital sequential offline computing, we propose here an alternative computational theory that is more adequate for analyzing parallel real-time computing on analog input streams. Furthermore we present a theoretical result which implies that within this framework the computational units of the system can be quite arbitrary, provided that sufficiently diverse units are available (see the separation property and approximation property discussed below). It also is not necessary to construct circuits to achieve substantial computational power. Instead, sufficiently large and complex "found" circuits (such as the generic circuit used as the main building block for Fig. 2) tend to have already large computational power, provided that the reservoir from which their units are chosen is sufficiently rich and diverse. Consider a class B of basis filters (that may for example consist of the components that are available for building filters of neural LSMs, such as dynamic synapses). We say that this class has the point-wise separation property if for any two input functions u(·), v(·) with u(s) != v(s) for some s <= t there exists some B in B with (Bu)(t) != (Bv)(t).4 There exist completely different classes B of filters that satisfy this point-wise separation property: B = {all delay lines}, B = {all linear filters}, and, biologically more relevant, B = {models for dynamic synapses} (see [6]).
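The dynamic-synapse model of [4] referenced above can be sketched in a few lines. The sketch below uses the standard iterative form of that model, in which each presynaptic spike releases a fraction u of the available resources R, with U (use), D (depression time constant) and F (facilitation time constant) as in section 3; the default values are the EE means from the footnote there (U = .5, D = 1.1 s, F = .05 s). Function and variable names are illustrative.

```python
import math

# Iterative form of the dynamic-synapse model of Markram, Wang & Tsodyks [4]:
# the relative PSC amplitude of spike k is A * u_k * R_k, where u_k (release
# fraction) recovers toward U with time constant F and R_k (available
# resources) recovers toward 1 with time constant D.

def psc_amplitudes(spike_times_s, U=0.5, D=1.1, F=0.05, A=1.0):
    """Relative PSC amplitudes for a presynaptic spike train (times in s)."""
    u_prev, r_prev = U, 1.0
    amps = [A * u_prev * r_prev]
    for k in range(1, len(spike_times_s)):
        dt = spike_times_s[k] - spike_times_s[k - 1]
        # facilitation: u decays back toward U with time constant F
        u = U + u_prev * (1.0 - U) * math.exp(-dt / F)
        # depression: resources used by the previous spike (u_prev * r_prev)
        # recover toward 1 with time constant D
        r = 1.0 + (r_prev - u_prev * r_prev - 1.0) * math.exp(-dt / D)
        amps.append(A * u * r)
        u_prev, r_prev = u, r
    return amps

# A regular 25 Hz train at an EE (depressing) synapse: amplitudes decrease.
amps = psc_amplitudes([0.04 * k for k in range(10)])
```

With the EE parameters, depression dominates at this rate, so successive PSC amplitudes shrink toward a small steady-state value; this history dependence is exactly what makes dynamic synapses useful basis filters for the separation property.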
The complementary requirement that is demanded from the class F of functions from which the readout maps f are to be picked is the well-known universal approximation property: for any continuous function h and any closed and bounded domain one can approximate h on this domain with any desired degree of precision by some f in F. An example for such a class is F = {feedforward sigmoidal neural nets}. A rigorous mathematical theorem [5] states that for any class B of filters that satisfies the point-wise separation property and for any class F of functions that satisfies the universal approximation property one can approximate any given real-time computation on time-varying inputs with fading memory (and hence any biologically relevant real-time computation) by an LSM M whose filter L is composed of finitely many filters in B, and whose readout map f is chosen from the class F. This theoretical result supports the following pragmatic procedure: in order to implement a given real-time computation with fading memory it suffices to take a filter L whose dynamics is "sufficiently complex", and train a "sufficiently flexible" readout to assign, for each time t and state x(t) = (Lu)(t), the target output y(t). Actually, we found that if the neural microcircuit model is not too small, it usually suffices to use linear readouts. Thus the microcircuit automatically assumes "on the side" the computational role of a kernel for support vector machines. For physical implementations of LSMs it makes more sense to study, instead of the theoretically relevant point-wise separation property, the following qualitative separation property as a test for the computational capability of a filter L: how different are the liquid states x_u(t) = (Lu)(t) and x_v(t) = (Lv)(t) for two different input histories u(·), v(·)? This is evaluated in Fig. 1B for the case where u(·), v(·) are Poisson spike trains and L is a generic neural microcircuit model.
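The input distance d(u, v) used in this test (defined in the caption of Fig. 1 as the L2-norm between low-pass filtered versions of the two spike trains) can be sketched directly. The 30 ms filter time constant matches the one used for liquid states; the discretization step and function names are illustrative choices.

```python
import numpy as np

# Sketch of the input-distance measure d(u, v) from Fig. 1B: the Euclidean
# (L2) distance between exponentially low-pass filtered spike trains.
# The 1 ms sampling grid is an illustrative choice.

def low_pass(spike_times_ms, t_max_ms=200.0, dt_ms=1.0, tau_ms=30.0):
    """Exponentially filtered spike train sampled on a regular time grid."""
    n = int(t_max_ms / dt_ms)
    trace = np.zeros(n)
    decay = np.exp(-dt_ms / tau_ms)
    spikes = sorted(spike_times_ms)
    idx = 0
    for i in range(n):
        trace[i] = trace[i - 1] * decay if i > 0 else 0.0
        # add a unit impulse for every spike up to the current time step
        while idx < len(spikes) and spikes[idx] <= i * dt_ms:
            trace[i] += 1.0
            idx += 1
    return trace

def d(u_spikes_ms, v_spikes_ms):
    return float(np.linalg.norm(low_pass(u_spikes_ms) - low_pass(v_spikes_ms)))

d_same = d([10, 50, 90], [10, 50, 90])   # identical trains
d_diff = d([10, 50, 90], [20, 60, 100])  # shifted trains
```

Identical inputs yield distance zero, shifted inputs a positive distance; plotting the corresponding liquid-state distance against d(u, v), as in Fig. 1B, is then a direct test of the separation capability of a circuit.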
It turns out that the difference between the liquid states scales roughly proportionally to the difference between the two input histories. This appears to be desirable from the practical point of view, since it implies that saliently different input histories can be distinguished more easily and in a more noise-robust fashion by the readout. We propose to use such evaluation of the separation capability of neural microcircuits as a new standard test for their computational capabilities. 4Note that it is not required that there exists a single B which achieves this separation for any two different input histories u(·), v(·). 5 A Generic Neural Microcircuit on the Computational Test Stand The theoretical results sketched in the preceding section can be interpreted as saying that there are no strong a priori limitations for the power of neural microcircuits for real-time computing with fading memory, provided they are sufficiently large and their components are sufficiently heterogeneous. In order to evaluate this somewhat surprising theoretical prediction, we use a well-studied computational benchmark task for which data have been made publicly available5: the speech recognition task considered in [7] and [8]. The dataset consists of 500 input files: the words "zero", "one", ..., "nine" are spoken by 5 different (female) speakers, 10 times by each speaker. The task was to construct a network of I&F neurons that could recognize each of the 10 spoken words w. Each of the 500 input files had been encoded in the form of 40 spike trains, with at most one spike per spike train 6 signaling onset, peak, or offset of activity in a particular frequency band. A network was presented in [8] that could solve this task with an error 7 of 0.15 for recognizing the pattern "one". No better result had been achieved by any competing networks constructed during a widely publicized internet competition [7].
The network constructed in [8] transformed the 40 input spike trains into linearly decaying input currents from 800 pools, each consisting of a "large set of closely similar unsynchronized neurons" [8]. Each of the 800 currents was delivered to a separate pair of neurons consisting of an excitatory neuron and an inhibitory neuron. To accomplish the particular recognition task some of the synapses between the excitatory (inhibitory) neurons are set to have equal weights, the others are set to zero. A particular achievement of this network (resulting from the smoothly and linearly decaying firing activity of the 800 pools of neurons) is that it is robust with regard to linear time-warping of the input spike pattern. We tested our generic neural microcircuit model on the same task (in fact on exactly the same 500 input files). A randomly chosen subset of 300 input files was used for training, the other 200 for testing. The generic neural microcircuit model was drawn from the distribution described in section 3, hence from the same distribution as the circuit drawn for the completely different task discussed in Fig. 2, with randomly connected I&F neurons located on the integer points of a 3 x 3 x 15 column. The synaptic weights of 10 linear readout neurons which received inputs from the 135 I&F neurons in the circuit were optimized (like for SVMs with linear kernels) to fire whenever the input encoded the spoken word w. Hence the whole circuit consisted of 145 I&F neurons, less than 1/30 of the size of the network constructed in [8] for the same task 8. Nevertheless the average error achieved after training by these randomly generated generic microcircuit models was 0.14 (measured in the same way, for the same word "one"), hence slightly better than that of the 30 times larger network custom designed for this task. The score given is the average for 50 randomly drawn generic microcircuit models.
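The linear readouts used throughout these experiments are the only trained components of the system. As a stand-in for such a readout, the sketch below fits a linear map from synthetic 135-dimensional "liquid states" to binary targets by regularized least squares. Everything here is synthetic and hypothetical: the paper optimizes the readout weights like an SVM with linear kernel, and real liquid states come from the circuit, not from a Gaussian generator.

```python
import numpy as np

# Sketch of training a linear readout on (synthetic) liquid states.
# X: 300 "liquid state" vectors of dimension 135 (matching the circuit size
# used in section 5); y: binary targets generated from a planted linear rule.
# Ridge-regularized least squares stands in for the SVM-style optimization.

rng = np.random.default_rng(0)
n_states, n_neurons = 300, 135

X = rng.normal(size=(n_states, n_neurons))     # synthetic liquid states
w_true = rng.normal(size=n_neurons)            # planted target direction
y = (X @ w_true > 0).astype(float)             # binary "word present" labels

# w = (X^T X + lam * I)^-1 X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_neurons), X.T @ y)

train_acc = float(np.mean((X @ w > 0.5) == (y > 0.5)))
```

Because all the nonlinear preprocessing is delegated to the circuit, training reduces to exactly this kind of convex problem, which is what makes the approach practical.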
The comparison of the two different approaches also provides a nice illustration of the 5http://moment.princeton.edu/~mus/Organism/Competition/digits_data.html 6The network constructed in [8] required that each spike train contained at most one spike. 7The error (or "recognition score") e_w for a particular word w was defined in [8] in terms of the numbers of false and correct positives and of false and correct negatives. We use the same definition of error to facilitate comparison of results. The recognition scores of the network constructed in [8] and of competing networks of other researchers can be found at http://moment.princeton.edu/~mus/Organism/Docs/winners.html. For the competition the networks were allowed to be constructed especially for their task, but only one single pattern for each word could be used for setting the synaptic weights. Since our microcircuit models were not prepared for this task, they had to be trained with substantially more examples. 8If one assumes that each of the 800 "large" pools of neurons in that network would consist of just 5 neurons, it contains together with the excitatory and inhibitory neurons 5600 neurons. Figure 3: Application of our generic neural microcircuit model to the speech recognition task from [8], for four sample inputs ("one", speaker 5; "one", speaker 3; "five", speaker 1; "eight", speaker 4). Top row: input spike patterns. Second row: spiking response of the 135 I&F neurons in the neural microcircuit model. Third row: output of an I&F neuron that was trained to fire as soon as possible when the word "one" was spoken, and as little as possible else. difference between offline computing, real-time computing, and any-time computing.
Whereas the network of [8] implements an algorithm that needs a few hundred ms of processing time between the end of the input pattern and the answer to the classification task (450 ms in the example of Fig. 2 of [8]), the readout neurons from the generic neural microcircuit were trained to provide their answer (through firing or non-firing) immediately when the input pattern ended. In fact, as illustrated in Fig. 3, one can even train the readout neurons quite successfully to provide provisional answers long before the input pattern has ended (thereby implementing an "anytime" algorithm). More precisely, each of the 10 linear readout neurons was trained to recognize the spoken word at any multiple of 20 ms while the word was spoken. An error score of 1.4 was achieved for this anytime speech recognition task. We also compared the noise robustness of the generic microcircuit models with that of [8], which had been constructed to be robust with regard to linear time warping of the input pattern. Since no benchmark input data were available to calculate this noise robustness, we constructed such data by creating as templates 10 patterns consisting each of 40 randomly drawn Poisson spike trains at 4 Hz over 0.5 s. Noisy variations of these templates were created by first multiplying their time scale with a randomly drawn factor from [1/3, 3] (thereby allowing for a 9-fold time warp), and subsequently dislocating each spike by an amount drawn independently from a Gaussian distribution with mean 0 and SD 32 ms. These spike patterns were given as inputs to the same generic neural microcircuit models consisting of 135 I&F neurons as discussed before. 10 linear readout neurons were trained (with 1000 randomly drawn training examples) to recognize which of the 10 templates had been used to generate a particular input. On 500 novel test examples (drawn from the same distribution) they achieved an error of 0.09 (average performance of 30 randomly generated microcircuit models).
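The benchmark-generation procedure just described (Poisson templates, random linear time warp, Gaussian spike jitter) is easy to reproduce. The sketch below follows the stated parameters (4 Hz, 0.5 s, 40 trains, SD 32 ms); the distribution of the warp factor over [1/3, 3] (here log-uniform) and all function names are assumptions for illustration.

```python
import numpy as np

# Sketch of the noise-robustness benchmark: Poisson spike templates
# (4 Hz, 0.5 s, 40 trains each), with noisy variants made by scaling the
# time axis by a random factor in [1/3, 3] (up to a 9-fold time warp,
# drawn log-uniformly here as an assumption) and jittering every spike
# with Gaussian noise of SD 32 ms.

rng = np.random.default_rng(1)

def poisson_template(rate_hz=4.0, t_max_s=0.5, n_trains=40):
    """One template: a list of sorted spike-time arrays, one per spike train."""
    return [np.sort(rng.uniform(0.0, t_max_s, rng.poisson(rate_hz * t_max_s)))
            for _ in range(n_trains)]

def noisy_variant(template, jitter_sd_s=0.032):
    """Linearly time-warped, jittered copy of a template."""
    factor = np.exp(rng.uniform(np.log(1.0 / 3.0), np.log(3.0)))
    return [train * factor + rng.normal(0.0, jitter_sd_s, size=train.shape)
            for train in template]

template = poisson_template()
variant = noisy_variant(template)
```

Drawing a Poisson-distributed spike count and placing the spikes uniformly over the interval is a standard construction of a homogeneous Poisson process, so the templates match the stated 4 Hz statistics.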
As a consequence of achieving this noise robustness generically, rather than by a construction tailored to a specific type of noise, we found that the same generic microcircuit models are also robust with regard to nonlinear time warp of the input. For the case of nonlinear (sinusoidal) time warp 9 an average (50 microcircuits) error of 0.2 is achieved. This demonstrates that it is not necessary to build noise robustness explicitly into the circuit. A randomly generated microcircuit model has at least the same noise robustness as a circuit especially constructed to achieve that. This test implicitly demonstrated another point. Whereas the network of [8] was only able to classify spike patterns consisting of at most one spike per spike train, a generic neural microcircuit model can classify spike patterns without that restriction. It can for example also classify the original version of the speech data encoded into onsets, peaks, and offsets in various frequency bands, before all except the first events of each kind were artificially removed to fit the requirements of the network from [8]. The performance of the same generic neural microcircuit model on completely different computational tasks (recall of information from preceding input segments, movement prediction, and estimation of the direction of movement of extended moving objects) turned out to be also quite remarkable, see [5], [9] and [10]. Hence this microcircuit model appears to have quite universal capabilities for real-time computing on time-varying inputs. 9A spike at time t was transformed into a warped spike time by a sinusoidal warping function with a fixed frequency, a scaling factor randomly drawn from [0.5, 2], and a phase chosen such that the warped pattern starts at time 0. 6 Discussion We have presented a new conceptual framework for analyzing computations in generic neural microcircuit models that satisfies the biological constraints listed in section 1.
Thus for the first time one can now take computer models of neural microcircuits, that can be as realistic as one wants, and use them not just for demonstrating dynamic effects such as synchronization or oscillations, but to really carry out demanding computations with these models. Furthermore, our new conceptual framework for analyzing computations in neural circuits not only provides theoretical support for their seemingly universal capabilities for real-time computing, but also throws new light on key concepts such as neural coding. Finally, since in contrast to virtually all computational models the generic neural microcircuit models that we consider have no preferred direction of information processing, they offer an ideal platform for investigating the interaction of bottom-up and top-down processing of information in neural systems. References [1] A. Gupta, Y. Wang, and H. Markram. Organizing principles for a diversity of GABAergic interneurons and synapses in the neocortex. Science, 287:273–278, 2000. [2] D. V. Buonomano and M. M. Merzenich. Temporal information transformed into a spatial code by a neural network with realistic properties. Science, 267:1028–1030, 1995. [3] H. Jaeger. The "echo state" approach to analysing and training recurrent neural networks. German National Research Center for Information Technology, Report 148, 2001. [4] H. Markram, Y. Wang, and M. Tsodyks. Differential signaling via the same axon of neocortical pyramidal neurons. Proc. Natl. Acad. Sci., 95:5323–5328, 1998. [5] W. Maass, T. Natschläger, and H. Markram. Real-time computing without stable states: a new framework for neural computation based on perturbations. Neur. Comp., 14:2531–2560, 2002. [6] W. Maass and E. D. Sontag. Neural systems as nonlinear filters. Neur. Comp., 12:1743–1772, 2000. [7] J. J. Hopfield and C. D. Brody. What is a moment? "Cortical" sensory integration over a brief interval. Proc. Natl. Acad. Sci. USA, 97(25):13919–13924, 2000. [8] J. J.
Hopfield and C. D. Brody. What is a moment? Transient synchrony as a collective mechanism for spatiotemporal integration. Proc. Natl. Acad. Sci. USA, 98(3):1282–1287, 2001. [9] W. Maass, R. A. Legenstein, and H. Markram. A new approach towards vision suggested by biologically realistic neural microcircuit models. In H. H. Buelthoff, S. W. Lee, T. A. Poggio, and C. Wallraven, editors, Proc. of the 2nd International Workshop on Biologically Motivated Computer Vision 2002, volume 2525 of LNCS, pages 282–293. Springer, 2002. [10] W. Maass, T. Natschläger, and H. Markram. Computational models for generic cortical microcircuits. In J. Feng, editor, Computational Neuroscience: A Comprehensive Approach. CRC Press, 2002. To appear.